Pierluca Chiodelli, Dell Technologies & Dan Cummins, Dell Technologies | MWC Barcelona 2023


 

(intro music) >> "theCUBE's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> We're not going to- >> Hey everybody, welcome back to the Fira in Barcelona. My name is Dave Vellante, I'm here with Dave Nicholson, day four of MWC23. I mean, it's Dave, it's, it's still really busy. And you walking the floors, you got to stop and start. >> It's surprising. >> People are cheering. They must be winding down, giving out the awards. Really excited. Pierluca Chiodelli is here. He's the vice president of Engineering Technology for Edge Computing Offers, Strategy and Execution at Dell Technologies, and he's joined by Dan Cummins, who's a fellow and vice president in the Edge Business Unit at Dell Technologies. Guys, welcome. >> Thank you. >> Thank you. >> I love when I see the term fellow. You know, you don't, they don't just give those away. What do you got to do to be a fellow at Dell? >> Well, you know, fellows are senior technical leaders within Dell. And they're usually tasked to help Dell solve, you know, a very large business challenge to get to a fellow. There's only, I think, 17 of them inside of Dell. So it is a small crowd. You know, previously, really what got me to fellow is my continued contribution to transform Dell's mid-range business, you know, VNX2, and then Unity, and then PowerStore, you know, and then before, and then after that, you know, they asked me to come and, and help, you know, drive the technology vision for how Dell wins at the Edge. >> Nice. Congratulations. Now, Pierluca, I'm looking at this kind of cool chart here which is Edge, Edge platform by Dell Technologies, kind of this cube, like theCUBE's, of course, you know. >> Aka, Project Frontier. >> Yeah. So, so tell us about the Edge platform. What, what's your point of view on all that at Dell? >> Yeah, absolutely. So basically, when we created the Edge, and before even then, when I was brought aboard to create this vision of the platform, and now building the platform when we announced Project Frontier, it was to create solutions for the Edge. Dell has been at the Edge for 30 years. We sold a lot of compute. But the reality was people want outcomes. And so, and the Edge is a new market, very exciting, but very siloed. And so people at the Edge have different personas. So we quickly realized that we need to bring in Dell people with expertise, and quickly realized as well that doing all these solutions was not enough. There was a lot of problems to solve because the Edge is outside of the data center. So you are outside of the wall of the data center. And what is going to happen is obviously you are in the land of no one. And so you have millions of devices, thousands of millions of devices. All of us at home, we have all connected things. And so we understand that the, the capability of Dell was to bring in technology to secure, manage, deploy, with zero touch, zero trust, the Edge. And all the Edge that we're speaking about right now, we are focused on everything that is outside of a normal data center. So, how we married the compute that we have for many years, the new gateways that we create, so having the best portfolio, number one, having the best solution, but now, transforming the way that people deploy the Edge, and secure the Edge through a software platform that we create. >> You mentioned Project Frontier. I like that Dell started to do these sort of projects. Project Alpine was sort of the multi-cloud storage. I call it "The Super Cloud." 
The Project Frontier. It's almost like you develop, it's like mission based. Like, "Okay, that's our North Star." People hear Project Frontier, they know, you know, internally what you're talking about. Maybe use it for external communications too, but what have you learned since launching Project Frontier? What's different about the Edge? I mean you're talking about harsh environments, you're talking about new models of connectivity. So, what have you learned from Project Frontier? What, I'd love to hear the fellow perspective as well, and what you guys are are learning so far. >> Yeah, I mean start and then I left to them, but we learn a lot. The first thing we learn that we are on the right path. So that's good, because every conversation we have, there is nobody say to us, you know, "You are crazy. "This is not needed." Any conversation we have this week, start with the telco thing. But after five minutes it goes to, okay, how I can solve the Edge, how I can bring the compute near where the data are created, and how I can do that secure at scale, and with the right price. And then can speak about how we're doing that. >> Yeah, yeah. But before that, we have to really back up and understand what Dell is doing with Project Frontier, which is an Edge operations platform, to simplify your Edge use cases. Now, Pierluca and his team have a number of verticalized applications. You want to be able to securely deploy those, you know, at the Edge. But you need a software platform that's going to simplify both the life cycle management, and the security at the Edge, with the ability to be able to construct and deploy distributed applications. Customers are looking to derive value near the point of generation of data. We see a massive explosion of data. But in particular, what's different about the Edge, is the different computing locations, and the constraints that are on those locations. You know, for example, you know, in a far Edge environment, the people that service that equipment are not trained in the IT, or train, trained in it. And they're also trained in the safety and security protocols of that environment. So you necessarily can't apply the same IT techniques when you're managing infrastructure and deploying applications, or servicing in those locations. So Frontier was designed to solve for those constraints. You know, often we see competitors that are doing similar things, that are starting from an IT mindset, and trying to shift down to cover Edge use cases. What we've done with Frontier, is actually first understood the constraints that they have at the Edge. Both the operational constraints and technology constraints, the service constraints, and then came up with a, an architecture and technology platform that allows them to start from the Edge, and bleed into the- >> So I'm laughing because you guys made the same mistake. And you, I think you learned from that mistake, right? You used to take X86 boxes and throw 'em over the fence. Now, you're building purpose-built systems, right? Project Frontier I think is an example of the learnings. You know, you guys an IT company, right? Come on. But you're learning fast, and that's what I'm impressed about. >> Well Glenn, of course we're here at MWC, so it's all telecom, telecom, telecom, but really, that's a subset of Edge. >> Yes. >> Fair to say? >> Yes. 
>> Can you give us an example of something that is, that is, orthogonal to, to telecom, you know, maybe off to the side, that maybe overlaps a little bit, but give us an, give us an example of Edge that isn't specifically telecom focused. >> Well, you got the, the Edge verticals, and Pierluca could probably speak very well to this. You know, you got manufacturing, you got retail, you got automotive, you got oil and gas. Every single one of them are going to make different choices in the software that they're going to use, the hyperscaler investments that they're going to use, and then write some sort of automation, you know, to deploy that, right? And the Edge is highly fragmented across all of these. So we certainly could deploy a private wireless 5G solution, and orchestrate that deployment through Frontier. We can also orchestrate other use cases like connected worker, or overall equipment effectiveness in manufacturing. But Pierluca you have a, you have a number. >> Well, but from your, so, but just to be clear, from your perspective, the whole idea of, for example, private 5G, it's a feature- >> Yes. >> That might be included. It's a network topology, a network function that might be a feature of an Edge environment. >> Yes. But it's not the center of the discussion. >> So, it enables the outcome. >> Yeah. >> Okay. >> So this, this week is a clear example where we confirm and establish this. The use case, as I said, right? You say correctly, we learned very fast, right? We brought people in that came from industry that was not the IT industry. We brought people in with the things, and we, we are Dell. So we have the luxury to be able to interview hundreds of customers that just now are trying to connect the OT with the IT together. And so what we learn is really, at the Edge there are different personas. The person that decides what to do at the Edge is not the normal IT administrator, is not the normal telco. >> Who is it? Is it an engineer, or is it... >> It's, for example, the store manager. >> Yeah. >> It's, for example, the, the person that is responsible for the manufacturing process. Those people are not technology people by any means. But they have a business goal in mind. Their goal is, "I want to raise my productivity by 30%," hence, I need to have a preventive maintenance solution. How do we prescribe this preventive maintenance solution? He doesn't prescribe the preventive maintenance solution. He goes out, he has a consultant, or himself, to deploy that solution, and he chooses different things. Now, the example that I was doing from the houses, all of us, we have connected devices. The fact that in my house, I have a solar system that produces energy, the only thing I care about is that I can read how much energy I produce on my phone, and how much energy I send to get paid back. That's the only thing. The fact that inside there is a compute that is called Dell or other things is not important to me. Same persona. Now, if I can solve the security challenge that the SI, or the user, needs to implement this technology, because it goes everywhere. And I can manage this extensively, and I can put the supply chain of Dell on top of that. And I can go every part in the world, no matter if I am in Papua New Guinea, or I have an oil rig in Texas, that's the winning strategy. That's why people, they are very interested to the, including Telco, the B2B business in telco is looking very, very hard at how they recoup the investment in 5G. One of the ways is to reach out with solutions. 
And if I can control and deploy things, more than just SD-WAN or other things, or private mobility, that's the key. >> So, so you have, so you said manufacturing, retail, automotive, oil and gas, you have solutions for each of those, or you're building those, or... >> Right now we have a solution for manufacturing, with, for example, PTC. That is the biggest company. It's actually based in Boston. >> Yeah. Yeah, it is. There's a company that the market's just coming right to them. >> We have a, very interesting, another solution with Litmus, that is a startup that, that also does manufacturing aggregation. We have retail with Deep North. So we can do detection in the store, how many people pass, how many people are doing what, all of that. And all these solutions that will be, when we will have Frontier in the market, will be also in Frontier. We are also expanding to energy, and we're going vertical by vertical. But what did we really learn, right? You said, you know, you are an IT company. What, to me, the Edge is a pre-virtualization era. It's like when we had, you know, I'm, I've been in the company for 24 years coming from EMC. The reality was before there was virtualization, everybody was starting his silo. Nobody thought about, "Okay, I can run this thing together with security and everything, but I need to do it." Because otherwise in a manufacturing, or in a shop, I can end up with thousands of devices, just because someone tells me, I'm a, I'm a store manager, I don't know better. I take this video surveillance application, I take these things, I take a, you know, smart building solution, suddenly I have five, six, seven different infrastructures to run this thing because someone says so. So we are here to democratize the Edge, to secure the Edge, and to expand. That's the idea. >> So, the Frontier platform is really the horizontal platform. And you'll build specific solutions for verticals. On top of that, you'll, then I, then the beauty is ISVs come in. >> Yes. >> 'Cause it's open, and the developers. >> We have a self-certification program already for our solutions, as well, for the current solutions, but also for Frontier. >> What does that involve? Self-certification. You go through you, you go through some- >> It's basically, an ISV can come, we have access to a lab, they can test the thing. If they pass the first screen, then they can become part of our ecosystem very easily. >> Ah. >> So they don't need to spend days or months with us to try to architect the thing. >> So they get the imprimatur of being certified. >> They get the Dell brand associated with it. Maybe there's some go-to-market benefits- >> Yes. >> As well. Cool. What else do we need to know? >> So, one thing I, well one thing I just want to stress, you know, when we say horizontal platform, really, the Edge is really a, a distributed edge computing problem, right? And you need to almost create a mesh of different computing locations. So for example, even though Dell has Edge optimized infrastructure, that we're going to deploy and lifecycle manage, customers may also have compute solutions, existing compute solutions in their data center, or at a co-location facility that are compute destinations. Project Frontier will connect to those private cloud stacks. They'll also connect to multiple public cloud stacks. 
And then, what they can do, is the solutions that we talked about, they construct that using an open-based, you know, protocol, template, that describes that distributed application that produces that outcome. And then through orchestration, we can then orchestrate across all of these locations to produce that outcome. That's what the platform's doing. >> So it's a compute mesh, is what you just described? >> Yeah, it's, it's a, it's a software orchestration mesh. >> Okay. >> Right. And allows customers to take advantage of their existing investments. Also allows them to, to construct solutions based on the ISV of their choice. We're offering solutions like Pierluca had talked about, you know, in manufacturing with Litmus and PTC, but they could put together another use case based on another ISV. >> Is there a data mesh analog here? >> The data mesh analog would run on top of that. We don't offer that as part of Frontier today, but we do have teams working inside of Dell that are working on this technology. But again, if there's other data mesh technology or packages that they want to deploy as a solution, if you will, on top of Frontier, Frontier's extensible in that way as well. >> The open nature of Frontier is there's a, doesn't, doesn't care. It's just a node on the mesh. >> Yeah. >> Right. Now, of course you'd rather, you'd ideally want it to be Dell technology, and you'll make the business case as to why it should be. >> They get additional benefits if it's Dell. Pierluca talked a lot about, you know, deploying infrastructure outside the walls of an IT data center. You know, this stuff can be tampered with. Somebody can move it to another room, somebody can open it up. In the supply chain, with, you know, resellers that are adding additional people, someone can open these devices up. We're actually deploying using an Edge technology called Secure Device Onboarding. And it solves a number of things for us. We, as a manufacturer, can initialize the roots of trust in the Dell hardware, such that we can validate, you know, tamper detection throughout the supply chain, and securely transfer ownership. And that's different. That is not an IT technique. That's an Edge technique. And that's just one example. >> That's interesting. I've talked to other people in IT about how they're using that technique. So it's, it's trickling over to that side of the business. >> I'm almost curious about the friction that you, that you encounter because the, you know, you paint a picture of a, of a brave new world, a brave new future. Ideally, in a healthy organization, there's a CTO, or at least maybe a CIO with a CTO mindset. They're seeking to leverage technology in the service of whatever the mission of the organization is. But they've got responsibilities to keep the lights on, as well as innovate. In that mix, what are you seeing as the inhibitors? What's, what's the pushback against Frontier that you're seeing in most cases? Is it, what, what is it? >> Inside of Dell? >> No, not, I'm saying out, I'm saying with- >> Market friction. >> Market, market, market friction. What is the pushback? >> I think, you know, as I explained, do-it-yourself is one of the things that probably is the biggest inhibitor, because some people, they think that they are better already. They invest a lot in this, and they have the content. But those are, again, silo solutions. 
So, if you go into some of the huge things that they already established, thousands of stores and stuff like that, there is an opportunity there, because also they want to have a refresh cycle. So when we speak about software, software, software, when you are at the Edge, the software needs to run on something that is there. So the combination that we offer of controlling the security of the hardware, plus the operating system, and providing an end-to-end platform, allows them to solve a lot of problems that today they are doing by themselves. Now, I met a lot of customers, some of them, one actually here in Spain, I will not say the name, but it's a large automotive. They have the same challenge. They try to build, but the problem is this is just for them. And they want to use something that is backed up and provided with the Dell service, the Dell capability of supply chain in all the world, and the diversity of the portfolio we have. These guys right now, they need to go out and find different types of compute, or try to adjust things, or they need to have 20 people there to just prepare the device. We will take out all of this. So I think the, the majority of the pushback is about people that already established infrastructure, and they want to use that. But really, there is an opportunity here. Because the, as I said, the IT/OT came together now, it's a reality. Three years ago when we had our initiative, as you've pointed out, sarcastically. We, we- >> Just trying to be honest. (laughing) >> I can't let you get away with that. >> And we, we failed because it was too early. And we were too focused on, on going, pushing ourselves to the boundary of the IoT. This platform is open. You want to run EdgeX, you run EdgeX, you want OpenVINO, you want Microsoft IoT, you run Microsoft IoT. We don't prescribe the top. We are locking down the bottom. >> What you described is the inertia of, of sunk dollars, or sunk euros into an infrastructure, and now they're hanging onto that. >> Yeah. >> But, I mean, you know, I, when we say horizontal, we think scale, we think low cost, at volume. That will, that will win every time. >> There is a simplicity at scale, right? There is a, all the things. >> And the, and the economics just overwhelm that siloed solution. >> And- >> That's inevitable. >> You know, if you want to apply security across the entire thing, if you don't have a best practice, and a click that you can do that, or bring down an application that you need, you need to touch each one of these silos. So, they don't know yet, but we are going to be there helping them. So there is no pushback. Actually, in this particular example I did, this guy said, you know, there are a lot of people that come here. Nobody really described the things we went through. So we are on the right track. >> Guys, great conversation. We really appreciate you coming on "theCUBE." >> Thank you. >> Pleasure to have you both. >> Okay. >> Thank you. >> All right. And thank you for watching. Dave Vellante for Dave Nicholson. We're live at the Fira. We're winding up day four. Keep it right there. Go to siliconangle.com. John Furrier's got all the news on "theCUBE.net." We'll be right back right after this break. "theCUBE," at MWC23. (outro music)
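Editor's note: Dan Cummins describes Project Frontier above as a software orchestration mesh that takes an open, template-based description of a distributed application and deploys it across edge sites, data centers, and public clouds. The sketch below only illustrates that general pattern; the template fields, class names, and placement logic are hypothetical and are not Project Frontier's actual format or API.

```python
# Illustrative sketch of a template-driven orchestration mesh: a declarative
# description of a distributed application is mapped onto computing locations
# (edge sites, a data center, public clouds). All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Location:
    name: str   # e.g. "store-042", "colo-east", "aws-us-east-1"
    kind: str   # "edge", "datacenter", or "public-cloud"

@dataclass
class Component:
    name: str        # e.g. "video-analytics"
    image: str       # container image to run
    placement: str   # which kind of location it should land on

# Hypothetical application template: vision at the edge, aggregation in a
# data center, long-term archiving in a public cloud.
app_template = [
    Component("video-analytics", "registry.example/vision:1.2", "edge"),
    Component("oee-aggregator", "registry.example/oee:0.9", "datacenter"),
    Component("cold-archive", "registry.example/archiver:2.0", "public-cloud"),
]

def plan(template, locations):
    """Naive deployment plan: each component is scheduled onto every
    location whose kind matches its placement constraint."""
    return [(c.name, loc.name)
            for c in template
            for loc in locations
            if loc.kind == c.placement]

sites = [
    Location("store-042", "edge"),
    Location("store-043", "edge"),
    Location("colo-east", "datacenter"),
    Location("aws-us-east-1", "public-cloud"),
]

for component, site in plan(app_template, sites):
    print(f"deploy {component} -> {site}")
```

A real platform would layer security policy, lifecycle management, and zero-touch, zero-trust onboarding on top of a plan like this, which is where the conversation above focuses.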

Published Date : Mar 2 2023


Breaking Analysis: Enterprise Technology Predictions 2023


 

(upbeat music beginning) >> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR, this is "Breaking Analysis" with Dave Vellante. >> Making predictions about the future of enterprise tech is more challenging if you strive to lay down forecasts that are measurable. In other words, if you make a prediction, you should be able to look back a year later and say, with some degree of certainty, whether the prediction came true or not, with evidence to back that up. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this breaking analysis, we aim to do just that, with predictions about the macro IT spending environment, cost optimization, security, lots to talk about there, generative AI, cloud, and of course supercloud, blockchain adoption, data platforms, including commentary on Databricks, snowflake, and other key players, automation, events, and we may even have some bonus predictions around quantum computing, and perhaps some other areas. To make all this happen, we welcome back, for the third year in a row, my colleague and friend Eric Bradley from ETR. Eric, thanks for all you do for the community, and thanks for being part of this program. Again. >> I wouldn't miss it for the world. I always enjoy this one. Dave, good to see you. >> Yeah, so let me bring up this next slide and show you, actually come back to me if you would. I got to show the audience this. These are the inbounds that we got from PR firms starting in October around predictions. They know we do prediction posts. And so they'll send literally thousands and thousands of predictions from hundreds of experts in the industry, technologists, consultants, et cetera. And if you bring up the slide I can show you sort of the pattern that developed here. 40% of these thousands of predictions were from cyber. You had AI and data. If you combine those, it's still not close to cyber. Cost optimization was a big thing. Of course, cloud, some on DevOps, and software. Digital... Digital transformation got, you know, some lip service and SaaS. And then there was other, it's kind of around 2%. So quite remarkable, when you think about the focus on cyber, Eric. >> Yeah, there's two reasons why I think it makes sense, though. One, the cybersecurity companies have a lot of cash, so therefore the PR firms might be working a little bit harder for them than some of their other clients. (laughs) And then secondly, as you know, for multiple years now, when we do our macro survey, we ask, "What's your number one spending priority?" And again, it's security. It just isn't going anywhere. It just stays at the top. So I'm actually not that surprised by that little pie chart there, but I was shocked that SaaS was only 5%. You know, going back 10 years ago, that would've been the only thing anyone was talking about. >> Yeah. So true. All right, let's get into it. First prediction, we always start with kind of tech spending. Number one is tech spending increases between four and 5%. ETR has currently got it at 4.6% coming into 2023. This has been a consistently downward trend all year. We started, you know, much, much higher as we've been reporting. Bottom line is the fed is still in control. They're going to ease up on tightening, is the expectation, they're going to shoot for a soft landing. But you know, my feeling is this slingshot economy is going to continue, and it's going to continue to confound, whether it's supply chains or spending. 
The, the interesting thing about the ETR data, Eric, and I want you to comment on this, the largest companies are the most aggressive to cut. They're laying off, smaller firms are spending faster. They're actually growing at a much larger, faster rate as are companies in EMEA. And that's a surprise. That's outpacing the US and APAC. Chime in on this, Eric. >> Yeah, I was surprised on all of that. First on the higher level spending, we are definitely seeing it coming down, but the interesting thing here is headlines are making it worse. The huge research shop recently said 0% growth. We're coming in at 4.6%. And just so everyone knows, this is not us guessing, we asked 1,525 IT decision-makers what their budget growth will be, and they came in at 4.6%. Now there's a huge disparity, as you mentioned. The Fortune 500, global 2000, barely at 2% growth, but small, it's at 7%. So we're at a situation right now where the smaller companies are still playing a little bit of catch up on digital transformation, and they're spending money. The largest companies that have the most to lose from a recession are being more trepidatious, obviously. So they're playing a "Wait and see." And I hope we don't talk ourselves into a recession. Certainly the headlines and some of their research shops are helping it along. But another interesting comment here is, you know, energy and utilities used to be called an orphan and widow stock group, right? They are spending more than anyone, more than financials insurance, more than retail consumer. So right now it's being driven by mid, small, and energy and utilities. They're all spending like gangbusters, like nothing's happening. And it's the rest of everyone else that's being very cautious. >> Yeah, so very unpredictable right now. All right, let's go to number two. Cost optimization remains a major theme in 2023. We've been reporting on this. You've, we've shown a chart here. What's the primary method that your organization plans to use? You asked this question of those individuals that cited that they were going to reduce their spend and- >> Mhm. >> consolidating redundant vendors, you know, still leads the way, you know, far behind, cloud optimization is second, but it, but cloud continues to outpace legacy on-prem spending, no doubt. Somebody, it was, the guy's name was Alexander Feiglstorfer from Storyblok, sent in a prediction, said "All in one becomes extinct." Now, generally I would say I disagree with that because, you know, as we know over the years, suites tend to win out over, you know, individual, you know, point products. But I think what's going to happen is all in one is going to remain the norm for these larger companies that are cutting back. They want to consolidate redundant vendors, and the smaller companies are going to stick with that best of breed and be more aggressive and try to compete more effectively. What's your take on that? >> Yeah, I'm seeing much more consolidation in vendors, but also consolidation in functionality. We're seeing people building out new functionality, whether it's, we're going to talk about this later, so I don't want to steal too much of our thunder right now, but data and security also, we're seeing a functionality creep. So I think there's further consolidation happening here. I think niche solutions are going to be less likely, and platform solutions are going to be more likely in a spending environment where you want to reduce your vendors. You want to have one bill to pay, not 10. 
Another thing on this slide, real quick if I can before I move on, is we had a bunch of people write in, and some of the answer options that aren't on this graph but did get cited a lot, unfortunately, are the obvious ones: reduction in staff, hiring freezes, and delaying hardware were three of the top write-ins. And another one was offshore outsourcing. So in addition to what we're seeing here, there were a lot of write-in options, and I just thought it would be important to state that, but essentially the cost optimization is by far the highest one, and it's growing. So it's actually increased in our citations over the last year. >> And yeah, specifically consolidating redundant vendors. And so I actually thank you for bringing that up, 'cause I had asked you, Eric, is there any evidence that repatriation is going on and we don't see it in the numbers, we don't see it even in the other, there was, I think, very little or no mention of cloud repatriation, even though it might be happening in a smattering. >> Not a single mention, not one single mention. I went through it for you. Yep. Not one write-in. >> All right, let's move on. Number three, security leads M&A in 2023. Now you might say, "Oh, well that's a layup," but let me set this up, Eric, because I didn't really do a great job with the slide. I hid what you've done, because you basically took, this is from the emerging technology survey with 1,181 responses from November. And what we did is we took Palo Alto and looked at the overlap in Palo Alto Networks accounts with these vendors that were showing on this chart. And Eric, I'm going to ask you to explain why we put a circle around OneTrust, but let me just set it up, and then have you comment on the slide and take, give us more detail. We're seeing private company valuations are off, you know, 10 to 40%. We saw Snyk do a down round, but pretty good actually, only down 12%. We've seen much higher down rounds. Palo Alto Networks we think is going to get busy. Again, they're an inquisitive company, they've been sort of quiet lately, and we think CrowdStrike, Cisco, Microsoft, Zscaler, we're predicting all of those will make some acquisitions, and we're thinking that the targets are somewhere in this mess of security taxonomy. Other thing we're predicting: AI meets cyber big time in 2023. We're probably going to see some acquisitions of those companies that are leaning into AI. We've seen some of that with Palo Alto. And then, you know, your comment to me, Eric, was "The RSA conference is going to be insane, hopping mad, crazy this April," (Eric laughing) but give us your take on this data, and why the red circle around OneTrust? Take us back to that slide if you would, Alex. >> Sure. There's a few things here. First, let me explain what we're looking at. So because we separate the public companies and the private companies into two separate surveys, this allows us the ability to cross-reference that data. So what we're doing here is, in our public survey, the TSIS, everyone who cited some spending with Palo Alto, meaning they're a Palo Alto customer, we then cross-reference that with the private tech companies. Who else are they spending with? So what you're seeing here is an overlap. These companies that we have circled are doing the best in Palo Alto's accounts. Now, Palo Alto went and bought Twistlock a few years ago, which this data slide predicted, to be quite honest. And so I don't know if they necessarily are going to go after Snyk. Snyk, sorry. 
They already have something in that space. What they do need, however, is more on the authentication space. So I'm looking at OneTrust, with a 45% overlap in their overall net sentiment. That is a company that's already existing in their accounts and could be very synergistic to them. BeyondTrust as well, authentication identity. This is something that Palo needs to do to move more down that zero trust path. Now why did I pick Palo first? Because usually they're very inquisitive. They've been a little quiet lately. Secondly, if you look at the backdrop in the markets, the IPO freeze isn't going to last forever. Sooner or later, the IPO markets are going to open up, and some of these private companies are going to tap into public equity. In the meantime, however, cash funding on the private side is drying up. If they need another round, they're not going to get it, and they're certainly not going to get it at the valuations they were getting. So we're seeing valuations maybe come down where they're a touch more attractive, and Palo knows this isn't going to last forever. Cisco knows that, CrowdStrike, Zscaler, all these companies that are trying to make a push to become that vendor that you're consolidating in, around, they have a chance now, they have a window where they need to go make some acquisitions. And that's why I believe leading up to RSA, we're going to see some movement. I think it's going to pretty, a really exciting time in security right now. >> Awesome. Thank you. Great explanation. All right, let's go on the next one. Number four is, it relates to security. Let's stay there. Zero trust moves from hype to reality in 2023. Now again, you might say, "Oh yeah, that's a layup." A lot of these inbounds that we got are very, you know, kind of self-serving, but we always try to put some meat in the bone. So first thing we do is we pull out some commentary from, Eric, your roundtable, your insights roundtable. And we have a CISO from a global hospitality firm says, "For me that's the highest priority." He's talking about zero trust because it's the best ROI, it's the most forward-looking, and it enables a lot of the business transformation activities that we want to do. CISOs tell me that they actually can drive forward transformation projects that have zero trust, and because they can accelerate them, because they don't have to go through the hurdle of, you know, getting, making sure that it's secure. Second comment, zero trust closes that last mile where once you're authenticated, they open up the resource to you in a zero trust way. That's a CISO of a, and a managing director of a cyber risk services enterprise. Your thoughts on this? >> I can be here all day, so I'm going to try to be quick on this one. This is not a fluff piece on this one. There's a couple of other reasons this is happening. One, the board finally gets it. Zero trust at first was just a marketing hype term. Now the board understands it, and that's why CISOs are able to push through it. And what they finally did was redefine what it means. Zero trust simply means moving away from hardware security, moving towards software-defined security, with authentication as its base. The board finally gets that, and now they understand that this is necessary and it's being moved forward. The other reason it's happening now is hybrid work is here to stay. We weren't really sure at first, large companies were still trying to push people back to the office, and it's going to happen. 
The pendulum will swing back, but hybrid work's not going anywhere. By basically on our own data, we're seeing that 69% of companies expect remote and hybrid to be permanent, with only 30% permanent in office. Zero trust works for a hybrid environment. So all of that is the reason why this is happening right now. And going back to our previous prediction, this is why we're picking Palo, this is why we're picking Zscaler to make these acquisitions. Palo Alto needs to be better on the authentication side, and so does Zscaler. They're both fantastic on zero trust network access, but they need the authentication software defined aspect, and that's why we think this is going to happen. One last thing, in that CISO round table, I also had somebody say, "Listen, Zscaler is incredible. "They're doing incredibly well pervading the enterprise, "but their pricing's getting a little high," and they actually think Palo Alto is well-suited to start taking some of that share, if Palo can make one move. >> Yeah, Palo Alto's consolidation story is very strong. Here's my question and challenge. Do you and me, so I'm always hardcore about, okay, you've got to have evidence. I want to look back at these things a year from now and say, "Did we get it right? Yes or no?" If we got it wrong, we'll tell you we got it wrong. So how are we going to measure this? I'd say a couple things, and you can chime in. One is just the number of vendors talking about it. That's, but the marketing always leads the reality. So the second part of that is we got to get evidence from the buying community. Can you help us with that? >> (laughs) Luckily, that's what I do. I have a data company that asks thousands of IT decision-makers what they're adopting and what they're increasing spend on, as well as what they're decreasing spend on and what they're replacing. So I have snapshots in time over the last 11 years where I can go ahead and compare and contrast whether this adoption is happening or not. So come back to me in 12 months and I'll let you know. >> Now, you know, I will. Okay, let's bring up the next one. Number five, generative AI hits where the Metaverse missed. Of course everybody's talking about ChatGPT, we just wrote last week in a breaking analysis with John Furrier and Sarjeet Joha our take on that. We think 2023 does mark a pivot point as natural language processing really infiltrates enterprise tech just as Amazon turned the data center into an API. We think going forward, you're going to be interacting with technology through natural language, through English commands or other, you know, foreign language commands, and investors are lining up, all the VCs are getting excited about creating something competitive to ChatGPT, according to (indistinct) a hundred million dollars gets you a seat at the table, gets you into the game. (laughing) That's before you have to start doing promotion. But he thinks that's what it takes to actually create a clone or something equivalent. We've seen stuff from, you know, the head of Facebook's, you know, AI saying, "Oh, it's really not that sophisticated, ChatGPT, "it's kind of like IBM Watson, it's great engineering, "but you know, we've got more advanced technology." We know Google's working on some really interesting stuff. But here's the thing. ETR just launched this survey for the February survey. It's in the field now. We circle open AI in this category. They weren't even in the survey, Eric, last quarter. 
So 52% of the ETR survey respondents indicated a positive sentiment toward OpenAI. I added up all the sort of different bars, we could double click on that. And then I got this inbound from Scott Stephenson of Deepgram. He said "AI is recession-proof." I don't know if that's the case, but it's a good quote. So bring this back up and take us through this. Explain this chart for us, if you would. >> First of all, I like Scott's quote better than the Facebook one. I think that's some sour grapes. Meta just spent an insane amount of money on the Metaverse and that's a dud. Microsoft just spent money on OpenAI and it is hot, undoubtedly hot. We've only been in the field with our current ETS survey for a week. So my caveat is it's preliminary data, but I don't care if it's preliminary data. (laughing) We're getting a sneak peek here at what is the number one net sentiment and mindshare leader in the entire machine-learning AI sector within a week. It's beating Data- >> 600. 600 in. >> It's beating Databricks. And we all know Databricks is a huge established enterprise company, not only in machine-learning AI, but it's in the top 10 in the entire survey. We have over 400 vendors in this survey. It's number eight overall, already. In a week. This is not hype. This is real. And I could go on the NLP stuff for a while. Not only are we seeing it here in OpenAI and machine-learning and AI, but we're seeing NLP in security. It's huge in email security. It's completely transforming that area. It's one of the reasons I thought Palo might take Abnormal out. They're doing such a great job with NLP in this email side, and also in the data prep tools. NLP is going to take out data prep tools. If we have time, I'll discuss that later. But yeah, this is, to me this is a no-brainer, and we're already seeing it in the data. >> Yeah, John Furrier called, you know, the ChatGPT introduction. He said it reminded him of the Netscape moment, when we all first saw Netscape Navigator and went, "Wow, it really could be transformative." All right, number six, the cloud expands to supercloud as edge computing accelerates, and CloudFlare is a big winner in 2023. We've reported obviously on cloud, multi-cloud, supercloud and CloudFlare, basically saying what multi-cloud should have been. We pulled this quote from Atif Khan, who is the founder and CTO of Alkira, thanks, one of the inbounds, thank you. "In 2023, highly distributed IT environments will become more the norm as organizations increasingly deploy hybrid cloud, multi-cloud and edge settings..." Eric, from one of your round tables, "If my sources from edge computing are coming from the cloud, that means I have my workloads running in the cloud. There is no one better than CloudFlare." That's a senior director of IT architecture at a huge financial firm. And then your analysis shows CloudFlare really growing in pervasion, that sort of market presence in the dataset, dramatically, to near 20%, leading, I think you had told me that they're even ahead of Google Cloud in terms of momentum right now. >> That was probably the biggest shock to me in our January 2023 TSIS, which covers the public companies in the cloud computing sector. CloudFlare has now overtaken GCP in overall spending, and I was shocked by that. It's already extremely pervasive in networking, of course, for the edge networking side, and also in security. 
This is the number one leader in SASE, web application firewall, DDoS, bot protection, by your definition of supercloud, which we just did a couple of weeks ago, and I really enjoyed that by the way, Dave. I think CloudFlare is the one that fits your definition best, because it's bringing all of these aspects together, and most importantly, it's cloud agnostic. It does not need to rely on Azure or AWS to do this. It has its own cloud. So I just think, when we look at your definition of supercloud, CloudFlare is the poster child. >> You know, what's interesting about that too, is a lot of people are poo-pooing CloudFlare, "Ah, it's, you know, really kind of not that sophisticated." "You don't have as many tools," but to your point, you can have those tools in the cloud. Cloudflare's doing serverless on steroids, trying to keep things really simple, doing a phenomenal job at, you know, various locations around the world. And they're definitely one to watch. Somebody put them on my radar (laughing) a while ago and said, "Dave, you got to do a breaking analysis on CloudFlare." And so I want to thank that person. I can't really name them, 'cause they work inside of a giant hyperscaler. But- (Eric laughing) (Dave chuckling) >> Real quickly, if I can, from a competitive perspective too, who else is there? They've already taken share from Akamai, and Fastly is really their only other direct comp, and they're not there. And these guys are in pole position and they're the only game in town right now. I just, I don't see it slowing down. >> I thought one of your comments from your roundtable I was reading, one of the folks said, you know, CloudFlare, if my workloads are in the cloud, they are, you know, dominant, they said, not as strong with on-prem. And so Akamai is doing better there. I'm like, "Okay, where would you want to be?" (laughing) >> Yeah, which one of those two would you rather be? >> Right? Anyway, all right, let's move on. Number seven, blockchain continues to look for a home in the enterprise, but devs will slowly begin to adopt in 2023. You know, blockchains have got a lot of buzz, obviously crypto is, you know, the killer app for blockchain. A senior IT architect in financial services from one of your insight roundtables said, quote, "For enterprises to adopt a new technology, there have to be proven turnkey solutions. My experience in talking with my peers is, blockchain is still an open-source component where you have to build around it." Now I want to thank Ravi Mayuram, who's the CTO of Couchbase, who sent in, you know, one of the predictions. He said, "DevOps will adopt blockchain, specifically Ethereum." And he referenced actually in his email to me, Solidity, which is the programming language for Ethereum, "will be in every DevOps pro's playbook, mirroring the boom in machine-learning. Newer programming languages like Solidity will enter the toolkits of devs." His point there, you know, Solidity, for those of you who don't know, you know, Bitcoin is not programmable. Solidity, you know, came out and that was their whole shtick, and they've been improving that, and so forth. But it, Eric, it's true, it really hasn't found its home despite, you know, the potential for smart contracts. IBM's pushing it, VMware has had announcements, and others, it really hasn't found its way in the enterprise yet. >> Yeah, and I got to be honest, I don't think it's going to, either. 
So when we did our top trends series, this was basically chosen as an anti-prediction, I would guess, that it just continues to not gain hold. And the reason why was that first comment, right? It's very much a niche solution that requires a ton of custom work around it. You can't just plug and play it. And at the end of the day, let's be very real what this technology is, it's a database ledger, and we already have database ledgers in the enterprise. So why is this a priority to move to a different database ledger? It's going to be very niche cases. I like the CTO comment from Couchbase about it being adopted by DevOps. I agree with that, but it has to be DevOps in a very specific use case, and a very sophisticated use case in financial services, most likely. And that's not across the entire enterprise. So I just think it's still going to struggle to get its foothold for a little bit longer, if ever. >> Great, thanks. Okay, let's move on. Number eight, AWS, Databricks, Google, Snowflake lead the data charge, with Microsoft keeping it simple. So let's unpack this a little bit. This is the shared accounts peer position for, I pulled data platforms in for analytics, machine-learning and AI, and database. So I could grab all these accounts or these vendors and see how they compare in those three sectors. Analytics, machine-learning and database. Snowflake and Databricks, you know, they're on a crash course, as you and I have talked about. They're battling to be the single source of truth in analytics. There's going to be a big focus. They've already started. It's going to be accelerated in 2023 on open formats. Iceberg, Python, you know, they're all the rage. We heard about Iceberg at Snowflake Summit, last summer or last June. Not a lot of people had heard of it, but of course the Databricks crowd, who knows it well. A lot of other open source tooling. There's a company called dbt Labs, which you're going to talk about in a minute. George Gilbert put them on our radar. We just had Tristan Handy, the CEO of dbt Labs, on at Supercloud last week. They are a new disruptor in data. They're essentially API-ifying, if you will, KPIs inside the data warehouse and dramatically simplifying that whole data pipeline. So really, you know, the ETL guys should be shaking in their boots with them. Coming back to the slide. Google really remains focused on BigQuery adoption. Customers have complained to me that they would like to use Snowflake with Google's AI tools, but they're being forced to go to BigQuery. I got to ask Google about that. AWS continues to stitch together its bespoke data stores, that's gone down that "Right tool for the right job" path. David Floyer two years ago said, "AWS absolutely is going to have to solve that problem." We saw them start to do it at re:Invent, bringing together zero-ETL between Aurora and Redshift, and really trying to simplify those worlds. There's going to be more of that. And then Microsoft, they're just making it cheap and easy to use their stuff, you know, despite some of the complaints that we hear in the community, you know, about things like Cosmos, but Eric, your take? >> Yeah, my concern here is that Snowflake and Databricks are fighting each other, and it's allowing AWS and Microsoft to kind of catch up against them, and I don't know if that's the right move for either of those two companies individually. Azure and AWS are building out functionality. Are they as good? No, they're not. 
The other thing to remember too is that AWS and Azure get paid anyway, because both Databricks and Snowflake run on top of 'em. So (laughing) they're basically collecting their toll, while these two fight it out with each other, and they build out functionality. I think they need to stop focusing on each other, a little bit, and think about the overall strategy. Now for Databricks, we know they came out first as a machine-learning AI tool. They were known better for that spot, and now they're really trying to play catch-up on that data storage compute spot, and inversely for Snowflake, they were killing it with the compute separation from storage, and now they're trying to get into the ML/AI spot. I actually wouldn't be surprised to see them make some sort of acquisition. Frank Slootman has been a little bit quiet, in my opinion there. The other thing to mention is your comment about dbt Labs. If we look at our emerging technology survey, last survey when this came out, dbt Labs, number one leader in that data integration space, I'm going to just pull it up real quickly. It looks like they had a 33% overall net sentiment to lead data analytics integration. So they are clearly growing, it's the fourth straight survey consecutively that they've grown. The other name we're seeing there a little bit is Cribl, but dbt Labs is by far the number one player in this space. >> All right. Okay, cool. Moving on, let's go to number nine: automation sees a resurgence in 2023. We're showing again data. The x axis is overlap or presence in the dataset, and the vertical axis is shared net score. Net score is a measure of spending momentum. As always, you've seen UiPath and Microsoft Power Automate up and to the right. That red line, that 40% line is generally considered elevated. UiPath is really separating, creating some distance from Automation Anywhere, they, you know, previous quarters they were much closer. Microsoft Power Automate came on the scene in a big way, they loom large with this "Good enough" approach. I will say this, I, somebody sent me results of a (indistinct) survey, which showed UiPath actually had more mentions than Power Automate, which was surprising, but I think that's not been the case in the ETR data set. We're definitely seeing a shift from back office to front office kind of workloads. Having said that, software testing is emerging as a mainstream use case, we're seeing ML and AI become embedded in end-to-end automations, and low-code is serving the line of business. And so this, we think, is going to increasingly have appeal to organizations in the coming year, who want to automate as much as possible and not necessarily, we've seen a lot of layoffs in tech, and people... You're going to have to fill the gaps with automation. That's a trend that's going to continue. >> Yep, agreed. At first that comment about Microsoft Power Automate having less citations than UiPath, that's shocking to me. I'm looking at my chart right here where Microsoft Power Automate was cited by over 60% of our entire survey takers, and UiPath at around 38%. Now don't get me wrong, 38% pervasion's fantastic, but you know you're not going to beat an entrenched Microsoft. So I don't really know where that comment came from. So UiPath, looking at it alone, it's doing incredibly well. It had a huge rebound in its net score this last survey. It had dropped going through the back half of 2022, but we saw a big spike in the last one. So it's got a net score of over 55%. 
A lot of people citing adoption and increasing. So that's really what you want to see for a name like this. The problem is that just Microsoft is doing its playbook. At the end of the day, I'm going to do a POC, why am I going to pay more for UiPath, or even take on another separate bill, when we know everyone's consolidating vendors, if my license already includes Microsoft Power Automate? It might not be perfect, it might not be as good, but what I'm hearing all the time is it's good enough, and I really don't want another invoice. >> Right. So how does UiPath, you know, and Automation Anywhere, how do they compete with that? Well, the way they compete with it is they got to have a better product. They got a product that's 10 times better. You know, they- >> Right. >> they're not going to compete based on where the lowest cost, Microsoft's got that locked up, or where the easiest to, you know, Microsoft basically give it away for free, and that's their playbook. So that's, you know, up to UiPath. UiPath brought on Rob Ensslin, I've interviewed him. Very, very capable individual, is now Co-CEO. So he's kind of bringing that adult supervision in, and really tightening up the go to market. So, you know, we know this company has been a rocket ship, and so getting some control on that and really getting focused like a laser, you know, could be good things ahead there for that company. Okay. >> One of the problems, if I could real quick Dave, is what the use cases are. When we first came out with RPA, everyone was super excited about like, "No, UiPath is going to be great for super powerful "projects, use cases." That's not what RPA is being used for. As you mentioned, it's being used for mundane tasks, so it's not automating complex things, which I think UiPath was built for. So if you were going to get UiPath, and choose that over Microsoft, it's going to be 'cause you're doing it for more powerful use case, where it is better. But the problem is that's not where the enterprise is using it. The enterprise are using this for base rote tasks, and simply, Microsoft Power Automate can do that. >> Yeah, it's interesting. I've had people on theCube that are both Microsoft Power Automate customers and UiPath customers, and I've asked them, "Well you know, "how do you differentiate between the two?" And they've said to me, "Look, our users and personal productivity users, "they like Power Automate, "they can use it themselves, and you know, "it doesn't take a lot of, you know, support on our end." The flip side is you could do that with UiPath, but like you said, there's more of a focus now on end-to-end enterprise automation and building out those capabilities. So it's increasingly a value play, and that's going to be obviously the challenge going forward. Okay, my last one, and then I think you've got some bonus ones. Number 10, hybrid events are the new category. Look it, if I can get a thousand inbounds that are largely self-serving, I can do my own here, 'cause we're in the events business. (Eric chuckling) Here's the prediction though, and this is a trend we're seeing, the number of physical events is going to dramatically increase. That might surprise people, but most of the big giant events are going to get smaller. The exception is AWS with Reinvent, I think Snowflake's going to continue to grow. 
So there are examples of physical events that are growing, but generally, most of the big ones are getting smaller, and there's going to be many more smaller, intimate regional events and road shows. These micro-events are going to be stitched together. Digital is becoming a first class citizen, so people really have to get their digital acts together, and brands are prioritizing earned media, and they're beginning to build their own news networks, going direct to their customers. And so that's a trend we see, and we're right in the middle of it, Eric. You mentioned RSA, I think that's perhaps going to be one of those crazy ones that continues to grow. It shrunk, and then, you know, 'cause last year- >> Yeah, it did shrink. >> right, it was the last one before the pandemic, and then they sort of made another run at it last year. It was smaller but it was very vibrant, and I think this year's going to be huge. Mobile World Congress is another one, we're going to be there end of Feb. That's obviously a big, big show, but in general, the brands and the technology vendors, even Oracle, are going to scale down. I don't know about Salesforce. We'll see. You had a couple of bonus predictions. Quantum and maybe some others? Bring us home. >> Yeah, sure. I got a few more. I think we touched upon one, but I definitely think the data prep tools are facing extinction, unfortunately; you know, the Talends and Informaticas are some of those names. The problem there is that the BI tools are kind of including data prep in them already. An example of that is Tableau Prep Builder, and then in addition, advanced NLP is being worked in as well. ThoughtSpot and Tellius both often cite that as their selling point, Tableau has Ask Data, Qlik has Insight Bot, so you don't have to really be intelligent on data prep anymore. A regular business user can just self-query, using either the search bar or even just speaking what they need, and these tools are kind of doing the data prep for them. I don't think that's an out in left field type of prediction, but the time is nigh. The other one I would also state is that I think knowledge graphs are going to break through this year. Neo4j in our survey is growing in pervasion and mindshare. So more and more people are citing it, AWS Neptune is getting its act together, and we're seeing that spending intentions are growing there. TigerGraph is also growing in our survey sample. I just think that the time is now for knowledge graphs to break through, and if I had to do one more, I'd say real-time streaming analytics moves from the very, very rich big enterprises downstream; more people are actually going to be moving towards real-time streaming, again, because the data prep tools and the data pipelines have gotten easier to use, and I think the ROI on real-time streaming is obviously there. So those are three that didn't make the cut, but I thought deserved an honorable mention. >> Yeah, I'm glad you did. Several weeks ago, we did an analyst prediction roundtable, if you will, a Cube session power panel with a number of data analysts, and streaming, real-time streaming, was top of mind. So glad you brought that up. Eric, as always, thank you very much. I appreciate the time you put in beforehand. I know it's been crazy, because you guys are wrapping up, you know, the last quarter survey in- >> Been a nuts three weeks for us. (laughing) >> job.
I love the fact that you're doing, you know, the ETS survey now, I think it's quarterly now, right? Is that right? >> Yep. >> Yep. So that's phenomenal. >> Four times a year. I'll be happy to jump on with you when we get that done. I know you were really impressed with that last time. >> It's unbelievable. This is so much data at ETR. Okay. Hey, that's a wrap. Thanks again. >> Take care Dave. Good seeing you. >> All right, many thanks to our team here, Alex Myerson as production, he manages the podcast force. Ken Schiffman as well is a critical component of our East Coast studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hoof is our editor-in-chief. He's at siliconangle.com. He's just a great editing for us. Thank you all. Remember all these episodes that are available as podcasts, wherever you listen, podcast is doing great. Just search "Breaking analysis podcast." Really appreciate you guys listening. I publish each week on wikibon.com and siliconangle.com, or you can email me directly if you want to get in touch, david.vellante@siliconangle.com. That's how I got all these. I really appreciate it. I went through every single one with a yellow highlighter. It took some time, (laughing) but I appreciate it. You could DM me at dvellante, or comment on our LinkedIn post and please check out etr.ai. Its data is amazing. Best survey data in the enterprise tech business. This is Dave Vellante for theCube Insights, powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis." (upbeat music beginning) (upbeat music ending)
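One of the bonus predictions above is that knowledge graphs break through this year, with Neo4j, AWS Neptune, and TigerGraph all cited. As a rough illustration of what working with one of those engines looks like in practice, here is a minimal sketch using the Neo4j Python driver (5.x); the connection details, node labels, and relationship types are placeholders, not anything drawn from the survey data.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Placeholder connection details; point these at a real Neo4j or Aura instance.
URI = "bolt://localhost:7687"
AUTH = ("neo4j", "example-password")

def top_shared_tools(tx, limit=5):
    # Hypothetical schema: (:Account)-[:USES]->(:Tool); the graph model is an assumption.
    query = (
        "MATCH (a:Account)-[:USES]->(t:Tool) "
        "RETURN t.name AS tool, count(a) AS accounts "
        "ORDER BY accounts DESC LIMIT $limit"
    )
    return [(record["tool"], record["accounts"]) for record in tx.run(query, limit=limit)]

def main():
    driver = GraphDatabase.driver(URI, auth=AUTH)
    try:
        with driver.session() as session:
            for tool, accounts in session.execute_read(top_shared_tools):
                print(f"{tool}: {accounts} accounts")
    finally:
        driver.close()

if __name__ == "__main__":
    main()
```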

Published Date : Jan 29 2023



Seamus Jones & Milind Damle


 

>>Welcome to theCube's continuing coverage of AMD's fourth generation Epyc launch. I'm Dave Nicholson and I'm joining you here in our Palo Alto Studios. We have two very interesting guests to dive into some of the announcements that have been made, and maybe take a look at this from an AI and ML perspective. Our first guest is Milind Damle. He's a senior director for software and solutions at AMD, and we're also joined by Seamus Jones, who's a director of server engineering at Dell Technologies. Welcome, gentlemen. How are you? >>Very good, thank you. >>Welcome to theCube. So let's start out really quickly. Seamus, give us a thumbnail sketch of what you do at Dell. >>Yeah, so I'm the director of technical marketing engineering here at Dell, and our team really takes a look at the technical server portfolio and solutions and ensures that we can look at the performance metrics, benchmarks, and performance characteristics, so that we can give customers a good idea of what they can expect from the server portfolio when they're looking to buy PowerEdge from Dell. >>Milind, how about you? What's new at AMD? What do you do there? >>Great to be here. Thank you for having me. At AMD, I'm the senior director of performance engineering and ISV ecosystem enablement, which is a long-winded way of saying we do a lot of benchmarks, improve performance, and demonstrate, with wonderful partners such as Seamus and Dell, the combined leverage that AMD fourth generation processors and Dell systems can bring to bear on a multitude of applications across the industry spectrum. >>Seamus, talk about that relationship a little bit more. The relationship between AMD and Dell. How far back does it go? What does it look like in practical terms? >>Absolutely. So ever since AMD reentered the server space, we've had a very close relationship. It's one of those things where we are offering solutions to our customers no matter what generation of AMD portfolio they're demanding, whether from their competitor or from AMD; we offer a portfolio of solutions that are out there. What we're finding is that with each generational improvement, they're just getting better and better. Really exciting things are happening from AMD at the moment, and as we engineer those CPU stacks into our server portfolio, we're really seeing unprecedented performance across the board. So we're excited about the history. You know, my team and Milind's team work very closely together, so much so that we're communicating almost on a daily basis around portfolio platforms and updates around the benchmark testing and validation efforts. >>So Milind, are you happy with these PowerEdge boxes that Seamus is building to house your baby? >>We are delighted. You know, it's hard to find stronger partners than Seamus and Dell. With AMD's second generation Epyc server CPUs we already had indisputable industry performance leadership, and then with the third and now the fourth generation CPUs, we've just increased our lead over the competition. We've got so many outstanding features at the platform and CPU level. Everybody focuses on the high core counts, but there's also DDR5, the memory, the I/O, and the storage subsystem.
So we believe we have a fantastic performance, performance per dollar, and performance per watt edge over the competition, and we look to partners such as Dell to help us showcase that leadership. >>Well, so Seamus- yeah, go ahead. >>What I'd add, Dave, is that through the partnership that we've had, we've been able to develop subsystems and platform features that historically we couldn't have, really things around thermals, power efficiency, and efficiency within the platform. That means that customers can get the most out of their compute infrastructure. >>So this is going to be a big question moving forward as next generation platforms are rolled out: there's the potential for people to have sticker shock. You talk about something that has eight or 12 cores in a physical enclosure versus 96 cores, and I guess the question is, do the ROI and TCO numbers look good for someone to make that upgrade? Seamus, do you want to hit that first, or are you guys integrated? >>Absolutely, yeah, sorry. Absolutely. So I'll tell you what, at the moment customers really can't afford not to upgrade, right? We've taken a look at the cost basis of keeping older infrastructure in place, let's say five or seven year old servers that are drawing more power, maybe are poorly utilized within the infrastructure, and take more and more effort and time to manage, maintain, and really keep in production. So as customers look to upgrade or refresh their platforms, what we're finding is that they can take a dynamic consolidation, sometimes 5, 7, 8 to one, depending on which platform they have historically and which one they're looking to upgrade to. Within AI specifically and machine learning frameworks, we're seeing really unprecedented performance. Milind's team partnered with us to deliver multiple benchmarks for the launch, some of which we're still continuing to see the goodness from, things like TPCx-AI as a framework, and I'm talking here specifically about the CPU-based performance. >>Even though in a lot of those AI frameworks you would also expect to have GPUs, all four of the platforms that we're offering in the AMD portfolio today offer multiple GPU options. So we're seeing a balance between a huge amount of CPU gain and performance, as well as more and more GPU offerings within the platform. That was a real challenge for us because of the thermal constraints. I mean, you think GPUs are going up to 300, 400 watts, and these CPUs at 96 cores are quite demanding thermally, but what we're able to do, through some unique smart cooling engineering within the PowerEdge portfolio, is take a look at those platforms and make the most efficient use case by having things like telemetry within the platform, so that we can dynamically change fan speeds to get customers the best performance without throttling, based on their need. >>Milind, theCube was at the Supercomputing conference in Dallas this year, Supercomputing 2022, and a lot of the discussion was around not only advances in microprocessor technology, but also advances in interconnect technology. How do you manage that sort of research partnership with Dell when you aren't strictly just focusing on the piece that you are bringing to the party?
It's kind of a potluck, you know. We mentioned PCIe Gen 5, or 5.0, whatever you want to call it, new DDR, storage cards, NICs, accelerators, all of those things. How do you keep that straight when those aren't things that you actually build? >>Well, excellent question, Dave. And you know, as we are developing the next platform, obviously the ongoing relationship is there with Dell, but we start way before launch, right? Sometimes it's multiple years before launch. So we are not just focusing on the super high core counts at the CPU level and the platform configurations, whether it's single socket or dual socket; we are looking at it from the memory subsystem, from the I/O subsystem. PCIe lanes for storage are a big deal, for example, in this generation. So it's really a holistic approach. And look, core counts are more important at the higher end for some customers, the HPC space, some of the AI applications. But on the lower end you have database applications or some other ISV applications that care a lot about those other pieces. So I guess different things matter to different folks across verticals. >>So we partnered with Dell very early in the cycle, and it's really a joint co-engineering effort. Seamus talked about the focus on AI with TPCx-AI, so we set five world records in that space just on that one benchmark with AMD and Dell. So that was a fantastic kickoff across a multitude of scale factors. But TPCx-AI is not the only thing we are focusing on. We are also collaborating with Dell on some of the transformer-based natural language processing models that we worked on, for example. So it's not just a CPU story, it's CPU, platform, subsystem, software, the whole thing delivering goodness across the board to solve end user problems in AI and other verticals. >>Yeah, the two of you are at the tip of the spear from a performance perspective. So I know it's easy to get excited about world records, and they're fantastic. I know, Seamus, that end user customers might immediately have the reaction, well, I don't need a Ferrari in my data center, or, you know, what I need is to be able to do more with less. Well, aren't we delivering that also? And you know, Milind, you mentioned natural language processing. Seamus, are you thinking in 2023 that a lot more enterprises are going to be able to afford to do things like that? I mean, what are you hearing from customers on this front? >>I mean, while the adoption of the top bin CPU stack is definitely the exception, not the rule, today we are seeing marked performance even when we look at the mid bin CPU offerings from AMD; those are, you know, the most commonly sold SKUs. And when we look at customer implementations, really what we're seeing is the fact that they're trying to make the most not just of dollar spend, but also of the whole subsystem that Milind was talking about. You know, the fact that balanced memory configs can give you marked performance improvements, not just at the CPU level, but actually all the way through to the application performance. So it's trying to find the correct balance between the application needs, your budget, power draw, and infrastructure within the data center, right?
Because not only could you be purchasing and looking to deploy the most powerful systems, but if you don't have an infrastructure that's got the right power (that's a large challenge that's happening right now) and the right cooling to deal with the thermal differences of the systems, you want to ensure that you can accommodate those not just for today but for the future, right? >>So it's planning that balance. >>If I may just add onto that, right? So when we launched, not just the fourth generation but any generation in the past, there's a natural tendency to zero in on the top bin and say, wow, we've got so many cores. But as Seamus correctly said, it's not just that one core count OPN, it's the whole stack. And we believe with our fourth gen CPU processor stack we've simplified things so much. We don't have dozens and dozens of offerings. We have a fairly simple SKU stack, but we also have a very efficient SKU stack. So even though at the top end we've got 96 cores, the thermal budget that we require is fairly reasonable. And look, with the energy crisis going on, especially in Europe, this is a big deal. Not only do customers want performance, but they're also super focused on performance per watt. And so we believe with this generation we really delivered not just on raw performance, but also on performance per dollar and performance per watt. >>Yeah. And it's not just Europe. We are here in Palo Alto right now, which is in California, where we all know the cost of an individual kilowatt hour of electricity, because it's quite high. So thermals, power, cooling, all of that goes together, and that drives cost. So it's a question of how much you can get done per dollar. Seamus, you made the point that you don't just have a one size fits all solution, that it's fit for function. I'm curious to hear from the two of you what your thoughts are from a general AI and ML perspective. We're starting to see right now, if you hang out on any kind of social media, the rise of these experimental AI programs that are being presented to the public. Some will write stories for you based on a prompt, some will create images for you. One of the more popular ones will create sort of a superhero alter ego for you. I can't wait to do it, I just got the app on my phone. So those are all fun and they're trivial, but they sort of get us used to this idea that, wow, these systems can do things. They can think on their own in a certain way. What do you see the future of that looking like over the next year in terms of enterprises, what they're going to do with it? Milind? >>Yeah, I can go first. >>Sure. Yeah. Good. >>So the couple of examples, Dave, that you mentioned are, I guess, a blend of novelty and curiosity. You know, people using AI to write stories or poems, or even carve out little jokes, check grammar and spelling. Very useful, but still kind of in the realm of novelty in the mainstream. In the enterprise, look, in my opinion, AI is not just going to be a vertical, it's going to be a horizontal capability.
We are seeing AI deployed across the board once the models have been suitably trained for disparate functions ranging from fraud detection or anomaly detection, both in the financial markets in manufacturing to things like image classification or object detection that you talked about in, in the sort of a core AI space itself, right? So we don't think of AI necessarily as a vertical, although we are showcasing it with a specific benchmark for launch, but we really look at AI emerging as a horizontal capability and frankly, companies that don't adopt AI on a massive scale run the risk of being left behind. >>Yeah, absolutely. There's an, an AI as an outcome is really something that companies, I, I think of it in the fact that they're adopting that and the frameworks that you're now seeing as the novelty pieces that Melin was talking about is, is really indicative of the under the covers activity that's been happening within infrastructures and within enterprises for the past, let's say 5, 6, 7 years, right? The fact that you have object detection within manufacturing to be able to, to be able to do defect detection within manufacturing lines. Now that can be done on edge platforms all the way at the device. So you're no longer only having to have things be done, you know, in the data center, you can bring it right out to the edge and have that high performance, you know, inferencing training models. Now, not necessarily training at the edge, but the inferencing models especially, so that way you can, you know, have more and, and better use cases for some of these, these instances things like, you know, smart cities with, with video detection. >>So that way they can see, especially during covid, we saw a lot of hospitals and a lot of customers that were using using image and, and spatial detection within their, their video feeds to be able to determine who and what employees were at risk during covid. So there's a lot of different use cases that have been coming around. I think the novelty aspect of it is really interesting and I, I know my kids, my daughters love that, that portion of it, but really what's been happening has been exciting for quite a, quite a period of time in the enterprise space. We're just now starting to actually see those come to light in more of a, a consumer relevant kind of use case. So the technology that's been developed in the data center around all of these different use cases is now starting to feed in because we do have more powerful compute at our fingertips. We do have the ability to talk more about the framework and infrastructure that's that's right out at the edge. You know, I know Dave in the past you've said things like the data center of, you know, 20 years ago is now in my hand as, as my cell phone. That's right. And, and that's, that's a fact and I'm, it's exciting to think where it's gonna be in the next 10 or 20 years. >>One terabyte baby. Yeah. One terabyte. Yeah. It's mind bo. Exactly. It's mind boggling. Yeah. And it makes me feel old. >>Yeah, >>Me too. And, and that and, and Shamus, that all sounded great. A all I want is a picture of me as a superhero though, so you guys are already way ahead of the curve, you know, with, with, with that on that note, Seamus wrap us up with, with a, with kind of a summary of the, the highlights of what we just went through in terms of the performance you're seeing out of this latest gen architecture from a md. >>Absolutely. 
So within the TPCx-AI framework that Milind's team and my team have worked on together, we're seeing unprecedented price performance. The fact that you can get a 220% uplift gen on gen for some of these benchmarks, and that you can have a five to one consolidation, means that if you're looking to refresh platforms that are historically legacy, you can get a huge amount of benefit, both in the reduction in the number of units that you need to deploy and in the amount of performance that you can get per unit. You know, Milind mentioned earlier the CPU performance and performance per watt: specifically on the two-socket 2U platform using the fourth generation AMD Epyc, we're seeing 55% higher CPU performance per watt. For people who aren't necessarily looking at these statistics every generation of servers, that is a huge leap forward. >>That, combined with 121% higher SPEC scores as a benchmark, those are huge. Normally we see, let's say, a 40 to 60% performance improvement on the SPEC benchmarks; we're seeing 121%. So while that's really impressive at the top bin, we're actually seeing large percentage improvements across the mid bins as well, things in the range of 70 to 90% performance improvements in those standard bins. So it's a huge performance improvement and power efficiency, which means customers are able to save energy, space, and time based on their deployment size. >>Thanks for that, Seamus. Sadly, gentlemen, our time has expired. With that, I want to thank both of you. It's been a very interesting conversation. Thanks for being with us, both of you. Thanks for joining us here on theCube for our coverage of AMD's fourth generation Epyc launch. Additional information, including white papers and benchmarks plus editorial coverage, can be found on doeshardwarematter.com.
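The consolidation and efficiency figures quoted in this interview (a five to one consolidation, 55% higher performance per watt) lend themselves to a quick back-of-the-envelope model. The sketch below is illustrative only: the fleet size and per-server wattages are hypothetical inputs, not Dell or AMD published data, and real sizing would be based on measured benchmarks.

```python
import math

def refresh_estimate(old_servers, consolidation_ratio, old_watts_each, new_watts_each):
    """Back-of-the-envelope server refresh model. All inputs are hypothetical;
    the same aggregate workload is assumed on both sides of the refresh."""
    new_servers = math.ceil(old_servers / consolidation_ratio)
    old_kw = old_servers * old_watts_each / 1000
    new_kw = new_servers * new_watts_each / 1000
    work_per_watt_gain = old_kw / new_kw
    return new_servers, old_kw, new_kw, work_per_watt_gain

new_n, old_kw, new_kw, gain = refresh_estimate(
    old_servers=100,          # legacy fleet size (assumed)
    consolidation_ratio=5.0,  # the "five to one" figure quoted above, treated as an input
    old_watts_each=600,       # assumed legacy per-server draw
    new_watts_each=900,       # assumed per-server draw of a denser replacement
)
print(f"{new_n} servers instead of 100; {old_kw:.0f} kW -> {new_kw:.0f} kW "
      f"(~{gain:.1f}x work per watt under these assumptions)")
# 20 servers instead of 100; 60 kW -> 18 kW (~3.3x work per watt under these assumptions)
```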

Published Date : Dec 9 2022



Manu Parbhakar, AWS & Joel Jackson, Red Hat | AWS re:Invent 2022


 

>>Hello, brilliant humans and welcome back to Las Vegas, Nevada, where we are live from the AWS Reinvent Show floor here with the cube. My name is Savannah Peterson, joined with Dave Valante, and we have a very exciting conversation with you. Two, two companies you may have heard of. We've got AWS and Red Hat in the house. Manu and Joel, thank you so much for being here. Love this little fist bump. Started off, that's right. Before we even got rolling, Manu, you said that you wanted this to be the best segment of, of the cubes airing. We we're doing over a hundred segments, so you're gonna have to bring the heat. >>We're ready. We're did go. Are we ready? Yeah, go. We're ready. Let's bring it on. >>We're ready. All right. I'm, I'm ready. Dave's ready. Let's do it. How's the show going for you guys real quick before we dig in? >>Yeah, I think after Covid, it's really nice to see that we're back into the 2019 level and, you know, people just want to get out, meet people, have that human touch with each other, and I think a lot of trust gets built as a functional that, so it's super amazing to see our partners and customers here at Reedman. Yeah, >>And you've got a few in the house. That's true. Just a few maybe, maybe a couple >>Very few shows can say that, by the way. Yeah, it's maybe a handful. >>I think one of the things we were saying, it's almost like the entire Silicon Valley descended in the expo hall area, so >>Yeah, it's >>For a few different reasons. There's a few different silicon defined. Yeah, yeah, yeah. Don't have strong on for you. So far >>It's, it's, it is amazing. It's the 10th year, right? It's decade, I think I've been to five and it's, it grows every single year. It's the, you have to be here. It's as simple as that. And customers from every single industry are here too. You don't get, a lot of shows have every single industry and almost every single location around the globe. So it's, it's a must, must be >>Here. Well, and the personas evolved, right? I was at reinvent number two. That was my first, and it was all developers, not all, but a lot of developers. And today it's a business mix, really is >>Totally, is a business mix. And I just, I've talked about it a little bit down the show, but the diversity on the show floor, it's the first time I've had to wait in line for the ladies' room at a tech conference. Almost a two decade career. It is, yeah. And it was really refreshing. I'm so impressed. So clearly there's a commitment to community, but also a commitment to diversity. Yeah. And, and it's brilliant to see on the show floor. This is a partnership that is robust and has been around for a little while. Money. Why don't you tell us a little bit about the partnership here? >>Yes. So Red Hand and AWS are best friends, you know, forever together. >>Aw, no wonder we got the fist bumps and all the good vibes coming out. I know, it's great. I love that >>We have a decade of working together. I think the relationship in the first phase was around running rail bundled with E two. Sure. We have about 70,000 customers that are running rail, which are running mission critical workloads such as sap, Oracle databases, bespoke applications across the state of verticals. Now, as more and more enterprise customers are finally, you know, endorsing and adopting public cloud, I think that business is just gonna continue to grow. So a, a lot of progress there. 
The second iteration has been around developers telling Red Hat and AWS, hey, listen, it's getting competitive. We want to deliver new features faster and quicker, we want scale, and we want resilience. So there's been this entire push towards DevOps and containers. So that's the second chapter, with Red Hat OpenShift on AWS, which launched as a joint managed service in 2021, last year. And I think the third phase, which we're super excited about, is just bringing the ease of consumption, one click deployment, and then having our customers benefit from the joint committed spend programs together. So, you know, making sure that RHEL and Ansible and JBoss, the entire portfolio of Red Hat products, are available on AWS Marketplace. So that's the one, two, three of our relationship. It's a decade of working together, and, you know, best friends who are super committed to making sure our customers and partners continue to be successful. >>Yeah, he said it perfectly. 2008, I know you don't like that, but we started with RHEL on demand just in 2008, before EC2 even had a console. So the partnership has been there, like Manu says, for a long time. We've got the partnership, we've got the products up there now, and we just gotta finalize that go to market and get that gas on the fire. >>Yeah. So Graviton, Outposts, Local Zones, you're leading into all the new stuff. So that portends, I mean, 2008, we're talking two years after the launch of S3. >>That's right. >>Right. So now look, is this a harbinger of things to come with these new innovations? >>Yeah, I would say, you know, innovation is a key tenet of our partnership, our relationship. So if you look at it from a product standpoint, Red Hat, or RHEL, was one of the first platforms that added support for Graviton, which is basically 40% better price performance than any other distribution. Then that translated into making sure that RHEL is available in all of our regions globally. So this year we launched Switzerland, Spain, and India, and Red Hat was available at launch there; support for Nitro, support for Outposts, ROSA support on Outposts as well. So I think that relationship, that innovation on the product side, is pretty visible. I think that innovation then translates into what we are doing on Marketplace with the one click deployments we spoke about. I think the third aspect of the innovation is around making sure that we are making our partners and our customers successful. So one of the things that we've done so far is, Joe leads a black belt team that really goes into each customer opportunity, making sure, how can we help you be successful? We launched, and we should be able to share that in a link after this, a big playlist which talks about every single use case on how you get successful running OpenShift on AWS. So that innovation on behalf of our customers and partners, to make them successful, has been a key tenet for us together as well. >>That's right. And that team that Manu is talking about, we're gonna 10x that team this year going into January. Our fiscal year starts in January. Love that. So yeah, no hiring freeze over here. Nope. No ma'am. No. Yeah, that's right. Yeah. And you know what I love about working with AWS? Manu just said it: all of that's customer driven. Every single event that he just talked about in that timeline, it's customer driven, right?
Customers wanted RHEL on demand, customers want JBoss up in the cloud, Ansible this week, you know, everything's up there now. So it's just getting that go to market tight, and we're gonna get that done. >>So what's the algorithm for customer driven in terms of taking the input? Because if every customer is saying, hey, I need this- >>Really similar- >>question, right? That's what I want. And if, you know, 95% of the customers say it, Joel, maybe that's a good idea. >>Yeah, that's right. Trends. But- >>Yeah. You know, at 30% you might be like, mm, you know, at 20%, how do you guys decide when to put gas on the fire? >>No, I think, as I mentioned, there are about 70,000 large customers that are running RHEL on EC2, and many of these customers are informing our product strategy. So we have, you know, close to a couple of thousand power users. We have customer advisory boards, and these are the customers informing us: hey, let's get all of the Red Hat portfolio in Marketplace, support for Graviton, support for Outposts. Why are we not able to dip into the consumption committed spend programs for both Red Hat and AWS? That's right. So it's these power users, both at the developer level as well as the folks who are actually doing large commercial consumption, they are the ones who are informing the roadmap for both Red Hat and AWS. >>But do you codify the feedback? >>Yeah, I'm like, I wanna see the database. >>I think it was, I don't know, maybe it was Jassy, maybe it was Bezos, who said that data beats intuition. So do you take that information and somehow, I mean, it's global, 70,000 customers, right? And they have different weights, different spending patterns, different levels of maturity. Yeah. How do you codify that and then ultimately make the decision? >>I mean, well, you've got the strategic advisory boards, which are made up of customers and partners, and you've got to get a good slice of your customer base, and you've got to take their feedback and do something with it, right? That's the way we do it, and we codify it at the product level in open source. That's basically how we work at the product level, right? The most elegant solution in open source wins. And that's pretty much how we do that. >>I would just add, I think it's also just the implicit trust that the two companies have built with each other, working in the trenches, making our customers and partners successful over the last decade. And I'll give an example. That manifests itself in the context of, you know, Amazon and Red Hat just published the entire roadmap for OpenShift: what are the new features that are coming over the next six to nine to 12 months? It's open source, available on GitHub. Customers can see it, and then they can basically come back and give feedback like, hey, you know, we want HIPAA compliance. We just launched that; it was a big request that was coming from our customers. >>That is not an easy process. >>Also for Graviton or Nvidia instances. So I think it's a- >>Here's the thing, the reason I'm pounding on this is because you guys have a pretty high hit rate, and I think as a- >>Customer? Mildly successful company. >>As a customer advocate, the better, you know, if you guys make bets that pay off, it's gonna pay off for customers. Right. And because there's a lot of failures in IT. Yeah. I mean, let's face it. >>That's right.
And I think, I think you said the key word bets. You place a lot of small bets. Do you have the, the innovation engine to do that? AWS is the perfect place to place those small bets. And then you, you know, pour gas on the fire when, when they take off. >>Yeah, it's a good point. I mean, it's not expensive to experiment. Yeah. >>Especially in the managed service world. Right? >>And I know you love taking things to market and you're a go to market guy. Let's talk gtm, what's got your snow pumped about GTM for 2023? >>We, we are gonna, you know, 10 x the teams that's gonna be focused on these products, right? So we're gonna also come out with a hybrid committed spend program for our customers that meet them where they want to go. So they're coming outta the data center going into a cloud. We're gonna have a nice financial model for them to do that. And that's gonna take a lot of the friction out. >>Yeah. I mean, you've nailed it. I, I think the, the fact that now entire Red Hat portfolio is available on marketplace, you can do it on one click deployment. It's deeply integrated with Amazon services and the most important part that Joel was making now customers can double dip. They can drive benefit from the consumption committed spend programs, both from Red Hat and from aws, which is amazing. Which is a game changer That's right. For many of our large >>Customers. That's right. And that, so we're gonna, we're gonna really go to town on that next year. That's, and all the, all the resources that I have, which are the technology sellers and the sas, you know, the engineers we're growing this team the most out that team. So it's, >>When you say 10 x, how many are you at now? I'm >>Curious to see where you're headed. Tell you, okay. There's not right? Oh no, there's not one. It's triple digit. Yeah, yeah. >>Today. Oh, sweet. Awesome. >>So, and it's a very sizable team. They're actually making sure that each of our customers are successful and then really making sure that, you know, no customer left behind policy. >>And it's a great point that customers love when Amazonians and Red Hats show up, they love it and it's, they want to get more of it, and we're gonna, we're gonna give it to 'em. >>Must feel great to be loved like that. >>Yeah, that's right. Yeah. Yeah. I would say yes. >>Seems like it's safe to say that there's another decade of partnership between your two companies. >>Hope so. That's right. That's the plan. >>Yeah. And I would say also, you know, just the IBM coming into the mix here. Yeah. I, you know, red Hat has informed the way we have turned around our partnership with ibm, essentially we, we signed the strategic collaboration agreement with the company. All of IBM software now runs on Rosa. So that is now also providing a lot of tailwinds both to our rail customers and as well as Rosa customers. And I think it's a very net creative, very positive for our partnership. >>That's right. It's been very positive. Yep. Yeah. >>You see the >>Billboards positive. Yeah, right. Also that, that's great. Great point, Dave. Yep. We have a, we have a new challenge, a new tradition on the cube here at Reinvent where we're, well, it's actually kind of a glamor moment for you, depending on how you leverage it. We're looking for your 32nd hot take your Instagram reel, your sizzle thought leadership, biggest takeaway, most important theme from this year's show. I know you want, right, Joel? I mean, you TM boy, I feel like you can spit the time. >>Yeah. It is all about Rosa for us. 
It is all in on that, that's the native OpenShift offering on aws and that's, that's the soundbite we're going go to town with. Now, I don't wanna forget all the other products that are in there, but Rosa is a, is a very key push for us this year. >>Fantastic. All right. Manu. >>I think our customers, it's getting super competitive. Our customers want to innovate just a >>Little bit. >>The enterprise customers see the cloud native companies. I wanna do what these guys are doing. I wanna develop features at a fast clip. I wanna scale, I wanna be resilient. And I think that's really the spirit that's coming out. So to Joel's point, you know, move to worlds containers, serverless, DevOps, which was like, you know, aha, something that's happening on the side of an enterprise is not becoming mainstream. The business is demanding it. The, it is becoming the centerpiece in the business strategy. So that's been really like the aha. Big thing that's happening here. >>Yeah. And those architectures are coming together, aren't they? That's correct. Right. You know, VMs and containers, it used to be one architecture and then at the other end of the spectrum is serverless. People thought of those as different things and now it's a single architecture and, and it's kind of right approach for the right job. >>And, and a compliments say to Red Hat, they do an incredible job of hiding that complexity. Yeah. Yes. And making sure that, you know, for example, just like, make it easier for the developers to create value and then, and you know, >>Yeah, that's right. Those, they were previously siloed architectures and >>That's right. OpenShift wanna be place where you wanna run containers or virtual machines. We want that to be this Yeah. Single place. Not, not go bolt on another piece of architecture to just do one or the other. Yeah. >>And hey, the hybrid cloud vision is working for ibm. No question. You know, and it's achievable. Yeah. I mean, I just, I've said unlike, you know, some of the previous, you know, visions on fixing the world with ai, hybrid cloud is actually a real problem that you're attacking and it's showing the results. Agreed. Oh yeah. >>Great. Alright. Last question for you guys. Cause it might be kind of fun, 10 years from now, oh, we're at another, we're sitting here, we all look the same. Time has passed, but we are not aging, which is a part of the new technology that's come out in skincare. That's my, I'm just throwing that out there. Why not? What do you guys hope that you can say about the partnership and, and your continued commitment to community? >>Oh, that's a good question. You go first this time. Yeah. >>I think, you know, the, you know, for looking into the future, you need to look into the past. And Amazon has always been driven by working back from our customers. That's like our key tenant, principle number 1 0 1. >>Couple people have said that on this stage this week. Yeah. >>Yeah. And I think our partnership, I hope over the next decade continues to keep that tenant as a centerpiece. And then whatever comes out of that, I think we, we are gonna be, you know, working through that. >>Yeah. I, I would say this, I think you said that, well, the customer innovation is gonna lead us to wherever that is. And it's, it's, it's gonna be in the cloud for sure. I think we can say that in 10 years. 
But yeah, anything from, from AI to the quant quantum computing that IBM's really pushing behind that, you know, those are, those are gonna be things that hopefully we show up on a, on a partnership with Manu in 10 years, maybe sooner. >>Well, whatever happens next, we'll certainly be covering it here on the cube. That's right. Thank you both for being here. Joel Manu, fantastic interview. Thanks to see you guys. Yeah, good to see you brought the energy. I think you're definitely ranking high on the top interviews. We >>Love that for >>The day. >>Thank >>My pleasure >>Job, guys. Now that you're competitive at all, and thank you all for tuning in to our live coverage here from AWS Reinvent in Las Vegas, Nevada, with Dave Valante. I'm Savannah Peterson. You're watching The Cube, the leading source for high tech coverage.
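Much of the discussion above comes back to the roughly 70,000 customers running RHEL on EC2. At the API level, standing up such an instance is an ordinary launch call with a Red Hat AMI; the sketch below uses boto3 to show the shape of that call. The AMI ID, key pair, and instance type are placeholders to swap for values from your own account and region, and Marketplace or committed-spend billing arrangements are outside what this shows.

```python
import boto3

# Placeholders: look up a current RHEL AMI for your region (console or Marketplace)
# and substitute your own key pair.
REGION = "us-east-1"
RHEL_AMI_ID = "ami-0123456789abcdef0"   # placeholder, not a real image ID
KEY_NAME = "my-keypair"                 # placeholder

ec2 = boto3.client("ec2", region_name=REGION)

response = ec2.run_instances(
    ImageId=RHEL_AMI_ID,
    InstanceType="t3.large",
    KeyName=KEY_NAME,
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "rhel-demo"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; waiting for it to reach 'running'...")
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print("Instance is running.")
```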

Published Date : Nov 30 2022



David Schmidt, Dell Technologies and Scott Clark, Intel | SuperComputing 22


 

(techno music intro) >> Welcome back to theCube's coverage of SuperComputing Conference 2022. We are here at day three covering the amazing events that are occurring here. I'm Dave Nicholson, with my co-host Paul Gillin. How's it goin', Paul? >> Fine, Dave. Winding down here, but still plenty of action. >> Interesting stuff. We got a full day of coverage, and we're having really, really interesting conversations. We sort of wrapped things up at Supercomputing 22 here in Dallas. I've got two very special guests with me, Scott from Intel and David from Dell, to talk about yeah supercomputing, but guess what? We've got some really cool stuff coming up after this whole thing wraps. So not all of the holiday gifts have been unwrapped yet, kids. Welcome gentlemen. >> Thanks so much for having us. >> Thanks for having us. >> So, let's start with you, David. First of all, explain the relationship in general between Dell and Intel. >> Sure, so obviously Intel's been an outstanding partner. We built some great solutions over the years. I think the market reflects that. Our customers tell us that. The feedback's strong. The products you see out here this week at Supercompute, you know, put that on display for everybody to see. And then as we think about AI in machine learning, there's so many different directions we need to go to help our customers deliver AI outcomes. Right, so we recognize that AI has kind of spread outside of just the confines of everything we've seen here this week. And now we've got really accessible AI use cases that we can explain to friends and family. We can talk about going into retail environments and how AI is being used to track inventory, to monitor traffic, et cetera. But really what that means to us as a bunch of hardware folks is we have to deliver the right platforms and the right designs for a variety of environments, both inside and outside the data center. And so if you look at our portfolio, we have some great products here this week, but we also have other platforms, like the XR4000, our shortest rack server ever that's designed to go into Edge environments, but is also built for those Edge AI use cases that supports GPUs. It supports AI on the CPU as well. And so there's a lot of really compelling platforms that we're starting to talk about, have already been talking about, and it's going to really enable our customers to deliver AI in a variety of ways. >> You mentioned AI on the CPU. Maybe this is a question for Scott. What does that mean, AI on the CPU? >> Well, as David was talking about, we're just seeing this explosion of different use cases. And some of those on the Edge, some of them in the Cloud, some of them on Prem. But within those individual deployments, there's often different ways that you can do AI, whether that's training or inference. And what we're seeing is a lot of times the memory locality matters quite a bit. You don't want to have to pay necessarily a cost going across the PCI express bus, especially with some of our newer products like the CPU Max series, where you can have a huge about of high bandwidth memory just sitting right on the CPU. Things that traditionally would have been accelerator only, can now live on a CPU, and that includes both on the inference side. 
We're seeing some really great things with images, where you might have a giant medical image that you need to be able to do extremely high resolution inference on or even text, where you might have a huge corpus of extremely sparse text that you need to be able to randomly sample very efficiently. >> So how are these needs influencing the evolution of Intel CPU architectures? >> So, we're talking to our customers. We're talking to our partners. This presents both an opportunity, but also a challenge with all of these different places that you can put these great products, as well as applications. And so we're very thoughtfully trying to go to the market, see where their needs are, and then meet those needs. This industry obviously has a lot of great players in it, and it's no longer the case that if you build it, they will come. So what we're doing is we're finding where are those choke points, how can we have that biggest difference? Sometimes there's generational leaps, and I know David can speak to this, can be huge from one system to the next just because everything's accelerated on the software side, the hardware side, and the platforms themselves. >> That's right, and we're really excited about that leap. If you take what Scott just described, we've been writing white papers, our team with Scott's team, we've been talking about those types of use cases using doing large image analysis and leveraging system memory, leveraging the CPU to do that, we've been talking about that for several generations now. Right, going back to Cascade Lake, going back to what we would call 14th generation power Edge. And so now as we prepare and continue to unveil, kind of we're in launch season, right, you and I were talking about how we're in launch season. As we continue to unveil and launch more products, the performance improvements are just going to be outstanding and we'll continue that evolution that Scott described. >> Yeah, I'd like to applaud Dell just for a moment for its restraint. Because I know you could've come in and taken all of the space in the convention center to show everything that you do. >> Would have loved to. >> In the HPC space. Now, worst kept secrets on earth at this point. Vying for number one place is the fact that there is a new Mission Impossible movie coming. And there's also new stuff coming from Intel. I know, I think allegedly we're getting close. What can you share with us on that front? And I appreciate it if you can't share a ton of specifics, but where are we going? David just alluded to it. >> Yeah, as David talked about, we've been working on some of these things for many years. And it's just, this momentum is continuing to build, both in respect to some of our hardware investments. We've unveiled some things both here, both on the CPU side and the accelerator side, but also on the software side. OneAPI is gathering more and more traction and the ecosystem is continuing to blossom. Some of our AI and HPC workloads, and the combination thereof, are becoming more and more viable, as well as displacing traditional approaches to some of these problems. And it's this type of thing where it's not linear. It all builds on itself. And we've seen some of these investments that we've made for a better half of a decade starting to bear fruit, but that's, it's not just a one time thing. It's just going to continue to roll out, and we're going to be seeing more and more of this. >> So I want to follow up on something that you mentioned. 
I don't know if you've ever heard the Charlie Brown saying that sometimes the most discouraging thing can be to have immense potential. Because between Dell and Intel, you offer so many different versions of things from a fit-for-function perspective. As a practical matter, how do you work with customers, and maybe this is a question for you, David, how do you work with customers to figure out what the right fit is? >> I'll give you a great example. Just this week, in customer conversations, we can put it in terms of kilowatts per rack, right. How many kilowatts are you delivering at a rack level inside your data center? I've had answers anywhere from five all the way up to 90. There are some that have been a bit higher, but they probably don't want us to talk about those cases; those are customers we're meeting with very privately. But the range is really, really large, right, and there's a variety of environments. Customers might be ready for liquid today. They may not be ready for it. They may want to maximize air cooling. Those are the conversations, and then of course it all maps back to the workloads they wish to enable. AI is an extremely overloaded term. We don't have enough time to talk about all the different things that tuck under that umbrella, but for the workloads and the outcomes they wish to enable, we have the right solutions. And then we take it a step further by considering where they are today and where they need to go. And I just love that five-to-90 example, because not every customer has an identical cookie-cutter environment, so we've got to have the right platforms, the right solutions, for the right workloads, for the right environments. >> So, I'd like to dive in on this power issue, to give people who are watching an idea. Because when we say five kilowatts, 90 kilowatts, people are like, oh wow, hmm, what does that mean? 90 kilowatts is more than 100 horsepower if you want to translate it over. It's a massive amount of power, if you think of it in EV terms. You know, a hairdryer is around a kilowatt, 1,000 watts, right. But the point is, 90 kilowatts in a rack, that's insane. That's absolutely insane. The heat that that generates has got to be insane, and so it's important. >> Several houses' worth in the size of a closet. >> Exactly, exactly. Yeah, a rack, I explain to people, you know, it's like a refrigerator. But, so in the arena of thermals, I mean, is that something during the development of next-gen architectures, is that something that's been taken into consideration? Or is it just a race to die size? >> Well, you definitely have to take thermals into account, as well as just the power consumption itself. I mean, people are looking at their total cost of ownership. They're looking at sustainability. And at the end of the day, they need to solve a problem. There are many paths up that mountain, and it's about choosing the right path. We've talked about this before: having extremely thoughtful partners, we're just not going to combinatorially try every single solution. We're going to try to find the ones that fit the right mold for that customer. And we're seeing more and more people, excuse me, care about this, more and more people wanting to say, how do I do this in the most sustainable way? How do I do this in the most reliable way, given maybe different fluctuations in their power consumption or their power pricing? We're developing more software tools and obviously partnering with great partners to make sure we do this in the most thoughtful way possible.
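For reference, the rough arithmetic behind the power comparisons a moment ago, assuming the standard conversion of about 746 watts per horsepower (a back-of-the-envelope aside, not figures the speakers computed on air):

```latex
\begin{aligned}
90\ \text{kW} &= 90{,}000\ \text{W} \approx \tfrac{90{,}000}{746}\ \text{hp} \approx 120\ \text{hp},\\
5\ \text{kW} &\approx 5 \times 1\ \text{kW} \;\text{(about five hairdryers running at once)},\\
90\ \text{kW} \times 24\ \text{h} &= 2{,}160\ \text{kWh per day, essentially all of which must be removed as heat}.
\end{aligned}
```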
>> Intel made a big investment by buying Habana Labs for its acceleration technology. They're based in Israel. You're based on the west coast. How are you coordinating with them? How will the Habana technology work its way into more mainstream Intel products? And how would Dell integrate those into your servers? >> Good question. I guess I can kick this off. So Habana is part of the Intel family now. They've been integrated in. It's been a great journey with them, as some of their products have launched on AWS, and they've had some very good wins on MLPerf and things like that. I think it's about finding the right tool for the job, right. Not every problem is a nail, so you need more than just a hammer. And so we have the Xeon series, which is incredibly flexible and can do so many different things. It's what we've come to know and love. On the other end of the spectrum, we obviously have some of these more deep-learning-focused accelerators. And if that's your problem, then you can solve that problem in incredibly efficient ways. The accelerators themselves are somewhere in the middle, so you get that kind of Goldilocks zone of flexibility and power. And depending on your use case, depending on what you know your workloads are going to be day in and day out, one of these solutions might work better for you. A combination might work better for you. Hybrid compute starts to become really interesting. Maybe you have something that you need 24/7, but then you only need to burst for certain things. There are a lot of different options out there. >> The portfolio approach. >> Exactly. >> And then, what I love about the work that Scott's team is doing, customers have told us this week in our meetings, they do not want to spend developers' time porting code from one stack to the next. They want that flexibility of choice. Everyone does. We want it in our lives, in our everyday lives. They need that flexibility of choice, but also, there's an opportunity cost when their developers have to choose between porting some code over from one stack to another or spending time improving algorithms and doing things that actually generate, you know, meaningful outcomes for their business or their research. And so they are, you know, desperately searching, I would say, for that solution and for help in that area, and that's what we're working to enable. >> And this is what I love about oneAPI, our software stack. It's open first, heterogeneous first. You can take SYCL code, and it can run on competitors' hardware. It can run on Intel hardware. It's one of these things where you have to believe, long term, the future is open. Walled gardens, the walls eventually crumble. And we're just trying to continue to invest in that ecosystem to make sure that the end developer, at the end of the day, really gets what they need to do, which is solving their business problem, not tinkering with our drivers.
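To illustrate the portability point, here is a minimal SYCL 2020 vector-add sketch. It assumes a conforming SYCL compiler (for example, the oneAPI DPC++ compiler or another vendor's implementation) and is a generic example rather than code discussed in this conversation; the same kernel source runs on whatever device the installed backend exposes.

```cpp
#include <sycl/sycl.hpp>
#include <cstddef>
#include <cstdio>
#include <vector>

// One kernel source, no device-specific code: the default selector picks
// whatever device the SYCL backend in use exposes (a CPU, an Intel GPU,
// or another vendor's accelerator).
int main() {
    constexpr std::size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q{sycl::default_selector_v};
    std::printf("Running on: %s\n",
                q.get_device().get_info<sycl::info::device::name>().c_str());

    {   // Buffers hand the host arrays to the runtime for the chosen device.
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(ba, h, sycl::read_only);
            sycl::accessor rb(bb, h, sycl::read_only);
            sycl::accessor wc(bc, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(n),
                           [=](sycl::id<1> i) { wc[i] = ra[i] + rb[i]; });
        });
    }   // Buffer destruction waits for the kernel and copies results back.

    std::printf("c[0] = %.1f (expected 3.0)\n", c[0]);
    return 0;
}
```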
>> Yeah, I actually saw an interesting announcement that I hadn't been tracking. I hadn't been tracking this area. Chiplets, and the idea of an open standard where competitors of Intel, from a silicon perspective, can have their chips integrated via a universal standard. And basically you had the top three silicon vendors saying, yeah, absolutely, let's work together. Cats and dogs. >> Exactly, but at the end of the day, it's whatever menagerie solves the problem. >> Right, right, exactly. And of course Dell can solve it from any angle. >> Yeah, we need strong partners to build the platforms to actually do it. At the end of the day, silicon without software is just sand. Software without silicon is poorly written prose. But without an actual platform to put it on, it's nothing; it's a box that sits in the corner. >> David, you mentioned that 90% of PowerEdge servers now support GPUs. So how is the growth of high performance computing, the demand, influencing the evolution of your server architecture? >> Great question, a couple of ways. You know, I would say 90% of our platforms support GPUs. 100% of our platforms support AI use cases. And it goes back to the CPU compute stack. As we look at how we deliver different form factors for customers, we go back to that range, that power range I talked about this week, of how do we enable the right air cooling solutions? How do we deliver the right liquid cooling solutions, so that wherever the customer is in their environment, and whatever footprint they have, we're ready to meet it? That's something you'll see as we go into kind of the second half of launch season and continue rolling out products. You're going to see some very compelling solutions, not just in air cooling, but in liquid cooling as well. >> You want to be more specific? >> We can't unveil everything at Supercompute. We have a lot of great stuff coming up here in the next few months, so. >> It's kind of like being at a great restaurant when they offer you dessert, and you're like, yeah, dessert would be great, but I just can't take any more. >> It's a multi-course meal. >> At this point. Well, as we wrap, I've got one more question for each of you. Same question for each of you. When you think about high performance computing, supercomputing, all of the things that you're doing in your partnership, driving artificial intelligence, at that tip of the spear, what kind of insights are you looking forward to us being able to gain from this technology? In other words, what cool thing, what do you think is cool out there from an AI perspective? What problem do you think we can solve in the near future? What problems would you like to solve? What gets you out of bed in the morning? Cause it's not the little, it's not the bits and the bobs and the speeds and the feeds, it's what we're going to do with them, so what do you think, David? >> I'll give you an example. And I think I saw some of my colleagues talk about this earlier in the week, but for me, what we could do in the past two years to enable our customers in a quarantine, pandemic environment, we were delivering platforms and solutions to help them do their jobs, help them carry on in their lives. And that's just one example, and if I were to map that forward, it's about enabling that human progress. And, you know, you ask the version of me from 20 years ago, you know, if you could imagine some of these things, I don't know what kind of answer you would get. And so mapping forward to the next decade, the next two decades, I can go back to that example of, hey, we did great things in the past couple of years to enable our customers. Just imagine what we're going to be able to do going forward to enable that human progress. You know, there are great use cases, there's great image analysis. We talked about some. The images that Scott was referring to had to do with taking CAT scan images and being able to scan them for tumors and other things in the healthcare industry. That is stuff that feels good when you get out of bed in the morning, to know that you're enabling that type of progress. >> Scott, quick thoughts? >> Yeah, and I'll echo that.
It's not one specific use case, but it's really this wavefront of all of these use cases, from the very micro of developing the next drug or finding the next battery technology, all the way up to the macro of trying to have an impact on climate change or even the origins of the universe itself. All of these fields are seeing these massive gains, both from the software, the hardware, and the platforms that we're bringing to bear on these problems. And at the end of the day, humanity is going to be fundamentally transformed by the computation that we're launching and working on today. >> Fantastic, fantastic. Thank you, gentlemen. You heard it here first: Intel and Dell just committed to solving the secrets of the universe by New Year's Eve 2023. >> Well, next Supercompute, let's give us a little time. >> The next Supercompute convention. >> Yeah, next year. >> Yeah, SC 2023, we'll come back and see what problems have been solved. You heard it here first on theCUBE, folks. By SC 23, Dell and Intel are going to reveal the secrets of the universe. From here at SC 22, I'd like to thank you for joining our conversation. I'm Dave Nicholson, with my co-host Paul Gillin. Stay tuned to theCUBE's coverage of Supercomputing Conference 22. We'll be back after a short break. (techno music)

Published Date : Nov 17 2022

Dell Technologies | The Future of Multicloud Data Protection is Here 11-14


 

>> Prior to the pandemic, organizations were largely optimized for efficiency as the best path to bottom line profits. Many CIOs tell theCUBE privately that they were caught off guard by the degree to which their businesses required greater resiliency beyond their somewhat cumbersome disaster recovery processes. And the lack of that business resilience has actually cost firms, because they were unable to respond to changing market forces. Certainly we've seen this dynamic with supply chain challenges, and there's little doubt we're also seeing it in the area of cybersecurity generally, and data recovery specifically. Over the past 30-plus months, the rapid adoption of cloud to support remote workers and build in business resilience had the unintended consequence of expanding attack vectors, which brought an escalation of risk from cybercrime. While security in the public clouds is certainly world class, the result of multi-cloud has brought with it multiple shared responsibility models, multiple ways of implementing security policies across clouds and on-prem, and at the end of the day, more, not less, complexity. But there's a positive side to this story. The good news is that public policy, industry collaboration, and technology innovation are moving fast to accelerate data protection and cybersecurity strategies, with a focus on modernizing infrastructure, securing the digital supply chain, and, very importantly, simplifying the integration of data protection and cybersecurity. Today there's heightened awareness that the world of data protection is not only an adjacency to, but is becoming a fundamental component of, cybersecurity strategies. In particular, in order to build more resilience into a business, data protection people, technologies, and processes must be more tightly coordinated with security operations. Hello and welcome to The Future of Multi-Cloud Data Protection, made possible by Dell in collaboration with theCUBE. My name is Dave Vellante and I'll be your host today. In this segment, we welcome into theCUBE two senior executives from Dell who will share details on new technology announcements that directly address these challenges. Jeff Boudreau is the president and general manager of Dell's Infrastructure Solutions Group, ISG, and he's gonna share his perspectives on the market and the challenges he's hearing from customers. And we're gonna ask Jeff to double click on the messages that Dell is putting into the marketplace and give us his detailed point of view on what it means for customers. Now, Jeff is gonna be joined by Travis Vigil. Travis is the Senior Vice President of Product Management for ISG at Dell Technologies, and he's gonna give us details on the products that are being announced today and go into the hard news. Now, we're also gonna challenge our guests to explain why Dell's approach is unique and different in the marketplace. Thanks for being with us. Let's get right into it. We're here with Jeff Boudreau and Travis Vigil. We're gonna dig into the details about Dell's big data protection announcement. Guys, good to see you. Thanks for coming in. >> Good to see you. >> Thank you for having us. >> You're very welcome. Right. Let's start off, Jeff, with the high level. You know, I'd like to talk about the customer, what challenges they're facing. You're talking to customers all the time, what are they telling you? >> Sure.
As you know, we do, we spend a lot of time with our customers, specifically listening, learning, understanding their use cases, their pain points within their specific environments. They tell us a lot. No surprise to any of us, data is a key theme that they talk about. It's one of their most important assets. They need to extract more value from that data to fuel their business models, their innovation engines, their competitive edge. So they need to make sure that that data is accessible, it's secure, and it's recoverable, especially in today's world with the increased cyber attacks. >> Okay. So maybe we could get into some of those, those challenges. I mean, when, when you talk about things like data sprawl, what do you mean by that? What should people know? >> Sure. So for those big three themes, I'd say, you know, you have data sprawl, which is the big one, which is all about the massive amounts of data. It's the growth of that data, which is growing at an unprecedented rate. It's the gravity of that data and the reality of the multi-cloud sprawl. So stuff is just everywhere, right? Which increases the attack surface for cyber criminals. >> And by gravity you mean the data's there and people don't wanna move it. >> It's everywhere, right? And so when it lands someplace, I think edge, core or cloud, it's there, and that's, it's something we have to help our customers with. >> Okay, so it's nuanced, cuz complexity has other layers. What are those layers? >> Sure. When we talk to our customers, they tell us complexity is one of their big themes. And specifically it's around data complexity. We talked about that growth and gravity of the data. We talk about multi-cloud complexity and we talk about multi-cloud sprawl. So multiple vendors, multiple contracts, multiple tool chains, and none of those work together in this, you know, multi-cloud world. Then that drives their security complexity. So we talk about that increased attack surface, but this really drives a lot of operational complexity for their teams. Think about it, there's a lack of consistency through everything. So people, process, tools, all that stuff, which is really wasting time and money for our customers. >> So how does that affect the cyber strategies and the, I mean, I've often said the CISO, now they have this shared responsibility model, they have to do that across multiple clouds. Every cloud has its own security policies and, and frameworks and syntax. So maybe you could double click on your perspective on that. >> Sure. I'd say the big, you know, the big challenge customers are seeing is really inadequate cyber resiliency. And specifically they're feeling very exposed. And in today's world, with cyber attacks being more and more sophisticated, if something goes wrong, it is a real challenge for them to get back up and running quickly. And that's why this is such a big topic for CEOs and businesses around the world. >> You know, it's funny, I said this in my open, I, I think that prior to the pandemic businesses were optimized for efficiency, and now they're like, wow, we have to actually put some headroom into the system to be more resilient. You know, are you hearing that? >> Yeah, we absolutely are. I mean, the customers really, they're asking us for help, right? It's one of the big things we're learning and hearing from them. And it's really about three things: one's about simplifying IT, two is really helping them to extract more value from their data.
And then the third big, big piece is ensuring their data is protected and recoverable regardless of where it is, going back to that data gravity and that, you know, multi-cloud world. Just recently, I don't know if you've seen it, but the global data protection, excuse me, the Global Data Protection Index, GDPI. >> Yes. Not to be confused with GDPR. >> Actually, that was released today, and it confirms everything we just talked about around customer challenges, but it also highlights the importance of having a very robust, cyber-resilient data protection strategy. >> Yeah, I haven't seen the latest, but I, I want to dig into it. I think this, you've done this many, many years in a row. I like to look at the, the, the time series and see how things have changed. All right. At, at a high level, Jeff, can you kind of address why Dell, from your point of view, is best suited? >> Sure. So we believe there's a better way, or a better approach, on how to handle this. We think Dell is uniquely positioned to help our customers as a one stop shop, if you will, for their cyber resilient multi-cloud data protection solution needs. We take a modern, a simple, and resilient approach. >> What does that mean? What, what do you mean by modern? >> Sure. So modern, we talk about our software defined architecture, right? It's really designed to meet the needs not only of today, but really into the future. And we protect data across any cloud and any workload. So we have a proven track record doing this today. We have more than 1700 customers that trust us to protect more than 14 exabytes of their data in the cloud today. >> Okay, so you said modern, simple and resilient. What, what do you mean by simple? >> Sure. We wanna provide simplicity everywhere, going back to helping with the complexity challenge, and that's from deployment to consumption to management and support. So our offers will deploy in minutes. They are easy to operate and use, and we support flexible consumption models for whatever the customer may desire. So traditional subscription, or as a service. >> And when you, when you talk about resilient, I mean, I, I put forth that premise, but it's hard, because people say, well, that's gonna cost us more. Well, it may, but you're gonna also reduce your, your risk. So what's your point of view on resilience? >> Yeah, I think it's, it's something all customers need. So we're gonna be providing a comprehensive and resilient portfolio of cyber solutions that are secured by design. We have some unique capabilities in a combination of things like built-in immutability, physical and logical isolation. We have intelligence built in with AI-powered recovery. And just one, I guess, fun fact for everybody: our cyber vault is the only solution in the industry that is endorsed by Sheltered Harbor, meeting all the needs of the financial sector. >> So it's interesting, when you think about the, the NIST framework for cybersecurity, it's all about layers. You're sort of bringing that now to, to data protection, correct? >> Yeah. >> All right. In a minute we're gonna come back with Travis and dig into the news. We're gonna take a short break. Keep it right there. Okay. We're back with Jeff and Travis Vigil to dig deeper into the news. Guys, again, good to see you. Travis, if you could, maybe you, before we get into the news, can you set the business context for us? What's going on out there? >> Yeah, thanks for that question, Dave.
To set a little bit of the context, when you look at the data protection market, Dell has been a leader in providing solutions to customers for going on nearly two decades now. We have tens of thousands of people using our appliances. We have multiple thousands of people using our latest modern, simple PowerProtect Data Manager software. And as Jeff mentioned, we have, you know, 1700 customers protecting 14 exabytes of data in the public clouds today. And that foundation gives us a unique vantage point. We talked to a lot of customers, and they're really telling us three things. They want simple solutions, they want us to help them modernize, and they want us to, as the highest priority, maintain that high degree of resiliency that they expect from our data protection solutions. So that's the backdrop to the news today. And, and as we go through the news, I think you'll, you'll agree that each of these announcements delivers on those pillars. In particular, today we're announcing the PowerProtect Data Manager appliance, we are announcing PowerProtect Cyber Recovery enhancements, and we are announcing enhancements to our Apex Data Storage Services. >> Okay, so three pieces. Let's, let's dig into that. It's interesting, appliances, everybody wants software, but then you talk to customers and they're like, well, we actually want appliances, because we just wanna put it in and it works, right? It performs great. So, so what do we need to know about the appliance? What's the news there? >> Well, you know, part of the reason I gave you some of those stats to begin with is that we have this strong foundation of, of experience, but also intellectual property components that we've taken, that have been battle tested in the market, and we've put them together in a new simple, integrated appliance that really combines the best of the target appliance capabilities we have with that modern, simple software. And we've integrated it from the, you know, sort of taking all of those pieces, putting them together in a simple, easy to use and easy to scale interface for customers. >> So the premise that I've been putting forth for, you know, months now, probably well, well over a year, is that, that data protection is becoming an extension of your, your cybersecurity strategies. So I'm interested in your perspective on cyber recovery, and the specific news that you have there. >> Yeah, you know, we, we are, in addition to simplifying things via the, the appliance, we are providing solutions for customers no matter where they're deploying. And cyber recovery, especially when it comes to cloud deployments, is an increasing area of interest and deployment that we see with our customers. So what we're announcing today is that we're expanding our cyber recovery services to be available in Google Cloud. With this announcement, it means we're available in all three of the major clouds, and it really provides customers the flexibility to secure their data no matter if they're running, you know, on premises, in a colo, at the edge, or in the public cloud. And the other nice thing about this, this announcement is that you have the ability to use Google Cloud as a cyber recovery vault. That really allows customers to isolate critical data, and they can recover that critical data from the vault back to on premises, or from that vault back to running their cyber protection or their data protection solutions in the public cloud. >> I always invoke my, my favorite Matt Baker here.
It's not a zero sum game, but this is a perfect example where there are opportunities for a company like Dell to partner with the public cloud provider. You've got capabilities that don't exist there. You've got the on-prem capabilities. We can talk about edge all day, but that's a different topic. Okay, so my, my other question, Travis, is how does this all fit into Apex? We hear a lot about Apex as a service, it's sort of the new hot thing. What's happening there? What's the news around Apex? >> Yeah, we, we've seen incredible momentum with our Apex solutions since we introduced data protection options into them earlier this year. And we're really building on that momentum with this announcement, being, you know, providing solutions that allow customers to consume flexibly. And so what we're announcing specifically is that we're expanding Apex Data Storage Services to include a data protection option. And like with all Apex offers, it's a pay-as-you-go solution that really streamlines the process of customers purchasing, deploying, maintaining and managing their backup software. All a customer really needs to do is, you know, specify their base capacity, they specify their performance tier, they tell us whether they want a one-year term or a three-year term, and we take it from there. We, we get them up and running so they can start deploying and consuming flexibly. And as with many of our Apex solutions, it's a simple user experience, all exposed through a unified Apex console. >> Okay. So you're keeping it simple, like, I think large, medium, small, you know, we hear a lot about t-shirt sizes. I, I'm a big fan of that, cuz you guys should be smart enough to figure out, you know, based on my workload, what I, what I need. How different is this? I wonder if you guys could, could, could address this. Jeff, maybe you can start. >> Sure. I'll start, and then, you know, Travis, you jump in when I screw up here. So first I'd say we offer innovative multi-cloud data protection solutions that deliver the performance, efficiency and scale that our customers demand and require. We support, as Travis said, all the major public clouds. We have a broad ecosystem of workload support, and I guess the, the great news is we're up to 80% more cost effective than any of the competition. >> 80%? That's a big number, right. Travis, what's your point of view on this? >> Yeah, I, I think number one, end-to-end data protection. We, we are that one stop shop that I talked about. Whether it's a simplified appliance, whether it's deployed in the cloud, whether it's at the edge, whether it's integrated appliances, target appliances, software, we have solutions that span the gamut. As a service, I mentioned the Apex solution as well. So really we can, we can provide solutions that help support customers and protect them, any workload, any cloud, anywhere that data lives, edge, core to cloud. The other thing that we hear as a, as a, a big differentiator for Dell, and, and Jeff touched on this a little bit earlier, is our intelligent cyber resiliency. We have a unique combination in, in the market where we can offer immutability, or protection against deletion, as, as sort of that first line of defense. But we can also offer a second level of defense, which is isolation, talking, talking about data vaults or cyber vaults and cyber recovery. And, more importantly, the intelligence that goes around that vault. It can look at detecting cyber attacks, it can help customers speed time to recovery, and it really provides AI and ML to help early diagnosis of a cyber attack and fast recovery should a cyber attack occur. And, and you know, if you look at customer adoption of that solution, specifically in the clouds, we have over 1300 customers utilizing PowerProtect Cyber Recovery.
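As a purely illustrative sketch of the kind of signal such vault analytics can watch for, and not a description of Dell's actual implementation, the fragment below flags a backup cycle whose volume of changed data jumps far outside its recent baseline; the function name, threshold, and sample numbers are all hypothetical.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical illustration only: flag a backup cycle whose changed-data
// volume sits far outside the recent baseline, one crude signal (among
// many) that mass encryption or deletion may be under way.
bool looks_anomalous(const std::vector<double>& history_gb, double latest_gb,
                     double z_threshold = 3.0) {
    if (history_gb.size() < 2) return false;          // not enough baseline yet
    double mean = 0.0;
    for (double v : history_gb) mean += v;
    mean /= history_gb.size();
    double var = 0.0;
    for (double v : history_gb) var += (v - mean) * (v - mean);
    const double stddev = std::sqrt(var / history_gb.size());
    if (stddev == 0.0) return latest_gb != mean;
    return (latest_gb - mean) / stddev > z_threshold;  // only spikes, not dips
}

int main() {
    // Daily changed-data volumes (GB) from the last week of backup cycles.
    std::vector<double> history = {42.0, 39.5, 44.1, 40.7, 43.3, 41.8, 40.2};
    double today = 310.0;                              // sudden ~7x spike
    std::printf("Anomalous cycle: %s\n",
                looks_anomalous(history, today) ? "yes" : "no");
    return 0;
}
```

Production analytics of this kind typically combine many signals, such as content entropy and file-type churn, with trained models, but comparing each cycle against a learned baseline is the core idea.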
>> So I think it's fair to say that your, I mean, your portfolio has obviously been a big differentiator. Whenever I talk to, you know, your finance team, Michael Dell, et cetera, that end-to-end capability, that, that ability to manage throughout the supply chain. We actually just did an event recently with you guys where you went into what you're doing to make infrastructure trusted. And so my take on that is, in a lot of respects, you're shifting, you know, the client's burden to your R&D. Now they have a lot of work to do, so it's, it's not like they can go home and just relax, but, but that's a key part of the partnership that I see. Jeff, I wonder if you could give us the, the, the final thoughts. >> Sure. Dell has a long history of being a trusted partner within IT, right? So we have unmatched capabilities. Going back to your point, we have the broadest portfolio. We have, you know, we're a leader in every category that we participate in. We have a broad, deep breadth of portfolio. We have scale. We have innovation that is just unmatched. Within data protection itself, we are the trusted market leader, no ifs, ands, or buts. We're number one for both data protection software and appliances per IDC, and we were just named, for the 17th consecutive time, a leader in the, the Gartner Magic Quadrant. So bottom line is, customers can count on Dell. >> Yeah, and I think again, we're seeing the evolution of, of data protection. It's not like the last 10 years. It's really becoming an adjacency and really a key component of your cyber strategy. I think those two parts of the organization are coming together. So guys, really appreciate your time. Thanks. >> Thank you, sir. >> Thanks, Dave. >> All right, in a moment I'm gonna come right back and summarize what we learned today, what actions you can take for your business. You're watching The Future of Multi-Cloud Data Protection, made possible by Dell in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. Right back. >> In our data driven world, protecting data has never been more critical. To guard against everything from cyber incidents to unplanned outages, you need a cyber resilient, multi-cloud data protection strategy. >> It's not a matter of if you're gonna get hacked, it's a matter of when. And I wanna know that I can recover and continue to recover each day. >> It is important to have a cyber security and a cyber resiliency plan in place, because the threat of cyber attack is imminent. >> PowerProtect Data Manager from Dell Technologies helps deliver the data protection and security confidence you would expect from a trusted partner and market leader. >> We chose PowerProtect Data Manager because we've been a strategic partner with Dell Technologies for roughly 20 years now. Our partnership with Dell Technologies has provided us with the ability to scale and grow as we've transitioned from 10 billion in assets to 20 billion. >> With PowerProtect Data Manager, you can enjoy exceptional ease of use to increase your efficiency and reduce costs.
>> I installed it by myself, learned it by myself. It's very intuitive. >> Restoring a machine with PowerProtect Data Manager is fast. We can fully manage PowerProtect through vCenter. We can recover a whole machine in seconds. >> Data Manager offers innovation such as transparent snapshots to simplify virtual machine backups, and it goes beyond backup and restore to provide valuable insights into protected data, workloads and VMs. >> In our previous environment, it would take anywhere from three to six hours at night to do a single backup of each VM. Now we're backing up hourly, and it takes two to three seconds with the transparent snapshots. >> With PowerProtect Data Manager, you get the peace of mind knowing that your data is safe and available whenever you need it. >> Data is extremely important. We can't afford to lose any data. We need things just to work. >> Start your journey to modern data protection with Dell PowerProtect Data Manager. Visit dell.com/PowerProtect Data Manager. >> We put forth the premise in our introduction that the worlds of data protection and cybersecurity must be more integrated. We said that data recovery strategies have to be built into security practices and procedures, and by default this should include modern hardware and software. Now, in addition to reviewing some of the challenges that customers face, which have been pretty well documented, we heard about new products that Dell Technologies is bringing to the marketplace that specifically address these customer concerns. There were three that we talked about today. First, the PowerProtect Data Manager appliance, which is an integrated system taking advantage of Dell's history in data protection but adding new capabilities. And I want to come back to that in a moment. Second is Dell's PowerProtect Cyber Recovery for Google Cloud Platform. This rounds out the big three public cloud providers for Dell, which joins AWS and Azure support. Now finally, Dell has made its target backup appliances available in Apex. You might recall earlier this year we saw the introduction from Dell of Apex Backup Services, and then in May at Dell Technologies World we heard about the introduction of Apex Cyber Recovery Services. And today Dell is making its most popular backup appliances available in Apex. Now I wanna come back to the PowerProtect Data Manager appliance, because it's a new integrated appliance. And I asked Dell off camera really what is so special about these new systems and what's really different from the competition, because, look, everyone offers some kind of integrated appliance. So I heard a number of items. Dell talked about simplicity and efficiency and containers and Kubernetes. So I kind of kept pushing and got to what I think is the heart of the matter in two really important areas. One is simplicity.
Now this is based on Dell benchmarks, so hopefully these are things that you can explore in more detail with Dell to see if and how they apply to your business. So if you want more information, go to the data protectionPage@dell.com. You can find that at dell.com/data protection. And all the content here and other videos are available on demand@thecube.net. Check out our series on the blueprint for trusted infrastructure, it's related and has some additional information. And go to silicon angle.com for all the news and analysis related to these and other announcements. This is Dave Valante. Thanks for watching the future of multi-cloud protection made possible by Dell in collaboration with the Cube, your leader in enterprise and emerging tech coverage.

Published Date : Nov 17 2022

The Future of Multicloud Data Protection is Here FULL EPISODE V3


 

>>Prior to the pandemic, organizations were largely optimized for efficiency as the best path to bottom line profits. Many CIOs tell the cube privately that they were caught off guard by the degree to which their businesses required greater resiliency beyond their somewhat cumbersome disaster recovery processes. And the lack of that business resilience has actually cost firms because they were unable to respond to changing market forces. And certainly we've seen this dynamic with supply chain challenges and there's a little doubt. We're also seeing it in the area of cybersecurity generally, and data recovery. Specifically. Over the past 30 plus months, the rapid adoption of cloud to support remote workers and build in business resilience had the unintended consequences of expanding attack vectors, which brought an escalation of risk from cyber crime. Well, security in the public clouds is certainly world class. The result of multi-cloud has brought with it multiple shared responsibility models, multiple ways of implementing security policies across clouds and on-prem. >>And at the end of the day, more, not less complexity, but there's a positive side to this story. The good news is that public policy industry collaboration and technology innovation is moving fast to accelerate data protection and cybersecurity strategies with a focus on modernizing infrastructure, securing the digital supply chain, and very importantly, simplifying the integration of data protection and cybersecurity. Today there's heightened awareness that the world of data protection is not only an adjacency to, but it's becoming a fundamental component of cybersecurity strategies. In particular, in order to build more resilience into a business, data protection, people, technologies, and processes must be more tightly coordinated with security operations. Hello and welcome to the future of Multi-Cloud Data Protection Made Possible by Dell in collaboration with the Cube. My name is Dave Valante and I'll be your host today. In this segment, we welcome into the Cube, two senior executives from Dell who will share details on new technology announcements that directly address these challenges. >>Jeff Boudreaux is the president and general manager of Dell's Infrastructure Solutions Group, isg, and he's gonna share his perspectives on the market and the challenges he's hearing from customers. And we're gonna ask Jeff to double click on the messages that Dell is putting into the marketplace and give us his detailed point of view on what it means for customers. Now Jeff is gonna be joined by Travis Vhi. Travis is the senior Vice President of product management for ISG at Dell Technologies, and he's gonna give us details on the products that are being announced today and go into the hard news. Now, we're also gonna challenge our guests to explain why Dell's approach is unique and different in the marketplace. Thanks for being with us. Let's get right into it. We're here with Jeff Padro and Travis Behill. We're gonna dig into the details about Dell's big data protection announcement. Guys, good to see you. Thanks >>For coming in. Good to see you. Thank you for having us. >>You're very welcome. Right. Let's start off, Jeff, with a high level, you know, I'd like to talk about the customer, what challenges they're facing. You're talking to customers all the time, What are they telling you? >>Sure. 
As you know, we do, we spend a lot of time with our customers, specifically listening, learning, understanding their use cases, their pain points within their specific environments. They tell us a lot. Notice no surprise to any of us, that data is a key theme that they talk about. It's one of their most important, important assets. They need to extract more value from that data to fuel their business models, their innovation engines, their competitive edge. So they need to make sure that that data is accessible, it's secure in its recoverable, especially in today's world with the increased cyber attacks. >>Okay. So maybe we could get into some of those, those challenges. I mean, when, when you talk about things like data sprawl, what do you mean by that? What should people know? Sure. >>So for those big three themes, I'd say, you know, you have data sprawl, which is the big one, which is all about the massive amounts of data. It's the growth of that data, which is growing at an unprecedented rates. It's the gravity of that data and the reality of the multi-cloud sprawl. So stuff is just everywhere, right? Which increases that service a tax base for cyber criminals. >>And and by gravity you mean the data's there and people don't wanna move it. >>It's everywhere, right? And so when it lands someplace, I think edge, core or cloud, it's there and that's, it's something we have to help our customers with. >>Okay, so just it's nuanced cuz complexity has other layers. What, what are those >>Layers? Sure. When we talk to our customers, they tell us complexity is one of their big themes. And specifically it's around data complexity. We talked about that growth and gravity of the data. We talk about multi-cloud complexity and we talk about multi-cloud sprawl. So multiple vendors, multiple contracts, multiple tool chains, and none of those work together in this, you know, multi-cloud world. Then that drives their security complexity. So we talk about that increased attack surface, but this really drives a lot of operational complexity for their teams. Think about we're a lack consistency through everything. So people, process, tools, all that stuff, which is really wasting time and money for our customers. >>So how does that affect the cyber strategies and the, I mean, I've often said the ciso now they have this shared responsibility model, they have to do that across multiple clouds. Every cloud has its own security policies and, and frameworks and syntax. So maybe you could double click on your perspective on that. >>Sure. I'd say the big, you know, the big challenge customers have seen, it's really inadequate cyber resiliency. And specifically they're feeling, feeling very exposed. And today as the world with cyber tax being more and more sophisticated, if something goes wrong, it is a real challenge for them to get back up and running quickly. And that's why this is such a, a big topic for CEOs and businesses around the world. >>You know, it's funny, I said this in my open, I, I think that prior to the pandemic businesses were optimized for efficiency and now they're like, Wow, we have to actually put some headroom into the system to be more resilient. You know, I you hearing >>That? Yeah, we absolutely are. I mean, the customers really, they're asking us for help, right? It's one of the big things we're learning and hearing from them. And it's really about three things, one's about simplifying it, two, it's really helping them to extract more value from their data. 
And then the third big, big piece is ensuring their data is protected and recoverable regardless of where it is going back to that data gravity and that very, you know, the multicloud world just recently, I don't know if you've seen it, but the global data protected, excuse me, the global data protection index gdp. >>I, Yes. Jesus. Not to be confused with gdpr, >>Actually that was released today and confirms everything we just talked about around customer challenges, but also it highlights an importance of having a very cyber, a robust cyber resilient data protection strategy. >>Yeah, I haven't seen the latest, but I, I want to dig into it. I think this is, you've done this many, many years in a row. I like to look at the, the, the time series and see how things have changed. All right. At, at a high level, Jeff, can you kind of address why Dell and from your point of view is best suited? >>Sure. So we believe there's a better way or a better approach on how to handle this. We think Dell is uniquely positioned to help our customers as a one stop shop, if you will, for that cyber resilient multi-cloud data protection solution in needs. We take a modern, a simple and resilient approach, >>But what does that mean? What, what do you mean by modern? >>Sure. So modern, we talk about our software defined architecture, right? It's really designed to meet the needs not only of today, but really into the future. And we protect data across any cloud in any workload. So we have a proven track record doing this today. We have more than 1700 customers that trust us to protect them more than 14 exabytes of their data in the cloud today. >>Okay, so you said modern, simple and resilient. What, what do you mean by simple? Sure. >>We wanna provide simplicity everywhere, going back to helping with the complexity challenge, and that's from deployment to consumption to management and support. So our offers will deploy in minutes. They are easy to operate and use, and we support flexible consumption models for whatever the customer may desire. So traditional subscription or as a service. >>And when you, when you talk about resilient, I mean, I, I put forth that premise, but it's hard because people say, Well, that's gonna gonna cost us more. Well, it may, but you're gonna also reduce your, your risk. So how, what's your point of view on resilience? >>Yeah, I think it's, it's something all customers need. So we're gonna be providing a comprehensive and resilient portfolio of cyber solutions that are secured by design. We have some ver some unique capabilities in a combination of things like built in amenability, physical and logical isolation. We have intelligence built in with AI par recovery and just one, I guess fun fact for everybody is we have our cyber vault is the only solution in the industry that is endorsed by Sheltered Harbor that meets all the needs of the financial sector. >>So it's interesting when you think about the, the NIST framework for cyber security, it's all about about layers. You're sort of bringing that now to, to data protection, correct? Yeah. All right. In a minute we're gonna come back with Travis and dig into the news. We're gonna take a short break. Keep it right there. Okay. We're back with Jeff and Travis Vehill to dig deeper into the news. Guys, again, good to see you. Travis, if you could, maybe you, before we get into the news, can you set the business context for us? What's going on out there? >>Yeah, thanks for that question, Dave. 
To set a little bit of the context, when you look at the data protection market, Dell has been a leader in providing solutions to customers for going on nearly two decades now. We have tens of thousands of people using our appliances. We have multiple thousands of people using our latest modern simple power protect data managers software. And as Jeff mentioned, we have, you know, 1700 customers protecting 14 exabytes of data in the public clouds today. And that foundation gives us a unique vantage point. We talked to a lot of customers and they're really telling us three things. They want simple solutions, they want us to help them modernize and they want us to add as the highest priority, maintain that high degree of resiliency that they expect from our data protection solutions. So tho that's the backdrop to the news today. And, and as we go through the news, I think you'll, you'll agree that each of these announcements deliver on those pillars. And in particular today we're announcing the Power Protect data manager appliance. We are announcing power protect cyber recovery enhancements, and we are announcing enhancements to our Apex data storage >>Services. Okay, so three pieces. Let's, let's dig to that. It's interesting appliance, everybody wants software, but then you talk to customers and they're like, Well, we actually want appliances because we just wanna put it in and it works, right? Performs great. So, so what do we need to know about the appliance? What's the news there? Well, >>You know, part of the reason I gave you some of those stats to begin with is that we have at this strong foundation of, of experience, but also intellectual property components that we've taken that have been battle tested in the market. And we've put them together in a new simple integrated appliance that really combines the best of the target appliance capabilities we have with that modern simple software. And we've integrated it from the, you know, sort of taking all of those pieces, putting them together in a simple, easy to use and easy to scale interface for customers. >>So the premise that I've been putting forth for, you know, months now, probably well, well over a year, is that, that that data protection is becoming an extension of your, your cybersecurity strategies. So I'm interested in your perspective on cyber recovery. You, you have specific news that you have there? >>Yeah, you know, we, we are, in addition to simplifying things via the, the appliance, we are providing solutions for customers no matter where they're deploying. And cyber recovery, especially when it comes to cloud deployments, is an increasing area of interest and deployment that we see with our customers. So what we're announcing today is that we're expanding our cyber recovery services to be available in Google Cloud with this announcement. It means we're available in all three of the major clouds and it really provides customers the flexibility to secure their data no matter if they're running, you know, on premises in a colo at the edge in the public cloud. And the other nice thing about this, this announcement is that you have the ability to use Google Cloud as a cyber recovery vault that really allows customers to isolate critical data and they can recover that critical data from the vault back to on-premises or from that vault back to running their cyber cyber protection or their data protection solutions in the public cloud. >>I always invoke my, my favorite Matt Baker here. 
It's not a zero sum game, but this is a perfect example where there's opportunities for a company like Dell to partner with the public cloud provider. You've got capabilities that don't exist there. You've got the on-prem capabilities. We could talk about edge all day, but that's a different topic. Okay, so Mike, my other question Travis, is how does this all fit into Apex? We hear a lot about Apex as a service, it's sort of the new hot thing. What's happening there? What's the news around Apex? >>Yeah, we, we've seen incredible momentum with our Apex solutions since we introduced data protection options into them earlier this year. And we're really building on that momentum with this announcement being, you know, providing solutions that allow customers to consume flexibly. And so what we're announcing specifically is that we're expanding Apex data storage services to include a data protection option. And it's like with all Apex offers, it's a pay as you go solution really streamlines the process of customers purchasing, deploying, maintaining and managing their backup software. All a customer really needs to do is, you know, specify their base capacity, they specify their performance tier, they tell us do they want a a one year term or a three year term and we take it from there. We, we get them up and running so they can start deploying and consuming flexibly. And it's, as with many of our Apex solutions, it's a simple user experience all exposed through a unified Apex console. >>Okay. So it's you keeping it simple, like I think large, medium, small, you know, we hear a lot about t-shirt sizes. I I'm a big fan of that cuz you guys should be smart enough to figure out, you know, based on my workload, what I, what I need, how different is this? I wonder if you guys could, could, could address this. Jeff, maybe you can, >>You can start. Sure. I'll start and then pitch me, you know, Travis, you you jump in when I screw up here. So, awesome. So first I'd say we offer innovative multi-cloud data protection solutions. We provide that deliver performance, efficiency and scale that our customers demand and require. We support as Travis at all the major public clouds. We have a broad ecosystem of workload support and I guess the, the great news is we're up to 80% more cost effective than any of the competition. >>80%. 80%, That's a big number, right. Travis, what's your point of view on this? Yeah, >>I, I think number one, end to end data protection. We, we are that one stop shop that I talked about. Whether it's a simplified appliance, whether it's deployed in the cloud, whether it's at the edge, whether it's integrated appliances, target appliances, software, we have solutions that span the gamut as a service. I mentioned the Apex solution as well. So really we can, we can provide solutions that help support customers and protect them, any workload, any cloud, anywhere that data lives edge core to cloud. The other thing that we hear as a, as a, a big differentiator for Dell and, and Jeff touched on on this a little bit earlier, is our intelligent cyber resiliency. We have a unique combination in, in the market where we can offer immutability or protection against deletion as, as sort of that first line of defense. But we can also offer a second level of defense, which is isolation, talking, talking about data vaults or cyber vaults and cyber recovery. And the, at more importantly, the intelligence that goes around that vault. 
It can look at detecting cyber attacks, it can help customers speed time to recovery, and it really provides AI and ML to help early diagnosis of a cyber attack and fast recovery should a cyber attack occur. And you know, if you look at customer adoption of that solution, specifically in the clouds, we have over 1,300 customers utilizing Power Protect Cyber Recovery. >>So I think it's fair to say that your portfolio has obviously been a big differentiator. Whenever I talk to, you know, your finance team, Michael Dell, et cetera, it's that end-to-end capability, that ability to manage throughout the supply chain. We actually just did an event recently with you guys where you went into what you're doing to make infrastructure trusted. And so my take on that is, in a lot of respects, you're shifting the client's burden to your R&D. Now they have a lot of work to do, so it's not like they can go home and just relax, but that's a key part of the partnership that I see. Jeff, I wonder if you could give us the final thoughts. >>Sure. Dell has a long history of being a trusted partner with IT, right? So we have unmatched capabilities. Going back to your point, we have the broadest portfolio. We're a leader in every category that we participate in. We have a broad, deep breadth of portfolio. We have scale, we have innovation that is just unmatched within data protection itself. We are the trusted market leader, no ifs, ands or buts. We're number one for both data protection software and appliances per IDC, and we were just named, for the 17th consecutive time, the leader in the Gartner Magic Quadrant. So bottom line is, customers can count on Dell. >>Yeah, and I think again, we're seeing the evolution of data protection. It's not like the last 10 years; it's really becoming an adjacency and really a key component of your cyber strategy. I think those two parts of the organization are coming together. So guys, really appreciate your time. >>Thank you, sir. >>Thanks, Dave. >>Travis, good to see you. All right, in a moment I'm gonna come right back and summarize what we learned today and what actions you can take for your business. You're watching the future of multi-cloud data protection, made possible by Dell in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. Right back. >>In our data-driven world, protecting data has never been more critical, to guard against everything from cyber incidents to unplanned outages. You need a cyber-resilient, multi-cloud data protection strategy. >>It's not a matter of if you're gonna get hacked, it's a matter of when. And I wanna know that I can recover and continue to recover each day. >>It is important to have a cybersecurity and a cyber resiliency plan in place, because the threat of cyber attacks is imminent. >>Power Protect Data Manager from Dell Technologies helps deliver the data protection and security confidence you would expect from a trusted partner and market leader. >>We chose Power Protect Data Manager because we've been a strategic partner with Dell Technologies for roughly 20 years now. Our partnership with Dell Technologies has provided us with the ability to scale and grow as we've transitioned from 10 billion in assets to 20 billion. >>With Power Protect Data Manager, you can enjoy exceptional ease of use to increase your efficiency and reduce costs.
>>Got it installed by myself, learned it by myself. Very intuitive. >>Restoring a machine with Power Protect Data Manager is fast. We can fully manage Power Protect through vCenter. We can recover a whole machine in seconds. >>Data Manager offers innovations such as transparent snapshots to simplify virtual machine backups, and it goes beyond backup and restore to provide valuable insights into protected data, workloads and VMs. >>In our previous environment, it would take anywhere from three to six hours at night to do a single backup of each VM. Now we're backing up hourly, and it takes two to three seconds with the transparent snapshots. >>With Power Protect Data Manager, you get the peace of mind knowing that your data is safe and available whenever you need it. >>Data is extremely important. We can't afford to lose any data. We need things just to work. >>Start your journey to modern data protection with Dell Power Protect Data Manager. Visit Dell.com/PowerProtect Data Manager. >>We put forth the premise in our introduction that the worlds of data protection and cybersecurity must be more integrated. We said that data recovery strategies have to be built into security practices and procedures, and by default this should include modern hardware and software. Now, in addition to reviewing some of the challenges that customers face, which have been pretty well documented, we heard about new products that Dell Technologies is bringing to the marketplace that specifically address these customer concerns. There were three that we talked about today. First, the Power Protect Data Manager Appliance, which is an integrated system taking advantage of Dell's history in data protection, but adding new capabilities. And I want to come back to that in a moment. Second is Dell's Power Protect Cyber Recovery for Google Cloud Platform. This rounds out the big three public cloud providers for Dell, which joins AWS and Azure support. >>Now finally, Dell has made its target backup appliances available in Apex. You might recall, earlier this year we saw the introduction from Dell of Apex Backup Services, and then in May at Dell Technologies World we heard about the introduction of Apex Cyber Recovery Services. And today Dell is making its most popular backup appliances available in Apex. Now, I wanna come back to the Power Protect Data Manager appliance, because it's a new integrated appliance. And I asked Dell off camera really what is so special about these new systems and what's really different from the competition, because look, everyone offers some kind of integrated appliance. So I heard a number of items. Dell talked about simplicity and efficiency and containers and Kubernetes. So I kind of kept pushing and got to what I think is the heart of the matter in two really important areas. One is simplicity. >>Dell claims that customers can deploy the system in half the time relative to the competition. So we're talking minutes to deploy, and of course that's gonna lead to much simpler management. And the second real difference I heard was backup and restore performance for VMware workloads. In particular, Dell has developed transparent snapshot capabilities to fundamentally change the way VMs are protected, which leads to faster backups and restores with less impact on virtual infrastructure.
Dell believes this new development is unique in the market, and claims that in its benchmarks the new appliance was able to back up 500 virtual machines in 47% less time compared to a leading competitor. Now, this is based on Dell benchmarks, so hopefully these are things that you can explore in more detail with Dell to see if and how they apply to your business. If you want more information, go to the data protection page at dell.com; you can find that at dell.com/data protection. And all the content here and other videos are available on demand at thecube.net. Check out our series on the blueprint for trusted infrastructure; it's related and has some additional information. And go to siliconangle.com for all the news and analysis related to these and other announcements. This is Dave Vellante. Thanks for watching the future of multi-cloud data protection, made possible by Dell in collaboration with theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Oct 28 2022

theCUBE Previews Supercomputing 22


 

(inspirational music) >> The history of high performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid 1960s by Controlled Data Corporations, CDC, designed by an engineering team led by Seymour Cray, the father of Supercomputing. He left CDC in the 70's to start his own company, of course, carrying his own name. Now that company Cray, became the market leader in the 70's and the 80's, and then the decade of the 80's saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself, Steve Chen, who worked for Cray, then went out to start his own companies. Danny Hillis, of Thinking Machines. Steve Frank of Kendall Square Research. Steve Wallach tried to build a mini supercomputer at Convex. These new entrants, they all failed, for the most part because the market at the time just wasn't really large enough and the economics of these systems really weren't that attractive. Now, the late 80's and the 90's saw big Japanese companies like NEC and Fujitsu entering the fray and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the top 100 list of high performance computing. And today, we're now entering the exascale era, with systems that can complete a billion, billion calculations per second, or 10 to the 18th power. Astounding. And today, the high performance computing market generates north of $30 billion annually and is growing in the high single digits. Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, automotive industries, and many other high value examples. And supercomputers are expensive. You know, the highest performing supercomputers used to cost tens of millions of dollars, maybe $30 million. And we've seen that steadily rise to over $200 million. And today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five mega trends going on in HPC today, in addition to this massive rising cost that we just talked about. One, systems are becoming more distributed and less monolithic. The second is the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 today dominates processor shipments, it's going to probably continue to do so. Power has some presence, but ARM is growing very rapidly. Nvidia with GPUs is becoming a major player with AI coming in, we'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities with hundreds of thousands of cores that are being liquid-cooled with novel phase change technology. 
The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger and massive, massive data sets to attack new problems and accelerate research in dozens of industries. Now, the fourth big trend, HPC in the cloud reached critical mass at the end of the last decade. And all of the major hyperscalers are providing HPE, HPC as a service capability. Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced a hybrid QC, with the goal of having a stable system in the second half of this decade, most likely around 2027, 2028. Welcome to theCUBE's preview of SC22, the big supercomputing show which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you. >> Hey, good to see you too, Dave. >> Oh, you heard my narrative up front Dave. You got a technical background, CTO chops, what did I miss? What are the major trends that you're seeing? >> I don't think you really- You didn't miss anything, I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling. And these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting edge, leading edge, highest performing supercompute technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers and seeing what kind of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions. >> Yeah, so there's this sort of theory that the world is moving toward this connectivity beyond compute-centricity to connectivity-centric. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting, you know, supercomputing design? >> Well, so if you're designing an island that is, you know, tip of this spear, doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, then connectivity is important simply from a speeds and feeds perspective. You know, lowest latency connectivity between nodes and things like that. But as we sort of democratize supercomputing, to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leverage by enterprises, you start asking the question, "Hey, wouldn't it be kind of cool if we could have this hooked up into our ethernet networks?" And so, that's a whole interesting subject to explore because with things like RDMA over converged ethernet, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail, opening up the box of looking at the Nix, or the storage cards that are in the box, is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now. >> Yeah, when you look at the SC22 website, I mean, they're covering all kinds of different areas. 
They got, you know, parallel clustered systems, AI, storage, you know, servers, system software, application software, security. I mean, wireless HPC is no longer this niche. It really touches virtually every industry, and most industries anyway, and is really driving new advancements in society and research, solving some of the world's hardest problems. So what are some of the topics that you want to cover at SC22? >> Well, I kind of, I touched on some of them. I really want to ask people questions about this idea of HPC moving from just academia into the enterprise. And the question of, does that mean that there are architectural concerns that people have that might not be the same as the concerns that someone in academia or in a lab environment would have? And by the way, just like, little historical context, I can't help it. I just went through the upgrade from iPhone 12 to iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million! This was, I don't even know, it was $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So, similarly, when we talk about just where are we from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. You know, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? You know, do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that is a little bit unique in terms of the things that we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components. >> You know, Dave, I'm looking at the exhibitor floor. It's like, everybody is here. NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, Weka IO, Pure Storage, companies I've never heard of. It's just, hundreds and hundreds of exhibitors, Nvidia, Oracle, Penguin Solutions, I mean, just on and on and on. Google, of course, has a presence there, theCUBE has a major presence. We got a 20 x 20 booth. So, it's really, as I say, to your point, HPC is going mainstream. You know, I think a lot of times, we think of HPC supercomputing as this just sort of, off in the eclectic, far off corner, but it really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well. >> Yeah, this is like the Formula 1 of computing. So if you're a Motorsports nerd, you know that F1 is the pinnacle of the sport. SC22, this is where everybody wants to be. Another little historical reference that comes to mind, there was a time in, I think, the early 2000's when Unisys partnered with Intel and Microsoft to come up with, I think it was the ES7000, which was supposed to be the mainframe, the sort of Intel mainframe. It was an early attempt to use... And I don't say this in a derogatory way, commodity resources to create something really, really powerful. 
Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom, Nvidia, et al, they're all contributing components to this mix in addition to, of course, the microprocessor folks like AMD and Intel and others. So yeah, this is big-time nerd fest. Lots of academics will still be there. The supercomputing.org, this loose affiliation that's been running these SC events for years. They have a major focus, major hooks into academia. They're bringing in legit computer scientists to this event. This is all cutting edge stuff. >> Yeah. So like you said, it's going to be kind of, a lot of techies there, very technical computing, of course, audience. At the same time, we expect that there's going to be a fair amount, as they say, of crossover. And so, I'm excited to see what the coverage looks like. Yourself, John Furrier, Savannah, I think even Paul Gillin is going to attend the show, because I believe we're going to be there three days. So, you know, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So, if you are interested- >> Dave, Dave, I just have- Just something on that point. I think that's indicative of where this world is moving when you have Dell so directly involved in something like this, it's an indication that this is moving out of just the realm of academia and moving in the direction of enterprise. Because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there. >> Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David, david.nicholson@siliconangle.com. John Furrier is john@siliconangle.com. david.vellante@siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. Give you the last word here, Dave. >> No, I just, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting edge stuff and I'm really going to be exploring this question of where does it fit in the world of AI and ML? I think that's really going to be the center of what I'm really seeking to understand when I'm there. >> All right, Dave Nicholson. Thanks for your time. theCUBE at SC22. Don't miss it. Go to thecube.net, go to siliconangle.com for all the news. This is Dave Vellante for theCUBE and for Dave Nicholson. Thanks for watching. And we'll see you in Dallas. (inquisitive music)

Published Date : Oct 25 2022


Breaking Analysis: CEO Nuggets from Microsoft Ignite & Google Cloud Next


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> This past week we saw two of the Big 3 cloud providers present the latest update on their respective cloud visions, their business progress, their announcements and innovations. The content at these events had many overlapping themes, including modern cloud infrastructure at global scale, applying advanced machine intelligence, AKA AI, end-to-end data platforms, collaboration software. They talked a lot about the future of work automation. And they gave us a little taste, each company of the Metaverse Web 3.0 and much more. Despite these striking similarities, the differences between these two cloud platforms and that of AWS remains significant. With Microsoft leveraging its massive application software footprint to dominate virtually all markets and Google doing everything in its power to keep up with the frenetic pace of today's cloud innovation, which was set into motion a decade and a half ago by AWS. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we unpack the immense amount of content presented by the CEOs of Microsoft and Google Cloud at Microsoft Ignite and Google Cloud Next. We'll also quantify with ETR survey data the relative position of these two cloud giants in four key sectors: cloud IaaS, BI analytics, data platforms and collaboration software. Now one thing was clear this past week, hybrid events are the thing. Google Cloud Next took place live over a 24-hour period in six cities around the world, with the main gathering in New York City. Microsoft Ignite, which normally is attended by 30,000 people, had a smaller event in Seattle, in person with a virtual audience around the world. AWS re:Invent, of course, is much different. Yes, there's a virtual component at re:Invent, but it's all about a big live audience gathering the week after Thanksgiving, in the first week of December in Las Vegas. Regardless, Satya Nadella keynote address was prerecorded. It was highly produced and substantive. It was visionary, energetic with a strong message that Azure was a platform to allow customers to build their digital businesses. Doing more with less, which was a key theme of his. Nadella covered a lot of ground, starting with infrastructure from the compute, highlighting a collaboration with Arm-based, Ampere processors. New block storage, 60 regions, 175,000 miles of fiber cables around the world. He presented a meaningful multi-cloud message with Azure Arc to support on-prem and edge workloads, as well as of course the public cloud. And talked about confidential computing at the infrastructure level, a theme we hear from all cloud vendors. He then went deeper into the end-to-end data platform that Microsoft is building from the core data stores to analytics, to governance and the myriad tooling Microsoft offers. AI was next with a big focus on automation, AI, training models. He showed demos of machines coding and fixing code and machines automatically creating designs for creative workers and how Power Automate, Microsoft's RPA tooling, would combine with Microsoft Syntex to understand documents and provide standard ways for organizations to communicate with those documents. There was of course a big focus on Azure as developer cloud platform with GitHub Copilot as a linchpin using AI to assist coders in low-code and no-code innovations that are coming down the pipe. 
And another giant theme was a workforce transformation and how Microsoft is using its heritage and collaboration and productivity software to move beyond what Nadella called productivity paranoia, i.e., are remote workers doing their jobs? In a world where collaboration is built into intelligent workflows, and he even showed a glimpse of the future with AI-powered avatars and partnerships with Meta and Cisco with Teams of all firms. And finally, security with a bevy of tools from identity, endpoint, governance, et cetera, stressing a suite of tools from a single provider, i.e., Microsoft. So a couple points here. One, Microsoft is following in the footsteps of AWS with silicon advancements and didn't really emphasize that trend much except for the Ampere announcement. But it's building out cloud infrastructure at a massive scale, there is no debate about that. Its plan on data is to try and provide a somewhat more abstracted and simplified solutions, which differs a little bit from AWS's approach of the right database tool, for example, for the right job. Microsoft's automation play appears to provide simple individual productivity tools, kind of a ground up approach and make it really easy for users to drive these bottoms up initiatives. We heard from UiPath that forward five last month, a little bit of a different approach of horizontal automation, end-to-end across platforms. So quite a different play there. Microsoft's angle on workforce transformation is visionary and will continue to solidify in our view its dominant position with Teams and Microsoft 365, and it will drive cloud infrastructure consumption by default. On security as well as a cloud player, it has to have world-class security, and Azure does. There's not a lot of debate about that, but the knock on Microsoft is Patch Tuesday becomes Hack Wednesday because Microsoft releases so many patches, it's got so much Swiss cheese in its legacy estate and patching frequently, it becomes a roadmap and a trigger for hackers. Hey, patch Tuesday, these are all the exploits that you can go after so you can act before the patches are implemented. And so it's really become a problem for users. As well Microsoft is competing with many of the best-of-breed platforms like CrowdStrike and Okta, which have market momentum and appear to be more attractive horizontal plays for customers outside of just the Microsoft cloud. But again, it's Microsoft. They make it easy and very inexpensive to adopt. Now, despite the outstanding presentation by Satya Nadella, there are a couple of statements that should raise eyebrows. Here are two of them. First, as he said, Azure is the only cloud that supports all organizations and all workloads from enterprises to startups, to highly regulated industries. I had a conversation with Sarbjeet Johal about this, to make sure I wasn't just missing something and we were both surprised, somewhat, by this claim. I mean most certainly AWS supports more certifications for example, and we would think it has a reasonable case to dispute that claim. And the other statement, Nadella made, Azure is the only cloud provider enabling highly regulated industries to bring their most sensitive applications to the cloud. Now, reasonable people can debate whether AWS is there yet, but very clearly Oracle and IBM would have something to say about that statement. Now maybe it's not just, would say, "Oh, they're not real clouds, you know, they're just going to hosting in the cloud if you will." 
But still, when it comes to mission-critical applications, you would think Oracle is really the the leader there. Oh, and Satya also mentioned the claim that the Edge browser, the Microsoft Edge browser, no questions asked, he said, is the best browser for business. And we could see some people having some questions about that. Like isn't Edge based on Chrome? Anyway, so we just had to question these statements and challenge Microsoft to defend them because to us it's a little bit of BS and makes one wonder what else in such as awesome keynote and it was awesome, it was hyperbole. Okay, moving on to Google Cloud Next. The keynote started with Sundar Pichai doing a virtual session, he was remote, stressing the importance of Google Cloud. He mentioned that Google Cloud from its Q2 earnings was on a $25-billion annual run rate. What he didn't mention is that it's also on a 3.6 billion annual operating loss run rate based on its first half performance. Just saying. And we'll dig into that issue a little bit more later in this episode. He also stressed that the investments that Google has made to support its core business and search, like its global network of 22 subsea cables to support things like, YouTube video, great performance obviously that we all rely on, those innovations there. Innovations in BigQuery to support its search business and its threat analysis that it's always had and its AI, it's always been an AI-first company, he's stressed, that they're all leveraged by the Google Cloud Platform, GCP. This is all true by the way. Google has absolutely awesome tech and the talk, as well as his talk, Pichai, but also Kurian's was forward thinking and laid out a vision of the future. But it didn't address in our view, and I talked to Sarbjeet Johal about this as well, today's challenges to the degree that Microsoft did and we expect AWS will at re:Invent this year, it was more out there, more forward thinking, what's possible in the future, somewhat less about today's problem, so I think it's resonates less with today's enterprise players. Thomas Kurian then took over from Sundar Pichai and did a really good job of highlighting customers, and I think he has to, right? He has to say, "Look, we are in this game. We have customers, 9 out of the top 10 media firms use Google Cloud. 8 out of the top 10 manufacturers. 9 out of the top 10 retailers. Same for telecom, same for healthcare. 8 out of the top 10 retail banks." He and Sundar specifically referenced a number of companies, customers, including Avery Dennison, Groupe Renault, H&M, John Hopkins, Prudential, Minna Bank out of Japan, ANZ bank and many, many others during the session. So you know, they had some proof points and you got to give 'em props for that. Now like Microsoft, Google talked about infrastructure, they referenced training processors and regions and compute optionality and storage and how new workloads were emerging, particularly data-driven workloads in AI that required new infrastructure. He explicitly highlighted partnerships within Nvidia and Intel. I didn't see anything on Arm, which somewhat surprised me 'cause I believe Google's working on that or at least has come following in AWS's suit if you will, but maybe that's why they're not mentioning it or maybe I got to do more research there, but let's park that for a minute. But again, as we've extensively discussed in Breaking Analysis in our view when it comes to compute, AWS via its Annapurna acquisition is well ahead of the pack in this area. 
Arm is making its way into the enterprise, but all three companies are heavily investing in infrastructure, which is great news for customers and the ecosystem. We'll come back to that. Data and AI go hand in hand, and there was no shortage of data talk. Google didn't mention Snowflake or Databricks specifically, but it did mention, by the way, it mentioned Mongo a couple of times, but it did mention Google's, quote, Open Data cloud. Now maybe Google has used that term before, but Snowflake has been marketing the data cloud concept for a couple of years now. So that struck as a shot across the bow to one of its partners and obviously competitor, Snowflake. At BigQuery is a main centerpiece of Google's data strategy. Kurian talked about how they can take any data from any source in any format from any cloud provider with BigQuery Omni and aggregate and understand it. And with the support of Apache Iceberg and Delta and Hudi coming in the future and its open Data Cloud Alliance, they talked a lot about that. So without specifically mentioning Snowflake or Databricks, Kurian co-opted a lot of messaging from these two players, such as life and tech. Kurian also talked about Google Workspace and how it's now at 8 million users up from 6 million just two years ago. There's a lot of discussion on developer optionality and several details on tools supported and the open mantra of Google. And finally on security, Google brought out Kevin Mandian, he's a CUBE alum, extremely impressive individual who's CEO of Mandiant, a leading security service provider and consultancy that Google recently acquired for around 5.3 billion. They talked about moving from a shared responsibility model to a shared fate model, which is again, it's kind of a shot across AWS's bow, kind of shared responsibility model. It's unclear that Google will pay the same penalty if a customer doesn't live up to its portion of the shared responsibility, but we can probably assume that the customer is still going to bear the brunt of the pain, nonetheless. Mandiant is really interesting because it's a services play and Google has stated that it is not a services company, it's going to give partners in the channel plenty of room to play. So we'll see what it does with Mandiant. But Mandiant is a very strong enterprise capability and in the single most important area security. So interesting acquisition by Google. Now as well, unlike Microsoft, Google is not competing with security leaders like Okta and CrowdStrike. Rather, it's partnering aggressively with those firms and prominently putting them forth. All right. Let's get into the ETR survey data and see how Microsoft and Google are positioned in four key markets that we've mentioned before, IaaS, BI analytics, database data platforms and collaboration software. First, let's look at the IaaS cloud. ETR is just about to release its October survey, so I cannot share the that data yet. I can only show July data, but we're going to give you some directional hints throughout this conversation. This chart shows net score or spending momentum on the vertical axis and overlap or presence in the data, i.e., how pervasive the platform is. That's on the horizontal axis. And we've inserted the Wikibon estimates of IaaS revenue for the companies, the Big 3. Actually the Big 4, we included Alibaba. So a couple of points in this somewhat busy data chart. First, Microsoft and AWS as always are dominant on both axes. The red dotted line there at 40% on the vertical axis. 
That represents a highly elevated spending velocity and all of the Big 3 are above the line. Now at the same time, GCP is well behind the two leaders on the horizontal axis and you can see that in the table insert as well in our revenue estimates. Now why is Azure bigger in the ETR survey when AWS is larger according to the Wikibon revenue estimates? And the answer is because Microsoft with products like 365 and Teams will often be considered by respondents in the survey as cloud by customers, so they fit into that ETR category. But in the insert data we're stripping out applications and SaaS from Microsoft and Google and we're only isolating on IaaS. The other point is when you take a look at the early October returns, you see downward pressure as signified by those dotted arrows on every name. The only exception was Dell, or Dell and IBM, which showing slightly improved momentum. So the survey data generally confirms what we know that AWS and Azure have a massive lead and strong momentum in the marketplace. But the real story is below the line. Unlike Google Cloud, which is on pace to lose well over 3 billion on an operating basis this year, AWS's operating profit is around $20 billion annually. Microsoft's Intelligent Cloud generated more than $30 billion in operating income last fiscal year. Let that sink in for a moment. Now again, that's not to say Google doesn't have traction, it does and Kurian gave some nice proof points and customer examples in his keynote presentation, but the data underscores the lead that Microsoft and AWS have on Google in cloud. And here's a breakdown of ETR's proprietary net score methodology, that vertical axis that we showed you in the previous chart. It asks customers, are you adopting the platform new? That's that lime green. Are you spending 6% or more? That's the forest green. Is you're spending flat? That's the gray. Is you're spending down 6% or worse? That's the pinkest color. Or are you replacing the platform, defecting? That's the bright red. You subtract the reds from the greens and you get a net score. Now one caveat here, which actually is really favorable from Microsoft, the Microsoft data that we're showing here is across the entire Microsoft portfolio. The other point is, this is July data, we'll have an update for you once ETR releases its October results. But we're talking about meaningful samples here, the ends. 620 for AWS over a thousand from Microsoft in more than 450 respondents in the survey for Google. So the real tell is replacements, that bright red. There is virtually no churn for AWS and Microsoft, but Google's churn is 5x, those two in the survey. Now 5% churn is not high, but you'd like to see three things for Google given it's smaller size. One is less churn, two is much, much higher adoption rates in the lime green. Three is a higher percentage of those spending more, the forest green. And four is a lower percentage of those spending less. And none of these conditions really applies here for Google. GCP is still not growing fast enough in our opinion, and doesn't have nearly the traction of the two leaders and that shows up in the survey data. All right, let's look at the next sector, BI analytics. Here we have that same XY dimension. Again, Microsoft dominating the picture. AWS very strong also in both axes. Tableau, very popular and respectable of course acquired by Salesforce on the vertical axis, still looking pretty good there. And again on the horizontal axis, big presence there for Tableau. 
And Google with Looker and its other platforms is also respectable, but it again, has some work to do. Now notice Streamlit, that's a recent Snowflake acquisition. It's strong in the vertical axis and because of Snowflake's go-to-market (indistinct), it's likely going to move to the right overtime. Grafana is also prominent in the Y axis, but a glimpse at the most recent survey data shows them slightly declining while Looker actually improves a bit. As does Cloudera, which we'll move up slightly. Again, Microsoft just blows you away, doesn't it? All right, now let's get into database and data platform. Same X Y dimensions, but now database and data warehouse. Snowflake as usual takes the top spot on the vertical axis and it is actually keeps moving to the right as well with again, Microsoft and AWS is dominant in the market, as is Oracle on the X axis, albeit it's got less spending velocity, but of course it's the database king. Google is well behind on the X axis but solidly above the 40% line on the vertical axis. Note that virtually all platforms will see pressure in the next survey due to the macro environment. Microsoft might even dip below the 40% line for the first time in a while. Lastly, let's look at the collaboration and productivity software market. This is such an important area for both Microsoft and Google. And just look at Microsoft with 365 and Teams up into the right. I mean just so impressive in ubiquitous. And we've highlighted Google. It's in the pack. It certainly is a nice base with 174 N, which I can tell you that N will rise in the next survey, which is an indication that more people are adopting. But given the investment and the tech behind it and all the AI and Google's resources, you'd really like to see Google in this space above the 40% line, given the importance of this market, of this collaboration area to Google's success and the degree to which they emphasize it in their pitch. And look, this brings up something that we've talked about before on Breaking Analysis. Google doesn't have a tech problem. This is a go-to-market and marketing challenge that Google faces and it's up against two go-to-market champs and Microsoft and AWS. And Google doesn't have the enterprise sales culture. It's trying, it's making progress, but it's like that racehorse that has all the potential in the world, but it's just missing some kind of key ingredient to put it over at the top. It's always coming in third, (chuckles) but we're watching and Google's obviously, making some investments as we shared with earlier. All right. Some final thoughts on what we learned this week and in this research: customers and partners should be thrilled that both Microsoft and Google along with AWS are spending so much money on innovation and building out global platforms. This is a gift to the industry and we should be thankful frankly because it's good for business, it's good for competitiveness and future innovation as a platform that can be built upon. Now we didn't talk much about multi-cloud, we haven't even mentioned supercloud, but both Microsoft and Google have a story that resonates with customers in cross cloud capabilities, unlike AWS at this time. But we never say never when it comes to AWS. They sometimes and oftentimes surprise you. One of the other things that Sarbjeet Johal and John Furrier and I have discussed is that each of the Big 3 is positioning to their respective strengths. AWS is the best IaaS. 
Microsoft is building out the kind of, quote, we-make-it-easy-for-you cloud, and Google is trying to be the open data cloud with its open-source chops and excellent tech. And that puts added pressure on Snowflake, doesn't it? You know, Thomas Kurian made some comments according to CRN, something to the effect that, we are the only company that can do the data cloud thing across clouds, which again, if I'm being honest is not really accurate. Now I haven't clarified these statements with Google and often things get misquoted, but there's little question that, as AWS has done in the past with Redshift, Google is taking a page out of Snowflake, Databricks as well. A big difference in the Big 3 is that AWS doesn't have this big emphasis on the up-the-stack collaboration software that both Microsoft and Google have, and that for Microsoft and Google will drive captive IaaS consumption. AWS obviously does some of that in database, a lot of that in database, but ISVs that compete with Microsoft and Google should have a greater affinity, one would think, to AWS for competitive reasons. and the same thing could be said in security, we would think because, as I mentioned before, Microsoft competes very directly with CrowdStrike and Okta and others. One of the big thing that Sarbjeet mentioned that I want to call out here, I'd love to have your opinion. AWS specifically, but also Microsoft with Azure have successfully created what Sarbjeet calls brand distance. AWS from the Amazon Retail, and even though AWS all the time talks about Amazon X and Amazon Y is in their product portfolio, but you don't really consider it part of the retail organization 'cause it's not. Azure, same thing, has created its own identity. And it seems that Google still struggles to do that. It's still very highly linked to the sort of core of Google. Now, maybe that's by design, but for enterprise customers, there's still some potential confusion with Google, what's its intentions? How long will they continue to lose money and invest? Are they going to pull the plug like they do on so many other tools? So you know, maybe some rethinking of the marketing there and the positioning. Now we didn't talk much about ecosystem, but it's vital for any cloud player, and Google again has some work to do relative to the leaders. Which brings us to supercloud. The ecosystem and end customers are now in a position this decade to digitally transform. And we're talking here about building out their own clouds, not by putting in and building data centers and installing racks of servers and storage devices, no. Rather to build value on top of the hyperscaler gift that has been presented. And that is a mega trend that we're watching closely in theCUBE community. While there's debate about the supercloud name and so forth, there little question in our minds that the next decade of cloud will not be like the last. All right, we're going to leave it there today. Many thanks to Sarbjeet Johal, and my business partner, John Furrier, for their input to today's episode. Thanks to Alex Myerson who's on production and manages the podcast and Ken Schiffman as well. Kristen Martin and Cheryl Knight helped get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE, who does some wonderful editing. And check out SiliconANGLE, a lot of coverage on Google Cloud Next and Microsoft Ignite. Remember, all these episodes are available as podcast wherever you listen. Just search Breaking Analysis podcast. 
I publish each week on wikibon.com and siliconangle.com. And you can always get in touch with me via email, david.vellante@siliconangle.com or you can DM me at dvellante or comment on my LinkedIn posts. And please do check out etr.ai, the best survey data in the enterprise tech business. This is Dave Vellante for the CUBE Insights, powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (gentle music)

Published Date : Oct 15 2022

DV trusted Infrastructure part 2 Open


 

>>The cybersecurity landscape continues to be one characterized by a series of point tools designed to do a very specific job, often pretty well, but the mosaic of tooling has grown over the years, causing complexity, driving up costs and increasing exposures. So the game of Whack-a-Mole continues. Moreover, the way organizations approach security is changing quite dramatically. The cloud, while offering so many advantages, has also created new complexities. The shared responsibility model redefines what the cloud provider secures, for example, the S3 bucket, and what the customer is responsible for, e.g., properly configuring the bucket. You know, this is all well and good, but because virtually no organization of any size can go all in on a single cloud, that shared responsibility model now spans multiple clouds and with different protocols. Now, that of course includes on-prem and edge deployments, making things even more complex. Moreover, the DevOps team is being asked to be the point of execution to implement many aspects of an organization's security strategy. >>This extends to securing the runtime, the platform, and even now containers, which can end up anywhere. There's a real need for consolidation in the security industry, and that's part of the answer. We've seen this both in terms of mergers and acquisitions as well as platform plays that cover more and more ground. But the diversity of alternatives and infrastructure implementations continues to boggle the mind, with more and more entry points for the attackers. This includes sophisticated supply chain attacks that make it even more difficult to understand how to secure components of a system and how secure those components actually are. The number one challenge CISOs face in today's complex world is lack of talent to address these challenges, and I'm not saying that SecOps pros are not talented. They are. There just aren't enough of them to go around, and the adversary is also talented and very creative, and there are more and more of them every day. >>Now, one of the very important roles that a technology vendor can play is to take mundane infrastructure security tasks off the plates of SecOps teams. Specifically, we're talking about shifting much of the heavy lifting around securing servers, storage, networking, and other infrastructure and their components onto the technology vendor via R&D and other best practices like supply chain management. And that's what we're here to talk about. Welcome to the second part in our series, A Blueprint for Trusted Infrastructure, made possible by Dell Technologies and produced by theCUBE. My name is Dave Vellante, and I'm your host. Now, previously we looked at what trusted infrastructure means and the role that storage and data protection play in the equation. In this part two of the series, we explore the changing nature of technology infrastructure, how the industry generally, and Dell specifically, are adapting to these changes, and what is being done to proactively address threats that are increasingly stressing security teams. Now today, we continue the discussion and look more deeply into servers, networking and hyper-converged infrastructure to better understand the critical aspects of how one company, Dell, is securing these elements so that DevSecOps teams can focus on the myriad new attack vectors and challenges that they face. 
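The customer's half of that shared responsibility model usually comes down to configuration. As a minimal, hedged sketch of what "properly configuring the bucket" can look like in practice, the snippet below uses boto3 to block public access and require default encryption on a single S3 bucket. The bucket name is hypothetical, and real environments would typically enforce this through policy-as-code rather than an ad hoc script.

```python
# Minimal sketch of the customer's side of the shared responsibility model.
# Assumes boto3 is installed and AWS credentials are configured; the bucket
# name below is hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "example-internal-reports"  # hypothetical bucket name

# Block all forms of public access at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Require default server-side encryption for new objects.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)
```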
First up is Deepak Rangaraj, PowerEdge security product manager at Dell Technologies, and after that we're gonna bring on Mahesh Nagar, who is a consultant in the networking product management area at Dell. And finally, we close with Jerome West, who is the product management security lead for HCI, hyperconverged infrastructure, and converged infrastructure at Dell. Thanks for joining us today. We're thrilled to have you here and hope you enjoy the program.

Published Date : Oct 5 2022


Deepak Rangaraj, Dell Technologies


 

>>The cybersecurity landscape continues to be one characterized by a series of point tools designed to do a very specific job, often pretty well, but the mosaic of tooling has grown over the years, causing complexity, driving up costs and increasing exposures. So the game of Whack-a-Mole continues. Moreover, the way organizations approach security is changing quite dramatically. The cloud, while offering so many advantages, has also created new complexities. The shared responsibility model redefines what the cloud provider secures, for example, the S3 bucket, and what the customer is responsible for, e.g., properly configuring the bucket. You know, this is all well and good, but because virtually no organization of any size can go all in on a single cloud, that shared responsibility model now spans multiple clouds and with different protocols. Now that of course includes on-prem and edge deployments, making things even more complex. Moreover, the DevOps team is being asked to be the point of execution to implement many aspects of an organization's security strategy. >>This extends to securing the runtime, the platform, and even now containers, which can end up anywhere. There's a real need for consolidation in the security industry, and that's part of the answer. We've seen this both in terms of mergers and acquisitions as well as platform plays that cover more and more ground. But the diversity of alternatives and infrastructure implementations continues to boggle the mind, with more and more entry points for the attackers. This includes sophisticated supply chain attacks that make it even more difficult to understand how to secure components of a system and how secure those components actually are. The number one challenge CISOs face in today's complex world is lack of talent to address these challenges. And I'm not saying that SecOps pros are not talented. They are. There just aren't enough of them to go around, and the adversary is also talented and very creative, and there are more and more of them every day. >>Now, one of the very important roles that a technology vendor can play is to take mundane infrastructure security tasks off the plates of SecOps teams. Specifically, we're talking about shifting much of the heavy lifting around securing servers, storage, networking, and other infrastructure and their components onto the technology vendor via R&D and other best practices like supply chain management. And that's what we're here to talk about. Welcome to the second part in our series, A Blueprint for Trusted Infrastructure, made possible by Dell Technologies and produced by theCUBE. My name is Dave Vellante and I'm your host. Now, previously we looked at what trusted infrastructure means and the role that storage and data protection play in the equation. In this part two of the series, we explore the changing nature of technology infrastructure, how the industry generally, and Dell specifically, are adapting to these changes, and what is being done to proactively address threats that are increasingly stressing security teams. >>Now today, we continue the discussion and look more deeply into servers, networking and hyper-converged infrastructure to better understand the critical aspects of how one company, Dell, is securing these elements so that DevSecOps teams can focus on the myriad new attack vectors and challenges that they face. First up is Deepak Rangaraj, PowerEdge security product manager at Dell Technologies. 
And after that we're gonna bring on Mahesh Nagar, who is a consultant in the networking product management area at Dell. And finally, we close with Jerome West, who is the product management security lead for HCI, hyperconverged infrastructure, and converged infrastructure at Dell. Thanks for joining us today. We're thrilled to have you here and hope you enjoy the program. Deepak Rangaraj is PowerEdge security product manager at Dell Technologies. Deepak, great to have you on the program. Thank you. >>Thank you for having me. >>So we're going through the infrastructure stack, and in part one of this series we looked at the landscape overall and how cyber has changed, and specifically how Dell thinks about data protection and security in a manner that both secures infrastructure and minimizes organizational friction. We also hit on the storage part of the portfolio. So now we want to dig into servers. So my first question is, what are the critical aspects of securing server infrastructure that our audience should be aware of? >>Sure. So if you look at compute in general, right, it has rapidly evolved over the past couple of years, especially with trends toward software-defined data centers, and with organizations also having to deal with hybrid environments where they have private clouds, public cloud locations, remote offices, and also remote workers. So on top of this, there's also an increase in the complexity of the supply chain itself, right? There are companies who are dealing with hundreds of suppliers as part of their supply chain. So all of this complexity provides a lot of opportunity for attackers, because it's expanding the threat surface of what can be attacked, and attacks are becoming more frequent, more severe and more sophisticated. And this has also triggered a rise in the regulations and mandates around security needs. >>And these regulations are not just in the government sector, right? They extend to critical infrastructure, and eventually they also get into the private sector. In addition to this, organizations are also looking at their own internal compliance mandates. And this could be based on the industry in which they're operating, or it could be their own security postures. And this is the landscape in which servers are operating today. And given that servers are the foundational blocks of the data center, it becomes extremely important to protect them. And given how complex the modern server platforms are, it's also extremely difficult and it takes a lot of effort. And this means protecting everything from the supply chain to the manufacturing, and then eventually assuring the hardware and software integrity of the platforms and also the operations. And there are very few companies that go to the lengths that Dell does in order to secure the server. We truly believe in the notion and the security mentality that, you know, security should enable our customers to go focus on their business and proactively innovate on their business, and it should not be a burden to them. And we heavily invest to make that possible for our customers. >>So this is really important, because the premise that I set up at the beginning of this was really that if I'm a security pro, and I'm not a security pro, but if I were, I wouldn't want to be doing all this infrastructure stuff because I now have all these new things I gotta deal with. I want a company like Dell who has the resources to build that security in, to deal with the supply chain, to ensure the provenance, et cetera. 
So I'm glad you hit on that, but given what you just said, what does cybersecurity resilience mean from a server perspective? For example, are there specific principles that Dell adheres to that are non-negotiable? Let's say, how does Dell ensure that its customers can trust your server infrastructure? >>Yeah, when it comes to security at Dell, right, it's ingrained in our products, so that's the best way to put it. And security is non-negotiable, right? It's never an afterthought where we come up with a design and then later on figure out how to go make it secure. Through our security development life cycle, the products are being designed to counter these threats right from the beginning. And in addition to that, we are also testing and evaluating these products continuously to identify vulnerabilities. We also have external third-party audits which supplement this process. And in addition to this, Dell makes the commitment that we will rapidly respond to any vulnerabilities and exposures found out in the field and provide mitigations and patches in a timely manner. So this security principle is also built into our server life cycle, right? Every phase of it. >>So we want our products to provide cutting-edge capabilities when it comes to security. So as part of that, we are constantly evaluating how our security model is doing. We are building on it and continuously improving it. So until a few years ago, our model was primarily based on the NIST framework of protect, detect and recover. And it still aligns really well to that framework, but over the past couple of years we have seen how compute has evolved, how the threats have evolved, and we have also seen the regulatory trends, and we recognize the fact that the best security strategy for the modern world is a zero trust approach. And so now when we are building our infrastructure and tools and offerings for customers, first and foremost, they're cyber resilient, right? What we mean by that is they're capable of anticipating threats, withstanding attacks and rapidly recovering from attacks, and also adapting to the adverse conditions in which they're deployed. The process of designing these capabilities and identifying these capabilities, however, is done through the zero trust framework. And that's very important, because now we are also anticipating how our customers will end up using these capabilities at their end to enable their own zero trust IT environments and zero trust deployments. We have completely adapted our security approach to make it easier for customers to work with us no matter where they are in their journey towards zero trust adoption. >>So thank you for that. You mentioned this framework, you talked about zero trust. When I think about NIST, I think as well about layered approaches. And when I think about zero trust, I think about, if you don't have access to it, you're not getting access; you've gotta earn that access, and you've got layers, and then you still assume that bad guys are gonna get in. So you've gotta detect that and you've gotta respond. So server infrastructure security is so fundamental. So my question is, what is Dell providing specifically to, for example, detect anomalies and breaches from unauthorized activity? How do you enable fast and easy, or facile, recovery from malicious incidents? >>Right, that is exactly right. Breaches are bound to happen. 
And given how complex our current environment is, it's extremely distributed and extremely connected, right? Data and users are no longer contained within offices where we can set up a perimeter firewall and say, yeah, everything within that is good, we can trust everything within it. That's no longer true. The best approach to protect data and infrastructure in the current world is to use a zero trust approach, which uses these principles: nothing is ever trusted, right? Nothing is trusted implicitly. You're constantly verifying every single user, every single device, and every single access in your system at every single level of your IT environment. And these are the principles that we use on PowerEdge, right? But with an increased focus on providing granular controls and checks based on the principles of least privileged access. >>So the idea is that servers first and foremost need to make sure that the threats never enter and they're rejected at the point of entry. But we recognize breaches are going to occur, and if they do, they need to be minimized such that the sphere of damage caused by the attacker is minimized, so they're not able to move from one part of the network to something else laterally or escalate their privileges and cause more damage, right? So the impact radius, for instance, has to be reduced. And this is done through features like automated detection capabilities and automated remediation capabilities. So some examples are, as part of our end-to-end boot resilience process, we have what we call a system lockdown, right? We can lock down the configuration of the system and lock down the firmware versions and all changes to the system. And we have capabilities which automatically detect any drift from that lockdown configuration, and we can figure out if the drift was caused by authorized changes or unauthorized changes. >>And if it is an unauthorized change, we can log it, generate security alerts, and we even have capabilities to automatically roll the firmware and BIOS versions back to a known good version, and also the configurations, right? And this becomes extremely important because, as part of zero trust, we need to respond to these things at machine speed and we cannot do it at a human speed. And having these automated capabilities is a big deal when achieving that zero trust strategy. And in addition to this, we also have chassis intrusion detection, where if the chassis, the box, the server box is opened up, it logs alerts, and you can figure out even later, if there's an AC power cycle, you can go look at the logs to see that the box was opened up and figure out if there was, like, a known authorized access or some malicious actor opening and changing something in your system. 
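To make the drift-detection idea concrete, here is a minimal, hypothetical sketch of comparing a current firmware and configuration inventory against a locked-down baseline and flagging unauthorized changes. It is an illustration of the concept only, not Dell's iDRAC or system lockdown implementation; the inventories and change IDs are invented.

```python
# Minimal, hypothetical sketch of lockdown-style drift detection -- an
# illustration of the concept, not Dell's iDRAC/system lockdown code.
# The inventories and approved change IDs below are invented.
import hashlib
import json


def fingerprint(inventory: dict) -> str:
    """Stable hash of a configuration/firmware inventory for quick comparison."""
    return hashlib.sha256(json.dumps(inventory, sort_keys=True).encode()).hexdigest()


def detect_drift(baseline: dict, current: dict, approved_changes: set) -> list:
    """Report any component whose version differs from the locked-down baseline."""
    alerts = []
    for component, expected in baseline.items():
        observed = current.get(component, {})
        if observed.get("version") != expected["version"]:
            status = ("authorized" if observed.get("change_id") in approved_changes
                      else "UNAUTHORIZED")
            alerts.append(f"{component}: {expected['version']} -> "
                          f"{observed.get('version', 'missing')} ({status})")
    return alerts


baseline = {"BIOS": {"version": "2.10.2"}, "NIC firmware": {"version": "21.80.9"}}
current = {"BIOS": {"version": "2.11.0", "change_id": "CHG-104"},
           "NIC firmware": {"version": "21.80.9"}}

print("baseline fingerprint:", fingerprint(baseline))
for alert in detect_drift(baseline, current, approved_changes={"CHG-104"}):
    print(alert)  # an unauthorized result here would drive alerting or rollback
```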
>>Great, thank you for that. A lot of detail, and I appreciate that. I want to go somewhere else now, 'cause Dell has a renowned supply chain reputation. So what about securing the supply chain and the server bill of materials? What does Dell specifically do to track the provenance of components it uses in its systems, so that when the systems arrive, a customer can be a hundred percent certain that that system hasn't been compromised? >>Right. We've talked about how complex the modern supply chain is, right? And that's no different for servers. We have hundreds of components on the server, and a lot of these need firmware in order to be configured and run, and this firmware and these components could be coming from third-party suppliers. So now the complexity that we are dealing with has to be handled with an end-to-end approach, and that's where Dell pays a lot of attention to assuring the security of the whole process. It starts all the way from sourcing components, right? And then through the design and then even the manufacturing process, where we are vetting the personnel at the factories and vetting the factories themselves. And the factories also have physical security controls built into them, and even shipping, right? We have GPS tagging of packages. So all of this is built to ensure supply chain security. >>But a critical aspect of this is also making sure that the systems which are built in the factories are delivered to the customers without any changes or any tamper. And we have a feature called secure component verification which is capable of doing this. What the feature does is, when the system gets built in a factory, it generates an inventory of all the components in the system and it creates a cryptographic certificate based on the signatures presented by the components. And this certificate is stored separately and sent to the customers separately from the system itself. So once the customers receive the system at their end, they can run a tool that generates an inventory of the components on the system at their end and then compares it to the golden certificate to make sure nothing was changed. And if any changes are detected, we can figure out if it's an authorized change or an unauthorized change. >>Again, authorized changes could be, you know, upgrades to the drives or memory, and unauthorized changes could be any sort of tamper. So that's the supply chain aspect of it. And the bill of materials is also an important aspect of managing security, right? We provide a software bill of materials, which is basically a list of ingredients of all the software pieces in the platform. So what it allows our customers to do is quickly take a look at all the different pieces, compare it to the vulnerability database, and see if any of the vulnerabilities which have been discovered out in the wild affect the platform. So that's a quick way of figuring out if the platform has any known vulnerabilities that have not been patched. 
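As a rough illustration of the verification pattern Deepak describes, a factory-generated "golden" record delivered out of band and compared against what is observed on arrival, the sketch below hashes a component inventory and checks it against the expected digest. It is a simplified stand-in, not Dell's actual Secure Component Verification tooling, and the slots, part numbers and serials are made up.

```python
# Simplified stand-in for the secure component verification idea: compare the
# inventory observed at the customer site against a factory "golden" digest
# delivered separately. Not Dell's actual SCV tool; all part data is invented.
import hashlib
import json


def inventory_digest(components):
    """Hash a component inventory in a canonical order so any change is detectable."""
    canonical = json.dumps(sorted(components, key=lambda c: c["slot"]), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()


factory_manifest = [
    {"slot": "DIMM-A1", "part": "EXAMPLE-DIMM-32G", "serial": "F1E2D3"},
    {"slot": "NVMe-0", "part": "EXAMPLE-NVME-1T9", "serial": "S4XANE0M"},
]
golden_digest = inventory_digest(factory_manifest)  # shipped out of band, ideally signed

observed_on_arrival = [
    {"slot": "DIMM-A1", "part": "EXAMPLE-DIMM-32G", "serial": "F1E2D3"},
    {"slot": "NVMe-0", "part": "EXAMPLE-NVME-1T9", "serial": "S4XANE0M"},
]

if inventory_digest(observed_on_arrival) == golden_digest:
    print("Component inventory matches the factory manifest")
else:
    print("Inventory mismatch -- investigate authorized vs. unauthorized change")
```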
>>Excellent. That's really good. My last question is, I wonder if you could give us a sort of summary from your perspective: what are the key strengths of the Dell server portfolio from a security standpoint? I'm really interested in, you know, the uniqueness and the strong suit that Dell brings to the table. >>Right. Yeah. We have talked enough about the complexity of the environment and how zero trust is necessary for the modern IT environment, right? And this is integral to Dell PowerEdge servers. And as part of that, you know, security starts with the supply chain. We already talked about secure component verification, which is a unique feature that Dell platforms have. And on top of it, we also have a silicon-based platform root of trust. So this is a key which is programmed into the silicon on the servers during manufacturing and can never be changed after. And this immutable key is what forms the anchor for creating the chain of trust that is used to verify everything in the platform, from the hardware and software integrity to the boot, all pieces of it, right? In addition to that, we also have a host of data protection features. >>Whether it is protecting data at rest, in use, or in flight, we have self-encrypting drives, which provide scalable and flexible encryption options. And this, coupled with external key management, provides really good protection for your data at rest. External key management is important because, you know, somebody could physically steal the server and walk away, but then the keys are not stored on the server, they're stored separately. So that provides an additional layer of security. And we also have dual-layer encryption, where you can complement the hardware encryption on the self-encrypting drives with software-level encryption. In addition to this, we have identity and access management features like multifactor authentication, single sign-on, and role-, scope- and time-based access controls, all of which are critical to enable that granular control and those checks for a zero trust approach. So I would say, you know, if you look at the Dell feature set, it's pretty comprehensive, and we also have the flexibility built in to meet the needs of all customers, no matter where they fall in the spectrum of, you know, risk tolerance and security sensitivity. And we also have the capabilities to meet all the regulatory and compliance requirements. So in a nutshell, I would say that, you know, Dell PowerEdge cyber-resilient infrastructure helps accelerate zero trust adoption for customers. >>Got it. So you've really thought this through, all the various things that you would do to sort of make sure that your server infrastructure is secure, not compromised, that your supply chain is secure, so that your customers can focus on some of the other things that they have to worry about, which are numerous. Thanks Deepak, appreciate you coming on theCUBE and participating in the program. >>Thank you for having me. >>You're welcome. In a moment I'll be back to dig into the networking portion of the infrastructure. Stay with us for more coverage of A Blueprint for Trusted Infrastructure in collaboration with Dell Technologies on theCUBE, your leader in enterprise and emerging tech coverage.
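The external key management point Deepak describes above, keys living off the box so that a stolen server yields nothing readable, can be sketched in a few lines. This is a generic illustration using the `cryptography` package, not Dell's or any vendor's key manager; `fetch_key_from_kms()` is a hypothetical placeholder for whatever KMIP or KMS client an environment actually uses.

```python
# Generic sketch of software-layer encryption with an externally managed key,
# layered on top of hardware encryption from self-encrypting drives.
# fetch_key_from_kms() is a hypothetical placeholder, not a real client API.
from cryptography.fernet import Fernet


def fetch_key_from_kms(key_id: str) -> bytes:
    # In practice this would call an external key manager over KMIP or a cloud
    # KMS API, so the key is never persisted on the server holding the data.
    return Fernet.generate_key()  # stand-in so the example runs on its own


key = fetch_key_from_kms("app-volume-1")
cipher = Fernet(key)

record = b"customer ledger entry"
token = cipher.encrypt(record)          # ciphertext written to the (SED-encrypted) drive
assert cipher.decrypt(token) == record  # readable only while the KMS releases the key
```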

Published Date : Oct 4 2022


Jerome West, Dell Technologies V2


 

>>We're back with Jerome West, product management security lead for HCI, hyperconverged infrastructure, at Dell Technologies. Jerome, welcome. >>Thank you, David. >>Hey, Jerome, in this series, A Blueprint for Trusted Infrastructure, we've been digging into the different parts of the infrastructure stack, including storage, servers and networking, and now we want to cover hyperconverged infrastructure. So my first question is, what's unique about HCI that presents specific security challenges? What do we need to know? >>So what's unique about hyperconverged infrastructure is the breadth of the security challenge. We can't simply focus on a single type of IT system, like a server or a storage system or a virtualization piece of software. I mean, HCI is all of those things. So luckily we have excellent partners like VMware, Microsoft, and internal partners like the Dell PowerEdge team, the Dell storage team, the Dell networking team, and on and on. These partnerships and these collaborations are what make us successful from a security standpoint. So let me give you an example to illustrate. In the recent past, we're seeing growing scope and sophistication in supply chain attacks. This means an attacker is going to attack your software supply chain upstream, so that hopefully a piece of malicious code that wasn't identified early in the software supply chain is distributed by a large player, like a VMware or Microsoft or a Dell. So to confront this kind of sophisticated, hard-to-defeat problem, we need short-term solutions and we need long-term solutions as well. >>For the short-term solution, the obvious thing to do is to patch the vulnerability. The complexity for our HCI portfolio is that we build our software on VMware, so we would have to consume a patch that VMware would produce and provide it to our customers in a timely manner. Luckily, VxRail's engineering team has co-engineered a release process with VMware that significantly shortens our development life cycle, so that VMware will produce a patch, and within 14 days we will integrate our own code with the VMware release, we will have tested and validated the update, and we will give an update to our customers within 14 days of that VMware release. As a result of this kind of rapid development process, VxRail had over 40 releases of software updates last year. For a longer-term solution, we're partnering with VMware and others to develop a software bill of materials. We work with VMware to consume their software manifest, including their upstream vendors and their open source providers, to have a comprehensive list of software components. Then we aren't caught off guard by an unforeseen vulnerability, and we're more easily able to detect where the software problem lies so that we can quickly address it. So these are the kinds of relationships and solutions that we can co-engineer through effective collaborations with our partners. 
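The payoff of a software bill of materials is that checking for exposure becomes a lookup rather than an investigation. The toy sketch below makes that concrete; the SBOM entries, package names and advisory IDs are invented placeholders, and a real pipeline would query published vulnerability feeds rather than a hard-coded table.

```python
# Toy sketch of why an SBOM matters: once the component list exists, matching
# it against published advisories is a simple lookup. Package names, versions
# and advisory IDs below are invented placeholders for illustration only.
known_vulnerable = {
    ("libexample", "1.2.3"): "EXAMPLE-ADVISORY-0001",
    ("parser-lib", "4.0.1"): "EXAMPLE-ADVISORY-0002",
}

sbom = [
    {"name": "libexample", "version": "1.2.3"},
    {"name": "parser-lib", "version": "4.2.0"},
    {"name": "tls-stack", "version": "3.1.7"},
]

findings = [
    (item["name"], item["version"], known_vulnerable[(item["name"], item["version"])])
    for item in sbom
    if (item["name"], item["version"]) in known_vulnerable
]

for name, version, advisory in findings:
    print(f"{name} {version} is affected by {advisory} -- schedule the patch")
```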
I really think that site cybersecurity and resilience for hci, because like I said, it has sort of unprecedented breadth across our portfolio. It's not a single thing, it's a bit of everything. So really the strength or the secret sauce is to combine all the solutions that our partner develops while integrating them with our own layer. So let me, let me give you an example. So hci, it's a, basically taking a software abstraction of hardware functionality and implementing it into something called the virtualized layer. It's basically the virtual virtualizing hardware functionality, like say a storage controller, you could implement it in a hardware, but for hci, for example, in our VX rail portfolio, we, or our vxl product, we integrate it into a product called vsan, which is provided by our partner VMware. So that portfolio strength is still, you know, through our, through our partnerships. >>So what we do, we integrate these, these security functionality and features in into our product. So our partnership grows to our ecosystem through products like VMware, products like nsx, Verizon, Carbon Black and Bsphere. All of them integrate seamlessly with VMware. And we also leverage VMware's software, par software partnerships on top of that. So for example, VX supports multifactor authentication through bsphere integration with something called Active Directory Federation services for adfs. So there is a lot of providers that support adfs, including Microsoft Azure. So now we can support a wide array of identity providers such as Off Zero or I mentioned Azure or Active Directory through that partnership. So we can leverage all of our partners partnerships as well. So there's sort of a second layer. So being able to secure all of that, that provides a lot of options and flexibility for our customers. So basically to summarize my my answer, we consume all of the security advantages of our partners, but we also expand on that to make a product that is comprehensively secured at multiple layers from the hardware layer that's provided by Dell through Power Edge to the hyper-converged software that we build ourselves to the virtualization layer that we get through our partnerships with Microsoft and VMware. >>Great. I mean that's super helpful. You've mentioned nsx, Horizon, Carbon Black, all the, you know, the VMware component OTH zero, which the developers are gonna love. You got Azure identity, so it's really an ecosystem. So you may have actually answered my next question, but I'm gonna ask it anyway cuz you've got this software defined environment and you're managing servers and networking and storage with this software led approach, how do you ensure that the entire system is secure end to end? >>That's a really great question. So the, the answer is we do testing and validation as part of the engineering process. It's not just bolted on at the end. So when we do, for example, the xra is the market's only co engineered solution with VMware, other vendors sell VMware as a hyperconverged solution, but we actually include security as part of the co-engineering process with VMware. So it's considered when VMware builds their code and their process dovetails with ours because we have a secure development life cycle, which other products might talk about in their discussions with you that we integrate into our engineering life cycle. So because we follow the same framework, all of the, all of the codes should interoperate from a security standpoint. 
And so when we do our final validation testing when we do a software release, we're already halfway there in ensuring that all these features will give the customers what we promised. >>That's great. All right, let's, let's close pitch me, what would you say is the strong suit summarize the, the strengths of the Dell hyperconverged infrastructure and converged infrastructure portfolio specifically from a security perspective? Jerome? >>So I talked about how hyper hyper-converged infrastructure simplifies security management because basically you're gonna take all of these features that are abstracted in in hardware, they're now abstracted in the virtualization layer. Now you can manage them from a single point of view, whether it would be, say, you know, in for VX rail would be b be center, for example. So by abstracting all this, you make it very easy to manage security and highly flexible because now you don't have limitations around a single vendor. You have a multiple array of choices and partnerships to select. So I would say that is the, the key to making it to hci. Now, what makes Dell the market leader in HCI is not only do we have that functionality, but we also make it exceptionally useful to you because it's co engineered, it's not bolted on. So I gave the example of, I gave the example of how we, we modify our software release process with VMware to make it very responsive. >>A couple of other features that we have specific just to HCI are digitally signed LCM updates. This is an example of a feature that we have that's only exclusive to Dell that's not done through a partnership. So we digitally sign our software updates so you, the user can be sure that the, the update that they're installing into their system is an authentic and unmodified product. So we give it a Dell signature that's invalidated prior to installation. So not only do we consume the features that others develop in a seamless and fully validated way, but we also bolt on our own specific HCI security features that work with all the other partnerships and give the user an exceptional security experience. So for, for example, the benefit to the customer is you don't have to create a complicated security framework that's hard for your users to use and it's hard for your system administrators to manage. It all comes in a package. So it, it can be all managed through vCenter, for example, or, and then the specific hyper, hyper-converged functions can be managed through VxRail manager or through STDC manager. So there's very few pains of glass that the, the administrator or user ever has to worry about. It's all self contained and manageable. >>That makes a lot of sense. So you got your own infrastructure, you're applying your best practices to that, like the digital signatures, you've got your ecosystem, you're doing co-engineering with the ecosystems, delivering security in a package, minimizing the complexity at the infrastructure level. The reason Jerome, this is so important is because SecOps teams, you know, they gotta deal with cloud security, they gotta deal with multiple clouds. Now they have their shared responsibility model going across multiple, They got all this other stuff that they have to worry, they gotta secure containers and the run time and, and, and, and, and the platform and so forth. So they're being asked to do other things. If they have to worry about all the things that you just mentioned, they'll never get, you know, the, the securities is gonna get worse. 
So what my takeaway is, you're removing that infrastructure piece and saying, Okay guys, you now can focus on those other things that is not necessarily Dell's, you know, domain, but you, you know, you can work with other partners to, and your own teams to really nail that. Is that a fair summary? >>I think that is a fair summary because absolutely the worst thing you can do from a security perspective is provide a feature that's so unusable that the administrator disables it or other key security features. So when I work with my partners to define, to define and develop a new security feature, the thing I keep foremost in mind is, will this be something our users want to use in our administrators want to administer? Because if it's not, if it's something that's too difficult or onerous or complex, then I try to find ways to make it more user friendly and practical. And this is a challenge sometimes because we are, our products operate in highly regulated environments and sometimes they have to have certain rules and certain configurations that aren't the most user friendly or management friendly. So I, I put a lot of effort into thinking about how can we make this feature useful while still complying with all the regulations that we have to comply with. And by the way, we're very successful in a highly regulated space. We sell a lot of VxRail, for example, into the Department of Defense and banks and, and other highly regulated environments, and we're very successful >>There. Excellent. Okay, Jerome, thanks. We're gonna leave it there for now. I'd love to have you back to talk about the progress that you're making down the road. Things always, you know, advance in the tech industry and so would appreciate that. >>I would look forward to it. Thank you very much, Dave. >>You're really welcome. In a moment I'll be back to summarize the program and offer some resources that can help you on your journey to secure your enterprise infrastructure. I wanna thank our guests for their contributions and helping us understand how investments by a company like Dell can both reduce the need for dev sec up teams to worry about some of the more fundamental security issues around infrastructure and have greater confidence in the quality providence and data protection designed in to core infrastructure like servers, storage, networking, and hyper-converged systems. You know, at the end of the day, whether your workloads are in the cloud, OnPrem or at the edge, you are responsible for your own security. But vendor r and d and vendor process must play an important role in easing the burden faced by security devs and operation teams. And on behalf of the cube production content and social teams as well as Dell Technologies, we want to thank you for watching a blueprint for trusted infrastructure. Remember part one of this series as well as all the videos associated with this program, and of course, today's program are available on demand@thecube.net with additional coverage@siliconangle.com. And you can go to dell.com/security solutions dell.com/security solutions to learn more about Dell's approach to securing infrastructure. And there's tons of additional resources that can help you on your journey. This is Dave Valante for the Cube, your leader in enterprise and emerging tech coverage. We'll see you next time.

Published Date : Oct 4 2022


Blueprint for Trusted Infrastructure Episode 2 Full Episode 10-4 V2


 

>>The cybersecurity landscape continues to be one characterized by a series of point tools designed to do a very specific job, often pretty well, but the mosaic of tooling has grown over the years, causing complexity, driving up costs and increasing exposures. So the game of Whack-a-Mole continues. Moreover, the way organizations approach security is changing quite dramatically. The cloud, while offering so many advantages, has also created new complexities. The shared responsibility model redefines what the cloud provider secures, for example, the S3 bucket, and what the customer is responsible for, e.g., properly configuring the bucket. You know, this is all well and good, but because virtually no organization of any size can go all in on a single cloud, that shared responsibility model now spans multiple clouds and with different protocols. Now that of course includes on-prem and edge deployments, making things even more complex. Moreover, the DevOps team is being asked to be the point of execution to implement many aspects of an organization's security strategy. >>This extends to securing the runtime, the platform, and even now containers, which can end up anywhere. There's a real need for consolidation in the security industry, and that's part of the answer. We've seen this both in terms of mergers and acquisitions as well as platform plays that cover more and more ground. But the diversity of alternatives and infrastructure implementations continues to boggle the mind, with more and more entry points for the attackers. This includes sophisticated supply chain attacks that make it even more difficult to understand how to secure components of a system and how secure those components actually are. The number one challenge CISOs face in today's complex world is lack of talent to address these challenges. And I'm not saying that SecOps pros are not talented. They are. There just aren't enough of them to go around, and the adversary is also talented and very creative, and there are more and more of them every day. >>Now, one of the very important roles that a technology vendor can play is to take mundane infrastructure security tasks off the plates of SecOps teams. Specifically, we're talking about shifting much of the heavy lifting around securing servers, storage, networking, and other infrastructure and their components onto the technology vendor via R&D and other best practices like supply chain management. And that's what we're here to talk about. Welcome to the second part in our series, A Blueprint for Trusted Infrastructure, made possible by Dell Technologies and produced by theCUBE. My name is Dave Vellante and I'm your host. Now, previously we looked at what trusted infrastructure means and the role that storage and data protection play in the equation. In this part two of the series, we explore the changing nature of technology infrastructure, how the industry generally, and Dell specifically, are adapting to these changes, and what is being done to proactively address threats that are increasingly stressing security teams. >>Now today, we continue the discussion and look more deeply into servers, networking and hyper-converged infrastructure to better understand the critical aspects of how one company, Dell, is securing these elements so that DevSecOps teams can focus on the myriad new attack vectors and challenges that they face. First up is Deepak Rangaraj, PowerEdge security product manager at Dell Technologies. 
And after that we're gonna bring on Mahesh Nagar, who is a consultant in the networking product management area at Dell. And finally, we close with Jerome West, who is the product management security lead for HCI, hyperconverged infrastructure, and converged infrastructure at Dell. Thanks for joining us today. We're thrilled to have you here and hope you enjoy the program. Deepak Rangaraj is PowerEdge security product manager at Dell Technologies. Deepak, great to have you on the program. Thank you. >>Thank you for having me. >>So we're going through the infrastructure stack, and in part one of this series we looked at the landscape overall and how cyber has changed, and specifically how Dell thinks about data protection and security in a manner that both secures infrastructure and minimizes organizational friction. We also hit on the storage part of the portfolio. So now we want to dig into servers. So my first question is, what are the critical aspects of securing server infrastructure that our audience should be aware of? >>Sure. So if you look at compute in general, right, it has rapidly evolved over the past couple of years, especially with trends toward software-defined data centers, and with organizations also having to deal with hybrid environments where they have private clouds, public cloud locations, remote offices, and also remote workers. So on top of this, there's also an increase in the complexity of the supply chain itself, right? There are companies who are dealing with hundreds of suppliers as part of their supply chain. So all of this complexity provides a lot of opportunity for attackers, because it's expanding the threat surface of what can be attacked, and attacks are becoming more frequent, more severe and more sophisticated. And this has also triggered a rise in the regulations and mandates around security needs. >>And these regulations are not just in the government sector, right? They extend to critical infrastructure, and eventually they also get into the private sector. In addition to this, organizations are also looking at their own internal compliance mandates. And this could be based on the industry in which they're operating, or it could be their own security postures. And this is the landscape in which servers are operating today. And given that servers are the foundational blocks of the data center, it becomes extremely important to protect them. And given how complex the modern server platforms are, it's also extremely difficult and it takes a lot of effort. And this means protecting everything from the supply chain to the manufacturing, and then eventually assuring the hardware and software integrity of the platforms and also the operations. And there are very few companies that go to the lengths that Dell does in order to secure the server. We truly believe in the notion and the security mentality that, you know, security should enable our customers to go focus on their business and proactively innovate on their business, and it should not be a burden to them. And we heavily invest to make that possible for our customers. >>So this is really important, because the premise that I set up at the beginning of this was really that if I'm a security pro, and I'm not a security pro, but if I were, I wouldn't want to be doing all this infrastructure stuff because I now have all these new things I gotta deal with. I want a company like Dell who has the resources to build that security in, to deal with the supply chain, to ensure the provenance, et cetera. 
So I'm glad you hit on that, but given what you just said, what does cybersecurity resilience mean from a server perspective? For example, are there specific principles that Dell adheres to that are non-negotiable? Let's say, how does Dell ensure that its customers can trust your server infrastructure? >>Yeah, when it comes to security at Dell, right, it's ingrained in our products, so that's the best way to put it. And security is non-negotiable, right? It's never an afterthought where we come up with a design and then later on figure out how to go make it secure. Through our security development life cycle, the products are being designed to counter these threats right from the beginning. And in addition to that, we are also testing and evaluating these products continuously to identify vulnerabilities. We also have external third-party audits which supplement this process. And in addition to this, Dell makes the commitment that we will rapidly respond to any vulnerabilities and exposures found out in the field and provide mitigations and patches in a timely manner. So this security principle is also built into our server life cycle, right? Every phase of it. >>So we want our products to provide cutting-edge capabilities when it comes to security. So as part of that, we are constantly evaluating how our security model is doing. We are building on it and continuously improving it. So until a few years ago, our model was primarily based on the NIST framework of protect, detect and recover. And it still aligns really well to that framework, but over the past couple of years we have seen how compute has evolved, how the threats have evolved, and we have also seen the regulatory trends, and we recognize the fact that the best security strategy for the modern world is a zero trust approach. And so now when we are building our infrastructure and tools and offerings for customers, first and foremost, they're cyber resilient, right? What we mean by that is they're capable of anticipating threats, withstanding attacks and rapidly recovering from attacks, and also adapting to the adverse conditions in which they're deployed. The process of designing these capabilities and identifying these capabilities, however, is done through the zero trust framework. And that's very important, because now we are also anticipating how our customers will end up using these capabilities at their end to enable their own zero trust IT environments and zero trust deployments. We have completely adapted our security approach to make it easier for customers to work with us no matter where they are in their journey towards zero trust adoption. >>So thank you for that. You mentioned this framework, you talked about zero trust. When I think about NIST, I think as well about layered approaches. And when I think about zero trust, I think about, if you don't have access to it, you're not getting access; you've gotta earn that access, and you've got layers, and then you still assume that bad guys are gonna get in. So you've gotta detect that and you've gotta respond. So server infrastructure security is so fundamental. So my question is, what is Dell providing specifically to, for example, detect anomalies and breaches from unauthorized activity? How do you enable fast and easy, or facile, recovery from malicious incidents? >>Right, that is exactly right. 
Breaches are bound to happen, and given how complex our current environment is, it's extremely distributed and extremely connected, right? Data and users are no longer contained within offices where we can set up a perimeter firewall and say, yeah, everything within that is good, we can trust everything within it. That's no longer true. The best approach to protect data and infrastructure in the current world is to use a zero trust approach, which uses these principles: nothing is ever trusted, right? Nothing is trusted implicitly. You're constantly verifying every single user, every single device, and every single access in your system at every single level of your IT environment. And these are the principles that we use on PowerEdge, right? But with an increased focus on providing granular controls and checks based on the principles of least privileged access. >>So the idea is that servers first and foremost need to make sure that the threats never enter and they're rejected at the point of entry, but we recognize breaches are going to occur, and if they do, they need to be minimized such that the sphere of damage caused by the attacker is minimized, so they're not able to move from one part of the network to something else laterally or escalate their privileges and cause more damage, right? So the impact radius, for instance, has to be reduced. And this is done through features like automated detection capabilities and automated remediation capabilities. So some examples are, as part of our end-to-end boot resilience process, we have what we call a system lockdown, right? We can lock down the configuration of the system and lock down the firmware versions and all changes to the system. And we have capabilities which automatically detect any drift from that lockdown configuration, and we can figure out if the drift was caused by authorized changes or unauthorized changes. >>And if it is an unauthorized change, we can log it, generate security alerts, and we even have capabilities to automatically roll the firmware and BIOS versions back to a known good version, and also the configurations, right? And this becomes extremely important because, as part of zero trust, we need to respond to these things at machine speed and we cannot do it at a human speed. And having these automated capabilities is a big deal when achieving that zero trust strategy. And in addition to this, we also have chassis intrusion detection, where if the chassis, the box, the server box is opened up, it logs alerts, and you can figure out even later, if there's an AC power cycle, you can go look at the logs to see that the box was opened up and figure out if there was, like, a known authorized access or some malicious actor opening and changing something in your system. >>Great, thank you for that. A lot of detail, and I appreciate that. I want to go somewhere else now, 'cause Dell has a renowned supply chain reputation. So what about securing the supply chain and the server bill of materials? What does Dell specifically do to track the provenance of components it uses in its systems, so that when the systems arrive, a customer can be a hundred percent certain that that system hasn't been compromised? >>Right. We've talked about how complex the modern supply chain is, right? And that's no different for servers. We have hundreds of components on the server, and a lot of these need firmware in order to be configured and run, and this firmware and these components could be coming from third-party suppliers. 
So now, the complexity that we are dealing with, the way to handle it is an end to end approach, and that's where Dell pays a lot of attention to ensuring security. It starts all the way from sourcing the components, and then through the design, and then even the manufacturing process, where we are vetting the personnel at the factories and vetting the factories themselves. The factories also have physical security controls built into them. And even shipping: we have GPS tagging of packages. So all of this is built to ensure supply chain security. >>But a critical aspect of this is also making sure that the systems which are built in the factories are delivered to the customers without any changes or any tamper. And we have a feature called secure component verification which is capable of doing this. What the feature does is, when the system gets built in a factory, it generates an inventory of all the components in the system and it creates a cryptographic certificate based on the signatures presented to it by the components. This certificate is stored separately and sent to the customers separately from the system itself. So once the customers receive the system at their end, they can run the tool; it generates an inventory of the components on the system at their end and then compares it to the golden certificate to make sure nothing was changed. And if any changes are detected, we can figure out if it was an authorized change or an unauthorized change. >>Again, authorized changes could be, you know, upgrades to the drives or memory, and unauthorized changes could be any sort of tamper. So that's the supply chain aspect of it. And the bill of materials is also an important aspect of gauging security. We provide a software bill of materials, which is basically a list of ingredients of all the software pieces in the platform. What it allows our customers to do is quickly take a look at all the different pieces, compare it to the vulnerability database, and see if any of the vulnerabilities which have been discovered out in the wild affect the platform. So that's a quick way of figuring out if the platform has any known vulnerabilities that have not been patched. >>Excellent. That's really good. My last question is, I wonder if you'd give us the sort of summary from your perspective: what are the key strengths of the Dell server portfolio from a security standpoint? I'm really interested in, you know, the uniqueness and the strong suit that Dell brings to the table. >>Right, yeah. We have talked enough about the complexity of the environment and how zero trust is necessary for the modern IT environment, and this is integral to Dell PowerEdge servers. As part of that, you know, security starts with the supply chain. We already talked about secure component verification, which is a unique feature that Dell platforms have. And on top of it, we also have a silicon-based platform root of trust. This is a key which is programmed into the silicon on the PowerEdge servers during manufacturing and can never be changed after. This immutable key is what forms the anchor for creating the chain of trust that is used to verify everything in the platform, from the hardware and software integrity to the boot, all pieces of it. In addition to that, we also have a host of data protection features.
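Before the data-protection detail that follows, a quick aside on the golden-certificate comparison just described. Real secure component verification relies on signed certificates generated in the factory; in this heavily simplified sketch a plain SHA-256 digest of the component inventory stands in for the certificate, just to show the compare step. The slots, part numbers and serials are invented.

```python
# Simplified stand-in for secure component verification: compare a scanned
# component inventory against a "golden" record delivered out of band.
import hashlib, json

def inventory_digest(components):
    canonical = json.dumps(sorted(components, key=lambda c: c["slot"]), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

factory_inventory = [
    {"slot": "DIMM.A1", "part": "32GB-DDR4", "serial": "MEM123"},
    {"slot": "NIC.1",   "part": "25GbE-DP",  "serial": "NIC456"},
]
golden_digest = inventory_digest(factory_inventory)   # shipped separately from the box

# What the customer scans after delivery
received_inventory = [
    {"slot": "DIMM.A1", "part": "32GB-DDR4", "serial": "MEM123"},
    {"slot": "NIC.1",   "part": "25GbE-DP",  "serial": "NIC999"},   # swapped card
]

if inventory_digest(received_inventory) == golden_digest:
    print("Inventory matches the factory record")
else:
    print("Inventory mismatch - investigate whether the change was authorized")
```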
>>Whether it is protecting data at rest, in use or in flight, we have self-encrypting drives which provide scalable and flexible encryption options. And this, coupled with external key management, provides really good protection for your data at rest. External key management is important because somebody could physically steal the server and walk away, but then the keys are not stored on the server, they're stored separately. So that provides an additional layer of security. And we also have dual layer encryption, where you can complement the hardware encryption on the self-encrypting drives with software level encryption. In addition to this, we have identity and access management features like multifactor authentication, single sign-on, and role-, scope- and time-based access controls, all of which are critical to enable the granular controls and checks for a zero trust approach. So I would say, if you look at the Dell feature set, it's pretty comprehensive, and we also have the flexibility built in to meet the needs of all customers, no matter where they fall in the spectrum of risk tolerance and security sensitivity. And we also have the capabilities to meet all the regulatory and compliance requirements. So in a nutshell, I would say that Dell PowerEdge cyber resilient infrastructure helps accelerate zero trust adoption for customers. >>Got it. So you've really thought this through, all the various things that you would do to sort of make sure that your server infrastructure is secure and not compromised, that your supply chain is secure, so that your customers can focus on some of the other things that they have to worry about, which are numerous. Thanks Deepak, appreciate you coming on theCUBE and participating in the program. >>Thank you for having me. >>You're welcome. In a moment I'll be back to dig into the networking portion of the infrastructure. Stay with us for more coverage of A Blueprint for Trusted Infrastructure, in collaboration with Dell Technologies, on theCUBE, your leader in enterprise and emerging tech coverage. We're back with A Blueprint for Trusted Infrastructure, in partnership with Dell Technologies, on theCUBE. And we're here with Mahesh Nager, who is a consultant in the area of networking product management at Dell Technologies. Mahesh, welcome, good to see you. >>Hey, good morning Dave, nice to meet you as well. >>Hey, so we've been digging into all the parts of the infrastructure stack, and now we're gonna look at the all-important networking components. Mahesh, when we think about networking in today's environment, we think about the core data center, and we're connecting out to various locations, including the cloud and both the near and the far edge. So the question is, from Dell's perspective, what's unique and challenging about securing network infrastructure that we should know about? >>Yeah, so a few years ago, IT security in an enterprise was primarily about putting a wrapper around the data center, because it was constrained to an infrastructure owned and operated by the enterprise for the most part.
So putting a wrapper around it, like a perimeter or a firewall, was a sufficient response, because you could basically control the environment, and the data was small enough to control. Today, with distributed data, intelligent software, different systems, multi-cloud environments and as-a-service delivery, the infrastructure for the modern era changes the way you secure the network infrastructure. In today's data driven world, IT operates everywhere and data is created and accessed everywhere, far from the centralized, monolithic data centers of the past. The biggest challenge is how we build the network infrastructure of the modern era so that it is intelligent, with automation enabling maximum flexibility and business agility, without any compromise on security. We believe that in this data era, the security transformation must accompany digital transformation. >>Yeah, that's very good. You talked about a couple of things there. Data by its very nature is distributed. There is no perimeter anymore, so you can't just, as you say, put a wrapper around it. I like the way you phrased that. So when you think about cybersecurity resilience from a networking perspective, how do you define that? In other words, what are the basic principles that you adhere to when thinking about securing network infrastructure for your customers? >>So our belief is that cybersecurity and cybersecurity resilience need to be holistic; they need to be integrated, scalable, spanning the entire enterprise, with a common objective and policy implementation. So cybersecurity needs to span across all the devices and run across any application, whether the application resides on the cloud or anywhere else in the infrastructure. From a networking standpoint, what does it mean? It's again the same principles, right? In order to prevent the threat actors from accessing, changing, destroying or stealing sensitive data, this definition holds good for networking as well. So if you look at it from a networking perspective, it's the ability to protect from and withstand attacks on the networking systems as we continue to evolve. This will also include the ability to adapt and recover from these attacks, which is what the cyber resilience aspect is all about. So cybersecurity best practices, as you know, are a continuously changing landscape, primarily because the cyber threats also continue to evolve. >>Yeah, got it. So I like that. So it's gotta be integrated, it's gotta be scalable, it's gotta be comprehensive, and adaptable. You're saying it can't be static. >>Right, right. So I think, you know, you had a second part of the question that says, what are the basic principles? When you're looking at securing the network infrastructure, it revolves around the core security capabilities of the devices that form the network. And what are these security capabilities? They are access control, software integrity, and vulnerability response. When you look at access control, it's to ensure that only authenticated users are able to access the platform, and they're able to access only the kind of assets that they're authorized to, based on their user level. Now, accessing a network platform like a switch or a router, for example, is typically done for, say, configuration and management of the networking switch.
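As a small illustration of the access-control principle Mahesh is describing here (and continues just below) — authenticated users only, each role limited to the operations it is entitled to — consider the following sketch. The roles and operation names are invented; a real switch would enforce this in its AAA/RBAC subsystem rather than in application code like this.

```python
# Hedged sketch of role-based access control on a network device.
# Role names and permissions are made up for illustration.
ROLE_PERMISSIONS = {
    "network-admin":  {"show-config", "edit-config", "upgrade-image"},
    "security-admin": {"show-config", "edit-acl", "view-audit-log"},
    "storage-admin":  {"show-config"},
}

def can_perform(authenticated: bool, role: str, operation: str) -> bool:
    if not authenticated:                       # unauthenticated users never get in
        return False
    return operation in ROLE_PERMISSIONS.get(role, set())

assert can_perform(True, "network-admin", "edit-config") is True
assert can_perform(True, "storage-admin", "edit-config") is False
assert can_perform(False, "network-admin", "edit-config") is False
```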
So user access is based on, say, roles, in a role based access control, whether you are a security admin or a network admin or a storage admin. >>And it's imperative that logging is enabled, because any change to the configuration is actually logged and monitored as well. Talking about software integrity, it's the ability to ensure that the software that's running on the system has not been compromised. And this is important, because if someone gets hold of the system you could get undesired results. In terms of, say, validation of the images, it needs to be done through, say, digital signatures. So it's important that when you're talking about software integrity, (a) you are ensuring that the platform is not compromised, and (b) that any upgrades that happen to the platform happen through, say, a validated signature. >>Okay. So there's access control, software integrity, and I think you've got a third element, which I think is response, but please continue. >>Yeah, so the third one is about vulnerability response. We follow the same process that's followed by the rest of the products within the Dell product family, that is, to report or identify any kind of vulnerability that's then addressed by the Dell product security incident response team. So the networking portfolio is no different; it follows the same process for identification, for triage, and for resolution of these vulnerabilities. And these are addressed either through patches or through new releases of the networking software. >>Yeah, got it. Okay. So I mean, you didn't say zero trust, but when you were talking about access control, you're really talking about access to only those assets that people are authorized to access. I know zero trust sometimes is a buzzword, but I think you gave it some clarity there. Software integrity, it's about assurance, validation, your digital signature you mentioned, and that there's been no compromise. And then how you respond to incidents in a standard way that can fit into a security framework. So, outstanding description, thank you for that. But then the next question is, how does Dell networking fit into the construct of what we've been talking about, Dell trusted infrastructure? >>Okay, so networking is the key element in the Dell trusted infrastructure. It provides the interconnect between the server and the storage world, and it's part of any data center configuration for a trusted infrastructure. The network needs to have access control in place, where only authorized personnel are able to make changes to the network configuration, and logging of any of those changes is also done through the logging capabilities. Additionally, we should also ensure that the configuration provides network isolation between, say, the management network and the data traffic network, because they need to be separate and distinct from each other. And furthermore, even if you look at the data traffic network, you now have things like segmentation, isolated segments via VRFs, or some micro-segmentation via partners; this allows various levels of security for each of those segments. So it's important that the network infrastructure has the ability to provide all of these services, from a Dell networking security perspective.
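The software-integrity point made a moment ago — upgrades only through a validated signature — lends itself to a short sketch. This is generic detached-signature checking with the pyca/cryptography package, assuming an RSA vendor key; it is not the actual OS10 or iDRAC image-verification mechanism, and the file names and key are placeholders.

```python
# Generic image signature check before an upgrade (illustrative only;
# assumes an RSA vendor key and the pyca/cryptography package).
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def image_is_authentic(image_path: str, sig_path: str, vendor_pubkey_pem: bytes) -> bool:
    public_key = serialization.load_pem_public_key(vendor_pubkey_pem)
    image = open(image_path, "rb").read()
    signature = open(sig_path, "rb").read()
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Hypothetical usage (paths and key are placeholders):
# if not image_is_authentic("os-image.bin", "os-image.bin.sig", VENDOR_KEY_PEM):
#     raise RuntimeError("Refusing to upgrade: image signature did not validate")
```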
>>You know, there are multiple layers of defense, both at the edge and in the network, in the hardware and in the software: essentially a set of rules and configuration that's designed to protect the integrity, confidentiality and accessibility of the network assets. So each network security layer implements policies and controls, as I said, including, say, network segmentation. We do have capabilities such as centralized management, automation, and scalability for that matter. Now you add all of these things with the open networking standards and software-defined principles, and you essentially reach the point where you're looking at zero trust network access, which is essentially a building block for increased cloud adoption. If you look at the different pillars of a zero trust architecture: from the device aspect, we do have support for security — for example, trusted platform modules, TPMs, on certain of our products — and the physical security, plain and simple, like disabling unused ports. From a user trust perspective, it's all done via access controls, via role based access control, and capabilities to provide, say, remote authentication, or things like sticky MAC or MAC learning limits, and so on. >>If you look at, say, the transport and session trust layer, this is essentially how you access the switch: is it by plain old telnet, or is it secure SSH, right? And when a host communicates with the switch, we do have things like self-signed or certificate authority based certificates. And one of the important aspects is the routing protocol; for the routing protocol, say BGP for example, we do have the capability to support MD5 authentication between the BGP peers, so that there is no man-in-the-middle attack on the network where the routing table gets compromised. And the other aspect is about securing the control plane; it's typical that if you don't protect the control plane, it could be flooded, and the switch could be compromised by denial of service attacks. >>From an application trust perspective, as I mentioned, we do have application-specific security rules, where you can actually define the specific security rules based on the specific applications that are running within the system. And I did talk about the digital signature and the cryptographic checks that we do for the authenticity and validation of the image, the OS, and so on and so forth. Finally, for data trust, we are looking at network separation: the network separation could happen via VRFs or plain old VLANs, which can bring about, say, multi-tenant aspects. We talked about micro-segmentation as it applies to NSX, for example. The other aspect is, with our own SmartFabric Services enabled in a fabric, we have a concept of cluster security. So all of these different pillars sort of make up the zero trust infrastructure for the networking assets of an infrastructure. >>Yeah. So thank you for that.
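To ground the BGP authentication point above, here is a tiny audit-style sketch that walks a parsed (and entirely made-up) configuration model and flags any BGP neighbor without MD5 authentication — the kind of check an operator might script. It is not tied to OS10 or any particular switch API.

```python
# Illustrative audit: flag BGP neighbors configured without authentication.
# The config model below is invented; a real script would pull it from the
# devices via whatever management API or config backup is in use.
device_configs = {
    "leaf-01": {"bgp_neighbors": [
        {"peer": "10.0.0.1", "remote_as": 65001, "md5_password": "****"},
        {"peer": "10.0.0.5", "remote_as": 65002, "md5_password": None},
    ]},
    "leaf-02": {"bgp_neighbors": [
        {"peer": "10.0.1.1", "remote_as": 65001, "md5_password": "****"},
    ]},
}

def unauthenticated_peers(configs: dict):
    for device, cfg in configs.items():
        for nbr in cfg.get("bgp_neighbors", []):
            if not nbr.get("md5_password"):
                yield device, nbr["peer"]

for device, peer in unauthenticated_peers(device_configs):
    print(f"WARNING: {device} peers with {peer} without MD5 authentication")
```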
There's a lot to unpack there. You know, the premise really of this segment that we're setting up in this series is that everything you just mentioned, or a lot of things you just mentioned, used to be the responsibility of the security team. And the premise that we're putting forth is that, because security teams are so stretched thin, you've gotta shift to the vendor community; Dell specifically is shifting a lot of those tasks to their own R&D and taking care of a lot of that, 'cause SecOps teams have got a lot of other stuff to worry about. So my question relates to things like automation, which can help, and scalability. What about those topics as it relates to networking infrastructure? >>Okay, our portfolio enables state of the art automation software that enables simplifying of the design. So for example, we do have the fabric design center, a tool that automates the design of the fabric, and for the deployment and the management of the network infrastructure there are simplicities, using, for example, Ansible for SONiC, for a better automation story. We do have SmartFabric Services that can automate the entire fabric, for a storage solution or for one of the workloads, for example. Now, we do help reduce the complexity by closely integrating the management of the physical and the virtual networking infrastructure, and again, we have those capabilities using SONiC or SmartFabric Services. If you look at SONiC, for example, right? >>It delivers an automated, intent based, secure, containerized network, and it has the ability to provide network visibility, and all of these things are actually valid for a modern networking infrastructure. So if you look at SONiC, the usage of those tools that are available within the SONiC NOS is not restricted just to the data center infrastructure; it's a unified NOS that's well applicable beyond the data center, right up to the edge. Now, if you look at it from a SmartFabric OS10 perspective, as I mentioned, we do have SmartFabric Services, which essentially simplify the day zero, day one, day two deployment, expansion plans, and the lifecycle management of our converged infrastructure and hyperconverged infrastructure solutions. And finally, in order to enable, say, zero touch deployment, we do have a solution with our SD-WAN capability. So these are ways by which we bring down the complexity, by enhancing the automation capability, using a single NOS that can extend from the data center right to the edge. >>Great, thank you for that. Last question, real quick, just pitch me: can you summarize, from your point of view, what's the strength of the Dell networking portfolio? >>Okay, so the Dell networking portfolio supports capabilities at multiple layers. As I mentioned, there's the physical security, for example, say, disabling of the unused interfaces; sticky MAC and trusted platform modules are some of the things there. And when you're talking about, say, secure boot, for example, it delivers the authenticity and the integrity of the OS10 images at startup.
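The intent-based automation theme described above can be sketched as a reconcile loop: declare the desired fabric state once and push only the differences. This is a deliberately naive illustration — the fake device client below stands in for whatever Ansible modules or switch APIs a real deployment would use, and the settings are invented.

```python
# Hand-wavy sketch of intent-based fabric automation (not SmartFabric
# Services or any real API): reconcile running state toward declared intent.
desired_state = {
    "leaf-01": {"vlan": [10, 20], "mtu": 9216},
    "leaf-02": {"vlan": [10, 20], "mtu": 9216},
}

class FakeSwitchClient:                    # stand-in for a real device API
    def __init__(self, name, running):
        self.name, self.running = name, running
    def apply(self, key, value):
        print(f"{self.name}: setting {key} -> {value}")
        self.running[key] = value

def reconcile(client: FakeSwitchClient, intent: dict):
    """Push only the settings that differ from the running config."""
    for key, value in intent.items():
        if client.running.get(key) != value:
            client.apply(key, value)

fleet = {
    "leaf-01": FakeSwitchClient("leaf-01", {"vlan": [10], "mtu": 1500}),
    "leaf-02": FakeSwitchClient("leaf-02", {"vlan": [10, 20], "mtu": 9216}),
}
for name, intent in desired_state.items():
    reconcile(fleet[name], intent)
```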
And secure boot also protects the startup configuration, so that the startup configuration file is not compromised; and secure boot also enables, for example, workload protection. Another aspect is software image integrity validation, wherein the image is validated against its digital signature prior to any upgrade process. And if you are looking at secure access control, we do have things like role based access control, SSH to the switches, control plane access control that prevents DoS attacks, and, say, access control with multifactor authentication. >>We do have various checks for entry control to the network, and things like CSE and PRV support. From a federal perspective, we do have, say, logging, wherein any event, any auditing capability, is possible by looking at the syslog service, which is transmitted from the devices, for example. And lastly, we talked about network segmentation and network separation; this separation ensures that there is a contained segment for a specific purpose or for a specific zone, and it can be implemented by micro-segmentation, or just a plain old VLAN, or using a virtual routing and forwarding framework, VRF, for example. >>A lot there. I mean, I think, frankly, my takeaway is you guys do the heavy lifting in a very complicated topic. So thank you so much for coming on theCUBE and explaining that in quite some depth. Really appreciate it. >>Thank you, indeed. >>Oh, you're very welcome. Okay, in a moment I'll be back to dig into the hyper-converged infrastructure part of the portfolio, and look at how, when you enter the world of software defined, where you're controlling servers and storage and networks via a software-led system, you can be sure that your infrastructure is trusted and secure. You're watching A Blueprint for Trusted Infrastructure, made possible by Dell Technologies in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. We're here with Jerome West, product management security lead for HCI, hyper-converged infrastructure, at Dell Technologies. Jerome, welcome. >>Thank you, Dave. >>Hey Jerome, in this series, A Blueprint for Trusted Infrastructure, we've been digging into the different parts of the infrastructure stack, including storage, servers and networking, and now we want to cover hyperconverged infrastructure. So my first question is, what's unique about HCI that presents specific security challenges? What do we need to know? >>So what's unique about hyper-converged infrastructure is the breadth of the security challenge. We can't simply focus on a single type of IT system, like a server or a storage system or a virtualization piece of software. HCI is all of those things. So luckily we have excellent partners like VMware, Microsoft, and internal partners like the Dell PowerEdge team, the Dell storage team, the Dell networking team, and on and on. These partnerships and these collaborations are what make us successful from a security standpoint. So let me give you an example to illustrate. In the recent past, we're seeing growing scope and sophistication in supply chain attacks.
This means an attacker is going to attack your software supply chain upstream, so that, hopefully, a piece of malicious code that wasn't identified early in the software supply chain gets distributed by a large player, like a VMware or Microsoft or a Dell. So to confront this kind of sophisticated, hard to defeat problem, we need short term solutions and we need long term solutions as well. >>So for the short term solution, the obvious thing to do is to patch the vulnerability. The complexity is, for our HCI portfolio, we build our software on VMware, so we would have to consume a patch that VMware would produce and provide it to our customers in a timely manner. Luckily, VxRail's engineering team has co-engineered a release process with VMware that significantly shortens our development life cycle, so that VMware would produce a patch, and within 14 days we will have integrated our own code with the VMware release, we will have tested and validated the update, and we will give an update to our customers within 14 days of that VMware release. As a result of this kind of rapid development process, VxRail had over 40 releases of software updates last year. For a longer term solution, we're partnering with VMware and others to develop a software bill of materials. We work with VMware to consume their software manifest, including their upstream vendors and their open source providers, to have a comprehensive list of software components. That way we aren't caught off guard by an unforeseen vulnerability, and we're more easily able to detect where the software problem lies so that we can quickly address it. So these are the kinds of relationships and solutions that we can co-engineer through effective collaborations with our partners. >>Great, thank you for that description. So if I had to define what cybersecurity resilience means to HCI, or converged infrastructure, my takeaway was: you gotta have a short term, instant patch solution, and then you gotta do the integration in a very short time, you know, two weeks, to then have that integration done. And then longer term, you have to have a software bill of materials so that you can ensure the provenance of all the components. Help us: is that the right way to think about cybersecurity resilience? Do you have any additions to that definition? >>I do. I really think that is cybersecurity and resilience for HCI, because, like I said, it has sort of unprecedented breadth across our portfolio. It's not a single thing, it's a bit of everything. So really the strength, or the secret sauce, is to combine all the solutions that our partners develop while integrating them with our own layer. So let me give you an example. HCI is basically taking a software abstraction of hardware functionality and implementing it in something called the virtualization layer; it's basically virtualizing hardware functionality, like, say, a storage controller. You could implement it in hardware, but for HCI, for example in our VxRail portfolio, our VxRail product, we integrate it into a product called vSAN, which is provided by our partner VMware. So that portfolio strength still comes through our partnerships.
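The software bill of materials Jerome raises earlier in this answer can be pictured with a small cross-referencing sketch. The SBOM entries and the advisory feed below are made up for the example — real SBOMs would typically be SPDX or CycloneDX documents and the feed would be a CVE source — but the matching step is the essence of "quickly detect where the software problem lies."

```python
# Simplified SBOM check: cross-reference the component list against a feed
# of known-vulnerable versions. Data is invented for illustration.
sbom = [
    {"name": "openssl", "version": "1.1.1k"},
    {"name": "log4j",   "version": "2.14.1"},
    {"name": "busybox", "version": "1.35.0"},
]

known_vulnerabilities = {
    ("log4j", "2.14.1"): ["CVE-2021-44228"],
    ("openssl", "1.0.2u"): ["CVE-2020-1971"],
}

def affected_components(sbom, advisories):
    for comp in sbom:
        cves = advisories.get((comp["name"], comp["version"]))
        if cves:
            yield comp, cves

for comp, cves in affected_components(sbom, known_vulnerabilities):
    print(f"{comp['name']} {comp['version']} is affected by {', '.join(cves)} - patch needed")
```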
All of them integrate seamlessly with VMware, and we also leverage VMware's software partnerships on top of that. So for example, VxRail supports multifactor authentication through vSphere's integration with something called Active Directory Federation Services, or ADFS. There are a lot of providers that support ADFS, including Microsoft Azure. So now we can support a wide array of identity providers, such as Auth0, or, as I mentioned, Azure or Active Directory, through that partnership. So we can leverage all of our partners' partnerships as well; there's sort of a second layer. Being able to secure all of that provides a lot of options and flexibility for our customers. So basically, to summarize my answer, we consume all of the security advantages of our partners, but we also expand on them to make a product that is comprehensively secured at multiple layers: from the hardware layer that's provided by Dell through PowerEdge, to the hyper-converged software that we build ourselves, to the virtualization layer that we get through our partnerships with Microsoft and VMware. >>Great, I mean that's super helpful. You've mentioned NSX, Horizon, Carbon Black, all the VMware components, Auth0, which the developers are gonna love, you've got Azure identity, so it's really an ecosystem. So you may have actually answered my next question, but I'm gonna ask it anyway: you've got this software defined environment, and you're managing servers and networking and storage with this software-led approach. How do you ensure that the entire system is secure end to end? >>That's a really great question. The answer is, we do testing and validation as part of the engineering process; it's not just bolted on at the end. So, for example, VxRail is the market's only co-engineered solution with VMware. Other vendors sell VMware as a hyper-converged solution, but we actually include security as part of the co-engineering process with VMware. So it's considered when VMware builds their code, and their process dovetails with ours, because we have a secure development life cycle, which other products might have talked about in their discussions with you, that we integrate into our engineering life cycle. So because we follow the same framework, all of the code should interoperate from a security standpoint. And so when we do our final validation testing, when we do a software release, we're already halfway there in ensuring that all these features will give the customers what we promised. >>That's great. All right, let's close. Pitch me: what would you say is the strong suit? Summarize the strengths of the Dell hyper-converged infrastructure and converged infrastructure portfolio, specifically from a security perspective, Jerome. >>So I talked about how hyper-converged infrastructure simplifies security management, because basically you're gonna take all of these features that were abstracted in hardware, and they're now abstracted in the virtualization layer. Now you can manage them from a single point of view; for VxRail that would be, say, vCenter, for example. So by abstracting all this, you make it very easy to manage security, and highly flexible, because now you don't have limitations around a single vendor; you have a multiple array of choices and partnerships to select from. So I would say that is the key to HCI.
Now, what makes Dell the market leader in HCI is not only do we have that functionality, but we also make it exceptionally useful to you, because it's co-engineered, it's not bolted on. So I gave the example of the SBOM, I gave the example of how we modified our software release process with VMware to make it very responsive. >>A couple of other features that we have, specific just to HCI, are digitally signed LCM updates. This is an example of a feature that we have that's exclusive to Dell, that's not done through a partnership. We digitally sign our software updates, so the user can be sure that the update they're installing into their system is an authentic and unmodified product. So we give it a Dell signature that's validated prior to installation. So not only do we consume the features that others develop in a seamless and fully validated way, but we also bolt on our own specific HCI security features that work with all the other partnerships and give the user an exceptional security experience. For example, the benefit to the customer is you don't have to create a complicated security framework that's hard for your users to use and hard for your system administrators to manage; it all comes in a package. It can all be managed through vCenter, for example, and then the specific hyper-converged functions can be managed through VxRail Manager or through SDDC Manager. So there are very few panes of glass that the administrator or user ever has to worry about. It's all self-contained and manageable. >>That makes a lot of sense. So you've got your own infrastructure, you're applying your best practices to that, like the digital signatures; you've got your ecosystem, you're doing co-engineering with the ecosystem; you're delivering security in a package, minimizing the complexity at the infrastructure level. The reason, Jerome, this is so important is because SecOps teams, you know, they gotta deal with cloud security, they gotta deal with multiple clouds, now they have their shared responsibility model going across multiple clouds, they've got all this other stuff that they have to worry about, they gotta secure the containers and the runtime and the platform and so forth. So they're being asked to do other things. If they have to worry about all the things that you just mentioned, they'll never get there, and the security is gonna get worse. So my takeaway is, you're removing that infrastructure piece and saying, okay guys, you now can focus on those other things that are not necessarily Dell's domain, but you can work with other partners and your own teams to really nail that. Is that a fair summary? >>I think that is a fair summary, because absolutely the worst thing you can do from a security perspective is provide a feature that's so unusable that the administrator disables it, or other key security features. So when I work with my partners to define and develop a new security feature, the thing I keep foremost in mind is, will this be something our users want to use and our administrators want to administer? Because if it's not, if it's something that's too difficult or onerous or complex, then I try to find ways to make it more user friendly and practical.
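The digitally signed LCM update flow described above boils down to a simple gate: refuse to install anything whose signature does not validate, and log the refusal. The sketch below shows only that gate; the verify_signature callable is assumed to exist (it could be the detached-signature check sketched earlier in this piece), and the bundle names are placeholders — this is not VxRail Manager's actual update path.

```python
# Illustrative install gate: installation is refused unless the update
# bundle's signature validates, and the failure is logged for the operator.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("lcm-update")

def install_update(bundle_path: str, sig_path: str, verify_signature) -> bool:
    if not verify_signature(bundle_path, sig_path):
        log.error("Update %s failed signature validation; refusing to install", bundle_path)
        return False
    log.info("Signature OK, proceeding with install of %s", bundle_path)
    # ... hand off to the actual lifecycle-management installer here ...
    return True

# Example with a dummy verifier that always fails:
install_update("vxrail-bundle.zip", "vxrail-bundle.zip.sig", lambda b, s: False)
```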
And this is a challenge sometimes, because our products operate in highly regulated environments, and sometimes they have to have certain rules and certain configurations that aren't the most user friendly or management friendly. So I put a lot of effort into thinking about how we can make a feature useful while still complying with all the regulations that we have to comply with. And by the way, we're very successful in highly regulated spaces: we sell a lot of VxRail, for example, into the Department of Defense and banks and other highly regulated environments, and we're very successful there. >>Excellent. Okay, Jerome, thanks. We're gonna leave it there for now. I'd love to have you back to talk about the progress that you're making down the road. Things always advance in the tech industry, and so we would appreciate that. >>I would look forward to it. Thank you very much, Dave. >>You're really welcome. In a moment I'll be back to summarize the program and offer some resources that can help you on your journey to secure your enterprise infrastructure. I want to thank our guests for their contributions in helping us understand how investments by a company like Dell can both reduce the need for DevSecOps teams to worry about some of the more fundamental security issues around infrastructure, and provide greater confidence in the quality, provenance and data protection designed into core infrastructure like servers, storage, networking, and hyper-converged systems. You know, at the end of the day, whether your workloads are in the cloud, on prem or at the edge, you are responsible for your own security. But vendor R&D and vendor process must play an important role in easing the burden faced by security, dev and operations teams. And on behalf of theCUBE production, content and social teams, as well as Dell Technologies, we want to thank you for watching A Blueprint for Trusted Infrastructure. Remember, part one of this series, as well as all the videos associated with this program, and of course today's program, are available on demand at thecube.net, with additional coverage at siliconangle.com. And you can go to dell.com/security solutions to learn more about Dell's approach to securing infrastructure, and there are tons of additional resources that can help you on your journey. This is Dave Vellante for theCUBE, your leader in enterprise and emerging tech coverage. We'll see you next time.

Published Date : Oct 4 2022

Andy Thurai, Constellation Research & Daniel Newman, Futurum Research | UiPath Forward5 2022


 

The Cube Presents UI Path Forward five. Brought to you by UI Path. >>I Ready, Dave Ante with David Nicholson. We're back at UI Path forward. Five. We're getting ready for the big guns to come in, the two co CEOs, but we have a really special analyst panel now. We're excited to have Daniel Newman here. He's the Principal analyst at Future and Research. And Andy Dai, who's the Vice president and Principal Analyst at Constellation Research. Guys, good to see you. Thanks for making some time to come on the queue. >>Glad to be here. Always >>Good. So, >>Andy, you're deep into ai. You and I have been talking about having you come to our maor office. I'm, I'm really excited that we're able to meet here. What have you seen at the show so far? What are your big takeaways? You know, day one and a half? >>Yeah, well, so first of all, I'm d AI because my last name has AI and I >>Already talk about, >>So, but, but all jokes aside, there are a lot of good things I heard from the conference, right? I mean, one is the last two years because of the pandemic, the growth has been phenomenal for, for a lot of those robotic automation intelligent automation companies, right? So because the low hanging through position making processes have been already taken care of where they going to find the next growth spot, right? That was the question I was looking answers to. And they have some inverse, one good acquisition. They had intelligent document processing, but more importantly they're trying to move from detrimental rules based RPA automation into AI based, more probabilistic subjective decision making areas. That's a huge market, tons of money involved in it, but it's going to be a harder problem to solve. Love to see the execut. >>Well, it's also a big pivot for the, for the company. It started out as sort of a a point product and now is moving to, to platform. But to end of the macro is not in UI pass favor. It's not really in any, you know, tech company's favor, but especially, you know, a company that's going into a transition transitioning to go to market cetera. What are you seeing, what's your take on the macro? I mean, I know you follow the financial markets very closely. There's a lot of negative sentiment right now. Are you as negative as the sentiment? >>Well, the, the broad sentiment comes with some pretty good historical data, right? We've had probably one of the worst market years in multiple decades. And of course we're coming into a situation where all the, the factors are really not in our favor. You've got in interest rates climbing, you've got wildly high inflation, you've had a, you know, helicopters dumping money on the economy for a period of time. And we're, we're gonna get into this great reset is what I keep talking about. But, you know, I had the opportunity to talk to Bill McDermott recently on one of my shows and Bill's CEO of ServiceNow, in case anybody there doesn't know, but >>Former, >>Yeah, really well spoken guy. But you know, him and I kind of went back and forth and we came up with this kind of concept that we were gonna have to tech our way out of what's about to come. You can almost be certain recession is gonna come. But for companies like UiPath, I actually think there's a tremendous opportunity because the bottom line is companies are gonna be looking at their bottom line. A year ago it was all about growth a deal, like the Adobe Figma deal would've been, been lauded, people would've been excited. Now everybody's looking at going, how are they paying that price? 
Everybody's discounting the future growth. They're looking at the situation, say, what's gonna happen next? Well, bottom line is now they're looking at that. How profitable are we? Are you making money? Are you growing that bottom line? Are you creating earnings? We're >>Gonna come in >>Era, we're gonna come into an era where companies are gonna say, you know what? People are expensive. The inflationary cost of hiring is expensive. You know, what's less expensive? Investing in the cloud, investing in ai, investing in workflow and automation and things that actually enable businesses to expand, keep costs somewhat contained fixed costs, and scale their businesses and get themselves in a good position for when the economy turns to return to >>Grow. So since prior to the pandemic cloud containers, m l and RPA slash automation have been the big four that from a spending data standpoint have been above the line above all kind of the rest in terms of spending momentum up until last quarter, AI and RPA slash automation declined. So my question is, are those two areas discretionary or more discretionary than other technology investments you heard? >>Well, I, I think we're in a, a period where companies are, I won't say they've stopped spending, but you listened to Mark Benioff, you talked about the elongated sales cycle, right? I think companies right now are being very reflective and they're doing a lot of introspection. They're looking at their business and saying, We hired a lot of people. We hired really fast. Do we need to cut? Do we need to freeze? We've made investments in technology, are we getting a return on 'em? We all know that the analytics, whether it's you know, digital adoption platforms or just analytics in the business, say, What is all this money we've been spending doing for us and how productive are we? But I will tell you universally, the companies are looking at workflow automations that enable things. Whether that's onboarding customers, whether that's delivering experiences, whether that's, you know, full, you know, price to quote technologies, automate, automate, automate. By doing that, they're gonna bring down the cost, they're gonna control themselves as best as possible in a tough macro. And then when they come out of it, these processes are gonna be beneficiary in a, in a growth environment even more so, >>Andy UiPath rocketed to a leadership position, largely due to the, the product and the simplicity of the product relative to the competition. And then as you well know, they expanded into, you know, platform. So how do you see the competitive environment? A UI path is again focusing on that platform play Automation Anywhere couldn't get to public market. They had turnover at the go to market level. Chris Riley joined a lot of, lot of hope left Microsoft joined into the fray, obviously is having an impact that you're certainly seeing spending momentum around Microsoft. Then SAP service Now Salesforce, every software company the planet thinks they should get every dollar spent on software. You know, they, they see UI pass momentum and they say, Hey, we can, we can take some of that off the table. How do you see the competitive environment right now? >>So first of all, in in my mind, UI path is slightly better because of a couple of reasons. One, as you said, it's ease of use. >>They're able to customize it variable to what they want. So that's a real easy development advantage. 
And then the, when you develop the bots and equal, it takes on an average anywhere between two to maybe six weeks, generally speaking, in some industries regulated government might take more so that it's faster, quicker, easier than others in a sense. So people love using that. The second advantage of what they have in my mind is that not only they are available as a managed SA solution on, on cloud, on Azure Cloud, but also they have this version that you can install, maintain, manage any way you want, whether it's a public cloud or, or your own data center and so on so forth. That's not available with almost, not all of them have it, Few have it, but not all of the competitors have it. So they have an advantage there as well. Where it could become useful would be one of the areas that they haven't even expanded is the government. >>Government is the what, >>Sorry? The government. Yeah, related solutions, right? Defense, government, all of those areas when you go, which haven't even started for various reasons. For example, they're worried about laying off people, worried about cost, worried about automating things. There's a lot of hurdles to overcome. But once you overcome that, if you want to go there, nobody's going to use, or most of them will be very of using something on the cloud. So they have a solution for version variation of that. So they are set up to come to that next level. I mean, I don't know if you guys were at the keynote, the CEO talked about how their plans to go from 1 billion to 5 billion in ar. So they're set up to capture the market. But again, as you said, every big software company saw their momentum, they want to get into it, they want to compete with them. So >>Well, to get to 5 billion, they've gotta accelerate growth. I mean, if you do 20% cer over the next, you know, through the end of the decade, they don't quite get there. So they're gonna have to, you know, they lowered their forecast out of the high 20 or mid twenties to 18%. They're gonna have to accelerate that. And we've seen that before. We see it in cloud where cloud, you know, accelerates growth even though you got the lower large numbers. Go ahead Dave. >>Yeah, so Daniel, then how do we, how do we think of this market? How do we measure the TAM for total addressable market for automation? I mean, you know, what's that? What's that metric that shows how unautomated are we, how inefficient are we? Is there a, is there a 5% efficiency that can be gained? Is there a 40% efficiency that can be gained? Because if you're talking about, you know, how much much of the market can UI path capture, first of all, how big is the market? And then is UI path poised to take advantage of that compared to the actual purveyors of the software that people are interacting with? I'm interacting with an E R p, an ER P system that has built into it the ability to automate processes. Then why do I need 'EM UI path? So first, how do you evaluate TAM? Second, how do you evaluate whether UI Path is gonna have a chance in this market where RPAs built into the applications that we actually use? Yeah, >>I think that TAM is evolving, and I don't have it in front of me right now, but what I'll tell you about the TAM is there's sort of the legacy RPA tam and then there's what I would sort of evolve to call the IPA and workflow automation tam that is being addressed by many of these software companies that you asked in the competitive equation. 
In the, in the, in the question, what we're seeing is a world where companies are gonna say, if we can automate it, we will automate it. That's, it's actually non-negotiable. Now, the process in the ability to a arrive at automation at scale has long been a battle front within the nor every organization. We've been able to automate things for a long time. Why has it more been done? It's the same thing with analytics. There's been numerous studies in analytics that have basically shown companies that have been able to embrace, adopt, and implement analytics, have significantly better performances, better performances on revenue growth, better performances and operational cost management, better performances with customer experience. >>Guess what? Not everybody, every company can get to this. Now there's a couple of things behind this and I'm gonna, I'm gonna try to close my answer out cause I'm getting a little long winded here. But the first thing is automation is a cultural challenge in most organizations. We've done endless research on companies digitally transforming and automating their business. And what we've found is largely the technology are somewhat comparable. Meaning, you know, I, I've heard what he is saying about some of the advantages of partnership with Microsoft, very compelling. But you know what, all these companies that have automation offerings, whether it's you know, through a Salesforce, Microsoft, whether it's a specialized rpa like an Automation Anywhere or a UI path, their solutions can be deployed and successful. The company's ability to take the investment, implement it successfully and get buy in across the organization tends to always be the hurdle. An old CIO stat, 50% of IT projects fail. That stat is still almost accurate today. It's not 50% of technology is bad, but those failures are because the culture doesn't get behind it. And automation's a tricky one because there's a lot of people that feel on the outside rather than the inside of an automation transformation. >>So, Andy, so how do you think about the, to Dave's question, the SAPs the service nows trying to, you know, at least take some red crumbs off the table. They, they're gonna, they're gonna create these automation stove pipes, but in Automation Anywhere or, or UI path is a horizontal play, are they not? And so how do you think about that progression? Well, so >>First of all, all of this other companies, when they, they, whether it's a build, acquire, what have you, these guys already have what, five, seven years on them. So it's gonna be difficult for them to catch up with the Center of Excellence knowledge on the use cases, what they got to catch up with them. That's gonna be a lot of catch up. Just to give you an idea, Microsoft Power Automate has been there for a while, right? They're supposedly doing well as well, but they still choose to partner with the UiPath as well to get them to the next level. So there's going to be competition coming from all areas, but it's, it's about, you know, highlights. >>So, so who is the competition? Is it Microsoft chipping away an individual productivity? Is it a service now? Who's got a platform play? Is it themselves just being able to execute >>All plus also, but I think the, the most, I wouldn't say competition, but it's more people are not aware of what areas need to be automated, right? 
For example, one of the things I was talking about with a couple of customers is, so they have a automation hub where you can put the, the process and and task that need to be automated and then you prioritize and start working on it. And, and almost all of them that I speak to, they keep saying that most of the process and task identification that they need to do for automation, it's manual right now. So, which means it's limited, you have to go and execute it. When people find out and tell you that's what need to be fixed, you try to go and fix that. But imagine if there is a way, I mean the have solutions they're showcasing now if it becomes popular, if you're able to identify tasks that are very inefficient or or process that's very inefficient, automatically score them up saying that, you know what, this is what is going to be ROI and you execute on it. That's going to be huge. So >>I think ts right, there's no shortage of, of a market. I would, I would agree with you Rob Sland this morning talked about the progression. He sort of compared it to e R P of the early days. I sort of have a love hate with E R P cuz of the complexity of the implementation and the, and the cost. However, first of all, a couple points and I love to get your thoughts for you. If you went back, I know 25 years, you, you wouldn't have been able to pick SAP out of a lineup and say that's gonna be the leader in E R P and they ended up, you know, doing really, really well. But the more interesting angle is if you could have figured out the customers that were implementing e r p in, in a really high quality fashion, those are the companies that really did well. You buy their stocks, they really took off cuz they were killing their other industry competitors. So, fast forward to automation. Will automation live up to its hype and your opinion, will it be as transformative and will the, the practitioners of automation see the same type of uplift in their markets, in their market caps, in their competitiveness as did sort of the early adopters and the excellent adopters of brp? What are your thoughts? Well, >>I think it's an interesting comparison. Maybe answer it slightly different way. I think the future is that automation is a non-negotiable in every enterprise organization. I think if you're a large organization, we have absolutely filled our, our organizations with waste too much overhead, too much expense, too much technical debt and automation is an answer. This is the way we want to interact, right? We want a chat bot that actually gives us good answers that can answer on a Tuesday at 11:00 PM at night when we want to know if the right dog food, you know, and I'm saying that, you know, that's what we want. That's the outcome we want. And businesses have to be driven by the outcome. Here's what I'm not sure about, Dave, is we have an era where over the last three to five years, a lot of products have become companies and a lot of 'EM products became companies ended up in public markets. >>And so the RPA space is one of those areas that got this explosive amount of growth. And you look at it and there's two ways. Is this horizontally a business rpa or is this going to be something that's gonna be a target of those Microsofts and those SAPs and say, Look, we need hyper automation to be deeply integrated at the E R P crm, hcm SCM level. We're gonna build by this or we're gonna build this. And you're already hearing it in the partnerships, but this is how I think the story ends. 
I think either the companies like UiPath get much bigger, they get much more rounded in their offerings, or you're going to have a large company like a Microsoft come in and say, you know what, buy it rather than build. >> Can this company, maybe not so much here, but can a company like Automation Anywhere stay acquisition-proof? >> Well, I use ServiceNow as a parallel, because they're a company that I thought would always end up inside of a bigger company, and now you're like, I think they're too big. I think they've dropped... >> That chart. Yeah, they're acquisition-proof. I would agree. But these guys aren't yet, nor is Automation Anywhere. >> They will be for a while, and it's not necessarily a bad thing. Sometimes getting bought is good. But what I mean is, it's going to be core, and these big companies know it, because they're all talking about it. >> But as independent analysts, we want to see independent companies. >> I want to see the right thing. >> It just makes it fun. >> The right thing for customers. >> Yeah, but you know, okay, Oracle, buy more customers. >> More customers. >> I'm kidding. Yeah, I guess it's the right thing. It just makes it more fun when you have really good independent competitors. >> Absolutely. And they spend way more on R&D than these big companies, who spend a lot more on stock buybacks. But I know you've got to go. Thanks so much for spending some time, making time for theCUBE, Andy. Great to see you. >> Good to see you as well. >> All right, we are wrapping up day one. Dave Vellante and Dave Nicholson live, you can hear the action behind us, Forward 5 on theCUBE, right back.
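The idea Andy raises above, automatically scoring inefficient tasks and processes by expected ROI instead of identifying them by hand, can be pictured with a small sketch. Every task name, rate, and cost below is a hypothetical placeholder, not tied to any vendor's product; it simply ranks candidate automations by estimated first-year return.

```python
# Toy illustration of ranking automation candidates by estimated annual ROI.
# All task names, rates, and costs are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    minutes_per_run: float   # manual effort per execution
    runs_per_year: int       # how often the task occurs
    hourly_cost: float       # loaded labor cost, in dollars
    build_cost: float        # one-time cost to automate, in dollars

def annual_roi(task: Task) -> float:
    """Estimated first-year savings minus the cost of building the automation."""
    hours_saved = task.minutes_per_run / 60 * task.runs_per_year
    return hours_saved * task.hourly_cost - task.build_cost

candidates = [
    Task("invoice matching", 12, 20_000, 40.0, 60_000),
    Task("employee onboarding checklist", 45, 800, 55.0, 25_000),
    Task("monthly expense report rollup", 90, 12, 50.0, 15_000),
]

# Highest estimated ROI first: the automatic scoring idea from the panel.
for t in sorted(candidates, key=annual_roi, reverse=True):
    print(f"{t.name:35s} estimated first-year ROI: ${annual_roi(t):>12,.2f}")
```

In practice a task-mining tool would feed these inputs automatically, rather than relying on the manual identification the panel describes as the bottleneck today.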

Published Date : Sep 29 2022


AMD & Oracle Partner to Power Exadata X9M


 

(upbeat jingle) >> The history of the Exadata platform is really unique. And from my vantage point, it started earlier this century as a skunkworks inside of Oracle called Project Sage, back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not Eastern Pacific Yacht Club for all you sailing buffs, rather it stands for Extreme Performance Yield Computing, the enterprise grade version of AMD's Zen architecture which has been a linchpin of AMD's success in terms of penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today Juan Loaiza, who's executive vice president of mission critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on The Cube in your first appearance, thanks for coming on. Juan, let's start with you. You've been on The Cube a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle database. We've covered that extensively. What's different and unique from your point of view about Exadata Cloud Infrastructure X9M on OCI? >> So as you know, Exadata is designed top down to be the best possible platform for database. It has a lot of unique capabilities, like we make extensive use of RDMA, smart storage. We take advantage of everything we can in the leading hardware platforms. X9M is our next generation platform and it does exactly that. We always want to get all the best that we can from the available hardware that our partners like AMD produce. And so that's what X9M is: it's faster, more capacity, lower latency, more IOPS, pushing the limits of the hardware technology. So we don't want to be the limit; the software, the database software, should not be the limit, it should be the actual physical limits of the hardware. That's what X9M is all about. >> Why, Juan, AMD chips in X9M? >> We're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads. And it's really that simple, we just think the performance is outstanding in the product. >> Mark, your career is quite amazing. I could riff on history for hours, but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud? >> Well, thanks. It's really the basis of the great partnership that we have with Oracle on Exadata X9M, and that is that the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to X86, a very strong roadmap that we've executed on schedule to our commitments. And this third generation does all of that. It uses a seven nanometer CPU, a core that was designed to really bring throughput, bring really high efficiency to computing, and just deliver raw capabilities. And so for Exadata X9M, it's really leveraging all of that. It's really a balanced processor, and it's implemented in a way to really optimize high performance. That is our whole focus at AMD. It's where we reset the company focus years ago.
And again, great to see the super smart database team at Oracle really partner with us, understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor. >> Yeah. It's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes to play. You think about a processor and, I'll say, when Juan's team first looked at it, there's general benchmarks, and the benchmarks are impressive, but they're general benchmarks. They showed the base processing capability, but the partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. And that's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, where is there tuning that we could do to really boost the performance above that baseline that you get in the generic benchmarks. And that's what the teams have done. So for instance, you look at optimizing latency through RDMA, you look at optimizing throughput on OLTP and database processing. When you go through the workloads and you take the traces and you break it down and you find the areas that are bottlenecking, then you can adjust; we have thousands of parameters that can be adjusted for a given workload. And that's the beauty of the partnership. So we have the expertise on the CPU engineering, and the Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20% to 50% gains on specific workloads. It is really exciting to see. >> Mark, last question for you is how do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. First off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. So our current third generation EPYC, that is really what we call our EPYC server offerings, is the third-gen 7003 series in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway, ready for the future, but it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, it's going to have expanded memory capabilities because there's CXL, Compute Express Link, that'll expand even more memory opportunities. And I could go on. So that's the beauty of a deep partnership, as it enables us to really take that learning going forward. It pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward. >> Yeah, you guys have been obviously very forthcoming. You have to be with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah. I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10, 15 years ago when multicore processors came out.
And then we were on that for a while and then things started stagnating, but in the last two or three years, AMD has been leading this, there's been a dramatic acceleration in innovation so it's very exciting to be part of this and customers are getting a big benefit from this. >> All right. Hey, thanks for coming back on The Cube today. Really appreciate your time. >> Thanks. Glad to be here. >> All right and thank you for watching this exclusive Cube conversation. This is Dave Vellante from The Cube and we'll see you next time. (upbeat jingle)
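The tuning loop Mark describes, take traces, find the bottleneck, adjust a parameter, and measure the gain over the generic baseline, can be pictured with a toy sketch. The workload function and the parameter being swept below are invented for illustration; they are not real Exadata or EPYC settings, just the shape of a tune-measure-compare loop.

```python
# Toy sketch of a tune-and-measure loop: run a stand-in workload under each
# candidate setting, then report the gain relative to the baseline run.
# The workload and the "batch_size" knob are placeholders, not real database parameters.

import time

def run_workload(batch_size: int) -> float:
    """Stand-in workload: returns elapsed seconds for a fixed amount of work."""
    start = time.perf_counter()
    total = 0
    for chunk_start in range(0, 1_000_000, batch_size):
        total += sum(range(chunk_start, min(chunk_start + batch_size, 1_000_000)))
    return time.perf_counter() - start

baseline = run_workload(batch_size=1_000)
print(f"baseline: {baseline:.3f}s")

for candidate in (5_000, 20_000, 100_000):
    elapsed = run_workload(batch_size=candidate)
    gain = (baseline - elapsed) / baseline * 100
    print(f"batch_size={candidate:>7}: {elapsed:.3f}s ({gain:+.1f}% vs baseline)")
```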

Published Date : Sep 22 2022


Jerome West, Dell Technologies


 

(upbeat music) >> We're back with Jerome West, the Product Management Security Lead for HCI at Dell Technologies Hyper-Converged Infrastructure. Jerome, welcome. >> Thank you, Dave. >> Hey, Jerome, in this series "A Blueprint for Trusted Infrastructure," we've been digging into the different parts of the infrastructure stack, including storage, servers, and networking, and now we want to cover hyper-converged infrastructure. So my first question is, what's unique about HCI that presents specific security challenges? What do we need to know? >> So what's unique about hyper-converged infrastructure is the breadth of the security challenge. We can't simply focus on a single type of IT system, like a server or a storage system or a virtualization piece of software. I mean, HCI is all of those things. So luckily we have excellent partners like VMware, Microsoft, and internal partners like the Dell PowerEdge team, the Dell storage team, the Dell networking team, and on and on. These partnerships and these collaborations are what make us successful from a security standpoint. So let me give you an example to illustrate. In the recent past, we're seeing growing scope and sophistication in supply chain attacks. This means an attacker is going to attack your software supply chain upstream, so that, they hope, a piece of malicious code that wasn't identified early in the software supply chain is distributed by a large player, like a VMware or a Microsoft or a Dell. So to confront this kind of sophisticated, hard-to-defeat problem, we need short-term solutions and we need long-term solutions as well. For the short-term solution, the obvious thing to do is to patch the vulnerability. The complexity is that for our HCI portfolio, we build our software on VMware, so we would have to consume a patch that VMware would produce and provide it to our customers in a timely manner. Luckily, VxRail's engineering team has co-engineered a release process with VMware that significantly shortens our development life cycle, so that VMware will produce a patch, and within 14 days we will integrate our own code with the VMware release. We will have tested and validated the update, and we will give an update to our customers within 14 days of that VMware release. As a result of this kind of rapid development process, VxRail had over 40 releases of software updates last year. For a longer-term solution, we're partnering with VMware and others to develop a software bill of materials. We work with VMware to consume their software manifest, including their upstream vendors and their open source providers, to have a comprehensive list of software components. Then we aren't caught off guard by an unforeseen vulnerability, and we're more easily able to detect where the software problem lies so that we can quickly address it. So these are the kinds of relationships and solutions that we can co-engineer with effective collaborations with our partners. >> Great, thank you for that description. So if I had to define what cybersecurity resilience means to HCI or converged infrastructure, my takeaway was you've got to have a short-term, instant patch solution, and then you've got to do an integration in a very short time, you know, two weeks, to then have that integration done. And then longer-term, you have to have a software bill of materials so that you can ensure the provenance of all the components. Help us, is that the right way to think about cybersecurity resilience? Do you have, you know, additions to that definition?
>> I do. I really think that cybersecurity and resilience for HCI, like I said, has sort of unprecedented breadth across our portfolio. It's not a single thing, it's a bit of everything. So really the strength, or the secret sauce, is to combine all the solutions that our partners develop while integrating them with our own layer. So let me give you an example. HCI is basically taking a software abstraction of hardware functionality and implementing it in something called the virtualization layer. It's basically virtualizing hardware functionality, like, say, a storage controller. You could implement it in the hardware, but for HCI, for example in our VxRail portfolio, our VxRail product, we integrated it into a product called vSAN, which is provided by our partner VMware. So that portfolio strength still comes, you know, through our partnerships. So what we do is integrate this security functionality and these features into our product. So our partnership grows through our ecosystem, through VMware products like NSX, Horizon, Carbon Black, and vSphere. All of them integrate seamlessly with VMware. And we also leverage VMware's software partnerships on top of that. So for example, VxRail supports multifactor authentication through vSphere's integration with something called Active Directory Federation Services, or ADFS. There are a lot of providers that support ADFS, including Microsoft Azure. So now we can support a wide array of identity providers, such as Auth0, or, as I mentioned, Azure or Active Directory, through that partnership. So we can leverage all of our partners' partnerships as well. So there's sort of a second layer. Being able to secure all of that provides a lot of options and flexibility for our customers. So basically, to summarize my answer, we consume all of the security advantages of our partners, but we also expand on them to make a product that is comprehensively secured at multiple layers, from the hardware layer that's provided by Dell through PowerEdge, to the hyper-converged software that we build ourselves, to the virtualization layer that we get through our partnerships with Microsoft and VMware. >> Great, I mean, that's super helpful. You've mentioned NSX, Horizon, Carbon Black, all the, you know, the VMware components, Auth0, which the developers are going to love. You've got Azure Identity. So it's really an ecosystem. So you may have actually answered my next question, but I'm going to ask it anyway, 'cause you've got this software-defined environment, and you're managing servers and networking and storage with this software-led approach. How do you ensure that the entire system is secure end to end? >> That's a really great question. So the answer is we do testing and validation as part of the engineering process. It's not just bolted on at the end. So, for example, VxRail is the market's only co-engineered solution with VMware. Other vendors sell VMware as a hyper-converged solution, but we actually include security as part of the co-engineering process with VMware. So it's considered when VMware builds their code, and their process dovetails with ours, because we have a secure development lifecycle, which other products might talk about in their discussions with you, that we integrate into our engineering lifecycle. So because we follow the same framework, all of the code should interoperate from a security standpoint.
And so when we do our final validation testing, when we do a software release, we're already halfway there in ensuring that all these features will give the customers what we promised. >> That's great. All right, let's close. Pitch me. What would you say is the strong suit? Summarize the strengths of the Dell hyper-converged infrastructure and converged infrastructure portfolio, specifically from a security perspective, Jerome. >> So I talked about how hyper-converged infrastructure simplifies security management, because basically you're going to take all of these features that used to sit in hardware, and they're now abstracted in the virtualization layer. Now you can manage them from a single point of view, whether that would be, say, you know, for VxRail, vCenter, for example. So by abstracting all this, you make it very easy to manage security, and highly flexible, because now you don't have limitations around a single vendor. You have a wide array of choices and partnerships to select from. So I would say that is the key to HCI. Now, what makes Dell the market leader in HCI is not only do we have that functionality, but we also make it exceptionally useful to you, because it's co-engineered. It's not bolted on. So I gave the example of the SBOM. I gave the example of how we modified our software release process with VMware to make it very responsive. A couple of other features that we have, specific just to HCI, are digitally signed LCM updates. This is an example of a feature that we have that's exclusive to Dell. It's not done through a partnership. So we digitally sign our software updates, so the user can be sure that the update they're installing into their system is an authentic and unmodified product. We give it a Dell signature that's validated prior to installation. So not only do we consume the features that others develop in a seamless and fully validated way, but we also bolt on our own specific HCI security features that work with all the other partnerships and give the user an exceptional security experience. So, for example, the benefit to the customer is you don't have to create a complicated security framework that's hard for your users to use and hard for your system administrators to manage. It all comes in a package, so it can all be managed through vCenter, for example, and then the specific hyper-converged functions can be managed through VxRail Manager or through SDDC Manager. So there are very few panes of glass that the administrator or user ever has to worry about. It's all self-contained and manageable. >> That makes a lot of sense. So you've got your own infrastructure, you're applying your best practices to that, like the digital signatures. You've got your ecosystem. You're doing co-engineering with the ecosystems, delivering security in a package, minimizing the complexity at the infrastructure level. The reason, Jerome, this is so important is because SecOps teams, you know, they've got to deal with cloud security, they've got to deal with multiple clouds. Now they have their shared responsibility model going across multiple clouds. They've got all this other stuff that they have to worry about. They've got to secure the containers and the runtime and the platform and so forth. So they're being asked to do other things. If they have to worry about all the things that you just mentioned, they'll never get there, you know, the security is just going to get worse.
So my takeaway is you're removing that infrastructure piece and saying, okay, guys, you now can focus on those other things that is not necessarily Dell's, you know, domain, but you, you know, you can work with other partners and your own teams to really nail that. Is that a fair summary? >> I think that is a fair summary because absolutely the worst thing you can do from a security perspective is provide a feature that's so unusable that the administrator disables it or other key security features. So when I work with my partners to define and develop a new security feature, the thing I keep foremost in mind is will this be something our users want to use and our administrators want to administer? Because if it's not, if it's something that's too difficult or onerous or complex, then I try to find ways to make it more user-friendly and practical. And this is a challenge sometimes because our products operate in highly regulated environments, and sometimes they have to have certain rules and certain configurations that aren't the most user friendly or management friendly. So I put a lot of effort into thinking about how can we make this feature useful while still complying with all the regulations that we have to comply with. And by the way, we're very successful in a highly regulated space. We sell a lot of VxRail, for example, into the Department of Defense and banks and other highly regulated environments. And we're very successful there. >> Excellent, okay, Jerome, thanks. We're going to leave it there for now. I'd love to have you back to talk about the progress that you're making down the road. Things always, you know, advance in the tech industry, and so would appreciate that >> I would look forward to it. Thank you very much, Dave. >> You're really welcome. In a moment, I'll be back to summarize the program and offer some resources that can help you on your journey to secure your enterprise infrastructure. (upbeat music)
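The software bill of materials Jerome keeps coming back to, a comprehensive component list that lets you locate an unforeseen vulnerability quickly, can be sketched in a few lines. The file contents, component names, and advisory table below are hypothetical and do not reflect Dell's or VMware's actual manifest formats; the sketch only shows the basic lookup an SBOM enables.

```python
# Minimal sketch: given an SBOM-style component list, flag components whose
# versions appear in a (hypothetical) vulnerability advisory.
# Component names, versions, and the advisory table are invented for illustration.

import json

sbom_json = """
{
  "components": [
    {"name": "openssl",    "version": "1.1.1k"},
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "zlib",       "version": "1.2.13"}
  ]
}
"""

# Hypothetical advisory: component name -> set of affected versions.
advisory = {
    "log4j-core": {"2.14.0", "2.14.1", "2.15.0"},
    "openssl": {"1.0.2u"},
}

sbom = json.loads(sbom_json)
for component in sbom["components"]:
    affected = advisory.get(component["name"], set())
    if component["version"] in affected:
        print(f"ACTION NEEDED: {component['name']} {component['version']} "
              f"matches the advisory, schedule a patch.")
    else:
        print(f"ok: {component['name']} {component['version']}")
```

With a manifest like this on hand, the question "are we exposed?" becomes a lookup rather than an investigation, which is the point of consuming the upstream vendors' component lists.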

Published Date : Sep 15 2022



Ameesh Divatia, Baffle | AWS re:Inforce 2022


 

(upbeat music) >> Okay, welcome back everyone to live coverage here at theCUBE, Boston, Massachusetts, for AWS re:Inforce 22, the security conference for Amazon Web Services. Obviously re:Invent at the end of the year is the big celebration, "re:Mars" is the new show that we've covered as well. The "re:" shows are here with theCUBE. I'm John Furrier, host, with a great guest, Ameesh Divatia, co-founder and CEO of a company called "Baffle." Ameesh, thanks for joining us on theCUBE today, congratulations. >> Thank you. It's good to be here. >> And we got the custom encrypted socks. >> Yup, limited edition. >> 64 bit or 128. >> Base 64 encoding. >> Okay.(chuckles) >> Secret message in there. >> Okay.(chuckles) Secret message.(chuckles) We'll have to put a little meme on the internet, figure it out. Well, thanks for comin' on. You guys are goin' hot right now. You guys are a hot startup, but you're in an area that's going to explode, we believe. >> Yeah. >> The SuperCloud is here, we've been covering that on theCUBE, that people are building on top of the Amazon hyperscalers. And without the capex, they're building platforms. The application tsunami has come and is still coming, it's not stopping. Modern applications are faster, they're better, and they're driving a lot of change under the covers. >> Absolutely. Yeah. >> And you're seeing structural change happening in real time, in ops, the network. You guys got something going on in the encryption area. >> Yes. >> Data. Talk about what you guys do. >> Yeah. So we believe very strongly that the next frontier in security is data. We've had multiple waves in security. The next one is data, because data is really where the threats will persist. If the data shows up in the wrong place, you get into a lot of trouble with compliance. So we believe in protecting the data all the way down at the field, or record, level. That's what we do. >> And you guys doing all kinds of encryption, or other things? >> Yes. So we do data transformation, which encompasses three different things. It can be tokenization, which is format preserving. We do real encryption with counter mode, or we can do masked views. So tokenization, encryption, and masking, all with the same platform. >> So pretty wide-ranging capabilities with respect to having that kind of safety. >> Yes. Because it all depends on how the data is used down the road. Data is created all the time. Data flows through pipelines all the time. You want to make sure that you protect the data, but don't lose the utility of the data. That's where we provide all that flexibility. >> So Kurt was on stage today in one of the keynotes. He's the VP of the platform at AWS. >> Yes. >> He was talking about encrypt everything. He said we need to rethink encryption. Okay, okay, good job. We like that. But then he said, "We have encryption at rest." >> Yes. >> That's kind of been there, done that. >> Yes. >> And, in-flight? >> Yeah. That's been there. >> But what about in-use? >> So that's exactly the gap we plug. What happens right now is that data at rest is protected because of disks that are already self-encrypting, or you have transparent data encryption that comes native with the database. You have data in-flight that is protected because of SSL. But when the data is actually being processed, when it's in the memory of the database or datastore, it is exposed.
So the threat is, if the credentials of the database are compromised, as happened back then with Starwood, or if the cloud infrastructure is compromised with some sort of an insider threat like a Capital One, that data is exposed. That's precisely what we solve by making sure that the data is protected as soon as it's created. We use standard encryption algorithms, AES, and we either do format preserving, or true encryption with counter mode. And that data, it doesn't really matter where it ends up, >> Yeah. >> because it's always protected. >> Well, that's awesome. And I think this brings up the point that we want been covering on SiliconAngle in theCUBE, is that there's been structural change that's happened, >> Yes. >> called cloud computing, >> Yes. >> and then hybrid. Okay. Scale, role of data, higher level abstraction of services, developers are in charge, value creations, startups, and big companies. That success is causing now, a new structural change happening now. >> Yes. >> This is one of them. What areas do you see that are happening right now that are structurally changing, that's right in front of us? One is, more cloud native. So the success has become now the problem to solve - >> Yes. >> to get to the next level. >> Yeah. >> What are those, some of those? >> What we see is that instead of security being an afterthought, something that you use as a watchdog, you create ways of monitoring where data is being exposed, or data is being exfiltrated, you want to build security into the data pipeline itself. As soon as data is created, you identify what is sensitive data, and you encrypt it, or tokenize it as it flows into the pipeline using things like Kafka plugins, or what we are very clearly differentiating ourselves with is, proxy architectures so that it's completely transparent. You think you're writing to the datastore, but you're actually writing to the proxy, which in turn encrypts the data before its stored. >> Do you think that's an efficient way to do it, or is the only way to do it? >> It is a much more efficient way of doing it because of the fact that you don't need any app-dev resources. There are many other ways of doing it. In fact, the cloud vendors provide development kits where you can just go do it yourself. So that is actually something that we completely avoid. And what makes it really, really interesting is that once the data is encrypted in the data store, or database, we can do what is known as "Privacy Enhanced Computation." >> Mm. >> So we can actually process that data without decrypting it. >> Yeah. And so proxies then, with cloud computing, can be very fast, not a bottleneck that could be. >> In fact, the cloud makes it so. It's very hard to - >> You believe that? >> do these things in static infrastructure. In the cloud, there's infinite amount of processing available, and there's containerization. >> And you have good network. >> You have very good network, you have load balancers, you have ways of creating redundancy. >> Mm. So the cloud is actually enabling solutions like this. >> And the old way, proxies were seen as an architectural fail, in the old antiquated static web. >> And this is where startups don't have the baggage, right? We didn't have that baggage. (John laughs) We looked at the problem and said, of course we're going to use a proxy because this is the best way to do this in an efficient way. 
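Ameesh names the standard building blocks here, AES in counter mode for true encryption alongside masked views, and a minimal sketch of those two primitives looks roughly like the following. It assumes the third-party "cryptography" package and is a generic illustration of the primitives, not Baffle's implementation; in the proxy deployment described above, these transformations would happen transparently so the application never touches the key.

```python
# Generic sketch of field-level protection with AES in counter (CTR) mode plus
# a masked view of the same value. Requires "pip install cryptography".
# This illustrates the standard primitives only; it is not Baffle's code.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_field(key: bytes, plaintext: str) -> tuple[bytes, bytes]:
    """Encrypt a single record field; returns (nonce, ciphertext)."""
    nonce = os.urandom(16)                       # unique per field value
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce, enc.update(plaintext.encode()) + enc.finalize()

def decrypt_field(key: bytes, nonce: bytes, ciphertext: bytes) -> str:
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return (dec.update(ciphertext) + dec.finalize()).decode()

def masked_view(card_number: str) -> str:
    """Analyst-facing view: keep only the last four digits."""
    digits = [c for c in card_number if c.isdigit()]
    return "****-****-****-" + "".join(digits[-4:])

key = os.urandom(32)                             # 256-bit data-encryption key
nonce, ct = encrypt_field(key, "4111-1111-1111-1234")
print("stored ciphertext:", ct.hex())            # what lands in the datastore
print("masked view:      ", masked_view("4111-1111-1111-1234"))
print("recovered:        ", decrypt_field(key, nonce, ct))
```

The nonce is stored alongside the ciphertext, and with CTR mode the ciphertext is the same length as the plaintext, which is part of what makes field- and record-level protection practical.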
>> Well, you bring up something that's happening right now that I hear a lot of CSOs and CIOs and executives, CXOs, say all the time: "Our", I won't say the word, "Our stuff has gotten complicated." >> Yes. >> So now I have tool sprawl, >> Yeah. >> I have skill gaps, and on the rise, all these new managed services coming at me from the vendors who have never experienced my problem. And their reaction is, they don't get my problem, and they don't have the right solutions, it's more complexity. They solve the complexity by adding more complexity. >> Yes. I think, again, the proxy approach is very simple. >> That you're solving that with that approach. >> Exactly. It's very simple. And again, we don't get in the way. That's really the biggest differentiator. The forcing function really here is compliance, right? Because compliance is forcing these CSOs to actually adopt these solutions. >> All right, so love the compliance angle, love the proxy for its ease of use, take the heavy lifting away, no operational problems or deviations. Now let's talk about workloads. >> Yeah. >> 'Cause this is where the use is. So you've got workloads being run at large scale, a lot of data moving around, compute as well. What's the challenge there? >> I think it's the volume of the data. Traditional solutions that rely on legacy tokenization would, I think, replicate the entire storage, because they would create a token vault, for example. You cannot do that at this scale. You have to do something that's a lot more efficient, which is where you have to do it with a cryptography approach. So the workloads are diverse: lots of large files in the workloads, as well as structured workloads. What we have is a solution that actually goes across the board. We can do unstructured data with HTTP proxies, we can do structured data with SQL proxies. And that's how we are able to provide a complete solution for the pipeline. >> So, I mean, how about the on-premise versus the cloud workload dynamic right now? Hybrid is a steady state right now. >> Yeah. >> Multi-cloud is a consequence of having multiple vendors, not true multi-cloud, but like, okay, they have Azure there, AWS here, I get that. But hybrid really is the steady state. >> Yes. >> Cloud operations. How are the workloads, the analytics, the data being managed on-prem and in the cloud? What's their relationship? What's the trend? What are you seeing happening there? >> I think the biggest trend we see is pipelining, right? The new ETL is streaming. You have these Kafka and Kinesis capabilities that are coming into the picture, where data is being ingested all the time. It is not a one-time migration. It's a stream. >> Yeah. >> So plugging into that stream is very important from an ingestion perspective. >> So it's not just a watchdog. >> No. >> It's the pipelining. >> It's built in. It's built-in, it's real time. That's where streaming gives you another, diverse access to data. >> Exactly. >> Data lakes. You got data lakes, you have pipelines, you got streaming, you mentioned that. So talk about the old-school OLTP, the old BI world. I think Power BI's like a $30 billion product. >> Yeah. >> And you got Tableau built on OLTP, building cubes. Aren't we just building cubes in a new way, or, >> Well. >> is there any relevance to the old school? >> I think there is some relevance, and in fact that's again another place where the proxy architecture really helps, because it doesn't matter when your application was built.
You can use Tableau, which nobody has any control over, and still process encrypted data. And so can Power BI; any SQL application can be used. And that's actually exactly what we like. >> So, I was talking to your team, I knew you were coming on, and they gave me a sound bite that I'm going to read to the audience, and I want to get your reaction to it. >> Sure. >> 'Cause I love this. I fell out of my chair when I first read this. "Data is the new oil." In 2010 that was mentioned here on theCUBE, of course. "Data is the new oil, but we have to ensure that it does not become the next asbestos." Okay. That is really clever. So we all know about asbestos. I'll add to that, Dave Vellante says, "Lead paint too." Remember lead paint? (Ameesh laughs) You got to scrape it out and repaint the house. Asbestos obviously causes a lot of cancer. You know, joking aside, the point is, it's problematic. >> It's the asset. >> Explain why that sentence is relevant. >> Sure. It's the assets and liabilities argument, right? You have an asset, which is data, but thanks to compliance regulations, and Gartner says 75% of the world will be subject to privacy regulations by 2023, it's a liability. So if you don't store your data well, if you don't process your data responsibly, you are going to be liable. So while it might be the oil and you're going to get lots of value out of it, be careful about the flip side. >> And the point is, there could be the "Grim Reaper" waiting for you if you don't do it right; the consequences, quantified, would be being out of business. >> Yes. But here's something that we just discovered, actually, from our survey that we did. While 93% of respondents said that they have had lots of compliance-related effects on their budgets, 75% actually thought that it makes them better. They can use their security posture as a competitive differentiator. That's very heartening to us. We don't like to sell the fear aspect of this. >> Yeah. We like to sell the fact that you look better compared to your neighbor if you have better data hygiene, back to the... >> There's the fear of missing out, or as they say, "Keeping up with the Joneses", making sure that your yard looks better than the next one. I get the vanity of that, but you're solving real problems. And this is interesting. And I want to get your thoughts on this. I read that you guys protect more than a 100 billion records across highly regulated industries: financial services, healthcare, industrial IOT, retail, and government. Is that true? >> Absolutely. Because what we are doing is enabling SaaS vendors to actually allow their customers to control their data. So we've had a SaaS vendor who has been working with us for over three years now. They store confidential data from 30 different banks in the country. >> That's a lot of records. >> That's where the records, and. >> How many customers do you have? >> Well, I think. >> The next round of funding's (Ameesh laughs) probably, they're linin' up to put money into you guys. >> Well, again, this is a very important problem, and people's businesses are dependent on this. We're just happy to provide the best tool out there that can do this. >> Okay, so what's your business model behind it? I love the success, by the way; I wanted to quote that stat to, one, verify it. What's the business model, service, software? >> The business model is software. We don't want anybody to send us their confidential data. We embed our software into our customers' environments.
In case of SaaS, we are not even visible, we are completely embedded. We are doing other relationships like that right now. >> And they pay you how? >> They pay us based on the volume of the data that they're protecting. >> Got it. >> That in that case which is a large customers, large enterprise customers. >> Pay as you go. >> It is pay as you go, everything is annual licenses. Although, multi-year licenses are very common because once you adopt the solution, it is very sticky. And then for smaller customers, we do base our pricing also just on databases. >> Got it. >> The number of databases. >> And the technology just reviewed low-code, no-code implementation kind of thing, right? >> It is by definition, no code when it comes to proxy. >> Yeah. >> When it comes to API integration, it could be low code. Yeah, it's all cloud-friendly, cloud-native. >> No disruption to operations. >> Exactly. >> That's the culprit. >> Well, yeah. >> Well somethin' like non-disruptive operations.(laughs) >> No, actually I'll give an example of a migration, right? We can do live migrations. So while the databases are still alive, as you write your. >> Live secure migrations. >> Exactly. You're securing - >> That's the one that manifests. >> your data as it migrates. >> Awright, so how much funding have you guys raised so far? >> We raised 36 and a half, series A, and B now. We raised that late last year. >> Congratulations. >> Thank you. >> Who's the venture funders? >> True Ventures is our largest investor, followed by Celesta Capital, National Grid Partners is an investor, and so is Engineering Capital and Clear Vision Ventures. >> And the seed and it was from Engineering? >> Seed was from Engineering. >> Engineering Capital. >> And then True came in very early on. >> Okay. >> Greenspring is also an investor in us, so is Industrial Ventures. >> Well, privacy has a big concern, big application for you guys. Privacy, secure migrations. >> Very much so. So what we are believe very strongly in the security's personal, security is yours and my data. Privacy is what the data collector is responsible for. (John laughs) So the enterprise better be making sure that they've complied with privacy regulations because they don't tell you how to protect the data. They just fine you. >> Well, you're not, you're technically long, six year old start company. Six, seven years old. >> Yeah. >> Roughly. So yeah, startups can go on long like this, still startup, privately held, you're growing, got big records under management there, congratulations. What's next? >> I think scaling the business. We are seeing lots of applications for this particular solution. It's going beyond just regulated industries. Like I said, it's a differentiating factor now. >> Yeah >> So retail, and a lot of other IOT related industrial customers - >> Yeah. >> are also coming. >> Ameesh, talk about the show here. We're at re:inforce, actually we're live here on the ground, the show floor buzzing. What's your takeaway? What's the vibe this year? What if you had to share what your opinion the top story here at the show, what would be the two top things, or three things? >> I think it's two things. First of all, it feels like we are back. (both laugh) It's amazing to see people on the show floor. >> Yeah. >> People coming in and asking questions and getting to see the product. The second thing that I think is very gratifying is, people come in and say, "Oh, I've heard of you guys." So thanks to digital media, and digital marketing. 
>> They weren't baffled. They want baffled. >> Exactly. >> They use baffled. >> Looks like, our outreach has helped, >> Yeah. >> and has kept the continuity, which is a big deal. >> Yeah, and now you're a CUBE alumni, welcome to the fold. >> Thank you. >> Appreciate you coming on. And we're looking forward to profiling you some day in our startup showcase, and certainly, we'll see you in the Palo Alto studios. Love to have you come in for a deeper dive. >> Sounds great. Looking forward to it. >> Congratulations on all your success, and thanks for coming on theCUBE, here at re:inforce. >> Thank you, John. >> Okay, we're here in, on the ground live coverage, Boston, Massachusetts for AWS re:inforce 22. I'm John Furrier, your host of theCUBE with Dave Vellante, who's in an analyst session, right? He'll be right back with us on the next interview, coming up shortly. Thanks for watching. (gentle music)
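One technical thread in the conversation above, that ingestion is now a continuous stream ("the new ETL is streaming") rather than a one-time migration, can be sketched with a plain Kafka consumer. This is a minimal sketch assuming the kafka-python client; the broker address, topic and record fields are hypothetical placeholders, not any specific vendor's pipeline.

```python
# Minimal sketch of stream-based ingestion ("the new ETL is streaming").
# Assumes the kafka-python client; the broker address, topic, and record
# fields are hypothetical placeholders, not a specific vendor pipeline.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transactions",                        # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="ingest-demo",
)

for message in consumer:
    record = message.value
    # In a real pipeline this is where per-record processing happens
    # (for example, handing sensitive fields to a protection service)
    # before the record is written onward to the warehouse or lake.
    print(record.get("customer_id"), record.get("amount"))
```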

Published Date : Jul 26 2022



Tony Baer, Doug Henschen and Sanjeev Mohan, Couchbase | Couchbase Application Modernization


 

(upbeat music) >> Welcome to this CUBE Power Panel where we're going to talk about application modernization, also success templates, and take a look at some new survey data to see how CIOs are thinking about digital transformation, as we get deeper into the post isolation economy. And with me are three familiar VIP guests to CUBE audiences. Tony Bear, the principal at DB InSight, Doug Henschen, VP and principal analyst at Constellation Research and Sanjeev Mohan principal at SanjMo. Guys, good to see you again, welcome back. >> Thank you. >> Glad to be here. >> Thanks for having us. >> Glad to be here. >> All right, Doug. Let's get started with you. You know, this recent survey, which was commissioned by Couchbase, 650 CIOs and CTOs, and IT practitioners. So obviously very IT heavy. They responded to the following question, "In response to the pandemic, my organization accelerated our application modernization strategy and of course, an overwhelming majority, 94% agreed or strongly agreed." So I'm sure, Doug, that you're not shocked by that, but in the same survey, modernizing existing technologies was second only behind cyber security is the top investment priority this year. Doug, bring us into your world and tell us the trends that you're seeing with the clients and customers you work with in their modernization initiatives. >> Well, the survey, of course, is spot on. You know, any Constellation Research analyst, any systems integrator will tell you that we saw more transformation work in the last two years than in the prior six to eight years. A lot of it was forced, you know, a lot of movement to the cloud, a lot of process improvement, a lot of automation work, but transformational is aspirational and not every company can be a leader. You know, at Constellation, we focus our research on those market leaders and that's only, you know, the top 5% of companies that are really innovating, that are really disrupting their markets and we try to share that with companies that want to be fast followers, that these are the next 20 to 25% of companies that don't want to get left behind, but don't want to hit some of the same roadblocks and you know, pioneering pitfalls that the real leaders are encountering when they're harnessing new technologies. So the rest of the companies, you know, the cautious adopters, the laggards, many of them fall by the wayside, that's certainly what we saw during the pandemic. Who are these leaders? You know, the old saw examples that people saw at the Amazons, the Teslas, the Airbnbs, the Ubers and Lyfts, but new examples are emerging every year. And as a consumer, you immediately recognize these transformed experiences. One of my favorite examples from the pandemic is Rocket Mortgage. No disclaimer required, I don't own stock and you're not client, but when I wanted to take advantage of those record low mortgage interest rates, I called my current bank and some, you know, stall word, very established conventional banks, I'm talking to you Bank of America, City Bank, and they were taking days and weeks to get back to me. Rocket Mortgage had the locked in commitment that day, a very proactive, consistent communications across web, mobile, email, all customer touchpoints. I closed in a matter of weeks an entirely digital seamless process. This is back in the gloves and masks days and the loan officer came parked in our driveway, wiped down an iPad, handed us that iPad, we signed all those documents digitally, completely electronic workflow. 
The only wet signatures required were those demanded by the state. So it's easy to spot these transformed experiences. You know, Rocket had most of that in place before the pandemic, and that's why they captured 8% of the national mortgage market by 2020 and they're on track to hit 10% here in 2022. >> Yeah, those are great examples. I mean, I'm not a shareholder either, but I am a customer. I even went through the same thing in the pandemic. It was all done in digital it was a piece of cake and I happened to have to do another one with a different firm and stuck with that firm for a variety of reasons and it was night and day. So to your point, it was a forced merge to digital. If you were there beforehand, you had real advantage, it could accelerate your lead during the pandemic. Okay, now Tony bear. Mr. Bear, I understand you're skeptical about all this buzz around digital transformation. So in that same survey, the data shows that the majority of respondents said that their digital initiatives were largely reactive to outside forces, the pandemic compliance changes, et cetera. But at the same time, they indicated that the results while somewhat mixed were generally positive. So why are you skeptical? >> The reason being, and by the way, I have nothing against application modernization. The problem... I think the problem I ever said, it often gets conflated with digital transformation and digital transformation itself has become such a buzzword and so overused that it's really hard, if not impossible to pin down (coughs) what digital transformation actually means. And very often what you'll hear from, let's say a C level, you know, (mumbles) we want to run like Google regardless of whether or not that goal is realistic you know, for that organization (coughs). The thing is that we've been using, you know, businesses have been using digital data since the days of the mainframe, since the... Sorry that data has been digital. What really has changed though, is just the degree of how businesses interact with their customers, their partners, with the whole rest of the ecosystem and how their business... And how in many cases you take look at the auto industry that the nature of the business, you know, is changing. So there is real change of foot, the question is I think we need to get more specific in our goals. And when you look at it, if we can boil it down to a couple, maybe, you know, boil it down like really over simplistically, it's really all about connectedness. No, I'm not saying connectivity 'cause that's more of a physical thing, but connectedness. Being connected to your customer, being connected to your supplier, being connected to the, you know, to the whole landscape, that you operate in. And of course today we have many more channels with which we operate, you know, with customers. And in fact also if you take a look at what's happening in the automotive industry, for instance, I was just reading an interview with Bill Ford, you know, their... Ford is now rapidly ramping up their electric, you know, their electric vehicle strategy. And what they realize is it's not just a change of technology, you know, it is a change in their business, it's a change in terms of the relationship they have with their customer. Their customers have traditionally been automotive dealers who... And the automotive dealers have, you know, traditionally and in many cases by state law now have been the ones who own the relationship with the end customer. 
But when you go to an electric vehicle, the product becomes a lot more of a software product. And in turn, that means that Ford would have much more direct interaction with its end customers. So that's really what it's all about. It's about, you know, connectedness, it's also about the ability to act, you know, we can say agility, it's about ability not just to react, but to anticipate and act. And so... And of course with all the proliferation, you know, the explosion of data sources and connectivity out there and the cloud, which allows much more, you know, access to compute, it changes the whole nature of the ball game. The fact is that we have to avoid being overwhelmed by this and make our goals more, I guess, tangible, more strictly defined. >> Yeah, now... You know, great points there. And I want to just bring in some survey data, again, two thirds of the respondents said their digital strategies were set by IT and only 26% by the C-suite, 8% by the line of business. Now, this was largely a survey of CIOs and CTOs, but, wow, doesn't seem like the right mix. It's a Doug's point about, you know, leaders in lagers. My guess is that Rocket Mortgage, their digital strategy was led by the chief digital officer potentially. But at the same time, you would think, Tony, that application modernization is a prerequisite for digital transformation. But I want to go to Sanjeev in this war in the survey. And respondents said that on average, they want 58% of their IT spend to be in the public cloud three years down the road. Now, again, this is CIOs and CTOs, but (mumbles), but that's a big number. And there was no ambiguity because the question wasn't worded as cloud, it was worded as public cloud. So Sanjeev, what do you make of that? What's your feeling on cloud as flexible architecture? What does this all mean to you? >> Dave, 58% of IT spend in the cloud is a huge change from today. Today, most estimates, peg cloud IT spend to be somewhere around five to 15%. So what this number tells us is that the cloud journey is still in its early days, so we should buckle up. We ain't seen nothing yet, but let me add some color to this. CIOs and CTOs maybe ramping up their cloud deployment, but they still have a lot of problems to solve. I can tell you from my previous experience, for example, when I was in Gartner, I used to talk to a lot of customers who were in a rush to move into the cloud. So if we were to plot, let's say a maturity model, typically a maturity model in any discipline in IT would have something like crawl, walk, run. So what I was noticing was that these organizations were jumping straight to run because in the pandemic, they were under the gun to quickly deploy into the cloud. So now they're kind of coming back down to, you know, to crawl, walk, run. So basically they did what they had to do under the circumstances, but now they're starting to resolve some of the very, very important issues. For example, security, data privacy, governance, observability, these are all very big ticket items. Another huge problem that nav we are noticing more than we've ever seen, other rising costs. Cloud makes it so easy to onboard new use cases, but it leads to all kinds of unexpected increase in spikes in your operating expenses. So what we are seeing is that organizations are now getting smarter about where the workloads should be deployed. And sometimes it may be in more than one cloud. Multi-cloud is no longer an aspirational thing. 
So that is a huge trend that we are seeing and that's why you see there's so much increased planning to spend money in public cloud. We do have some issues that we still need to resolve. For example, multi-cloud sounds great, but we still need some sort of single pane of glass, control plane so we can have some fungibility and move workloads around. And some of this may also not be in public cloud, some workloads may actually be done in a more hybrid environment. >> Yeah, definitely. I call it Supercloud. People win sometimes-- >> Supercloud. >> At that term, but it's above multi-cloud, it floats, you know, on topic. But so you clearly identified some potholes. So I want to talk about the evolution of the application experience 'cause there's some potholes there too. 81% of their respondents in that survey said, "Our development teams are embracing the cloud and other technologies faster than the rest of the organization can adopt and manage them." And that was an interesting finding to me because you'd think that infrastructure is code and designing insecurity and containers and Kubernetes would be a great thing for organizations, and it is I'm sure in terms of developer productivity, but what do you make of this? Does the modernization path also have some potholes, Sanjeev? What are those? >> So, first of all, Dave, you mentioned in your previous question, there's no ambiguity, it's a public cloud. This one, I feel it has quite a bit of ambiguity because it talks about cloud and other technologies, that sort of opens up the kimono, it's like that's everything. Also, it says that the rest of the organization is not able to adopt and manage. Adoption is a business function, management is an IT function. So I feed this question is a bit loaded. We know that app modernization is here to stay, developing in the cloud removes a lot of traditional barriers or procuring instantiating infrastructure. In addition, developers today have so many more advanced tools. So they're able to develop the application faster because they have like low-code/no-code options, they have notebooks to write the machine learning code, they have the entire DevOps CI/CD tool chain that makes it easy to version control and push changes. But there are potholes. For example, are developers really interested in fixing data quality problems, all data, privacy, data, access, data governance? How about monitoring? I doubt developers want to get encumbered with all of these operationalization management pieces. Developers are very keen to deliver new functionality. So what we are now seeing is that it is left to the data team to figure out all of these operationalization productionization things that the developers have... You know, are not truly interested in that. So which actually takes me to this topic that, Dave, you've been quite actively covering and we've been talking about, see, the whole data mesh. >> Yeah, I was going to say, it's going to solve all those data quality problems, Sanjeev. You know, I'm a sucker for data mesh. (laughing) >> Yeah, I know, but see, what's going to happen with data mesh is that developers are now going to have more domain resident power to develop these applications. What happens to all of the data curation governance quality that, you know, a central team used to do. So there's a lot of open ended questions that still need to be answered. >> Yeah, That gets automated, Tony, right? With computational governance. So-- >> Of course. 
>> It's not trivial, it's not trivial, but I'm still an optimist by the end of the decade we'll start to get there. Doug, I want to go to you again and talk about the business case. We all remember, you know, the business case for modernization that is... We remember the Y2K, there was a big it spending binge and this was before the (mumbles) of the enterprise, right? CIOs, they'd be asked to develop new applications and the business maybe helps pay for it or offset the cost with the initial work and deployment then IT got stuck managing the sprawling portfolio for years. And a lot of the apps had limited adoption or only served a few users, so there were big pushes toward rationalizing the portfolio at that time, you know? So do I modernize, they had to make a decision, consolidate, do I sunset? You know, it was all based on value. So what's happening today and how are businesses making the case to modernize, are they going through a similar rationalization exercise, Doug? >> Well, the Y2K era experience that you talked about was back in the days of, you know, throw the requirements over the wall and then we had waterfall development that lasted months in some cases years. We see today's most successful companies building cross functional teams. You know, the C-suite the line of business, the operations, the data and analytics teams, the IT, everybody has a seat at the table to lead innovation and modernization initiatives and they don't start, the most successful companies don't start by talking about technology, they start by envisioning a business outcome by envisioning a transformed customer experience. You hear the example of Amazon writing the press release for the product or service it wants to deliver and then it works backwards to create it. You got to work backwards to determine the tech that will get you there. What's very clear though, is that you can't transform or modernize by lifting and shifting the legacy mess into the cloud. That doesn't give you the seamless processes, that doesn't give you data driven personalization, it doesn't give you a connected and consistent customer experience, whether it's online or mobile, you know, bots, chat, phone, everything that we have today that requires a modern, scalable cloud negative approach and agile deliver iterative experience where you're collaborating with this cross-functional team and course correct, again, making sure you're on track to what's needed. >> Yeah. Now, Tony, both Doug and Sanjeev have been, you know, talking about what I'm going to call this IT and business schism, and we've all done surveys. One of the things I'd love to see Couchbase do in future surveys is not only survey the it heavy, but also survey the business heavy and see what they say about who's leading the digital transformation and who's in charge of the customer experience. Do you have any thoughts on that, Tony? >> Well, there's no question... I mean, it's kind like, you know, the more things change. I mean, we've been talking about that IT and the business has to get together, we talked about this back during, and Doug, you probably remember this, back during the Y2K ERP days, is that you need these cross functional teams, we've been seeing this. I think what's happening today though, is that, you know, back in the Y2K era, we were basically going into like our bedrock systems and having to totally re-engineer them. 
And today what we're looking at is that, okay, those bedrock systems, the ones that basically are keeping the lights on, okay, those are there, we're not going to mess with that, but on top of that, that's where we're going to innovate. And that gives us a chance to be more, you know, more directed and therefore we can bring these related domains together. I mean, that's why just kind of, you know, talk... Where Sanjeev brought up the term of data mesh, I've been a bit of a cynic about data mesh, but I do think that work and work is where we bring a bunch of these connected teams together, teams that have some sort of shared context, though it's everybody that's... Every team that's working, let's say around the customer, for instance, which could be, you know, in marketing, it could be in sales, order processing in some cases, you know, in logistics and delivery. So I think that's where I think we... You know, there's some hope and the fact is that with all the advanced, you know, basically the low-code/no-code tools, they are ways to bring some of these other players, you know, into the process who previously had to... Were sort of, you know, more at the end of like a, you know, kind of a... Sort of like they throw it over the wall type process. So I do believe, but despite all my cynicism, I do believe there's some hope. >> Thank you. Okay, last question. And maybe all of you could answer this. Maybe, Sanjeev, you can start it off and then Doug and Tony can chime in. In the survey, about a half, nearly half of the 650 respondents said they could tangibly show their organizations improve customer experiences that were realized from digital projects in the last 12 months. Now, again, not surprising, but we've been talking about digital experiences, but there's a long way to go judging from our pandemic customer experiences. And we, again, you know, some were great, some were terrible. And so, you know, and some actually got worse, right? Will that improve? When and how will it improve? Where's 5G and things like that fit in in terms of improving customer outcomes? Maybe, Sanjeev, you could start us off here. And by the way, plug any research that you're working on in this sort of area, please do. >> Thank you, Dave. As a resident optimist on this call, I'll get us started and then I'm sure Doug and Tony will have interesting counterpoints. So I'm a technology fan boy, I have to admit, I am in all of all these new companies and how they have been able to rise up and handle extreme scale. In this time that we are speaking on this show, these food delivery companies would have probably handled tens of thousands of orders in minutes. So these concurrent orders, delivery, customer support, geospatial location intelligence, all of this has really become commonplace now. It used to be that, you know, large companies like Apple would be able to handle all of these supply chain issues, disruptions that we've been facing. But now in my opinion, I think we are seeing this in, Doug mentioned Rocket Mortgage. So we've seen it in FinTech and shopping apps. So we've seen the same scale and it's more than 5G. It includes things like... Even in the public cloud, we have much more efficient, better hardware, which can do like deep learning networks much more efficiently. So machine learning, a lot of natural language programming, being able to handle unstructured data. 
So in my opinion, it's quite phenomenal to see how technology has actually come to rescue and as, you know, billions of us have gone online over the last two years. >> Yeah, so, Doug, so Sanjeev's point, he's saying, basically, you ain't seen nothing yet. What are your thoughts here, your final thoughts. >> Well, yeah, I mean, there's some incredible technologies coming including 5G, but you know, it's only going to pave the cow path if the underlying app, if the underlying process is clunky. You have to modernize, take advantage of, you know, serverless scalability, autonomous optimization, advanced data science. There's lots of cutting edge capabilities out there today, but you know, lifting and shifting you got to get your hands dirty and actually modernize on that data front. I mentioned my research this year, I'm doing a lot of in depth looks at some of the analytical data platforms. You know, these lake houses we've had some conversations about that and helping companies to harness their data, to have a more personalized and predictive and proactive experience. So, you know, we're talking about the Snowflakes and Databricks and Googles and Teradata and Vertica and Yellowbrick and that's the research I'm focusing on this year. >> Yeah, your point about paving the cow path is right on, especially over the pandemic, a lot of the processes were unknown. But you saw this with RPA, paving the cow path only got you so far. And so, you know, great points there. Tony, you get the last word, bring us home. >> Well, I'll put it this way. I think there's a lot of hope in terms of that the new generation of developers that are coming in are a lot more savvy about things like data. And I think also the new generation of people in the business are realizing that we need to have data as a core competence. So I do have optimism there that the fact is, I think there is a much greater consciousness within both the business side and the technical. In the technology side, the organization of the importance of data and how to approach that. And so I'd like to just end on that note. >> Yeah, excellent. And I think you're right. Putting data at the core is critical data mesh I think very well describes the problem and (mumbles) credit lays out a solution, just the technology's not there yet, nor are the standards. Anyway, I want to thank the panelists here. Amazing. You guys are always so much fun to work with and love to have you back in the future. And thank you for joining today's broadcast brought to you by Couchbase. By the way, check out Couchbase on the road this summer at their application modernization summits, they're making up for two years of shut in and coming to you. So you got to go to couchbase.com/roadshow to find a city near you where you can meet face to face. In a moment. Ravi Mayuram, the chief technology officer of Couchbase will join me. You're watching theCUBE, the leader in high tech enterprise coverage. (bright music)
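A recurring thread in the panel is that data quality, governance and monitoring get left to the data team and, per the "computational governance" aside, should increasingly be automated rather than handled by hand. As a rough, dependency-free sketch of what such an automated check might look like inside a pipeline (the rules and field names below are invented for illustration, not a real governance product):

```python
# Illustrative sketch of an automated data-quality gate of the kind the
# panel argues should be "computational" rather than manual. The rules
# and field names are made up for the example; real governance tooling
# is far richer than this.

RULES = {
    "customer_id": lambda v: isinstance(v, str) and len(v) > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
    "country": lambda v: v in {"US", "CA", "GB", "DE", "IN"},
}

def validate(record: dict) -> list[str]:
    """Return a list of human-readable violations for one record."""
    problems = []
    for field, rule in RULES.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not rule(record[field]):
            problems.append(f"bad value for {field}: {record[field]!r}")
    return problems

if __name__ == "__main__":
    batch = [
        {"customer_id": "c-100", "amount": 42.5, "country": "US"},
        {"customer_id": "", "amount": -3, "country": "FR"},
    ]
    for rec in batch:
        issues = validate(rec)
        status = "OK" if not issues else "REJECT"
        print(status, rec, issues)
```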

Published Date : May 19 2022



Pete Robinson, Salesforce & Shannon Champion, Dell Technologies | Dell Tech World 2022


 

>> theCUBE presents Dell Technologies World, brought to you by Dell. >> Welcome back to theCUBE. Lisa Martin and Dave Vellante are live in Las Vegas. We are covering our third day of Dell Technologies World 2022, the first live in-person event since 2019. It's been great to be here. We've had a lot of great conversations about all the announcements that Dell has made in the last couple of days, and we're going to unpack a little bit more of that now. One of our alumni is back with us. Shannon Champion joins us again, vice president of product marketing at Dell Technologies, and she's accompanied by Pete Robinson, the director of infrastructure engineering at Salesforce. Welcome. >> Thank you. >> So Shannon, you had a big announcement yesterday, around a lot of new software innovations. Did you hear about that? >> I heard a little something about that. >> Unpack that for us. >> Yeah, it's so exciting to be here in person and have such a big moment across our storage portfolio, to see that on the big stage and announce major updates across PowerStore, PowerMax and PowerFlex all together, just a ton of innovation across the storage portfolio. And you probably also heard a ton of focus on our software-driven innovation across those products, because our goal is really to deliver a continuously modern storage experience. That's what our customers are asking us for, that cloud experience. Let's get the most value from data no matter where it lives, on premises, in the public clouds or at the edge. That's what we unveiled, that's what we're releasing, and that's what we're excited to talk about. >> Now, Pete, Salesforce is a long-time Dell customer, but you're also its largest PowerMax customer, the biggest in the world. Tell us a little bit about what you guys are doing with PowerMax and your experience. >> Yeah, so for Salesforce, trust is our number one value, and that carries over into the infrastructure that we develop, we test and we roll out, and PowerMax has been a key part of that. We really like the technology in terms of availability, reliability, performance. And it has allowed us to continue to grow with our customers' continuing needs for more and more data. >> So what was kind of eye-popping to me was the emphasis on security. Not that you've not always emphasized security, but maybe Shannon, you could do a rundown of, maybe not all the features, but give us the high level. And then Pete, I wonder if you could comment on how you think about that as a practitioner. >> Sure. Yeah. So PowerMax has been leading for a long time in its space, and we're continuing to lean into that and continue to lead in that space. We're proud to say PowerMax is the world's most secure mission-critical storage platform. And the reason we can say that is because it really is designed for comprehensive cyber resiliency. It's designed with a zero trust security architecture, and in this particular release there are 19 different security features really embedded in there. So I'm not going to unpack all 19, but a couple of examples: multifactor authentication, also continuous ransomware anomaly detection leveraging CloudIQ, which is huge. And last but not least, we have the industry's most granular cyber recovery at scale: PowerMax can do up to 65 million immutable snapshots per array.
And that's 30 times more than our next nearest competitor. So really, when you're talking about recovery point objectives, PowerMax can't be beat. >> So what does that mean to you, Pete? >> Well, it's the same thing I was mentioning earlier, that trust factor. Security is a big part of that. Salesforce invests heavily in securing our customer data, because it really is the core foundation of our success. Our customers trust us with their data, and if we were to fail at that, we would lose that trust, and that's simply not an option. >> Let's talk about that trust for a minute. We've heard a lot about trust this week from Michael Dell. Talk to us about trust between Salesforce and Dell Technologies. You've been using them a long time, but the cultural alignment seems to be pretty spot on. >> I would agree. Both companies have a customer-first mentality. We succeed if the customer succeeds, and we see that going back and forth in that partnership. So Dell is successful when Salesforce is successful, and vice versa. And it goes beyond just the initial purchase of hardware or software: how you operate it, how you manage it, how you continue to develop together. We work closely with the Dell engineering teams, and we've worked closely in the development of the new PowerMax lines to where it's actually able to help us build our business, and again, continue to help Dell in the process. >> So you've got visibility on a lot of these new features, you're playing around with them. I obviously started with security because that's on top of everybody's mind, but what other things are important to you as a customer, and how do these new features map into that? Maybe you could talk about your experience; I think you're in beta with these features. >> Yeah, probably the biggest thing that we're seeing right now, other than the obvious enhancements in hardware, which we love, better performance, better scalability, better density, is some of the software functionality that Dell is starting to roll out. We've implemented CloudIQ for all of our PowerMax systems, and it's the same thing. We continue to find features that we would like, and we've actually worked closely with the CloudIQ team, and within a matter of weeks or months those features are popping up in CloudIQ that we can then continue to develop and use. >> Yeah, I think trust goes both ways in our partnership, right? So Salesforce can trust Dell to deliver the products they need to deliver their business outcomes, but we also have a relationship where we can trust that Salesforce is going to really help us develop the next generation product that's going to deliver the most value. >> Can you share some business outcomes that you've achieved so far leveraging PowerMax, and how it's really enabled, maybe from your organization's productivity perspective, what are some of those outcomes that you've achieved so far?
>> There are so many to choose from, but I would say probably the biggest thing that we've seen is, as we roll out new infrastructure, we have various generations that we deploy. When we went to the new PowerMax, initially we were concerned about whether our storage infrastructure could keep up with the new compute systems that we were rolling out. And when we went through and began testing it, we came to realize that the performance improvements alone that we were seeing were able to keep up with the compute demand, making that transition from the older VMAX platforms to the PowerMax practically seamless, and we were able to just deploy the new SKUs as they came out. >> Talk about the portfolio that you apply to PowerMax. I mean, it's the highest of the high end, mission critical, the toughest workloads on the planet. Salesforce has made a lot of acquisitions. Do you throw everything at PowerMax? Are you selective? What's your strategy there? >> It's selective. In other words, there's no square peg that meets every need. Acquisitions take some time to ingest; some run in the cloud, some run in first party. So we try to take a very intentional approach to where we deploy that technology. >> So 10 years ago, someone in your position, or maybe someone who works for you, probably spent a lot of time managing LUNs and tuning performance. How has that changed? >> We don't do that. (laughs) >> We can, right. So what do you do? Talk more, double click on that. Talk about how that transition occurred from really non-productive activities, managing storage boxes, to where you are today. What are you doing with those resources? >> It all comes out of automation. Hardware is hardware to a point, but you reach a point where the manageability scale just goes exponential, and we're way, well past that. And the only way we've been able to meet that need is to automate and really develop our operations, to be able to manage not just at a LUN level or even at the system level, but at the data center level, at the geographical location level, and then being able to manage from there. >> Okay. Really stupid question, but I'm going to ask it because I want to hear your answer. Why can't you just take a software-defined storage platform and run everything on that? Why do you need all these different platforms, and why do you have to spend all this money on PowerMax? Why can't you just do that? >> That's the million dollar question. I ask that all the time. (laughs) I think software-defined is on its way. It's come a long way just in the last decade. But in terms of supporting what I consider mission critical, large scale applications, it's just simply not on par yet with what we do with PowerMax, for example. >> And that's exactly how we position it in our portfolio, right? So PowerMax runs in 95% of the Fortune 100 companies, the top 20 healthcare companies, the top 10 financial services companies in the world. So it's really mission critical, high end; it has all of the enterprise-level features and capabilities to really have that availability that's so important to a lot of companies like Salesforce. And Pete's right, software-defined is on its way, and it provides a lot of agility there.
But at the end of the day, for mission critical storage, it's all about PowerMax. >> I wonder if we're ever going to get to... I mean, it was an interesting answer, because I inferred from it that you're hopeful and even optimistic that someday we'll get to parity. But I wonder, because you can't be just close enough. It's almost, you have to be. >> I think the key answer to that is that software-defined only gets you halfway there. The other side of the coin is the application ecosystem has to change to be able to solve the other side of it. Because if you simply take an application that runs on a PowerMax and try to just forklift it over to software-defined, you're not going to have very much luck. >> Recovery has to be moved up the stack. >> Operations, recovery, the whole works. >> Shannon, can you comment on how customers like Salesforce, what's your process for involving them in testing, in the roadmap, in the strategic direction that you guys are going? >> Great question. Sure. Yeah. So customer feedback is huge. You've heard it, I'm sure this is not new, right? Product development and engineering, we love to hear from our customers, and there are multiple ways. You heard about beta testing, which we're really fortunate that Salesforce can help us with, providing that feedback for our new releases. But we have user groups, we have forums, we hear directly from our sales teams. Our customers aren't shy, they're willing to give us their feedback. And at the end of the day, we take that feedback and make sure that we're prioritizing the right things in our product management and engineering teams, so that we're delivering the things that matter most, first. >> We've heard a lot of that this week, so I would agree. Guys, thank you so much for joining Dave and me, talking about Salesforce, what you're doing with PowerMax, and all the stuff that you announced yesterday alone. Hopefully you get to go home and get a little bit of rest. >> Yes. >> I'm sure there's never a dull moment. Great to have you. >> Thank you. >> For our guests and Dave Vellante, I'm Lisa Martin, and you're watching theCUBE. We are live, day three of our coverage of Dell Technologies World 2022. Dave and I will be right back with our final guest of the show.
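The "continuous ransomware anomaly detection" mentioned above is, at heart, a statistical watch on how quickly data is changing. The sketch below is only a conceptual illustration of that idea, a rolling baseline on daily change rates with an outlier threshold; it is not how CloudIQ or PowerMax actually implement the feature, and the numbers are invented.

```python
# Conceptual illustration of ransomware anomaly detection on storage metrics:
# flag days whose data-change rate is far above a rolling baseline.
# This is NOT CloudIQ's or PowerMax's actual algorithm; the series below is
# invented and the threshold is arbitrary, purely to show the idea.
from statistics import mean, stdev

def flag_anomalies(change_rates, window=7, threshold=3.0):
    """Return indices whose change rate exceeds mean + threshold * stdev
    of the preceding `window` observations."""
    flagged = []
    for i in range(window, len(change_rates)):
        baseline = change_rates[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and change_rates[i] > mu + threshold * sigma:
            flagged.append(i)
    return flagged

if __name__ == "__main__":
    # Daily % of blocks rewritten; the spike at the end mimics mass encryption.
    daily_change = [1.2, 1.1, 1.3, 1.0, 1.2, 1.4, 1.1, 1.3, 1.2, 38.5]
    print("anomalous days:", flag_anomalies(daily_change))
```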

Published Date : May 5 2022



Raja Hammoud, Coupa | Coupa Insp!re 2022


 

(upbeat music) >> Hey guys and girls. Welcome back to theCUBE's coverage of Coupa Inspire 2022, from the Cosmopolitan, in bustling Las Vegas. Lisa Martin here, and as I mentioned, day two of our coverage and fresh from the main stage, Raja Hammoud joins me, the Executive Vice President of products at Coupa. Raja, welcome back to theCUBE and happy 10th anniversary at Coupa. >> Oh, thank you, thank you, thank you, and welcome back to Inspire. >> Thank you. It's so great- >> We're so happy you're here. >> It's great to be here. So you're just about coming up on your 10 anniversary with Coupa. You showed some great photos of your time there but you've seen, you've lived the evolution that is this rocket ship that's Coupa. >> Raja: It's been incredible journey. I really couldn't believe at first it's been 10. This is the longest I've ever been anywhere. And I honestly feel more refreshed and excited than even when I joined back in the day 10 years ago. And so much has changed, but also so much has not. >> Lisa: Yeah. >> The size of course. We were like 60 people when I joined, the product development team was one person in, in a product, roughly 12 engineers, and fast forward to the scale that's today, it's phenomenal difference. But what has not changed is the, the core values, how, the hustle, how people love working with each other, how we support customers, how we keep stepping up our game how we believe none of us is as smart as all of us, and the community keeps getting stronger and stronger. It's been, it's been really exciting journey. >> The theme of none of us is as smarter as all of us, I'm not sure if I got that right, but the idea is you feel that when you're talking to Coupa partners, I've had the opportunity to talk with Coupa partners and customers and Coupa folks that, that is not just a value statement, people are living that. >> Raja: Yeah. It's, it's everywhere. In the, in the company walls, outside the company walls, you often see product people in different organizations where, they start living in an ivory tower, they think they know everything, I mean, back to what we were discussing earlier about Barbara, when she talked about, get out of your doors, right? A lot of people can tend to do that. We always, from the beginning, believed in the best ideas are out there and you collaborate with each other. And I truly, truly believe that the success that we have achieved today to our community is in a large, large part, because we believed in that. So like on Monday, we hosted, I can't keep track of the number now, so, so many in-parallel Community Advisory Board meetings, and just talking to the products managers and everybody is buzzing with new ideas. And when we go back, there's so much new innovation that has just been co-created here in this conference, and this keeps going on and on and on. >> Lisa: Yeah. I like how you call it, the Community Advisory Board. I'm still used to hearing CAB as Customer Advisory Board, but what Coupa has built, especially with the launch of the Moonshot, the, the community AI, is, is just that. >> Yes. >> It's a very collaborative community. One of the things that's around here, hashtags everywhere, but #United by the Power of Spend. >> Yes. >> What does that mean to you as the EVP of products, and what do you think that means to the community? >> When I think... What we are doing, we're building this platform that is powering all these businesses out there. 
And the reality of it is you can only, only do so much when you try to do things alone. When we are doing things together, we are way more successful, we are more profitable, we are more sustainable, we are more efficient. And community.ai from a technology standpoint, is making that happen, because what we are doing is taking AI, applying it to all this 3.3 trillion in data, and then bringing back prescriptions that we give back to each and every customer so that everybody can see where they are, how they up their games, and we connect them with other people like them. Now, people love coming to conferences like this, but even in conferences like this, if you think about it, the people you're going to meet, it's, some people are going to do matchmaking but you are also losing an opportunities of meeting the maximum number of people who've done exactly the thing that you did. But when you have the ability to look at all of that data and you can match make people. So we did that already with, for sourcing professionals. So if you are somebody who source a certain category, we can tell somebody else has done something like this in this geography and we offer you to connect to each other. >> Lisa: Wow. >> So this is incredibly powerful way where we are really uniting the whole community by spend, making everybody truly stronger together. >> Lisa: Matchmaker in, in a sense. >> It is matchmaking. >> But it's, but it's- >> It's Spend matchmaking. >> Spend matchmaking, but it's also the opportunity to unite professionals across sourcing, procurement- >> Raja: Yes. >> ... finance, treasury. >> Raja: Yes. >> To your point, and, and Rob said this in his keynote, and he said it here on theCUBE, you know, we've got to break down these silos. >> Raja: Yes. >> People and companies functioning in silos are not going to be successful. >> Raja: Yes. This has been one of the, probably one of the things that we were talking earlier, what has changed, what hasn't. This is one of the fundamental things that has never changed since I've joined. The vision has been very clear. The execution on it, of how we drive successful business spend management program is by breaking down the silos and this idea of sweet synergy, where in product, you start building these capabilities that helps these professionals in the different organizations to actually connect on the touch points, where, where things really matter. >> Lisa: Sweet synergy, was that thing from a concept perspective, did that come from the community, in terms of Coupa going, this is actually what's happening, this synergy across the BSM suite? >> Yes. So in the very beginning, it was early idea. I would say in the first two Inspires that we did, we hadn't given it actually the name itself, and we used to call it unified capabilities, and it started with the first silos we broke down. The first silos we broke down were procurement and AP. And they didn't even used to talk in the same room or even want to care about each other. So we started building so many capabilities that brought these teams together and little by little the community started to feel that and see the value of that. And then the community started to ask us to go break down more silos. So in the beginning, I would say the, the vision before I even joined, the company was on that trajectory. And the early customers saw that and they championed it and then they drove us to do more. So they came to us and said could you please do what you did here in contract? 
Could you please do what you did here in sourcing? And I was in a meeting last week, a leadership meeting, and one question was asked to leaders in the services team about what are they hearing about, from the customers, about a particular area. And it was music to our ears when we heard the customers are asking for more synergy, right? So, they even have the name for it and they're asking for more and more, and we have built hundreds of these already, but the reality is there is so much opportunity. >> Lisa: Right. >> The world is siloed, no technology has attempted to do that. And I think that's what's a exciting is to go and forge new grounds and do something very special to unite everyone together. >> You guys talked about the waves. Rob talked about the waves yesterday. You talked about it again this morning. And when I think of Inspired community, as that third wave, I see it on both sides. I see the Inspired community that is the Coupa community, but also what you just talked about, that flywheel of that sort of symbiotic relationship that you guys have with your customers as Coupa in and of itself being in a community inspired by the community that it has built. >> Raja: Yes, it's, it's very, very, it's a circular effect. Like it, we inspire one another, and we strengthen one another, and it's, it's just a beautiful, beautiful thing. One of the special things that we are starting to do is we want to take the whole product experience itself, to be a complete community experience. So anywhere you are going to Coupa, when it makes sense, of course, you are not only looking at your data, you are getting connected with people for that particular thing. So we've done that already for 15 different product areas and we're constantly doing more and more and more and more. You can imagine one day we can, where we can start within the product pages themselves, where we host community experts to talk via video and connect with others. So you bring that whole community experience alive in a product in enterprise software, which has not been done. >> Kind of like creating your own influencer network. >> Yes, yes, yes. And give people their voice and, and, and it becomes exciting. It is very different when you're just working on your own and driving goals, and you have no idea how good that can pass on the world. And then when right then and there, you get to learn that some people have hit that, some people have achieved these goals, you just get excited, "I want to hit that goal too. Who are these people? Connect me with these leaders. Let's have a conversation. How did they do it?" And they start creating best practices together. We even have started places where they collaborate on actual documents and templates, and they put them in the community exchange as a way for people to share with others, even taking templates from the product putting them back into a community exchange. So it is sharing, being enabled on the platform, platform itself. >> Lisa: How did you guys function during the pandemic, the last two years when we couldn't get together? >> Raja: Yeah. >> I know that your customers are really the lifeblood of Coupa and vice versa. >> Raja: Yes. >> But talk to me about some of the things that Coupa did with its customers, you know, by video conferencing, for example, that really helped the evolution and some of the innovations that you announced this morning. >> When we first... when the pandemic first hit I think like we all didn't believe what, what is going on. 
And there was this, I would call it a beautiful period in a way, despite how horrific that was, and that period was where everyone rose to the occasion, everybody wanted to help one another. Across Coupa everywhere, we started having documents of how we can step up and help our customers, help our communities. We started to look at how we get PPE, and get it in the hands of our customers. We have access to suppliers. We started looking at helping suppliers with digital payments to speed things up. So, so many things we started doing as a community to just help each other. And then as we got to the next level, then we started, of course, starting to do things over, over Zoom. And the big surprise was, we were incredibly productive. If anything, we were worried about people feeling burnt out. >> Yeah. >> Because they were just in it, completely in it. And it created a lot of new avenues for us, because often you go and do these meetings in person. Now you could have a user experience session with a customer very easily, they're available more often than they used to be. >> Lisa: Right. >> So we did not miss a beat with the community. We moved into virtual CABs. We had the advantage of having them recorded as well, where we could have the global development teams learn and see exactly what the, what the customers are co-creating together. And our go-lives accelerated, because a lot of these implementations, they used to happen in person, so schedules, they actually got accelerated- >> Lisa: Right. >> ...through that. Now of course, there is nothing that matches this. You can do a lot, but a ton of the collaboration comes from real-life dialogue and conversation. So it's that balance between the two that I think will be great. >> Lisa: What are some of the things that you've heard the last few days? You mentioned the Partners Summit and, and the Community Advisory Boards on Monday, yesterday, everything kicked off today. What are some of the things that you've heard in your meetings that really inspire you on, say, the next 10 years at Coupa? >> Raja: By far, by far, by far, it's a validation that what we are doing is absolutely on target, and that we just can do so much more. The silos are massive and there are so, so many opportunities that you hear in every different area, that we could be doing this, we could be doing this together. So we can break down more and more silos. And using community.ai is just the tip of the iceberg of what we are, what we are doing. Yes, we created tens and tens of capabilities, helping, helping the community with all of that, but data drives everything. And when you look at that, every single process in every single silo can be informed by the power of data within your own company, and then even better, data across companies. And, and to the point where we're talking about concepts that customers are really excited about, even thinking about this community, they're customers of each other. And when you are a customer of each other, what are the different ways, as a community, you can help one another more? So we're talking about community netting as a new type of concept. >> Lisa: Talk to me a little about that. You mentioned the community netting this morning but I didn't quite... Help me understand.
>> Raja: In very simple terms, if we are buying from each other, we have to do money movements. Every time I have to pay you, I have to incur fees, and likewise. But since we are part of this community, we can manage that relationship. So we just pay the delta, we net it out. So it, it saves reconciliation time, it saves money movement. And these are the tip of the iceberg of these very cool things that we're doing together. >> Wow. That's fantastic. Last question for you, as you talk with prospects who are in the early stages, or, or still determining, do we go through like a supply chain digital transformation? I mean, I think of companies that probably haven't now or need to get on the bandwagon. >> Raja: Yeah. >> What are some of the things that you advise to those customers to be able to do what Mick Ebeling talked about this morning, and that is, commit and then figure it out? >> Raja: Yes. The number one thing is just make sure you don't do the analysis paralysis. There are just so many opportunities, so many opportunities. Start with a project, get going, and it creates incredible momentum, and then you can move on from one to another, to another, to another, instead of trying to just go for a year or two trying to look at how the world has changed in that process. And so often you could see that projects pay for themselves within the first month of go-live. You do that, you'll create another one. And it's not like you are coming in to do something so new nobody has done it. Hundreds and hundreds and thousands, as a matter of fact, of other community members have done that. It is proven. So get started with those and then continue. The other thing I will be talking to them about is to make sure that they understand the way we work is all about partnership. Often people who haven't worked with us in enterprise software, they're used to working with vendors. We are not that. We never were that. Like, number one, if we're not going to be real partners, honest, transparent and working with each other, we don't waste each other's time. >> Lisa: Well, Raja, it's been great having you on the program. I've really enjoyed your keynote this morning. Congratulations on your 10 years at Coupa. >> Raja: Thank you. >> I'm excited to see what the next 10 years bring for you. We appreciate your insights, and everything that Coupa is doing in partnership with its customers is very evident in an event like this. >> Raja: Thank you. And thank you for coming and covering us as well. We really appreciate it. >> Lisa: It's our pleasure to be here. >> Thank you. >> For Raja Hammoud, I'm Lisa Martin. You're watching theCUBE's coverage, day two of Coupa Inspire 2022, from Las Vegas. (upbeat music)
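The "community netting" Raja describes above boils down to settling only the difference between two offsetting payables instead of moving both amounts in full. The sketch below is a rough illustration of that effect, not Coupa's actual mechanics: the amounts and the flat transfer-fee rate are made-up assumptions purely for the example.

```python
# Hypothetical illustration of bilateral payment netting ("community netting").
# The amounts and the 0.5% per-transfer fee are invented for this example.

FEE_RATE = 0.005  # assumed cost applied to every money movement


def gross_settlement_fees(a_owes_b: float, b_owes_a: float) -> float:
    """Both parties pay in full: two transfers, fees on both."""
    return (a_owes_b + b_owes_a) * FEE_RATE


def netted_settlement_fees(a_owes_b: float, b_owes_a: float) -> float:
    """Only the delta moves: one smaller transfer, one fee."""
    return abs(a_owes_b - b_owes_a) * FEE_RATE


if __name__ == "__main__":
    a_owes_b, b_owes_a = 100_000.0, 60_000.0
    moved_gross = a_owes_b + b_owes_a
    moved_net = abs(a_owes_b - b_owes_a)
    print(f"Gross settlement:  move ${moved_gross:,.0f}, fees ~ ${gross_settlement_fees(a_owes_b, b_owes_a):,.2f}")
    print(f"Netted settlement: move ${moved_net:,.0f},  fees ~ ${netted_settlement_fees(a_owes_b, b_owes_a):,.2f}")
```

Beyond the fee savings, netting also leaves one reconciliation entry instead of two, which is the "saves reconciliation time" point in the answer above.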

Published Date : Apr 6 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Mick Ebeling | PERSON | 0.99+
Barbara | PERSON | 0.99+
Lisa | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Rob | PERSON | 0.99+
10 years | QUANTITY | 0.99+
Coupa | ORGANIZATION | 0.99+
Raja | PERSON | 0.99+
3.3 trillion | QUANTITY | 0.99+
Monday | DATE | 0.99+
60 people | QUANTITY | 0.99+
two | QUANTITY | 0.99+
last week | DATE | 0.99+
Delta | ORGANIZATION | 0.99+
10 | QUANTITY | 0.99+
Hundreds | QUANTITY | 0.99+
one question | QUANTITY | 0.99+
Raja Hammoud | PERSON | 0.99+
Las Vegas | LOCATION | 0.99+
both sides | QUANTITY | 0.99+
CAB | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
first silos | QUANTITY | 0.99+
tens | QUANTITY | 0.99+
a year | QUANTITY | 0.99+
one | QUANTITY | 0.99+
10 anniversary | QUANTITY | 0.99+
one person | QUANTITY | 0.98+
Community Advisory Board | ORGANIZATION | 0.98+
today | DATE | 0.98+
10th anniversary | QUANTITY | 0.98+
12 engineers | QUANTITY | 0.98+
pandemic | EVENT | 0.98+
this morning | DATE | 0.98+
One | QUANTITY | 0.97+
10 years ago | DATE | 0.97+
Community Advisory Board | ORGANIZATION | 0.97+
15 different product areas | QUANTITY | 0.97+
hundreds | QUANTITY | 0.96+
theCUBE | ORGANIZATION | 0.95+
first two | QUANTITY | 0.94+
first | QUANTITY | 0.94+
Power of Spend | ORGANIZATION | 0.94+
Customer Advisory Board | ORGANIZATION | 0.93+
first month | QUANTITY | 0.93+
Coupa Inspire 2022 | TITLE | 0.92+
Partners Summit | EVENT | 0.89+

Breaking Analysis: RPA has Become a Transformation Catalyst, Here's What's New


 

>> From theCUBE studios in Palo Alto in Boston, bringing you data driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante >> In its early days, robotic process automation emerged from rudimentary screen scraping, macros and workflow automation software. Once a script heavy and limited tool that largely was used to eliminate mundane tasks for individual users, and by the way still is, RPA's evolved into an enterprise-wide mega trend that puts automation at the center of digital business initiatives. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this breaking analysis, we present our quarterly update of the trends in RPA and automation and share the latest survey data from enterprise technology research. RPA has grown quite rapidly and the acronym is becoming a convenient misnomer in a way. I mean the real action in RPA has evolved into enterprise-wide automation initiatives. Once exclusively focused really on back office automation and areas such as finance, RPA has now become an enterprise initiative as many larger organizations especially, move well beyond cost savings and outside of the CFO's purview. We predicted in early "Breaking Analysis" episodes that productivity declines in the US and Europe especially, would require automation to solve some of the world's most pressing problems. And that's what's happening. Automation today is attacking not only the labor shortage but it's supporting optimizations in ESG, supply chain, helping with inflation challenges, improving capital allocation. For example, the supply chain issues today, think about what they require. Somebody's got to do research, they got to figure out inventory management, they got to go into different systems, do prioritizations, do price matching, and perform a number of other complex tasks. These are time consuming processes. Now the combination of RPA and machine intelligence is helping managers compress the time to value and optimize decision making. Organizations are realizing that a digital business goes beyond cloud and SaaS, and puts data, AI and automation at the core leveraging cloud and SaaS but reimagining entire workflows and customer experiences. Moreover, low code solutions are taking off and dramatically expanding the ability of organizations to make changes to their processes. We're also seeing adjacencies to RPA becoming folded into enterprise automation initiatives. And that trend will continue for example Legacy software testing tools. This is especially important as companies SaaSify their business and look for modern testing tools that can keep pace with their transformations. So the bottom line is, RPA or intelligent automation has become a strategic priority for many companies. And that means you got to get the CIO involved to ensure that the governance and compliance edicts of the organization are appropriately met. And that alignment occurs across the technology and business lines. A couple of years ago, when we saw that RPA could be much much more than what it was at the time, we revisited our total available market or TAM analysis. And in doing so, we felt there would be a confluence of automation, AI, and data and that the front and back office schism would converge. That is shown here. This is our updated TAM chart, which we shared a while back with a dramatically larger scope. 
We were interested that, just a few days ago by the way Forrester put out a new report, picked up by Digital Nation, that the RPA market would reach 22 billion by 2025. Now, as we said at the time our TAM includes the entire ecosystem including professional services as does Forrester's recent report and the projections they're in. So see that little dotted red line there, that's about at the 22 billion mark. We're a few years away but we definitely feel as though this is taking shape the way we had previously envisioned. That is to say a progression from back office blending with customer facing processes becoming a core element of digital transformations and eventually entering the realm of automated systems of agency where automations are reliable enough and trusted enough to make realtime decisions at scale for a much, much wider scope of enterprise activities. So we see this evolving over the 2020s or the balance of this decade and becoming a massive multi hundred billion dollar market. Now, unfortunately for later investors, this enthusiasm that I'm sharing around automation has not translated into price momentum for the stocks in this sector. Here are the charts, the stock charts for four RPA related players with market values inserted in each graphic. We've set the cross hairs approximately at the timing of UiPath's IPO. And that's where we'll start. UiPath IPOed last April and you can see the steady decline in its price. UiPath's Series F investors got in at $30 billion valuation, so that's been halved, more than half. But UiPath is the leader in this sector as we'll see in a moment. So investors are just going to have to be patient. Now, you know the problem with these hot tech companies is the cat gets let out of the bag before the IPO because they raise so much private money, it hits the headlines and then, at the time you had zero interest rates, you had the tech stock boom during the pandemic, so you're just going to have to wait it out to get a nice return if you got in sort of post IPO. You know, which... I think this business will deliver over the long term. Now, Blue Prism is interesting because it's being bought by SS&C Technologies after a bidding war with Vista. So that's why their stock has held up pretty reasonably. Vista's PE firm, which owns TIBCO and was going to mash it, Blue Prism that is, together with TIBCO. That was a play I always liked because RPA is going to be integrated across the board. And TIBCO is an integration company, and I felt it was in a good position to do that. But SS&C obvious said, "Hey, we can do that too." And look, they're getting a proven RPA tech stack for 10% of the value of UiPath. Might be a sharp move, we'll see. Or maybe they'll jack prices and squeeze the cashflow, I honestly have no idea. And we shelled the other two players here who really aren't RPA specialists. Appian is a low code business process development platform and Pegasystems of course, we've reported on them extensively. They're a longtime business process player that has done pretty well. But both stocks have suffered pretty dramatically since last April. So let's take a look at the customer survey data and see what it tells us. The ETR survey data shows a pretty robust picture frankly. This chart depicts the net score or customer spending momentum on that vertical axis and market share or pervasiveness relative to other companies and technologies in the ETR dataset, that's on the horizontal. 
That red dotted line at the 40% mark indicates an elevated spending level for the company within this technology. The chart insert you see there shows how the company positions are plotted using net score and market share, or Ns. And ETR's tool has a couple of cool features. We can click on the dot and it allows you to track the progression over time, in this case going back to January 2020; those are the lines that we've inserted here. So we'll start with Microsoft and we'll get that over with. Microsoft acquired a company called Softomotive for a reported hundred million dollars or thereabouts, it's a little more than that. So pretty much lunch money for Mr. Softy. So Microsoft bought the company in May, and look at the gray line where it started showing up in the October ETR surveys at a very highly elevated level, typical Microsoft, right? I mean, a lot of spending momentum and they're pretty much ubiquitous. And it just stayed there and it's gone up and to the right, just really a dominant picture. But Microsoft Power Automate is really kind of a personal productivity tool, not super feature rich like some of the others that we're going to talk about, it's just part of the giant Microsoft software estate. And there's a substantial amount of overlap between, for example, UiPath's and Automation Anywhere's customer bases and Power Automate users. And in speaking with a number of customers, they'll say, "Yeah, we use Power Automate," but they see enterprise automation platforms as much more feature rich and capable, and they see a role for both. But it's something to watch out for, because Microsoft can obviously take a bite out of virtually any platform and moderate the enthusiasm for it. But nonetheless, these other firms that we're mentioning here, the two leaders, they really stand out: UiPath and Automation Anywhere. Both are elevated well above that 40% line with a meaningful presence in the data set. And you can see the path that they took to get to where they are today. Now, we had predicted in our predictions post that Automation Anywhere would IPO in 2021. So we predicted that in December of 2020, but it hasn't happened yet. The company obviously wasn't ready, and it brought in new management. We reported on that, Chris Riley as the Chief Revenue Officer, and it made other moves to shore up its business. Now let me say this about Riley. I've known him for years, he's a world-class sales leader, one of the best in the tech business. And he knows how to build a world-class go-to-market team, I guarantee that's what he's doing. I have no doubt he's completely reinventing his sales team and the alliances, he's got a lot of experience with that from his time at EMC and Dell and HPE, and he knows the channel really well. So I have a great deal of confidence that if Automation Anywhere's product is any good, which the ETR data clearly shows that it is, then the company is going to do very well. Now, as for the timing of an IPO, look, with the market choppiness, who knows? Automation Anywhere raised a ton of dough and was last valued, in 2019, at just north of 7 billion. And so if UiPath is valued at 15 billion, you could speculate that Automation Anywhere can't be valued at much more than 10 billion, maybe a little under, maybe a little over. And so they might wait for the market volatility to chill out a little bit before they do the IPO, or maybe they've got some further cleanup to do and they want to get their metrics better, but we'll see.
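For readers who want to picture the chart being described, here is a minimal sketch of how this kind of two-axis view can be drawn: spending momentum (net score) on the vertical axis, pervasiveness in the survey on the horizontal, with the 40% "elevated spending" line overlaid. The vendor names and coordinates are invented placeholders, not ETR survey results.

```python
# Minimal sketch of an ETR-style positioning chart. The positions below are
# placeholder numbers used only to show the plotting mechanics.
import matplotlib.pyplot as plt

positions = {
    # vendor: (pervasiveness in the data set, net score)
    "Vendor A": (0.22, 0.63),
    "Vendor B": (0.18, 0.58),
    "Vendor C": (0.35, 0.45),
    "Vendor D": (0.07, 0.28),
}

fig, ax = plt.subplots(figsize=(6, 5))
for name, (share, net) in positions.items():
    ax.scatter(share, net)
    ax.annotate(name, (share, net), textcoords="offset points", xytext=(5, 5))

# The dashed red line marks the "elevated spending" threshold discussed above.
ax.axhline(0.40, color="red", linestyle="--", label="Elevated spending (40% net score)")
ax.set_xlabel("Pervasiveness in data set (market share)")
ax.set_ylabel("Net score (spending momentum)")
ax.legend()
plt.show()
```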
Now to the point earlier about Blue Prism, look at its position on the vertical axis, very respectable. Just a finer point on Pega. We've always said that they're not an RPA specialist but they have an RPA offering and a presence in the ETR data set in this sector. And they got a sizeable market cap so we'd like to include them. Now here's another look at the net score data. The way net score works is ETR asks customers, are you adopting a platform for the first time? That's that lime green there. Are you accelerating spending on the platform by 6% or more relative to last year, or sometimes relative to some other point in time, this is relative to last year. That's the forest green. Is your spending flat or is it, that's the gray, or is it decreasing by 6% or worse? Or are you churning? That's that bright red. You subtract the reds from the greens and you get net score which is shown for each company on the right along with the Ns in the survey. So other than Pega, every company shown here has new adoptions in the double digits, not a lot of churn. UiPath and and Automation Anywhere have net scores well over that 40% mark. Now, some other data points on those two, ETR did a little peeling of the onion in their data set and I found a couple of interesting nuggets. UiPath in the Fortune 500 has a 91% net score and a 77% net score in the Global 2000. So significantly higher than its overall average. This speaks to the company's strong presence in larger companies and the adoption and how larger companies are leaning in. Although UiPath's actually still solid in smaller firms as well by the way but... Now the other piece of information is, when asked why they buy UiPath over alternatives customers said a robust feature set, technical lead and compatibility with their existing environment. Now to Automation Anywhere. They have a 72% net score in the Fortune 500, well above its average across the survey, but 46% only in the Global 2000 below its overall average shown here of 54. So we'd like to see a wider aperture in the Global 2000. Again, this is a survey set, who knows, but oftentimes these surveys are indicative. So maybe Automation Anywhere just working that out, more time, figuring out the go to market in the Global 2000 beyond those larger customers. Now, when asked why they buy from Automation Anywhere versus the competition customers cited a robust feature set, just like UiPath, technological lead, just like UiPath, and fast ROI. Now I really believe that both for Automation Anywhere and UiPath, the time to value is much compressed relative to most technology projects. So I would highlight that as well. And I think that's a fundamental reason, one of the reasons why RPA has taken off. All right let's wrap up. The bottom line is this space is moving and it's evolving quickly, and will keep on a fast pace given the customer poll, the funding levels that have been poured into the space, and, of course, the competitive climate. We're seeing a new transformation agenda emerge. Pre COVID, the catalyst was back office efficiency. During the pandemic, we saw an acceleration and organizations are taking the lessons learned from that forced March experience, the digital I sometimes call it, and they're realizing a couple things. One, they can attack much more complex problems than previously envisioned. And two, in order to cloudify and SaaSify their businesses, they need to put automation along with data and AI at the core to completely transform into a digital entity. 
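The net score arithmetic walked through above, greens minus reds with flat spend excluded from the score, is simple enough to write down directly. The survey shares below are placeholder values for illustration, not ETR's published figures.

```python
# Net score as described above: (new adoptions + spending up 6% or more) minus
# (spending down 6% or worse + churn). Flat spenders count toward the N but not
# the score. Example percentages are placeholders, not actual ETR results.
from dataclasses import dataclass


@dataclass
class SurveyBreakdown:
    adopting: float      # lime green: adopting the platform for the first time
    increasing: float    # forest green: spending up 6% or more
    flat: float          # gray: spending roughly flat (excluded from the score)
    decreasing: float    # spending down 6% or worse
    churning: float      # bright red: leaving or replacing the platform

    def net_score(self) -> float:
        return (self.adopting + self.increasing) - (self.decreasing + self.churning)


if __name__ == "__main__":
    example = SurveyBreakdown(adopting=0.18, increasing=0.45,
                              flat=0.30, decreasing=0.05, churning=0.02)
    print(f"Net score: {example.net_score():.0%}")  # -> 56%
```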
Now we're moving well beyond automating bespoke tasks and paving the cow path, as I sometimes like to say. And we're seeing much more integration across systems like ERP and HR and finance and logistics, et cetera, collaboration, customer experience, and importantly, this has to extend into broader ecosystems. We're also seeing a rise in semantic workflows to tackle more complex problems. We're talking here about going beyond a linear process of automation. Like, for instance, read this, click on that, copy that, put it here, join it with that, drag and drop it over here and send it over there. It's evolving into much more of an interpreter of actions, using machine intelligence to watch, to learn, to infer, and then ultimately act, as well as discover other process automation opportunities. So think about the way work is done today. Going into various applications, you grab data, you trombone back out, you do it again, in and out, in and out, in and out of these systems, et cetera, ad nauseam, and replacing that sequence with a much more intelligent process. We're also seeing a lot more involvement from C-level executives, especially the CIO, but also the chief digital officer and the chief data officer, with low-code solutions enabling lines of business to be much more involved in the game. So look, it's still early here. This sector, in my view, hasn't even hit that steep part of the S-curve yet. It's still building momentum, with larger firms leading the innovation, investing in things like centers of excellence and training, digging in to find new ways of doing things. It's a huge priority because the efficiencies that large companies get drop right to the bottom line, and the bigger the company, the more money that drops. We see that in the adoption data and we think it's just getting started. So keep an eye on this space. It's not a fad, it's here to stay. Okay, that's it for now. Thanks to my colleagues, Stephanie Chan, who helped research this week's topics, and Alex Myerson on the production team, who also manages the Breaking Analysis Podcast. Kristen Martin and Cheryl Knight helped get the word out on social. Thanks guys, your great teamwork, really appreciate that. Now remember, these episodes, they're all available as podcasts, wherever you listen, just search "Breaking Analysis Podcast". Check out ETR's website at etr.ai. And we also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me directly, david.vellante@siliconangle.com is my email. You can DM me @dvellante or comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights, powered by ETR. Have a great week, stay safe, be well, and we'll see you next time. (outro music)
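To make the "read this, click on that" style of linear bot described above concrete, here is a deliberately naive sketch of that kind of fixed, scripted sequence. The functions are hypothetical stand-ins, not the API of any real RPA product; the point is that every step is hard-coded, which is exactly what the more intelligent, watch-learn-infer approach is meant to replace.

```python
# A deliberately naive, linear task bot: read this, look that up, compare,
# paste the result somewhere else. Every step is hard-coded, so any change to
# the underlying systems breaks the script. All functions are hypothetical
# placeholders, not calls from an actual RPA product.

def read_invoice(inbox: list[dict]) -> dict:
    """Step 1: pull the next invoice record from a queue."""
    return inbox.pop(0)


def look_up_po(invoice: dict, po_system: dict) -> dict:
    """Step 2: jump into a second system to fetch the matching purchase order."""
    return po_system[invoice["po_number"]]


def price_match(invoice: dict, po: dict) -> bool:
    """Step 3: compare amounts and flag anything that does not line up."""
    return abs(invoice["amount"] - po["amount"]) < 0.01


def post_result(invoice: dict, matched: bool, ledger: list) -> None:
    """Step 4: paste the outcome into yet another system."""
    ledger.append({"invoice": invoice["id"], "matched": matched})


def run_bot(inbox: list, po_system: dict, ledger: list) -> None:
    """The whole 'process': a rigid, linear sequence of the steps above."""
    while inbox:
        invoice = read_invoice(inbox)
        po = look_up_po(invoice, po_system)
        post_result(invoice, price_match(invoice, po), ledger)


if __name__ == "__main__":
    inbox = [{"id": "INV-1", "po_number": "PO-9", "amount": 120.0}]
    po_system = {"PO-9": {"amount": 120.0}}
    ledger = []
    run_bot(inbox, po_system, ledger)
    print(ledger)  # [{'invoice': 'INV-1', 'matched': True}]
```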

Published Date : Mar 5 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
UiPath | ORGANIZATION | 0.99+
Stephanie Chan | PERSON | 0.99+
TIBCO | ORGANIZATION | 0.99+
Alex Myerson | PERSON | 0.99+
Riley | PERSON | 0.99+
December of 2020 | DATE | 0.99+
Chris Riley | PERSON | 0.99+
Forrester | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Kristen Martin | PERSON | 0.99+
2021 | DATE | 0.99+
January, 2020 | DATE | 0.99+
Dell | ORGANIZATION | 0.99+
Blue Prism | ORGANIZATION | 0.99+
Cheryl Knight | PERSON | 0.99+
May | DATE | 0.99+
2019 | DATE | 0.99+
Pega | ORGANIZATION | 0.99+
October | DATE | 0.99+
Microsoft | ORGANIZATION | 0.99+
10% | QUANTITY | 0.99+
SS&C | ORGANIZATION | 0.99+
$30 billion | QUANTITY | 0.99+
91% | QUANTITY | 0.99+
15 billion | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
EMC | ORGANIZATION | 0.99+
Europe | LOCATION | 0.99+
40% | QUANTITY | 0.99+
6% | QUANTITY | 0.99+
US | LOCATION | 0.99+
2025 | DATE | 0.99+
HPE | ORGANIZATION | 0.99+
46% | QUANTITY | 0.99+
54 | QUANTITY | 0.99+
SS&C Technologies | ORGANIZATION | 0.99+
77% | QUANTITY | 0.99+
two leaders | QUANTITY | 0.99+
72% | QUANTITY | 0.99+
two players | QUANTITY | 0.99+
22 billion | QUANTITY | 0.99+
Vista | ORGANIZATION | 0.99+
last year | DATE | 0.99+
last April | DATE | 0.99+
Softy | PERSON | 0.99+
two | QUANTITY | 0.99+
each company | QUANTITY | 0.99+
Both | QUANTITY | 0.99+
one | QUANTITY | 0.99+
david.vellante@siliconangle.com | OTHER | 0.99+
first time | QUANTITY | 0.99+
Automation Anywhere | ORGANIZATION | 0.98+
Digital Nation | ORGANIZATION | 0.98+
more than half | QUANTITY | 0.98+
ETR | ORGANIZATION | 0.98+
this week | DATE | 0.98+
both | QUANTITY | 0.98+
both stocks | QUANTITY | 0.98+
each graphic | QUANTITY | 0.97+
Power Automate | TITLE | 0.97+
more than 10 billion | QUANTITY | 0.97+
@dvellante | PERSON | 0.97+
theCUBE | ORGANIZATION | 0.97+
One | QUANTITY | 0.97+
today | DATE | 0.96+
this week | DATE | 0.96+
Series F | OTHER | 0.96+

Analyst Predictions 2022: The Future of Data Management


 

[Music] in the 2010s organizations became keenly aware that data would become the key ingredient in driving competitive advantage differentiation and growth but to this day putting data to work remains a difficult challenge for many if not most organizations now as the cloud matures it has become a game changer for data practitioners by making cheap storage and massive processing power readily accessible we've also seen better tooling in the form of data workflows streaming machine intelligence ai developer tools security observability automation new databases and the like these innovations they accelerate data proficiency but at the same time they had complexity for practitioners data lakes data hubs data warehouses data marts data fabrics data meshes data catalogs data oceans are forming they're evolving and exploding onto the scene so in an effort to bring perspective to the sea of optionality we've brought together the brightest minds in the data analyst community to discuss how data management is morphing and what practitioners should expect in 2022 and beyond hello everyone my name is dave vellante with the cube and i'd like to welcome you to a special cube presentation analyst predictions 2022 the future of data management we've gathered six of the best analysts in data and data management who are going to present and discuss their top predictions and trends for 2022 in the first half of this decade let me introduce our six power panelists sanjeev mohan is former gartner analyst and principal at sanjamo tony bear is principal at db insight carl olufsen is well-known research vice president with idc dave meninger is senior vice president and research director at ventana research brad shimon chief analyst at ai platforms analytics and data management at omnia and doug henschen vice president and principal analyst at constellation research gentlemen welcome to the program and thanks for coming on thecube today great to be here thank you all right here's the format we're going to use i as moderator are going to call on each analyst separately who then will deliver their prediction or mega trend and then in the interest of time management and pace two analysts will have the opportunity to comment if we have more time we'll elongate it but let's get started right away sanjeev mohan please kick it off you want to talk about governance go ahead sir thank you dave i i believe that data governance which we've been talking about for many years is now not only going to be mainstream it's going to be table stakes and all the things that you mentioned you know with data oceans data lakes lake houses data fabric meshes the common glue is metadata if we don't understand what data we have and we are governing it there is no way we can manage it so we saw informatica when public last year after a hiatus of six years i've i'm predicting that this year we see some more companies go public uh my bet is on colibra most likely and maybe alation we'll see go public this year we we i'm also predicting that the scope of data governance is going to expand beyond just data it's not just data and reports we are going to see more transformations like spark jaws python even airflow we're going to see more of streaming data so from kafka schema registry for example we will see ai models become part of this whole governance suite so the governance suite is going to be very comprehensive very detailed lineage impact analysis and then even expand into data quality we already seen that happen with some of the tools 
where they are buying these smaller companies and bringing in data quality monitoring and integrating it with metadata management data catalogs also data access governance so these so what we are going to see is that once the data governance platforms become the key entry point into these modern architectures i'm predicting that the usage the number of users of a data catalog is going to exceed that of a bi tool that will take time and we already seen that that trajectory right now if you look at bi tools i would say there are 100 users to a bi tool to one data catalog and i i see that evening out over a period of time and at some point data catalogs will really become you know the main way for us to access data data catalog will help us visualize data but if we want to do more in-depth analysis it'll be the jumping-off point into the bi tool the data science tool and and that is that is the journey i see for the data governance products excellent thank you some comments maybe maybe doug a lot a lot of things to weigh in on there maybe you could comment yeah sanjeev i think you're spot on a lot of the trends uh the one disagreement i think it's it's really still far from mainstream as you say we've been talking about this for years it's like god motherhood apple pie everyone agrees it's important but too few organizations are really practicing good governance because it's hard and because the incentives have been lacking i think one thing that deserves uh mention in this context is uh esg mandates and guidelines these are environmental social and governance regs and guidelines we've seen the environmental rags and guidelines imposed in industries particularly the carbon intensive industries we've seen the social mandates particularly diversity imposed on suppliers by companies that are leading on this topic we've seen governance guidelines now being imposed by banks and investors so these esgs are presenting new carrots and sticks and it's going to demand more solid data it's going to demand more detailed reporting and solid reporting tighter governance but we're still far from mainstream adoption we have a lot of uh you know best of breed niche players in the space i think the signs that it's going to be more mainstream are starting with things like azure purview google dataplex the big cloud platform uh players seem to be uh upping the ante and and addressing starting to address governance excellent thank you doug brad i wonder if you could chime in as well yeah i would love to be a believer in data catalogs um but uh to doug's point i think that it's going to take some more pressure for for that to happen i recall metadata being something every enterprise thought they were going to get under control when we were working on service oriented architecture back in the 90s and that didn't happen quite the way we we anticipated and and uh to sanjeev's point it's because it is really complex and really difficult to do my hope is that you know we won't sort of uh how do we put this fade out into this nebulous nebula of uh domain catalogs that are specific to individual use cases like purview for getting data quality right or like data governance and cyber security and instead we have some tooling that can actually be adaptive to gather metadata to create something i know is important to you sanjeev and that is this idea of observability if you can get enough metadata without moving your data around but understanding the entirety of a system that's running on this data you can do a lot to help 
with with the governance that doug is talking about so so i just want to add that you know data governance like many other initiatives did not succeed even ai went into an ai window but that's a different topic but a lot of these things did not succeed because to your point the incentives were not there i i remember when starbucks oxley had come into the scene if if a bank did not do service obviously they were very happy to a million dollar fine that was like you know pocket change for them instead of doing the right thing but i think the stakes are much higher now with gdpr uh the floodgates open now you know california you know has ccpa but even ccpa is being outdated with cpra which is much more gdpr like so we are very rapidly entering a space where every pretty much every major country in the world is coming up with its own uh compliance regulatory requirements data residence is becoming really important and and i i think we are going to reach a stage where uh it won't be optional anymore so whether we like it or not and i think the reason data catalogs were not successful in the past is because we did not have the right focus on adoption we were focused on features and these features were disconnected very hard for business to stop these are built by it people for it departments to to take a look at technical metadata not business metadata today the tables have turned cdo's are driving this uh initiative uh regulatory compliances are beating down hard so i think the time might be right yeah so guys we have to move on here and uh but there's some some real meat on the bone here sanjeev i like the fact that you late you called out calibra and alation so we can look back a year from now and say okay he made the call he stuck it and then the ratio of bi tools the data catalogs that's another sort of measurement that we can we can take even though some skepticism there that's something that we can watch and i wonder if someday if we'll have more metadata than data but i want to move to tony baer you want to talk about data mesh and speaking you know coming off of governance i mean wow you know the whole concept of data mesh is decentralized data and then governance becomes you know a nightmare there but take it away tony we'll put it this way um data mesh you know the the idea at least is proposed by thoughtworks um you know basically was unleashed a couple years ago and the press has been almost uniformly almost uncritical um a good reason for that is for all the problems that basically that sanjeev and doug and brad were just you know we're just speaking about which is that we have all this data out there and we don't know what to do about it um now that's not a new problem that was a problem we had enterprise data warehouses it was a problem when we had our hadoop data clusters it's even more of a problem now the data's out in the cloud where the data is not only your data like is not only s3 it's all over the place and it's also including streaming which i know we'll be talking about later so the data mesh was a response to that the idea of that we need to debate you know who are the folks that really know best about governance is the domain experts so it was basically data mesh was an architectural pattern and a process my prediction for this year is that data mesh is going to hit cold hard reality because if you if you do a google search um basically the the published work the articles and databases have been largely you know pretty uncritical um so far you know that you know 
basically learning is basically being a very revolutionary new idea i don't think it's that revolutionary because we've talked about ideas like this brad and i you and i met years ago when we were talking about so and decentralizing all of us was at the application level now we're talking about at the data level and now we have microservices so there's this thought of oh if we manage if we're apps in cloud native through microservices why don't we think of data in the same way um my sense this year is that you know this and this has been a very active search if you look at google search trends is that now companies are going to you know enterprises are going to look at this seriously and as they look at seriously it's going to attract its first real hard scrutiny it's going to attract its first backlash that's not necessarily a bad thing it means that it's being taken seriously um the reason why i think that that uh that it will you'll start to see basically the cold hard light of day shine on data mesh is that it's still a work in progress you know this idea is basically a couple years old and there's still some pretty major gaps um the biggest gap is in is in the area of federated governance now federated governance itself is not a new issue uh federated governance position we're trying to figure out like how can we basically strike the balance between getting let's say you know between basically consistent enterprise policy consistent enterprise governance but yet the groups that understand the data know how to basically you know that you know how do we basically sort of balance the two there's a huge there's a huge gap there in practice and knowledge um also to a lesser extent there's a technology gap which is basically in the self-service technologies that will help teams essentially govern data you know basically through the full life cycle from developed from selecting the data from you know building the other pipelines from determining your access control determining looking at quality looking at basically whether data is fresh or whether or not it's trending of course so my predictions is that it will really receive the first harsh scrutiny this year you are going to see some organization enterprises declare premature victory when they've uh when they build some federated query implementations you're going to see vendors start to data mesh wash their products anybody in the data management space they're going to say that whether it's basically a pipelining tool whether it's basically elt whether it's a catalog um or confederated query tool they're all going to be like you know basically promoting the fact of how they support this hopefully nobody is going to call themselves a data mesh tool because data mesh is not a technology we're going to see one other thing come out of this and this harks back to the metadata that sanji was talking about and the catalogs that he was talking about which is that there's going to be a new focus on every renewed focus on metadata and i think that's going to spur interest in data fabrics now data fabrics are pretty vaguely defined but if we just take the most elemental definition which is a common metadata back plane i think that if anybody is going to get serious about data mesh they need to look at a data fabric because we all at the end of the day need to speak you know need to read from the same sheet of music so thank you tony dave dave meninger i mean one of the things that people like about data mesh is it pretty crisply articulates some of 
the flaws in today's organizational approaches to data what are your thoughts on this well i think we have to start by defining data mesh right the the term is already getting corrupted right tony said it's going to see the cold hard uh light of day and there's a problem right now that there are a number of overlapping terms that are similar but not identical so we've got data virtualization data fabric excuse me for a second sorry about that data virtualization data fabric uh uh data federation right uh so i i think that it's not really clear what each vendor means by these terms i see data mesh and data fabric becoming quite popular i've i've interpreted data mesh as referring primarily to the governance aspects as originally you know intended and specified but that's not the way i see vendors using i see vendors using it much more to mean data fabric and data virtualization so i'm going to comment on the group of those things i think the group of those things is going to happen they're going to happen they're going to become more robust our research suggests that a quarter of organizations are already using virtualized access to their data lakes and another half so a total of three quarters will eventually be accessing their data lakes using some sort of virtualized access again whether you define it as mesh or fabric or virtualization isn't really the point here but this notion that there are different elements of data metadata and governance within an organization that all need to be managed collectively the interesting thing is when you look at the satisfaction rates of those organizations using virtualization versus those that are not it's almost double 68 of organizations i'm i'm sorry um 79 of organizations that were using virtualized access express satisfaction with their access to the data lake only 39 expressed satisfaction if they weren't using virtualized access so thank you uh dave uh sanjeev we just got about a couple minutes on this topic but i know you're speaking or maybe you've spoken already on a panel with jamal dagani who sort of invented the concept governance obviously is a big sticking point but what are your thoughts on this you are mute so my message to your mark and uh and to the community is uh as opposed to what dave said let's not define it we spent the whole year defining it there are four principles domain product data infrastructure and governance let's take it to the next level i get a lot of questions on what is the difference between data fabric and data mesh and i'm like i can compare the two because data mesh is a business concept data fabric is a data integration pattern how do you define how do you compare the two you have to bring data mesh level down so to tony's point i'm on a warp path in 2022 to take it down to what does a data product look like how do we handle shared data across domains and govern it and i think we are going to see more of that in 2022 is operationalization of data mesh i think we could have a whole hour on this topic couldn't we uh maybe we should do that uh but let's go to let's move to carl said carl your database guy you've been around that that block for a while now you want to talk about graph databases bring it on oh yeah okay thanks so i regard graph database as basically the next truly revolutionary database management technology i'm looking forward to for the graph database market which of course we haven't defined yet so obviously i have a little wiggle room in what i'm about to say but that this market will grow 
by about 600 percent over the next 10 years now 10 years is a long time but over the next five years we expect to see gradual growth as people start to learn how to use it problem isn't that it's used the problem is not that it's not useful is that people don't know how to use it so let me explain before i go any further what a graph database is because some of the folks on the call may not may not know what it is a graph database organizes data according to a mathematical structure called a graph a graph has elements called nodes and edges so a data element drops into a node the nodes are connected by edges the edges connect one node to another node combinations of edges create structures that you can analyze to determine how things are related in some cases the nodes and edges can have properties attached to them which add additional informative material that makes it richer that's called a property graph okay there are two principal use cases for graph databases there's there's semantic proper graphs which are used to break down human language text uh into the semantic structures then you can search it organize it and and and answer complicated questions a lot of ai is aimed at semantic graphs another kind is the property graph that i just mentioned which has a dazzling number of use cases i want to just point out is as i talk about this people are probably wondering well we have relational databases isn't that good enough okay so a relational database defines it uses um it supports what i call definitional relationships that means you define the relationships in a fixed structure the database drops into that structure there's a value foreign key value that relates one table to another and that value is fixed you don't change it if you change it the database becomes unstable it's not clear what you're looking at in a graph database the system is designed to handle change so that it can reflect the true state of the things that it's being used to track so um let me just give you some examples of use cases for this um they include uh entity resolution data lineage uh um social media analysis customer 360 fraud prevention there's cyber security there's strong supply chain is a big one actually there's explainable ai and this is going to become important too because a lot of people are adopting ai but they want a system after the fact to say how did the ai system come to that conclusion how did it make that recommendation right now we don't have really good ways of tracking that okay machine machine learning in general um social network i already mentioned that and then we've got oh gosh we've got data governance data compliance risk management we've got recommendation we've got personalization anti-money money laundering that's another big one identity and access management network and i.t operations is already becoming a key one where you actually have mapped out your operation your your you know whatever it is your data center and you you can track what's going on as things happen there root cause analysis fraud detection is a huge one a number of major credit card companies use graph databases for fraud detection risk analysis tracking and tracing churn analysis next best action what-if analysis impact analysis entity resolution and i would add one other thing or just a few other things to this list metadata management so sanjay here you go this is your engine okay because i was in metadata management for quite a while in my past life and one of the things i found was that none of the 

Published Date : Jan 8 2022


Predictions 2022: Top Analysts See the Future of Data


 

(bright music) >> In the 2010s, organizations became keenly aware that data would become the key ingredient to driving competitive advantage, differentiation, and growth. But to this day, putting data to work remains a difficult challenge for many, if not most organizations. Now, as the cloud matures, it has become a game changer for data practitioners by making cheap storage and massive processing power readily accessible. We've also seen better tooling in the form of data workflows, streaming, machine intelligence, AI, developer tools, security, observability, automation, new databases and the like. These innovations they accelerate data proficiency, but at the same time, they add complexity for practitioners. Data lakes, data hubs, data warehouses, data marts, data fabrics, data meshes, data catalogs, data oceans are forming, they're evolving and exploding onto the scene. So in an effort to bring perspective to the sea of optionality, we've brought together the brightest minds in the data analyst community to discuss how data management is morphing and what practitioners should expect in 2022 and beyond. Hello everyone, my name is Dave Vellante with theCUBE, and I'd like to welcome you to a special Cube presentation, Analyst Predictions 2022: The Future of Data Management. We've gathered six of the best analysts in data and data management who are going to present and discuss their top predictions and trends for 2022 in the first half of this decade. Let me introduce our six power panelists. Sanjeev Mohan is former Gartner Analyst and Principal at SanjMo. Tony Baer, principal at dbInsight, Carl Olofson is a well-known Research Vice President with IDC, Dave Menninger is Senior Vice President and Research Director at Ventana Research, Brad Shimmin, Chief Analyst, AI Platforms, Analytics and Data Management at Omdia and Doug Henschen, Vice President and Principal Analyst at Constellation Research. Gentlemen, welcome to the program and thanks for coming on theCUBE today. >> Great to be here. >> Thank you. >> All right, here's the format we're going to use. I as moderator, I'm going to call on each analyst separately who then will deliver their prediction or mega trend, and then in the interest of time management and pace, two analysts will have the opportunity to comment. If we have more time, we'll elongate it, but let's get started right away. Sanjeev Mohan, please kick it off. You want to talk about governance, go ahead sir. >> Thank you Dave. I believe that data governance which we've been talking about for many years is now not only going to be mainstream, it's going to be table stakes. And all the things that you mentioned, you know, the data ocean, data lake, lake houses, data fabric, meshes, the common glue is metadata. If we don't understand what data we have and how we are governing it, there is no way we can manage it. So we saw Informatica went public last year after a hiatus of six years. I'm predicting that this year we see some more companies go public. My bet is on Collibra, most likely and maybe Alation we'll see go public this year. I'm also predicting that the scope of data governance is going to expand beyond just data. It's not just data and reports. We are going to see more transformations like Spark jobs, Python, even Airflow. We're going to see more of streaming data. So from Kafka Schema Registry, for example. We will see AI models become part of this whole governance suite.
So the governance suite is going to be very comprehensive, very detailed lineage, impact analysis, and then even expand into data quality. We already seen that happen with some of the tools where they are buying these smaller companies and bringing in data quality monitoring and integrating it with metadata management, data catalogs, also data access governance. So what we are going to see is that once the data governance platforms become the key entry point into these modern architectures, I'm predicting that the usage, the number of users of a data catalog is going to exceed that of a BI tool. That will take time and we already seen that trajectory. Right now if you look at BI tools, I would say there a hundred users to BI tool to one data catalog. And I see that evening out over a period of time and at some point data catalogs will really become the main way for us to access data. Data catalog will help us visualize data, but if we want to do more in-depth analysis, it'll be the jumping off point into the BI tool, the data science tool and that is the journey I see for the data governance products. >> Excellent, thank you. Some comments. Maybe Doug, a lot of things to weigh in on there, maybe you can comment. >> Yeah, Sanjeev I think you're spot on, a lot of the trends the one disagreement, I think it's really still far from mainstream. As you say, we've been talking about this for years, it's like God, motherhood, apple pie, everyone agrees it's important, but too few organizations are really practicing good governance because it's hard and because the incentives have been lacking. I think one thing that deserves mention in this context is ESG mandates and guidelines, these are environmental, social and governance, regs and guidelines. We've seen the environmental regs and guidelines and posts in industries, particularly the carbon-intensive industries. We've seen the social mandates, particularly diversity imposed on suppliers by companies that are leading on this topic. We've seen governance guidelines now being imposed by banks on investors. So these ESGs are presenting new carrots and sticks, and it's going to demand more solid data. It's going to demand more detailed reporting and solid reporting, tighter governance. But we're still far from mainstream adoption. We have a lot of, you know, best of breed niche players in the space. I think the signs that it's going to be more mainstream are starting with things like Azure Purview, Google Dataplex, the big cloud platform players seem to be upping the ante and starting to address governance. >> Excellent, thank you Doug. Brad, I wonder if you could chime in as well. >> Yeah, I would love to be a believer in data catalogs. But to Doug's point, I think that it's going to take some more pressure for that to happen. I recall metadata being something every enterprise thought they were going to get under control when we were working on service oriented architecture back in the nineties and that didn't happen quite the way we anticipated. And so to Sanjeev's point it's because it is really complex and really difficult to do. My hope is that, you know, we won't sort of, how do I put this? Fade out into this nebula of domain catalogs that are specific to individual use cases like Purview for getting data quality right or like data governance and cybersecurity. And instead we have some tooling that can actually be adaptive to gather metadata to create something. And I know its important to you, Sanjeev and that is this idea of observability. 
If you can get enough metadata without moving your data around, but understanding the entirety of a system that's running on this data, you can do a lot. So to help with the governance that Doug is talking about. >> So I just want to add that, data governance, like any other initiative, did not succeed. Even AI went into an AI winter, but that's a different topic. But a lot of these things did not succeed because to your point, the incentives were not there. I remember when Sarbanes Oxley had come into the scene, if a bank did not do Sarbanes Oxley, they were very happy to pay a million dollar fine. That was like, you know, pocket change for them instead of doing the right thing. But I think the stakes are much higher now. With GDPR, the flood gates opened. Now, you know, California, you know, has CCPA but even CCPA is being outdated with CPRA, which is much more GDPR like. So we are very rapidly entering a space where pretty much every major country in the world is coming up with its own compliance regulatory requirements, data residency is becoming really important. And I think we are going to reach a stage where it won't be optional anymore. So whether we like it or not, and I think the reason data catalogs were not successful in the past is because we did not have the right focus on adoption. We were focused on features and these features were disconnected, very hard for business to adopt. These are built by IT people for IT departments to take a look at technical metadata, not business metadata. Today the tables have turned. CDOs are driving this initiative, regulatory compliances are beating down hard, so I think the time might be right. >> Yeah so guys, we have to move on here. But there's some real meat on the bone here, Sanjeev. I like the fact that you called out Collibra and Alation, so we can look back a year from now and say, okay, he made the call, he stuck it. And then the ratio of BI tools to data catalogs that's another sort of measurement that we can take even though with some skepticism there, that's something that we can watch. And I wonder if someday, if we'll have more metadata than data. But I want to move to Tony Baer, you want to talk about data mesh and speaking, you know, coming off of governance. I mean, wow, you know the whole concept of data mesh is, decentralized data, and then governance becomes, you know, a nightmare there, but take it away, Tony. >> Well, put it this way: data mesh, you know, the idea at least as proposed by ThoughtWorks. You know, basically it was at least a couple of years ago and the press has been almost uniformly almost uncritical. A good reason for that is for all the problems that basically Sanjeev and Doug and Brad were just speaking about, which is that we have all this data out there and we don't know what to do about it. Now, that's not a new problem. That was a problem we had in enterprise data warehouses, it was a problem when we had Hadoop data clusters, it's even more of a problem now that data is out in the cloud where the data is not only in your data lake, it's not only in S3, it's all over the place. And it's also including streaming, which I know we'll be talking about later. So the data mesh was a response to that, the idea being, you know, who are the folks that really know best about governance? It's the domain experts. So basically, data mesh was an architectural pattern and a process. My prediction for this year is that data mesh is going to hit cold, hard reality.
Because if you do a Google search, basically the published work, the articles on data mesh have been largely, you know, pretty uncritical so far. Basically loading and is basically being a very revolutionary new idea. I don't think it's that revolutionary because we've talked about ideas like this. Brad now you and I met years ago when we were talking about so and decentralizing all of us, but it was at the application level. Now we're talking about it at the data level. And now we have microservices. So there's this thought of have we managed if we're deconstructing apps in cloud native to microservices, why don't we think of data in the same way? My sense this year is that, you know, this has been a very active search if you look at Google search trends, is that now companies, like enterprise are going to look at this seriously. And as they look at it seriously, it's going to attract its first real hard scrutiny, it's going to attract its first backlash. That's not necessarily a bad thing. It means that it's being taken seriously. The reason why I think that you'll start to see basically the cold hearted light of day shine on data mesh is that it's still a work in progress. You know, this idea is basically a couple of years old and there's still some pretty major gaps. The biggest gap is in the area of federated governance. Now federated governance itself is not a new issue. Federated governance decision, we started figuring out like, how can we basically strike the balance between getting let's say between basically consistent enterprise policy, consistent enterprise governance, but yet the groups that understand the data and know how to basically, you know, that, you know, how do we basically sort of balance the two? There's a huge gap there in practice and knowledge. Also to a lesser extent, there's a technology gap which is basically in the self-service technologies that will help teams essentially govern data. You know, basically through the full life cycle, from develop, from selecting the data from, you know, building the pipelines from, you know, determining your access control, looking at quality, looking at basically whether the data is fresh or whether it's trending off course. So my prediction is that it will receive the first harsh scrutiny this year. You are going to see some organization and enterprises declare premature victory when they build some federated query implementations. You going to see vendors start with data mesh wash their products anybody in the data management space that they are going to say that where this basically a pipelining tool, whether it's basically ELT, whether it's a catalog or federated query tool, they will all going to get like, you know, basically promoting the fact of how they support this. Hopefully nobody's going to call themselves a data mesh tool because data mesh is not a technology. We're going to see one other thing come out of this. And this harks back to the metadata that Sanjeev was talking about and of the catalog just as he was talking about. Which is that there's going to be a new focus, every renewed focus on metadata. And I think that's going to spur interest in data fabrics. Now data fabrics are pretty vaguely defined, but if we just take the most elemental definition, which is a common metadata back plane, I think that if anybody is going to get serious about data mesh, they need to look at the data fabric because we all at the end of the day, need to speak, you know, need to read from the same sheet of music. 
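Tony's federated governance gap and the idea of a common metadata back plane are easier to picture with a concrete artifact. Below is a minimal, hypothetical Python sketch of the kind of data product descriptor a domain team might publish into a shared catalog; the field names, the PII masking check, and the freshness SLA are illustrative assumptions, not the schema of any particular data mesh or data fabric product. The point is the split Tony describes: a thin layer of enterprise-wide policy enforced centrally, with everything else left to the owning domain.

# A minimal, hypothetical sketch of a "data product" descriptor published
# into a shared metadata back plane. Field names are illustrative, not any
# vendor's schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Column:
    name: str
    dtype: str
    contains_pii: bool = False
    masking_rule: str = ""          # e.g. "hash" or "redact"; empty means none

@dataclass
class DataProduct:
    name: str
    domain: str                     # owning domain, e.g. "payments"
    owner: str                      # accountable team or person
    columns: List[Column] = field(default_factory=list)
    freshness_sla_minutes: int = 60 # how stale the output port may be
    allowed_consumers: List[str] = field(default_factory=list)

def enterprise_policy_violations(product: DataProduct) -> List[str]:
    """Central (federated) checks every domain must pass; everything else
    is left to the domain team's own judgment."""
    problems = []
    for col in product.columns:
        if col.contains_pii and not col.masking_rule:
            problems.append(f"{product.name}.{col.name}: PII column has no masking rule")
    if not product.allowed_consumers:
        problems.append(f"{product.name}: no access policy declared")
    return problems

if __name__ == "__main__":
    orders = DataProduct(
        name="orders_daily",
        domain="commerce",
        owner="commerce-data-team",
        columns=[Column("order_id", "string"),
                 Column("customer_email", "string", contains_pii=True)],
        allowed_consumers=["analytics", "finance"],
    )
    for issue in enterprise_policy_violations(orders):
        print("POLICY:", issue)     # flags the unmasked email column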
>> So thank you Tony. Dave Menninger, I mean, one of the things that people like about data mesh is it pretty crisply articulate some of the flaws in today's organizational approaches to data. What are your thoughts on this? >> Well, I think we have to start by defining data mesh, right? The term is already getting corrupted, right? Tony said it's going to see the cold hard light of day. And there's a problem right now that there are a number of overlapping terms that are similar but not identical. So we've got data virtualization, data fabric, excuse me for a second. (clears throat) Sorry about that. Data virtualization, data fabric, data federation, right? So I think that it's not really clear what each vendor means by these terms. I see data mesh and data fabric becoming quite popular. I've interpreted data mesh as referring primarily to the governance aspects as originally intended and specified. But that's not the way I see vendors using it. I see vendors using it much more to mean data fabric and data virtualization. So I'm going to comment on the group of those things. I think the group of those things is going to happen. They're going to happen, they're going to become more robust. Our research suggests that a quarter of organizations are already using virtualized access to their data lakes and another half, so a total of three quarters will eventually be accessing their data lakes using some sort of virtualized access. Again, whether you define it as mesh or fabric or virtualization isn't really the point here. But this notion that there are different elements of data, metadata and governance within an organization that all need to be managed collectively. The interesting thing is when you look at the satisfaction rates of those organizations using virtualization versus those that are not, it's almost double, 68% of organizations, I'm sorry, 79% of organizations that were using virtualized access express satisfaction with their access to the data lake. Only 39% express satisfaction if they weren't using virtualized access. >> Oh thank you Dave. Sanjeev we just got about a couple of minutes on this topic, but I know you're speaking or maybe you've always spoken already on a panel with (indistinct) who sort of invented the concept. Governance obviously is a big sticking point, but what are your thoughts on this? You're on mute. (panelist chuckling) >> So my message to (indistinct) and to the community is as opposed to what they said, let's not define it. We spent a whole year defining it, there are four principles, domain, product, data infrastructure, and governance. Let's take it to the next level. I get a lot of questions on what is the difference between data fabric and data mesh? And I'm like I can't compare the two because data mesh is a business concept, data fabric is a data integration pattern. How do you compare the two? You have to bring data mesh a level down. So to Tony's point, I'm on a warpath in 2022 to take it down to what does a data product look like? How do we handle shared data across domains and governance? And I think we are going to see more of that in 2022, or is "operationalization" of data mesh. >> I think we could have a whole hour on this topic, couldn't we? Maybe we should do that. But let's corner. Let's move to Carl. So Carl, you're a database guy, you've been around that block for a while now, you want to talk about graph databases, bring it on. >> Oh yeah. Okay thanks. 
So I regard graph database as basically the next truly revolutionary database management technology. I'm looking forward to the graph database market, which of course we haven't defined yet. So obviously I have a little wiggle room in what I'm about to say. But this market will grow by about 600% over the next 10 years. Now, 10 years is a long time. But over the next five years, we expect to see gradual growth as people start to learn how to use it. The problem is not that it's not useful, it's that people don't know how to use it. So let me explain before I go any further what a graph database is because some of the folks on the call may not know what it is. A graph database organizes data according to a mathematical structure called a graph. The graph has elements called nodes and edges. So a data element drops into a node, the nodes are connected by edges, the edges connect one node to another node. Combinations of edges create structures that you can analyze to determine how things are related. In some cases, the nodes and edges can have properties attached to them which add additional informative material that makes it richer, that's called a property graph. There are two principal use cases for graph databases. There's semantic property graphs, which are used to break down human language texts into the semantic structures. Then you can search it, organize it and answer complicated questions. A lot of AI is aimed at semantic graphs. Another kind is the property graph that I just mentioned, which has a dazzling number of use cases. I want to just point out as I talk about this, people are probably wondering, well, we have relational databases, isn't that good enough? So a relational database defines... It supports what I call definitional relationships. That means you define the relationships in a fixed structure. The data drops into that structure, there's a value, foreign key value, that relates one table to another and that value is fixed. You don't change it. If you change it, the database becomes unstable, it's not clear what you're looking at. In a graph database, the system is designed to handle change so that it can reflect the true state of the things that it's being used to track. So let me just give you some examples of use cases for this. They include entity resolution, data lineage, social media analysis, Customer 360, fraud prevention. There's cybersecurity, and supply chain is a big one actually. There is explainable AI and this is going to become important too because a lot of people are adopting AI. But they want a system after the fact to say, how did the AI system come to that conclusion? How did it make that recommendation? Right now we don't have really good ways of tracking that. Machine learning in general, social network, I already mentioned that. And then we've got, oh gosh, we've got data governance, data compliance, risk management. We've got recommendation, we've got personalization, anti money laundering, that's another big one, identity and access management, network and IT operations is already becoming a key one where you actually have mapped out your operation, you know, whatever it is, your data center and you can track what's going on as things happen there, root cause analysis, fraud detection is a huge one.
A number of major credit card companies use graph databases for fraud detection, risk analysis, tracking and tracing, churn analysis, next best action, what if analysis, impact analysis, entity resolution and I would add one other thing or just a few other things to this list, metadata management. So Sanjeev, here you go, this is your engine. Because I was in metadata management for quite a while in my past life. And one of the things I found was that none of the data management technologies that were available to us could efficiently handle metadata because of the kinds of structures that result from it, but graphs can, okay? Graphs can do things like say, this term in this context means this, but in that context, it means that, okay? Things like that. And in fact, logistics management, supply chain. And also because it handles recursive relationships, by recursive relationships I mean objects that own other objects that are of the same type. You can do things like bill of materials, you know, so like parts explosion. Or you can do an HR analysis, who reports to whom, how many levels up the chain and that kind of thing. You can do that with relational databases, but yet it takes a lot of programming. In fact, you can do almost any of these things with relational databases, but the problem is, you have to program it. It's not supported in the database. And whenever you have to program something, that means you can't trace it, you can't define it. You can't publish it in terms of its functionality and it's really, really hard to maintain over time. >> Carl, thank you. I wonder if we could bring Brad in, I mean. Brad, I'm sitting here wondering, okay, is this incremental to the market? Is it disruptive and replacement? What are your thoughts on this space? >> It's already disrupted the market. I mean, like Carl said, go to any bank and ask them are you using graph databases to get fraud detection under control? And they'll say, absolutely, that's the only way to solve this problem. And it is frankly. And it's the only way to solve a lot of the problems that Carl mentioned. And that is, I think, its Achilles heel in some ways. Because, you know, it's like finding the best way to cross the seven bridges of Koenigsberg. You know, it's always going to kind of be tied to those use cases because it's really special and it's really unique and because it's special and it's unique, it still unfortunately kind of stands apart from the rest of the community that's building, let's say AI outcomes, as a great example here. Graph databases and AI, as Carl mentioned, are like chocolate and peanut butter. But technologically, they don't know how to talk to one another, they're completely different. And you know, you can't just stand up SQL and query them. You've got to learn, what is it, Carl? Cypher? Yeah, thank you, to actually get to the data in there. And if you're going to scale that data, that graph database, especially a property graph, if you're going to do something really complex, like try to understand you know, all of the metadata in your organization, you might just end up with, you know, a graph database winter like we had the AI winter simply because you run out of performance to make the thing happen. So, I think it's already disrupted, but we need to like treat it like a first-class citizen in the data analytics and AI community. We need to bring it into the fold.
We need to equip it with the tools it needs to do the magic it does and to do it not just for specialized use cases, but for everything. 'Cause I'm with Carl. I think it's absolutely revolutionary. >> Brad identified the principal Achilles' heel of the technology which is scaling. When these things get large and complex enough that they spill over what a single server can handle, you start to have difficulties because the relationships span things that have to be resolved over a network and then you get network latency and that slows the system down. So that's still a problem to be solved. >> Sanjeev, any quick thoughts on this? I mean, I think metadata on the word cloud is going to be the largest font, but what are your thoughts here? >> I want to (indistinct) so people don't associate me with only metadata, so I want to talk about something slightly different. dbengines.com has done an amazing job. I think almost everyone knows that they chronicle all the major databases that are in use today. In January of 2022, there are 381 databases on a ranked list of databases. The largest category is RDBMS. The second largest category is actually divided into two: property graphs and RDF graphs. These two together make up the second largest number of databases. So talking about Achilles heel, this is a problem. The problem is that there's so many graph databases to choose from. They come in different shapes and forms. To Brad's point, there's so many query languages. In RDBMS it's SQL, end of story. Here we've got Cypher, we've got Gremlin, we've got GQL and then there are proprietary languages. So I think there's a lot of disparity in this space. >> Well, excellent. All excellent points, Sanjeev, if I must say. And that is a problem that the languages need to be sorted and standardized. People need to have a roadmap as to what they can do with it. Because as you say, you can do so many things. And so many of those things are unrelated that you sort of say, well, what do we use this for? And I'm reminded of the saying I learned a bunch of years ago. And somebody said that the digital computer is the only tool man has ever devised that has no particular purpose. (panelists chuckle) >> All right guys, we got to move on to Dave Menninger. We've heard about streaming. Your prediction is in that realm, so please take it away. >> Sure. So I like to say that historical databases are going to become a thing of the past. By that I don't mean that they're going to go away, that's not my point. I mean, we need historical databases, but streaming data is going to become the default way in which we operate with data. So in the next say three to five years, I would expect that data platforms and we're using the term data platforms to represent the evolution of databases and data lakes, that the data platforms will incorporate these streaming capabilities. We're going to process data as it streams into an organization and then it's going to roll off into historical databases. So historical databases don't go away, but they become a thing of the past. They store the data that occurred previously. And as data is occurring, we're going to be processing it, we're going to be analyzing it, we're going to be acting on it. I mean we only ever ended up with historical databases because we were limited by the technology that was available to us. Data doesn't occur in batches. But we processed it in batches because that was the best we could do.
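Carl's recursive relationship examples, the org chart and the parts explosion, are the kind of query that needs a recursive CTE and careful programming on a relational database but is a plain traversal on a graph. The following is a minimal sketch, assuming only the open source networkx library and a made-up org chart and bill of materials; a production system would sit on a graph database and one of the query languages Sanjeev lists, such as Cypher or Gremlin, rather than an in-memory Python graph.

# A minimal sketch of the recursive relationships Carl describes, using the
# open source networkx library. The org chart and bill of materials below
# are invented for illustration.
import networkx as nx

# Who reports to whom: an edge points from a manager to a direct report.
org = nx.DiGraph()
org.add_edges_from([
    ("ceo", "vp_eng"), ("ceo", "vp_sales"),
    ("vp_eng", "dir_platform"), ("dir_platform", "engineer_a"),
])

# "How many levels up the chain?" is just a path length, no recursive SQL needed.
levels_up = nx.shortest_path_length(org, "ceo", "engineer_a")
print(f"engineer_a is {levels_up} levels below the CEO")

# Everyone in a given leader's organization = all nodes reachable from them.
print("vp_eng's org:", sorted(nx.descendants(org, "vp_eng")))

# Parts explosion: objects that own other objects of the same type.
bom = nx.DiGraph()
bom.add_edges_from([
    ("bicycle", "wheel"), ("bicycle", "frame"),
    ("wheel", "rim"), ("wheel", "spoke"), ("wheel", "hub"),
])
print("full parts explosion for bicycle:", sorted(nx.descendants(bom, "bicycle")))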
And it wasn't bad, and we've continued to improve and we've improved and we've improved. But streaming data today is still the exception. It's not the rule, right? There are projects within organizations that deal with streaming data. But it's not the default way in which we deal with data yet. And so that's my prediction, is that this is going to change, we're going to have streaming data be the default way in which we deal with data, and how you label it and what you call it, you know, maybe these databases and data platforms just evolve to be able to handle it. But we're going to deal with data in a different way. And our research shows that already, about half of the participants in our analytics and data benchmark research are using streaming data. You know, another third are planning to use streaming technologies. So that gets us to about eight out of 10 organizations that need to use this technology. And that doesn't mean they have to use it throughout the whole organization, but it's pretty widespread in its use today and has continued to grow. If you think about the consumerization of IT, we've all been conditioned to expect immediate access to information, immediate responsiveness. You know, we want to know if an item is on the shelf at our local retail store and we can go in and pick it up right now. You know, that's the world we live in and that's spilling over into the enterprise IT world. We have to provide those same types of capabilities. So that's my prediction, historical databases become a thing of the past, streaming data becomes the default way in which we operate with data. >> All right, thank you, David. Well, so what say you, Carl, the guy who has followed historical databases for a long time? >> Well, one thing actually, every database is historical, because as soon as you put data in it, it's now history. It'll no longer reflect the present state of things. But even if that history is only a millisecond old, it's still history. But I would say, I mean, I know you're trying to be a little bit provocative in saying this, Dave, 'cause you know as well as I do that people still need to do their taxes, they still need to do accounting, they still need to run general ledger programs and things like that. That all involves historical data. That's not going to go away unless you want to go to jail. So you're going to have to deal with that. But as far as the leading edge functionality, I'm totally with you on that. And I'm just, you know, I'm just kind of wondering if this requires a change in the way that we perceive applications in order to truly be manifested, and rethinking the way applications work. Saying that an application should respond instantly, as soon as the state of things changes. What do you say about that? >> I think that's true. I think we do have to think about things differently. It's not the way we designed systems in the past. We're seeing more and more systems designed that way. But again, it's not the default. And I agree 100% with you that we do need historical databases, you know, that's clear. And even some of those historical databases will be used in conjunction with the streaming data, right? >> Absolutely. I mean, you know, let's take the data warehouse example where you're using the data warehouse as the context and the streaming data as the present, and you're saying, here's the sequence of things that's happening right now. Have we seen that sequence before? And where? What does that pattern look like in past situations? And can we learn from that?
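That last exchange, the warehouse as historical context and the stream as the present, can be sketched in a few lines. The snippet below is a simplified illustration, not something the panelists built: it keeps a short sliding window over incoming events and checks whether that sequence has been seen in a historical store; the event names, window length, and patterns are all assumptions made for the example.

```python
# Illustrative sketch: match a live event stream against historical sequences.
from collections import deque

# Pretend these sequences came out of the data warehouse (the "context").
HISTORICAL_SEQUENCES = {
    ("login", "password_reset", "transfer"),   # a pattern seen in past fraud cases
    ("browse", "add_to_cart", "checkout"),     # a normal purchase pattern
}

def monitor(stream, window_size=3):
    """Yield (sequence, seen_before) for each full window of the live stream."""
    window = deque(maxlen=window_size)
    for event in stream:
        window.append(event)
        if len(window) == window_size:
            seq = tuple(window)
            yield seq, seq in HISTORICAL_SEQUENCES

live_events = ["browse", "login", "password_reset", "transfer"]
for seq, seen in monitor(live_events):
    print(seq, "matches history" if seen else "new pattern")
```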
>> So Tony Baer, I wonder if you could comment? I mean, when you think about, you know, real-time inferencing at the edge, for instance, which is something that a lot of people talk about, a lot of what we're discussing here in this segment, it looks like it's got great potential. What are your thoughts? >> Yeah, I mean, I think you nailed it right there. You know, you hit it right on the head. What I'm seeing is that essentially, and I'm going to split this one down the middle, I don't see that streaming becomes the default. What I see is streaming and basically transaction databases and analytic data, you know, data warehouses, data lakes, whatever, converging. And what allows us technically to converge is cloud native architecture, where you can basically distribute things. So you can have a node here that's doing the real-time processing, that's also doing, and this is where it leads in, maybe doing some of that real-time predictive analytics to take a look at, well look, we're looking at this customer journey, what's happening with what the customer is doing right now, and this is correlated with what other customers are doing. So the thing is that in the cloud, you can basically partition this and, because of basically the speed of the infrastructure, you can basically bring these together and kind of orchestrate them in a sort of loosely coupled manner. The other part is that the use cases are demanding it, and this part of it goes back to what Dave is saying. Is that, you know, when you look at customer 360, when you look at, let's say, smart utility products, when you look at any type of operational problem, it has a real-time component and it has an historical component, and having predictive, and so, like, you know, my sense here is that technically we can bring this together through the cloud. And I think the use case is that we can apply some real-time sort of predictive analytics on these streams and feed this into the transactions, so that when we make a decision in terms of what to do as a result of a transaction, we have this real-time input. >> Sanjeev, did you have a comment? >> Yeah, I was just going to say that, to Dave's point, you know, we have to think of streaming very differently, because in the historical databases, we used to bring the data and store the data and then we used to run rules on top, aggregations and all. But in the case of streaming, the mindset changes because the rules, the inference, all of that is fixed, but the data is constantly changing. So it's a completely reversed way of thinking and building applications on top of that. >> So Dave Menninger, there seems to be some disagreement about the default. What kind of timeframe are you thinking about? Is this end of decade it becomes the default? Where would you pin it? >> I think around, you know, between five to 10 years, I think this becomes the reality. >> I think it's... >> It'll be more and more common between now and then, before it becomes the default. And I also want, Sanjeev, at some point, maybe in one of our subsequent conversations, we need to talk about governing streaming data. 'Cause that's a whole nother set of challenges. >> We've also talked about it rather in two dimensions, historical and streaming, and there's lots of low latency, micro-batch, sub-second, that's not quite streaming, but in many cases it's fast enough, and we're seeing a lot of adoption of near real-time, not quite real-time, as good enough for many applications.
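The micro-batch middle ground mentioned just above is simple enough to sketch: instead of handling every event individually, buffer them and flush every few hundred milliseconds or when the buffer fills, which is often "fast enough." This is a generic illustration, not any specific product's behavior; the batch size, wait time, and handler are assumptions.

```python
# Illustrative micro-batching: not per-event streaming, but close enough for
# many near real-time use cases.
import time

def micro_batch(events, handler, max_batch=100, max_wait_s=0.2):
    """Buffer events and hand them off in small, frequent batches."""
    batch, last_flush = [], time.monotonic()
    for event in events:
        batch.append(event)
        if len(batch) >= max_batch or time.monotonic() - last_flush >= max_wait_s:
            handler(batch)
            batch, last_flush = [], time.monotonic()
    if batch:  # flush whatever is left when the stream ends
        handler(batch)

micro_batch(range(10), handler=lambda b: print("processed batch:", b), max_batch=4)
```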
(indistinct cross talk from panelists) >> Because nobody's really taking the hardware dimension (mumbles). >> That'll just happen, Carl. (panelists laughing) >> So near real-time. But maybe before you lose the customer, however we define that, right? Okay, let's move on to Brad. Brad, you want to talk about automation, AI, the pipeline, people feel like, hey, we can just automate everything. What's your prediction? >> Yeah, I'm an AI aficionado, so apologies in advance for that. But, you know, I think that we've been seeing automation play within AI for some time now. And it's helped us do a lot of things, especially for practitioners that are building AI outcomes in the enterprise. It's helped them to fill skills gaps, it's helped them to speed development and it's helped them to actually make AI better. 'Cause it, you know, in some ways provides some swim lanes, and for example, with technologies like AutoML you can auto-document and create that sort of transparency that we talked about a little bit earlier. But I think there's an interesting kind of convergence happening with this idea of automation. And that is that we've had the automation that started happening for practitioners, and it's trying to move outside of the traditional bounds of things like, I'm just trying to get my features, I'm just trying to pick the right algorithm, I'm just trying to build the right model, and it's expanding across that full life cycle of building an AI outcome, to start at the very beginning of data and to then continue on to the end, which is this continuous delivery and continuous automation of that outcome to make sure it's right and it hasn't drifted and stuff like that. And because of that, because it's become kind of powerful, we're starting to actually see this weird thing happen where the practitioners are starting to converge with the users. And that is to say that, okay, if I'm in Tableau right now, I can stand up Salesforce Einstein Discovery, and it will automatically create a nice predictive algorithm for me given the data that I pull in. But what's starting to happen, and we're seeing this from the companies that create business software, so Salesforce, Oracle, SAP, and others, is that they're starting to actually use these same ideas and a lot of deep learning (chuckles) to basically stand up these out-of-the-box, flip-a-switch offerings, and you've got an AI outcome at the ready for business users. And I am very much, you know, I think that's the way that it's going to go, and what it means is that AI is slowly disappearing. And I don't think that's a bad thing. I think if anything, what we're going to see in 2022 and maybe into 2023 is this sort of rush to put this idea of disappearing AI into practice and have as many of these solutions in the enterprise as possible. You can see, like for example, SAP is going to roll out this quarter this thing called adaptive recommendation services, which basically is a cold start AI outcome that can work across a whole bunch of different vertical markets and use cases. It's just a recommendation engine for whatever you need to do in the line of business. So basically, you're an SAP user, you log in to turn on your software one day, you're a sales professional let's say, and suddenly you have a recommendation for customer churn. Boom! It's going, that's great. Well, I don't know, I think that's terrifying.
In some ways I think it is the future, that AI is going to disappear like that, but I'm absolutely terrified of it, because I think that what it really does is it calls attention to a lot of the issues that we already see around AI, specific to this idea of what we like to call at Omdia responsible AI. Which is, you know, how do you build an AI outcome that is free of bias, that is inclusive, that is fair, that is safe, that is secure, that is auditable, et cetera, et cetera, et cetera. It'd take a lot of work to do. And so if you imagine a customer that's just a Salesforce customer, let's say, and they're turning on Einstein Discovery within their sales software, you need some guidance to make sure that when you flip that switch, the outcome you're going to get is correct. And that's going to take some work. And so, I think we're going to see this move, let's roll this out, and suddenly there's going to be a lot of problems, a lot of pushback that we're going to see. And some of that's going to come from GDPR and others that Sanjeev was mentioning earlier. A lot of it is going to come from internal CSR requirements within companies that are saying, "Hey, hey, whoa, hold up, we can't do this all at once. Let's take the slow route, let's make AI automated in a smart way." And that's going to take time. >> Yeah, so a couple of predictions there that I heard. AI simply disappears, it becomes invisible. Maybe if I can restate that. And then if I understand it correctly, Brad, you're saying there's a backlash in the near term. You'd be able to say, oh, slow down. Let's automate what we can. Those attributes that you talked about are nontrivial to achieve, is that why you're a bit of a skeptic? >> Yeah. I think that we don't have any sort of standards that companies can look to and understand. And certainly, within these companies, especially those that haven't already stood up an internal data science team, they don't have the knowledge to understand, when they flip that switch for an automated AI outcome, that it's going to do what they think it's going to do. And so we need some sort of standard methodology and practice, best practices, that every company that's going to consume this invisible AI can make use of. And one of the things, you know, that Google kicked off a few years back, that's picking up some momentum and that the companies I just mentioned are starting to use, is this idea of model cards, where at least you have some transparency about what these things are doing. You know, so like for the SAP example, we know, for example, if it's a convolutional neural network with a long short-term memory model that it's using, we know that it only works on Roman English, and therefore I as a consumer can say, "Oh, well I know that I need to do this internationally. So I should not just turn this on today." >> Thank you. Carl, could you add anything, any context here? >> Yeah, we've talked about some of the things Brad mentioned here at IDC in our Future of Intelligence group, regarding in particular the moral and legal implications of having a fully automated, you know, AI-driven system. Because we already know, and we've seen, that AI systems are biased by the data that they get, right?
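Brad's model-card idea is essentially structured metadata that ships with a model so a consumer can decide whether to flip the switch at all. Here is a minimal, hypothetical sketch; the fields and example values are assumptions for illustration, not any vendor's actual schema.

```python
# Illustrative model card: enough transparency for a cheap pre-flight check.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    architecture: str
    training_data: str
    supported_languages: list
    intended_use: str
    known_limitations: list = field(default_factory=list)

    def suitable_for(self, language):
        """A consuming application can run this check before enabling the model."""
        return language in self.supported_languages

card = ModelCard(
    name="churn-recommender",
    architecture="CNN + LSTM",
    training_data="2019-2021 CRM interaction logs, English only",
    supported_languages=["en"],
    intended_use="Rank existing customers by churn risk for sales follow-up",
    known_limitations=["Not evaluated on non-English text", "No fairness audit yet"],
)
print(card.suitable_for("de"))  # False, so don't just turn it on internationally
```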
So if they get data that pushes them in a certain direction, I think there was a story last week about an HR system that was recommending promotions for White people over Black people, because in the past, you know, White people were promoted and more productive than Black people, but it had no context as to why, which is, you know, that Black people were being historically discriminated against, but the system doesn't know that. So, you know, you have to be aware of that. And I think that at the very least, there should be controls when a decision has either a moral or a legal implication. When you really need a human judgment, it could lay out the options for you. But a person actually needs to authorize that action. And I also think that we always will have to be vigilant regarding the kind of data we use to train our systems to make sure that it doesn't introduce unintended biases. To some extent, they always will. So we'll always be chasing after them. But that's (indistinct). >> Absolutely Carl, yeah. I think that what you have to bear in mind as a consumer of AI is that it is a reflection of us, and we are a very flawed species. And so if you look at all of the really fantastic, magical-looking super models we see, like GPT-3 and 4 that's coming out, they're xenophobic and hateful, because the data they're built upon, the algorithms, and the people that build them are us. So AI is a reflection of us. We need to keep that in mind. >> Yeah, the AI is biased 'cause humans are biased. All right, great. All right, let's move on. Doug, you mentioned, you know, a lot of people said that data lake, that term, is not going to live on, but it's still here, we have some lake houses here. You want to talk about lake house, bring it on. >> Yes, I do. My prediction is that lake house, and this idea of a combined data warehouse and data lake platform, is going to emerge as the dominant data management offering. I say offering, that doesn't mean it's going to be the dominant thing that organizations have out there, but it's going to be the predominant vendor offering in 2022. Now, heading into 2021, we already had Cloudera, Databricks, Microsoft, Snowflake as proponents; in 2021, SAP, Oracle, and several of these fabric, virtualization and mesh vendors joined the bandwagon. The promise is that you have one platform that manages your structured, unstructured and semi-structured information. And it addresses both the BI analytics needs and the data science needs. The real promise there is simplicity and lower cost. But I think end users have to answer a few questions. The first is, does your organization really have a center of data gravity, or is the data highly distributed? Multiple data warehouses, multiple data lakes, on premises, cloud. If it's very distributed and you'd have difficulty consolidating, and that's not really a goal for you, then maybe that single platform is unrealistic and not likely to add value to you. You know, also the fabric and virtualization vendors, the mesh idea, that's where, if you have this highly distributed situation, that might be a better path forward. The second question, if you are looking at one of these lake house offerings, you are looking at consolidating, simplifying, bringing together to a single platform. You have to make sure that it meets both the warehouse need and the data lake need. So you have vendors like Databricks, Microsoft with Azure Synapse.
New, really, to the data warehouse space, and they're having to prove that these data warehouse capabilities on their platforms can meet the scaling requirements, can meet the user and query concurrency requirements, meet those tight SLAs. And then on the other hand, you have the Oracle, SAP, Snowflake, the data warehouse folks, coming into the data science world, and they have to prove that they can manage the unstructured information and meet the needs of the data scientists. I'm seeing a lot of the lake house offerings from the warehouse crowd managing that unstructured information in columns and rows. And some of these vendors, Snowflake in particular, are really relying on partners for the data science needs. So you really got to look at a lake house offering and make sure that it meets both the warehouse and the data lake requirement. >> Thank you, Doug. Well Tony, if those two worlds are going to come together, as Doug was saying, the analytics and the data science world, does there need to be some kind of semantic layer in between? I don't know. Where are you in on this topic? >> (chuckles) Oh, didn't we talk about data fabrics before? Common metadata layer (chuckles). Actually, I'm almost tempted to say let's declare victory and go home. And that this has actually been going on for a while. I actually agree with, you know, much of what Doug is saying there. Which is that, I mean, I remember as far back as, I think it was like 2014, I was doing a study. I was still at Ovum, (indistinct) Omdia, looking at all these specialized databases that were coming up and seeing that, you know, there's overlap at the edges. But yet, there was still going to be a reason at the time that you would have, let's say, a document database for JSON, you'd have a relational database for transactions and for data warehouse, and you had basically something at that time that resembled Hadoop for what we'd consider your data lake. Fast forward, and the thing is, what I was seeing at the time was that they were sort of blending at the edges. That was, say, about five to six years ago. And the lake house is essentially the current manifestation of that idea. There is a dichotomy in terms of, you know, it's the old argument, do we centralize this all, you know, in a single place, or do we virtualize? And I think it's always going to be a union, yeah, and there's never going to be a single silver bullet. I do see that there are also going to be questions, and these are points that Doug raised. That, you know, what do you need there for your performance characteristics? Do you need, for instance, high concurrency? Do you need the ability to do some very sophisticated joins, or is your requirement more to be able to distribute the processing, you know, as far as possible, to essentially do a kind of brute-force approach? All these approaches are valid based on the use case. I just see that essentially the lake house is the culmination of that; it's nothing new. It's a relatively new term, introduced by Databricks a couple of years ago, but this is the culmination of basically what's been a long-time trend. And what we see in the cloud is that we start seeing data warehouses offer, as a checkbox item, "Hey, we can basically source data in cloud storage, in S3, Azure Blob Store, you know, whatever, as long as it's in certain formats, like, you know, Parquet or CSV or something like that." I see that as becoming kind of a checkbox item.
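Tony's "checkbox item" is easy to picture with open file formats: the same Parquet file sitting in storage can be written by a data-lake style tool and queried in SQL by a warehouse-style engine. The sketch below is an illustration of that pattern, assuming pandas, pyarrow, and DuckDB are installed locally; the file name and columns are made up for the example.

```python
# Illustrative lake house round trip: write Parquet like a lake, query it like
# a warehouse, with no separate load step in between.
import pandas as pd
import duckdb

# "Lake" write path: land raw events as Parquet, no warehouse schema required.
events = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [20.0, 35.5, 12.0, 99.9],
})
events.to_parquet("events.parquet", index=False)

# "Warehouse" read path: a SQL engine queries the same file in place.
result = duckdb.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM 'events.parquet'
    GROUP BY customer_id
    ORDER BY total_spend DESC
""").df()
print(result)
```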
So to that extent, I think that the lake house, depending on how you define it, is already reality. And in some cases, maybe new terminology, but not a whole heck of a lot new under the sun. >> Yeah. And Dave Menninger, I mean, a lot of these, thank you Tony, but a lot of this is going to come down to, you know, vendor marketing, right? Some people just kind of co-opt the term, we talked about, you know, data mesh washing, what are your thoughts on this? (laughing) >> Yeah, so I used the term data platform earlier. And part of the reason I use that term is that it's more vendor neutral. We've tried to sort of stay out of the vendor terminology patenting world, right? Whether the term lake house is what sticks or not, the concept is certainly going to stick. And we have some data to back it up. About a quarter of organizations that are using data lakes today already incorporate data warehouse functionality into it. So they consider their data lake and data warehouse one and the same. About a quarter of organizations, a little less, but about a quarter of organizations, feed the data lake from the data warehouse, and about a quarter of organizations feed the data warehouse from the data lake. So it's pretty obvious that three quarters of organizations need to bring this stuff together, right? The need is there, the need is apparent. The technology is going to continue to converge. I like to talk about it, you know, you've got data lakes over here at one end, and I'm not going to talk about why people thought data lakes were a bad idea, because they thought you just throw stuff in a server and you ignore it, right? That's not what a data lake is. So you've got data lake people over here and you've got database people over here, data warehouse people over here. Database vendors are adding data lake capabilities and data lake vendors are adding data warehouse capabilities. So it's obvious that they're going to meet in the middle. I mean, I think it's like Tony says, I think we should declare victory and go home. >> As hell. So just a follow-up on that, so are you saying the specialized lake and the specialized warehouse, do they go away? I mean, Tony, data mesh practitioners would say, or advocates would say, well, they could all live. It's just a node on the mesh. But based on what Dave just said, are we gonna see those all morphed together? >> Well, number one, as I was saying before, there's always going to be this sort of, you know, centrifugal force or this tug of war between do we centralize the data, do we virtualize? And the fact is, I don't think that there's ever going to be any single answer. I think in terms of data mesh, data mesh has nothing to do with how you physically implement the data. You could have a data mesh basically on a data warehouse. It's just that, you know, the difference being that if we use the same physical data store, everybody's logically, you know, basically governing it differently, you know? Data mesh, in essence, is not a technology, it's processes, it's governance process. So essentially, you know, I basically see that, you know, as I was saying before, this is basically the culmination of a long-time trend. We're essentially seeing a lot of blurring, but there are going to be cases where, for instance, I need, let's say, upserts, or I need high concurrency or something like that. There are certain things that I'm not going to be able to efficiently get out of a data lake.
And, you know, I'm doing a system where I'm just doing really brute-force, very fast file scanning and that type of thing. So I think there always will be some delineations, but I would agree with Dave and with Doug that we are seeing basically a confluence of requirements, that we essentially need, you know, the abilities of a data lake and the data warehouse, these need to come together, so I think. >> I think what we're likely to see is organizations look for a converged platform that can handle both sides for their center of data gravity. The mesh and the fabric and virtualization vendors, they're all on board with the idea of this converged platform and they're saying, "Hey, we'll handle all the edge cases of the stuff that isn't in that center of data gravity, but that is distributed off in a cloud or at a remote location." So you can have that single platform for the center of your data and then bring in virtualization, mesh, what have you, for reaching out to the distributed data. >> As Dave basically said, people are happy when they virtualize data. >> I think we have at this point. But to Dave Menninger's point, they are converging. Snowflake has introduced support for unstructured data, so obviously we're literally splitting hairs here. Now what Databricks is saying is that, "Aha, but it's easier to go from data lake to data warehouse than it is from databases to data lake." So I think we're getting into semantics, but we're already seeing these two converge. >> So take somebody like AWS, they've got what, 15 data stores? Are they going to converge those 15 data stores? This is going to be interesting to watch. All right guys, I'm going to go down the list and do like a one-word each, and you guys, each of the analysts, if you would just add a very brief sort of course correction for me. So Sanjeev, I mean, governance is going to be... Maybe it's the dog that wags the tail now. I mean, it's coming to the fore, all this ransomware stuff, which, we really didn't talk much about security, but what's the one word in your prediction that you would leave us with on governance? >> It's going to be mainstream. >> Mainstream. Okay. Tony Baer, mesh washing is what I wrote down. That's what we're going to see in 2022, a little reality check, you want to add to that? >> Reality check, 'cause I hope that no vendor jumps the shark and calls their offering a data mesh product. >> Yeah, let's hope that doesn't happen. If they do, we're going to call them out. Carl, I mean, graph databases, thank you for sharing some high growth metrics. I know it's early days, but magic is what I took away from that, so magic database. >> Yeah, I would actually, I've said this to people too. I kind of look at it as a Swiss Army knife of data, because you can pretty much do anything you want with it. That doesn't mean you should. I mean, there's definitely the case that if you're managing things that are in a fixed schematic relationship, probably a relational database is a better choice. There are times when a document database is a better choice. It can handle those things, but maybe not. It may not be the best choice for that use case. But for a great many, especially with the new emerging use cases I listed, it's the best choice. >> Thank you. And Dave Menninger, thank you by the way, for bringing the data in, I like how you supported all your comments with some data points. But streaming data becomes the sort of default paradigm, if you will, what would you add?
>> Yeah, I would say think fast, right? That's the world we live in, you got to think fast. >> Think fast, love it. And Brad Shimmin, love it. I mean, on the one hand I was saying, okay, great. I'm afraid I might get disrupted by one of these internet giants who are AI experts. I'm going to be able to buy instead of build AI. But then again, you know, I've got some real issues. There's a potential backlash there. So give us your bumper sticker. >> I would say, going with Dave, think fast and also think slow, to talk about the book that everyone talks about. I would say really that this is all about trust, trust in the idea of automation and a transparent and visible AI across the enterprise. And verify, verify before you do anything. >> And then Doug Henschen, I mean, I think the trend is your friend here on this prediction, with lake house really becoming dominant. I liked the way you set up that notion of, you know, the data warehouse folks coming at it from the analytics perspective and then you get the data science world coming together. I still feel as though there's this piece in the middle that we're missing, but your final thoughts, we'll give you the (indistinct). >> I think the idea of consolidation and simplification always prevails. That's why the appeal of a single platform is going to be there. We've already seen that with, you know, Hadoop platforms and moving toward cloud, moving toward object storage, and object storage becoming really the common storage point, whether it's for a lake or a warehouse. And that second point, I think ESG mandates are going to come in alongside GDPR and things like that to up the ante for good governance. >> Yeah, thank you for calling that out. Okay folks, hey, that's all the time that we have here. Your experience and depth of understanding on these key issues of data and data management were really on point, and they were on display today. I want to thank you for your contributions. Really appreciate your time. >> Enjoyed it. >> Thank you. >> Thanks for having me. >> In addition to this video, we're going to be making available transcripts of the discussion. We're going to do clips of this as well, and we're going to put them out on social media. I'll write this up and publish the discussion on wikibon.com and siliconangle.com. No doubt, several of the analysts on the panel will take the opportunity to publish written content, social commentary or both. I want to thank the power panelists and thanks for watching this special CUBE presentation. This is Dave Vellante, be well and we'll see you next time. (bright music)

Published Date : Jan 7 2022

Manoj Nair, Metallic.io & Dave Totten, Microsoft | Commvault Connections 2021


 

(lighthearted music) >> We're here now with Manoj Nair, who's the general manager of Metallic, and Dave Totten, CTO with Microsoft. And we're going to talk about some of the announcements that we heard earlier today and what Metallic and Microsoft are doing to meet customer needs around cyber threats and ensuring secure cloud data management. Gentlemen, welcome to theCUBE. Good to see you. >> Thanks Dave. >> Thank you. >> Hey Manoj, let me start with you. We heard early this morning, Dave Totten was here, David Noe, talking a lot about security. Has the conversation changed, how has it changed when you talk to customers, Manoj? What's top of mind? >> Yeah, thank you, Dave. And thank you, Dave Totten. You know, great conversation earlier. Dave, you and I have talked about this in the past, right? Security has long been a big passion of mine. You know, having lived through nation-state attacks in the past and all that, we're seeing those kinds of techniques really just getting mainstream, right? Ransomware has become a mainstream problem and a scourge in our lives. Now, when you look at it from a lens of data and data management, data protection, backup, all of this was very much a passive, you know, compliance-centric use case. It was pretty static, you know, put it on tapes, haul it all over. And what has really changed with this ransomware and cybercrime, right, is data, which is now your most precious asset, is under attack. So now you see security teams, just like you talked with Dave Martin from ADP earlier, they are looking for that bridge between SecurityOps and ITOps. That data management solution needs to do more. It needs to be part of an active conversation, you know? Not just, you know, recovery readiness. Can you ensure that, are you testing that, is it recoverable? That is your last mile of defense. So then you get questions like that from security teams. You get, you know, the need for doing more, for signals. Can I get better signals from my data management stack to tell me I might be under attack? So what we're seeing in the conversation is the need to have more active conversations around data management, and the bridge between ITOps and SecurityOps is really becoming paramount for our customers. >> Yeah, Dave Totten, I mean, I often say that I think data protection used to be this bolt-on. Now it's a fundamental component of the digital business stack. Anything you would add to what Manoj just said? >> Yeah, I would just say exactly that. Data is an asset, right? We talked a lot about the competitive advantage that customers are now realizing, that no longer is IT considered sort of this cost center element. We need to be able to leverage our interactions with customers, with partners, with supply chains, with manufacturers, we need to be able to leverage that to sort of create differentiation and competitive advantage in the marketplace. And so if you think about it that way, as the fuel for economic profitability and business growth, you would do everything in your power to secure it, to support it, to make sure you had access to it, to make sure that you didn't have, you know, bad-intent users accessing it. And I think we're seeing that shift with customers as they think more about how to be more efficient with their investments in information technology and then how to just make sure that they protect the lifeblood of their businesses. >> Yeah, and that just makes it harder because the adversary is very capable. They're coming in through the digital supply chain.
So it's complicated. And so Dave, and maybe Manoj you can comment as well after, Microsoft and Commvault, you guys have been working together for decades, and so you've seen a lot of the changes, a lot of the waves. So I'm curious as to how the partnership has evolved. You've got a recent strategic announcement around Azure with Metallic. Dave, take us through that. >> Yeah, I mean, you know, Commvault and Microsoft aren't newlyweds, we've been together now for 25-plus years. We send each other anniversary gifts, all that good stuff. And you know, listen, there's a couple things that are key to our relationship. One, we started believing in each other's engineering organizations, right? We hire the best, we train and retain the best. And we both put a lot of investment behind our infrastructure and the ability to work together to really innovate in real time, at rapid speeds. Two, we use Commvault products, so, you know, there's no greater advantage, I think, than if a major supplier or platform partner like Microsoft uses your products. We've used it for years in our Xbox group to support and store the data for a hundred million Xbox Live users. And we're very avid with it with our data centers, our access to Azure data centers, our Microsoft Office products. And so we use Commvault services as well. And through that mutual relationship, you know, obviously Commvault has seen the ins and outs of what's great about our services and where we're continuing to build and invest. And so they've been able to really, you know, dedicate a team of engineers and architects to support all that Azure, as a platform, as a service, can provide. And then how to take the best of those features and build it into their own first party products. I think when you get close enough to somebody for so many years, right, 25-plus years, you figure out what they're great at and you learn to take those advantages, like Commvault has with Microsoft and Azure, and use it to your advantage, right? To build the best-in-class product that Metallic actually is. And you're right, the announcement this week, it feels culminating, it feels like it's a major milestone in, first off, industry innovation but also in our relationship. But it's really not that big of a step change from what we've been doing and building and innovating on for the past, you know, 25 years. >> Yeah, so Manoj, that's got to be music to your ears. Because you come at it with this rich data protection stack, and Microsoft has so many capabilities, one of which, of course, is Azure. It's like the secret weapon, it's become the secret weapon. How do you think about that relationship, Manoj? >> Absolutely, Dave said it right. We are strong partners, 25 years, founding in Western Commvault, mutual customers, partnership. You know, really when you look at it from a customer lens, what our customers have appreciated over the last year is that strengthening of that partnership: basically the two pillars, Commvault, the leader in data protection, you know, for the last 25 years, 10 out of 10 in the Gartner MQ, comes together with Azure, the enterprise secure cloud leader, in creating Metallic. Metallic, now with 1,000-plus customers around the world, there's a reason they trust it. It's now become part of how they protect their Office 365. No workload left behind, which is very unique, you know? So what we have architected together, and now we're taking it to the next phase, our joint partners, right?
Our joint customers, those are some of the things that are really changing in terms of how we're accelerating the partnership. >> Manoj, you and I have talked about ransomware a lot, we did a special segment a while back on that. The adversary is very capable. And you know, I put in the chat this morning, at Commvault Connections, you don't even need a high school diploma to be a ransomwarist. You can go on the dark web, you can buy ransomware as a service. All you need is access to a server and you can stick, you know, some malware on it. So, you know, these are very, very dangerous times. What is it about data management as a service that makes it a good fit right now, from a customer perspective, to solve this problem? >> Absolutely. Bad guys, in real life or in the cyber world, they have some techniques. The first thing they do in a ransomware attack is go after the exits. What are the exit doors? Now, you back up data, they know that that backup data can be used to recover. So they go and try to defeat the backup products in that environment. That's the number one game that changes with data management as a service. Your data management, data protection environment is not inside your environment. The chances of doing two simultaneous penetrations, you can try, anything is possible, but now you've got an additional layer of recovery readiness, because that control plane is secured on top of Microsoft Azure: 3,500 security professionals, FedRAMP High standard, the only data management as a service entity to get it. As one of our customers said, "A unicorn in the wild", that is what you have as your data management environment. So if something bad happens, worst case, this environment is ready. Our enterprise customers are starting to understand that this is becoming a big reason to shift to this model. You know, then, if you're not ready to shift the entire model, you're given the easy button of just air gapping your data. So if you're an existing Commvault customer, appliance, software, anything, secure, air-gapped Metallic cloud storage on hardened Azure Blob, protected jointly by us, start there. And finally, things like Active Directory. Talk about shutting the exit path, right? Take that down and your entire environment is not accessible. We make it easy for you to recover that. And because of our partnership, we're able to get it for free to every one of our customers. Go protect your Active Directory environment using (speaks faintly). Those are the kind of three big reasons that we're seeing that entire conversation shift in the minds of our customers. >> Yeah, thank you for that. That's a no-brainer. Dave, how do Metallic and Microsoft fit together? Where's the, you know, kind of value chain, if you will, when it comes to dealing with cyber protection or ransomware recovery? How are your customers thinking about that? >> Yeah, well, first it's a shared responsibility model, right? When you've got a best-in-class platform like Azure with built-in protections, scalable data centers all over the global footprint. But then also we spend 10-plus billion dollars a year on security and defense in our own data center environments, right? And so I always find it inspiring when companies believe that their investments in security and platform protection are going to do the job. That used to be true. Now with Azure, you can take advantage of this global scale and secure, you know, footprint of investment that a company like Microsoft has made to really set your heart at ease.
Now, what do you do with your actual applications and who has access to it, and how do you actually integrate, like Manoj was talking about, down to the individual or the individual account that's trying to get access to your environment? Well, that's where Commvault comes in, at that point of attack or at that point of an actual data element. So if you've got that environment within the Commvault system, backed by the umbrella of the Azure security infrastructure, that's how the two sort of complement each other. And again, it's about shared responsibility, right? We want every customer that leverages Azure to make sure that they know it's secure, it's protected. We've got a mechanism to protect your best interests. Commvault has that exact same mission statement, right? To make sure that every single element that comes into contact with their products is protected, is secure, is trustworthy. You know, I learned a lesson a long, long time ago, early in my career, that says you can goof up a product feature, you can goof up the color scheme on a website, but if you lose a customer's data or somebody's trust, you never get it back. And so we don't take our relationships with customers very lightly. And I think our committed and joint responsibility to delight and support our customers is what has led to this partnership being so successful over the past couple of decades. >> Great, thank you, Dave. And so Manoj, I was saying earlier that data protection has become a fundamental component of your digital business stack. So that sounds good, but what should customers be doing to make data protection and data management a business value driver versus just a liability or exposure or cost factor that has to be managed? What do you think about that?
>> Dave, Todd, I'll give you the last word we got to go but I want to hit on this notion of zero trust. It used to be a buzz word now it's mainstream. There's so much that this discussion, is it Prudentialist access? Every access is treated maybe as privileged but what does zero trust mean to you in less than a minute? >> Yeah you know, trust but verify, right? Every interaction you have with your infrastructure, with your data, with your applications and you do it at the identity level. We care about identity and we know that that's the core of how people are going to try and access infrastructure. Used to be protect the perimeter. The analogy I always use is we have locks on our houses. Now the bad guys are everywhere. They're getting inside our houses and they're not immediately taking things, they're hiding in the closet and they're popping out three weeks later before anybody knows it. And so being able to actually manage, measure, protect every interaction you have with your infrastructure and do it at the individual or application level, that's what zero trust is all about. So don't trust any interaction, make sure that you pass that authorization through with every ask. And then make sure you protect it from the inside out. >> Great stuff. Okay guys, we've got to leave it there. Thanks so much for the time today. All right next, right after a short break, we're headed into the CXL Power Panel to hear what's on the minds of the executives as it relates to data management in the digital era. Keep it right there, you're watching theCUBE. (lighthearted music)

Published Date : Nov 1 2021
