Tony Jeffries, Dell Technologies & HonoreĢ LaBourdette, Red Hat | MWC Barcelona 2023
>> theCUBE's live coverage is made possible by funding from Dell Technologies: "Creating technologies that drive human progress." >> Good late afternoon from Barcelona, Spain at the Theater of Barcelona. It's Lisa Martin and Dave Nicholson of "theCUBE" covering MWC23. This is our third day of continuous wall-to-wall coverage on theCUBE. And you know we're going to be here tomorrow as well. We've been having some amazing conversations about the ecosystem. And we're going to continue those conversations next. Honore Labourdette is here, the VP global partner, Ecosystem Success Team, Telco Media and Entertainment at Red Hat. And Tony Jeffries joins us as well, a Senior Director of Product Management, Telecom Systems Business at Dell. Welcome to the theCUBE. >> Thank you. >> Thank you. >> Great to have both of you here. So we're going to be talking about the evolution of the telecom stack. We've been talking a lot about disaggregation the last couple of days. Honore, starting with you, talk about the evolution of the telecom stock. You were saying before we went live this is your 15th at least MWC. So you've seen a lot of evolution, but what are some of the things you're seeing right now? >> Well, I think the interesting thing about disaggregation, which is a key topic, right? 'Cause it's so relative to 5G and the 5G core and the benefits and the features of 5G core around disaggregation. But one thing we have to remember, when you disaggregate, you separate things. You have to bring those things back together again in a different way. And that's predominantly what we're doing in our partnership with Dell, is we're bringing those disaggregated components back together in a cohesive way that takes advantage of the new technology, at the same time taking out the complexity and making it easier for our Telco customers to deploy and to scale and to get much more, accelerate the time to revenue. So the trend now is, what we're seeing is two things I would say. One is how do we solve for the complexity with the disaggregation? And how do we leverage the ecosystem as a partner in order to help solve for some of those challenges? >> Tony, jump on in, talk about what you guys announced last week, Dell and Red Hat, and how it's addressing the complexities that Honore was saying, "Hey, they're there." >> Yeah. You know, our customers, our operators are saying, "Hey, I want disaggregation." "I want competition in the market." But at the same time who's going to support all this disaggregation, right? And so at the end of the day, there's going to be an operator that's going to have to figure this out. They're going to have an SLA that they're going to have to meet. And so they're going to want to go with a best-in-class partner with Red Hat and Dell, in terms of our infrastructure and their software together as one combined engineered system. And that's what we call a Dell Telecom infrastructure block for Red Hat. And so at the end of the day, things may go wrong, and if they do, who are they going to call for that support? And that's also really a key element of an engineered system, is this experience that they get both with Red Hat and with Dell together supporting the customer as one. Which is really important to solve this disaggregated problem that can arise from a disaggregated open network situation, yeah. >> So what is the market, the go to market motion look like? People have loyalties in the IT space to technologies that they've embraced and been successful with for years and years. So you have folks in the marketplace who are diehard, you know, dyed red, Red Hat folks. Is it primarily a pull from them? How does that work? How do you approach that to your, what are your end user joint customers? What does that look like from your perspective? >> Sure, well, interestingly enough both Red Hat and Dell have been in the marketplace for a very long time, right? So we do have the brand with those Telco customers for these solutions. What we're seeing with this solution is, it's an emerging market. It's an emerging market for a new technology. So there's an opportunity for both Red Hat and Dell together to leverage our brands with those customers with no friction in the marketplace as we go to market together. So our field sales teams will be motivated to, you know, take advantage of the solution for their customers, as will the Dell team. And I'll let Tony speak to the Dell, go to market. >> Yeah. You know, so we really co-sell together, right? We're the key partners. Dell will end up fulfilling that order, right? We send these engineered systems through our factories and we send that out either directly to a customer or to a OTEL lab, like an intermediate lab where we can further refine and customize that offer for that particular customer. And so we got a lot of options there, but we're essentially co-selling. And Dell is fulfilling that from an infrastructure perspective, putting Red Hat software on top and the licensing for that support. So it's a really good mix. >> And I think, if I may, one of the key differentiators is the actual capabilities that we're bringing together inside of this pre-integrated solution. So it includes the Red Hat OpenShift which is the container software, but we also add our advanced cluster management as well as our Ansible automation. And then Dell adds their orchestration capability along with the features and functionalities of the platform. And we put that together and we offer capability, remote automation orchestration and management capabilities that again reduces the operating expense, reduces the complexity, allows for easy scale. So it's, you know, certainly it's all about the partnership but it's also the capabilities of the combined technology. >> I was just going to ask about some of the numbers, and you mentioned some of them. Reduction of TCO I imagine is also a big capability that this solution enables besides reducing OpEx. Talk about the TCO reduction. 'Cause I know there's some numbers there that Dell and Red Hat have already delivered to the market. >> Yeah. You know, so these infrastructure blocks are designed specifically for Core, or for RAN, or for the Edge. We're starting out initially in the Core, but we've done some market research with a company called ACG. And ACG has looked at day zero, day one and day two TCO, FTE hours saved. And we're looking at over 40 to 50% TCO savings over you know, five year period, which is quite significant in terms of cost savings at a TCO level. But also we have a lot of numbers around power consumption and savings around power consumption. But also just that experience for our operator that says, hey, I'm going to go to one company to get the best in class from Red Hat and Dell together. That saves a lot of time in procurement and that entire ordering process as well. So you get a lot of savings that aren't exactly seen in the FTE hours around TCO, but just in that overall experience by talking to one company to get the best of both from both Red Hat and Dell together. >> I think the comic book character Charlie Brown once said, "The most discouraging thing in the world is having a lot of potential." (laughing) >> Right. >> And so when we talk about disaggregating and then reaggregating or reintegrating, that means choice. >> Tony: Yeah. >> How does an operator approach making that choice? Because, yeah, it sounds great. We have this integration lab and you have all these choices. Well, how do I decide, how does a person decide? This is a question for Honore from a Red Hat perspective, what's the secret sauce that you believe differentiates the Red Hat-infused stack versus some other assemblage of gear? >> Well, there's a couple of key characteristics, and the one that I think is most prevalent is that we're open, right? So "open" is in Red Hat's DNA because we're an open source technology company, and with that open source technology and that open platform, our customers can now add workloads. They have options to choose the workloads that they want to run on that open source platform. As they choose those workloads, they can be confident that those workloads have been certified and validated on our platform because we have a very robust ecosystem of ISVs that have already completed that process with open source, with Red Hat OpenShift. So then we take the Red Hat OpenShift and we put it on the Dell platform, which is market leader platform, right? Combine those two things, the customers can be confident that they can put those workloads on the combined platform that we're offering and that those workloads would run. So again, it goes back to making it simpler, making it easy to procure, easy to run workloads, easy to deploy, easy to operate. And all of that of course equates to saving time always equates to saving money. >> Yeah. Absolutely. >> Oh, I thought you wanted to continue. >> No, I think Honore sort of, she nailed it. You know, Red Hat is so dominant in 5G, and what they're doing in the market, especially in the Core and where we're going into the RAN, you know, next steps are to validate those workloads, those workload vendors on top of a stack. And the Red Hat leader in the Core is key, right? It's instant credibility in the core market. And so that's one of the reasons why we, Dell, want to partner with with Red Hat for the core market and beyond. We're going to be looking at not only Core but moving into RAN very soon. But then we do, we take that validated workload on top of that to optimize that workload and then be able to instantiate that in the core and the RAN. It's just a really streamlined, good experience for our operators. At the end of the day, we want happy customers in between our mutual customer base. And that's what you get whenever you do that combined stack together. >> Were operators, any operators, and you don't have to mention them by name, involved in the evolution of the infra blocks? I'm just curious how involved they were in helping to co-develop this. I imagine they were to some degree. >> Yeah, I could take that one. So, in doing so, yeah, we can't be myopic and just assume that we nailed it the first time, right? So yeah, we do work with partners all the way up and down the stack. A lot of our engineering work with Red Hat also brings in customer experience that is key to ensure that you're building and designing the right architecture for the Core. I would like to use the names, I don't know if I should, but a lot of those names are big names that are leaders in our industry. But yeah, their footprints, their fingerprints are all over those design best practices, those architectural designs that we build together. And then we further that by doing those validated workloads on top of that. So just to really prove the point that it's optimized for the Core, RAN, Edge kind of workload. >> And it's a huge added value for Red Hat to have a partner like Dell who can take all of those components, take the workload, take the Red Hat software, put it on the platform, and deliver that out to the customers. That's really, you know, a key part of the partnership and the value of the partnership because nobody really does that better than Dell. That center of excellence around delivery and support. >> Can you share any feedback from any of those nameless operators in terms of... I'm even kind of wondering what the catalyst was for the infra block. Was it operators saying, "Ah, we have these challenges here"? Was it the evolution of the Telco stack and Dell said, "We can come in with Red Hat and solve this problem"? And what's been some of their feedback? >> Yeah, it really comes down to what Honore said about, okay, you know, when we are looking at day zero, which is primarily your design, how much time savings can we do by creating that stack for them, right? We have industry experts designing that Core stack that's optimized for different levels of spectrum. When we do that we save a lot of time in terms of FTE hours for our architects, our operators, and then it goes into day one, right? Which is the deployment aspect for saving tons of hours for our operators by being able to deploy this. Speed to market is key. That ultimately ends up in, you know, faster time to revenue for our customers, right? So it's, when they see that we've already done the pre-work that they don't have to, that's what really resonates for them in terms of that, yeah. >> Honore, Lisa and I happen to be veterans of the Cloud native space, and what we heard from a lot of the folks in that ecosystem is that there is a massive hunger for developers to be able to deploy and manage and orchestrate environments that consist of Cloud native application infrastructure, microservices. >> Right. >> What we've heard here is that 5G equals Cloud native application stacks. Is that a fair assessment of the environment? And what are you seeing from a supply and demand for that kind of labor perspective? Is there still a hunger for those folks who develop in that space? >> Well, there is, because the very nature of an open source, Kubernetes-based container platform, which is what OpenShift is, the very nature of it is to open up that code so that developers can have access to the code to develop the workloads to the platform, right? And so, again, the combination of bringing together the Dell infrastructure with the Red Hat software, it doesn't change anything. The developer, the development community still has access to that same container platform to develop to, you know, Cloud native types of application. And you know, OpenShift is Red Hat's hybrid Cloud platform. So it runs on-prem, it runs in the public Cloud, it runs at the edge, it runs at the far edge. So any of the development community that's trying to develop Cloud native applications can develop it on this platform as they would if they were developing on an OpenShift platform in the public Cloud. >> So in "The Graduate", the advice to the graduate was, "Plastics." Plastics. As someone who has more children than I can remember, I forget how many kids I have. >> Four. >> That's right, I have four. That's right. (laughing) Three in college and grad school already at this point. Cloud native, I don't know. Kubernetes definitely a field that's going to, it's got some legs? >> Yes. >> Okay. So I can get 'em off my payroll quickly. >> Honore: Yes, yes. (laughing) >> Okay, good to know. Good to know. Any thoughts on that open Cloud native world? >> You know, there's so many changes that's going to happen in Kubernetes and services that you got to be able to update quickly. CICD, obviously the topic is huge. How quickly can we keep these systems up to date with new releases, changes? That's a great thing about an engineered system is that we do provide that lifecycle management for three to five years through this engagement with our customers. So we're constantly keeping them up with the latest and the greatest. >> David: Well do those customers have that expertise in-house, though? Do they have that now? Or is this a seismic cultural shift in those environments? >> Well, you know, they do have a lot of that experience, but it takes a lot of that time, and we're taking that off of their plate and putting that within us on our system, within our engineered system, and doing that automatically for them. And so they don't have to check in and try to understand what the release certification matrix is. Every quarter we're providing that to them. We're communicating out to the operator, telling them what's coming up latest and greatest, not only in terms of the software but the hardware and how to optimize it all together. That's the beauty of these systems. These are five year relationships with our operators that we're providing that lifecycle management end to end, for years to come. >> Lisa: So last question. You talked about joint GTM availability. When can operators get their hands on this? >> Yes. Yes. It's currently slated for early September release. >> Lisa: Awesome. So sometime this year? >> Yes. >> Well guys, thank you so much for talking with us today about Dell, Red Hat, what you're doing to really help evolve the telecom stack. We appreciate it. Next time come back with a customer, we can dig into it. That'd be fun. >> We sure will, absolutely. That may happen today actually, a little bit later. Not to let the cat out the bag, but good news. >> All right, well, geez, you're going to want to stick around. Thank you so much for your time. For our guests and for Dave Nicholson. This is Lisa Martin of theCUBE at MWC23 from Barcelona, Spain. We'll be back after a short break. (calm music)
SUMMARY :
that drive human progress." at the Theater of Barcelona. of the telecom stock. accelerate the time to revenue. and how it's addressing the complexities And so at the end of the day, the IT space to technologies in the marketplace as we and the licensing for that support. that again reduces the operating expense, about some of the numbers, in the FTE hours around TCO, in the world is having that means choice. the Red Hat-infused stack versus And all of that of course equates to And so that's one of the of the infra blocks? and just assume that we nailed and the value of the partnership Was it the evolution of the Which is the deployment aspect of the Cloud native space, of the environment? So any of the development So in "The Graduate", the Three in college and grad (laughing) Okay, good to know. is that we do provide but the hardware and how to Lisa: So last question. It's currently slated for So sometime this year? help evolve the telecom stack. the bag, but good news. going to want to stick around.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
ACG | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Tony Jeffries | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Honore | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
five year | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Charlie Brown | PERSON | 0.99+ |
Honore Labourdette | PERSON | 0.99+ |
four | QUANTITY | 0.99+ |
OTEL | ORGANIZATION | 0.99+ |
third day | QUANTITY | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Barcelona, Spain | LOCATION | 0.99+ |
last week | DATE | 0.99+ |
One | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Three | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
early September | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
one company | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
Four | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
Red Hat | TITLE | 0.98+ |
Red Hat OpenShift | TITLE | 0.98+ |
this year | DATE | 0.98+ |
OpenShift | TITLE | 0.97+ |
Analyst Predictions 2023: The Future of Data Management
(upbeat music) >> Hello, this is Dave Valente with theCUBE, and one of the most gratifying aspects of my role as a host of "theCUBE TV" is I get to cover a wide range of topics. And quite often, we're able to bring to our program a level of expertise that allows us to more deeply explore and unpack some of the topics that we cover throughout the year. And one of our favorite topics, of course, is data. Now, in 2021, after being in isolation for the better part of two years, a group of industry analysts met up at AWS re:Invent and started a collaboration to look at the trends in data and predict what some likely outcomes will be for the coming year. And it resulted in a very popular session that we had last year focused on the future of data management. And I'm very excited and pleased to tell you that the 2023 edition of that predictions episode is back, and with me are five outstanding market analyst, Sanjeev Mohan of SanjMo, Tony Baer of dbInsight, Carl Olofson from IDC, Dave Menninger from Ventana Research, and Doug Henschen, VP and Principal Analyst at Constellation Research. Now, what is it that we're calling you, guys? A data pack like the rat pack? No, no, no, no, that's not it. It's the data crowd, the data crowd, and the crowd includes some of the best minds in the data analyst community. They'll discuss how data management is evolving and what listeners should prepare for in 2023. Guys, welcome back. Great to see you. >> Good to be here. >> Thank you. >> Thanks, Dave. (Tony and Dave faintly speaks) >> All right, before we get into 2023 predictions, we thought it'd be good to do a look back at how we did in 2022 and give a transparent assessment of those predictions. So, let's get right into it. We're going to bring these up here, the predictions from 2022, they're color-coded red, yellow, and green to signify the degree of accuracy. And I'm pleased to report there's no red. Well, maybe some of you will want to debate that grading system. But as always, we want to be open, so you can decide for yourselves. So, we're going to ask each analyst to review their 2022 prediction and explain their rating and what evidence they have that led them to their conclusion. So, Sanjeev, please kick it off. Your prediction was data governance becomes key. I know that's going to knock you guys over, but elaborate, because you had more detail when you double click on that. >> Yeah, absolutely. Thank you so much, Dave, for having us on the show today. And we self-graded ourselves. I could have very easily made my prediction from last year green, but I mentioned why I left it as yellow. I totally fully believe that data governance was in a renaissance in 2022. And why do I say that? You have to look no further than AWS launching its own data catalog called DataZone. Before that, mid-year, we saw Unity Catalog from Databricks went GA. So, overall, I saw there was tremendous movement. When you see these big players launching a new data catalog, you know that they want to be in this space. And this space is highly critical to everything that I feel we will talk about in today's call. Also, if you look at established players, I spoke at Collibra's conference, data.world, work closely with Alation, Informatica, a bunch of other companies, they all added tremendous new capabilities. So, it did become key. The reason I left it as yellow is because I had made a prediction that Collibra would go IPO, and it did not. And I don't think anyone is going IPO right now. The market is really, really down, the funding in VC IPO market. But other than that, data governance had a banner year in 2022. >> Yeah. Well, thank you for that. And of course, you saw data clean rooms being announced at AWS re:Invent, so more evidence. And I like how the fact that you included in your predictions some things that were binary, so you dinged yourself there. So, good job. Okay, Tony Baer, you're up next. Data mesh hits reality check. As you see here, you've given yourself a bright green thumbs up. (Tony laughing) Okay. Let's hear why you feel that was the case. What do you mean by reality check? >> Okay. Thanks, Dave, for having us back again. This is something I just wrote and just tried to get away from, and this just a topic just won't go away. I did speak with a number of folks, early adopters and non-adopters during the year. And I did find that basically that it pretty much validated what I was expecting, which was that there was a lot more, this has now become a front burner issue. And if I had any doubt in my mind, the evidence I would point to is what was originally intended to be a throwaway post on LinkedIn, which I just quickly scribbled down the night before leaving for re:Invent. I was packing at the time, and for some reason, I was doing Google search on data mesh. And I happened to have tripped across this ridiculous article, I will not say where, because it doesn't deserve any publicity, about the eight (Dave laughing) best data mesh software companies of 2022. (Tony laughing) One of my predictions was that you'd see data mesh washing. And I just quickly just hopped on that maybe three sentences and wrote it at about a couple minutes saying this is hogwash, essentially. (laughs) And that just reun... And then, I left for re:Invent. And the next night, when I got into my Vegas hotel room, I clicked on my computer. I saw a 15,000 hits on that post, which was the most hits of any single post I put all year. And the responses were wildly pro and con. So, it pretty much validates my expectation in that data mesh really did hit a lot more scrutiny over this past year. >> Yeah, thank you for that. I remember that article. I remember rolling my eyes when I saw it, and then I recently, (Tony laughing) I talked to Walmart and they actually invoked Martin Fowler and they said that they're working through their data mesh. So, it takes a really lot of thought, and it really, as we've talked about, is really as much an organizational construct. You're not buying data mesh >> Bingo. >> to your point. Okay. Thank you, Tony. Carl Olofson, here we go. You've graded yourself a yellow in the prediction of graph databases. Take off. Please elaborate. >> Yeah, sure. So, I realized in looking at the prediction that it seemed to imply that graph databases could be a major factor in the data world in 2022, which obviously didn't become the case. It was an error on my part in that I should have said it in the right context. It's really a three to five-year time period that graph databases will really become significant, because they still need accepted methodologies that can be applied in a business context as well as proper tools in order for people to be able to use them seriously. But I stand by the idea that it is taking off, because for one thing, Neo4j, which is the leading independent graph database provider, had a very good year. And also, we're seeing interesting developments in terms of things like AWS with Neptune and with Oracle providing graph support in Oracle database this past year. Those things are, as I said, growing gradually. There are other companies like TigerGraph and so forth, that deserve watching as well. But as far as becoming mainstream, it's going to be a few years before we get all the elements together to make that happen. Like any new technology, you have to create an environment in which ordinary people without a whole ton of technical training can actually apply the technology to solve business problems. >> Yeah, thank you for that. These specialized databases, graph databases, time series databases, you see them embedded into mainstream data platforms, but there's a place for these specialized databases, I would suspect we're going to see new types of databases emerge with all this cloud sprawl that we have and maybe to the edge. >> Well, part of it is that it's not as specialized as you might think it. You can apply graphs to great many workloads and use cases. It's just that people have yet to fully explore and discover what those are. >> Yeah. >> And so, it's going to be a process. (laughs) >> All right, Dave Menninger, streaming data permeates the landscape. You gave yourself a yellow. Why? >> Well, I couldn't think of a appropriate combination of yellow and green. Maybe I should have used chartreuse, (Dave laughing) but I was probably a little hard on myself making it yellow. This is another type of specialized data processing like Carl was talking about graph databases is a stream processing, and nearly every data platform offers streaming capabilities now. Often, it's based on Kafka. If you look at Confluent, their revenues have grown at more than 50%, continue to grow at more than 50% a year. They're expected to do more than half a billion dollars in revenue this year. But the thing that hasn't happened yet, and to be honest, they didn't necessarily expect it to happen in one year, is that streaming hasn't become the default way in which we deal with data. It's still a sidecar to data at rest. And I do expect that we'll continue to see streaming become more and more mainstream. I do expect perhaps in the five-year timeframe that we will first deal with data as streaming and then at rest, but the worlds are starting to merge. And we even see some vendors bringing products to market, such as K2View, Hazelcast, and RisingWave Labs. So, in addition to all those core data platform vendors adding these capabilities, there are new vendors approaching this market as well. >> I like the tough grading system, and it's not trivial. And when you talk to practitioners doing this stuff, there's still some complications in the data pipeline. And so, but I think, you're right, it probably was a yellow plus. Doug Henschen, data lakehouses will emerge as dominant. When you talk to people about lakehouses, practitioners, they all use that term. They certainly use the term data lake, but now, they're using lakehouse more and more. What's your thoughts on here? Why the green? What's your evidence there? >> Well, I think, I was accurate. I spoke about it specifically as something that vendors would be pursuing. And we saw yet more lakehouse advocacy in 2022. Google introduced its BigLake service alongside BigQuery. Salesforce introduced Genie, which is really a lakehouse architecture. And it was a safe prediction to say vendors are going to be pursuing this in that AWS, Cloudera, Databricks, Microsoft, Oracle, SAP, Salesforce now, IBM, all advocate this idea of a single platform for all of your data. Now, the trend was also supported in 2023, in that we saw a big embrace of Apache Iceberg in 2022. That's a structured table format. It's used with these lakehouse platforms. It's open, so it ensures portability and it also ensures performance. And that's a structured table that helps with the warehouse side performance. But among those announcements, Snowflake, Google, Cloud Era, SAP, Salesforce, IBM, all embraced Iceberg. But keep in mind, again, I'm talking about this as something that vendors are pursuing as their approach. So, they're advocating end users. It's very cutting edge. I'd say the top, leading edge, 5% of of companies have really embraced the lakehouse. I think, we're now seeing the fast followers, the next 20 to 25% of firms embracing this idea and embracing a lakehouse architecture. I recall Christian Kleinerman at the big Snowflake event last summer, making the announcement about Iceberg, and he asked for a show of hands for any of you in the audience at the keynote, have you heard of Iceberg? And just a smattering of hands went up. So, the vendors are ahead of the curve. They're pushing this trend, and we're now seeing a little bit more mainstream uptake. >> Good. Doug, I was there. It was you, me, and I think, two other hands were up. That was just humorous. (Doug laughing) All right, well, so I liked the fact that we had some yellow and some green. When you think about these things, there's the prediction itself. Did it come true or not? There are the sub predictions that you guys make, and of course, the degree of difficulty. So, thank you for that open assessment. All right, let's get into the 2023 predictions. Let's bring up the predictions. Sanjeev, you're going first. You've got a prediction around unified metadata. What's the prediction, please? >> So, my prediction is that metadata space is currently a mess. It needs to get unified. There are too many use cases of metadata, which are being addressed by disparate systems. For example, data quality has become really big in the last couple of years, data observability, the whole catalog space is actually, people don't like to use the word data catalog anymore, because data catalog sounds like it's a catalog, a museum, if you may, of metadata that you go and admire. So, what I'm saying is that in 2023, we will see that metadata will become the driving force behind things like data ops, things like orchestration of tasks using metadata, not rules. Not saying that if this fails, then do this, if this succeeds, go do that. But it's like getting to the metadata level, and then making a decision as to what to orchestrate, what to automate, how to do data quality check, data observability. So, this space is starting to gel, and I see there'll be more maturation in the metadata space. Even security privacy, some of these topics, which are handled separately. And I'm just talking about data security and data privacy. I'm not talking about infrastructure security. These also need to merge into a unified metadata management piece with some knowledge graph, semantic layer on top, so you can do analytics on it. So, it's no longer something that sits on the side, it's limited in its scope. It is actually the very engine, the very glue that is going to connect data producers and consumers. >> Great. Thank you for that. Doug. Doug Henschen, any thoughts on what Sanjeev just said? Do you agree? Do you disagree? >> Well, I agree with many aspects of what he says. I think, there's a huge opportunity for consolidation and streamlining of these as aspects of governance. Last year, Sanjeev, you said something like, we'll see more people using catalogs than BI. And I have to disagree. I don't think this is a category that's headed for mainstream adoption. It's a behind the scenes activity for the wonky few, or better yet, companies want machine learning and automation to take care of these messy details. We've seen these waves of management technologies, some of the latest data observability, customer data platform, but they failed to sweep away all the earlier investments in data quality and master data management. So, yes, I hope the latest tech offers, glimmers that there's going to be a better, cleaner way of addressing these things. But to my mind, the business leaders, including the CIO, only want to spend as much time and effort and money and resources on these sorts of things to avoid getting breached, ending up in headlines, getting fired or going to jail. So, vendors bring on the ML and AI smarts and the automation of these sorts of activities. >> So, if I may say something, the reason why we have this dichotomy between data catalog and the BI vendors is because data catalogs are very soon, not going to be standalone products, in my opinion. They're going to get embedded. So, when you use a BI tool, you'll actually use the catalog to find out what is it that you want to do, whether you are looking for data or you're looking for an existing dashboard. So, the catalog becomes embedded into the BI tool. >> Hey, Dave Menninger, sometimes you have some data in your back pocket. Do you have any stats (chuckles) on this topic? >> No, I'm glad you asked, because I'm going to... Now, data catalogs are something that's interesting. Sanjeev made a statement that data catalogs are falling out of favor. I don't care what you call them. They're valuable to organizations. Our research shows that organizations that have adequate data catalog technologies are three times more likely to express satisfaction with their analytics for just the reasons that Sanjeev was talking about. You can find what you want, you know you're getting the right information, you know whether or not it's trusted. So, those are good things. So, we expect to see the capabilities, whether it's embedded or separate. We expect to see those capabilities continue to permeate the market. >> And a lot of those catalogs are driven now by machine learning and things. So, they're learning from those patterns of usage by people when people use the data. (airy laughs) >> All right. Okay. Thank you, guys. All right. Let's move on to the next one. Tony Bear, let's bring up the predictions. You got something in here about the modern data stack. We need to rethink it. Is the modern data stack getting long at the tooth? Is it not so modern anymore? >> I think, in a way, it's got almost too modern. It's gotten too, I don't know if it's being long in the tooth, but it is getting long. The modern data stack, it's traditionally been defined as basically you have the data platform, which would be the operational database and the data warehouse. And in between, you have all the tools that are necessary to essentially get that data from the operational realm or the streaming realm for that matter into basically the data warehouse, or as we might be seeing more and more, the data lakehouse. And I think, what's important here is that, or I think, we have seen a lot of progress, and this would be in the cloud, is with the SaaS services. And especially you see that in the modern data stack, which is like all these players, not just the MongoDBs or the Oracles or the Amazons have their database platforms. You see they have the Informatica's, and all the other players there in Fivetrans have their own SaaS services. And within those SaaS services, you get a certain degree of simplicity, which is it takes all the housekeeping off the shoulders of the customers. That's a good thing. The problem is that what we're getting to unfortunately is what I would call lots of islands of simplicity, which means that it leads it (Dave laughing) to the customer to have to integrate or put all that stuff together. It's a complex tool chain. And so, what we really need to think about here, we have too many pieces. And going back to the discussion of catalogs, it's like we have so many catalogs out there, which one do we use? 'Cause chances are of most organizations do not rely on a single catalog at this point. What I'm calling on all the data providers or all the SaaS service providers, is to literally get it together and essentially make this modern data stack less of a stack, make it more of a blending of an end-to-end solution. And that can come in a number of different ways. Part of it is that we're data platform providers have been adding services that are adjacent. And there's some very good examples of this. We've seen progress over the past year or so. For instance, MongoDB integrating search. It's a very common, I guess, sort of tool that basically, that the applications that are developed on MongoDB use, so MongoDB then built it into the database rather than requiring an extra elastic search or open search stack. Amazon just... AWS just did the zero-ETL, which is a first step towards simplifying the process from going from Aurora to Redshift. You've seen same thing with Google, BigQuery integrating basically streaming pipelines. And you're seeing also a lot of movement in database machine learning. So, there's some good moves in this direction. I expect to see more than this year. Part of it's from basically the SaaS platform is adding some functionality. But I also see more importantly, because you're never going to get... This is like asking your data team and your developers, herding cats to standardizing the same tool. In most organizations, that is not going to happen. So, take a look at the most popular combinations of tools and start to come up with some pre-built integrations and pre-built orchestrations, and offer some promotional pricing, maybe not quite two for, but in other words, get two products for the price of two services or for the price of one and a half. I see a lot of potential for this. And it's to me, if the class was to simplify things, this is the next logical step and I expect to see more of this here. >> Yeah, and you see in Oracle, MySQL heat wave, yet another example of eliminating that ETL. Carl Olofson, today, if you think about the data stack and the application stack, they're largely separate. Do you have any thoughts on how that's going to play out? Does that play into this prediction? What do you think? >> Well, I think, that the... I really like Tony's phrase, islands of simplification. It really says (Tony chuckles) what's going on here, which is that all these different vendors you ask about, about how these stacks work. All these different vendors have their own stack vision. And you can... One application group is going to use one, and another application group is going to use another. And some people will say, let's go to, like you go to a Informatica conference and they say, we should be the center of your universe, but you can't connect everything in your universe to Informatica, so you need to use other things. So, the challenge is how do we make those things work together? As Tony has said, and I totally agree, we're never going to get to the point where people standardize on one organizing system. So, the alternative is to have metadata that can be shared amongst those systems and protocols that allow those systems to coordinate their operations. This is standard stuff. It's not easy. But the motive for the vendors is that they can become more active critical players in the enterprise. And of course, the motive for the customer is that things will run better and more completely. So, I've been looking at this in terms of two kinds of metadata. One is the meaning metadata, which says what data can be put together. The other is the operational metadata, which says basically where did it come from? Who created it? What's its current state? What's the security level? Et cetera, et cetera, et cetera. The good news is the operational stuff can actually be done automatically, whereas the meaning stuff requires some human intervention. And as we've already heard from, was it Doug, I think, people are disinclined to put a lot of definition into meaning metadata. So, that may be the harder one, but coordination is key. This problem has been with us forever, but with the addition of new data sources, with streaming data with data in different formats, the whole thing has, it's been like what a customer of mine used to say, "I understand your product can make my system run faster, but right now I just feel I'm putting my problems on roller skates. (chuckles) I don't need that to accelerate what's already not working." >> Excellent. Okay, Carl, let's stay with you. I remember in the early days of the big data movement, Hadoop movement, NoSQL was the big thing. And I remember Amr Awadallah said to us in theCUBE that SQL is the killer app for big data. So, your prediction here, if we bring that up is SQL is back. Please elaborate. >> Yeah. So, of course, some people would say, well, it never left. Actually, that's probably closer to true, but in the perception of the marketplace, there's been all this noise about alternative ways of storing, retrieving data, whether it's in key value stores or document databases and so forth. We're getting a lot of messaging that for a while had persuaded people that, oh, we're not going to do analytics in SQL anymore. We're going to use Spark for everything, except that only a handful of people know how to use Spark. Oh, well, that's a problem. Well, how about, and for ordinary conventional business analytics, Spark is like an over-engineered solution to the problem. SQL works just great. What's happened in the past couple years, and what's going to continue to happen is that SQL is insinuating itself into everything we're seeing. We're seeing all the major data lake providers offering SQL support, whether it's Databricks or... And of course, Snowflake is loving this, because that is what they do, and their success is certainly points to the success of SQL, even MongoDB. And we were all, I think, at the MongoDB conference where on one day, we hear SQL is dead. They're not teaching SQL in schools anymore, and this kind of thing. And then, a couple days later at the same conference, they announced we're adding a new analytic capability-based on SQL. But didn't you just say SQL is dead? So, the reality is that SQL is better understood than most other methods of certainly of retrieving and finding data in a data collection, no matter whether it happens to be relational or non-relational. And even in systems that are very non-relational, such as graph and document databases, their query languages are being built or extended to resemble SQL, because SQL is something people understand. >> Now, you remember when we were in high school and you had had to take the... Your debating in the class and you were forced to take one side and defend it. So, I was was at a Vertica conference one time up on stage with Curt Monash, and I had to take the NoSQL, the world is changing paradigm shift. And so just to be controversial, I said to him, Curt Monash, I said, who really needs acid compliance anyway? Tony Baer. And so, (chuckles) of course, his head exploded, but what are your thoughts (guests laughing) on all this? >> Well, my first thought is congratulations, Dave, for surviving being up on stage with Curt Monash. >> Amen. (group laughing) >> I definitely would concur with Carl. We actually are definitely seeing a SQL renaissance and if there's any proof of the pudding here, I see lakehouse is being icing on the cake. As Doug had predicted last year, now, (clears throat) for the record, I think, Doug was about a year ahead of time in his predictions that this year is really the year that I see (clears throat) the lakehouse ecosystems really firming up. You saw the first shots last year. But anyway, on this, data lakes will not go away. I've actually, I'm on the home stretch of doing a market, a landscape on the lakehouse. And lakehouse will not replace data lakes in terms of that. There is the need for those, data scientists who do know Python, who knows Spark, to go in there and basically do their thing without all the restrictions or the constraints of a pre-built, pre-designed table structure. I get that. Same thing for developing models. But on the other hand, there is huge need. Basically, (clears throat) maybe MongoDB was saying that we're not teaching SQL anymore. Well, maybe we have an oversupply of SQL developers. Well, I'm being facetious there, but there is a huge skills based in SQL. Analytics have been built on SQL. They came with lakehouse and why this really helps to fuel a SQL revival is that the core need in the data lake, what brought on the lakehouse was not so much SQL, it was a need for acid. And what was the best way to do it? It was through a relational table structure. So, the whole idea of acid in the lakehouse was not to turn it into a transaction database, but to make the data trusted, secure, and more granularly governed, where you could govern down to column and row level, which you really could not do in a data lake or a file system. So, while lakehouse can be queried in a manner, you can go in there with Python or whatever, it's built on a relational table structure. And so, for that end, for those types of data lakes, it becomes the end state. You cannot bypass that table structure as I learned the hard way during my research. So, the bottom line I'd say here is that lakehouse is proof that we're starting to see the revenge of the SQL nerds. (Dave chuckles) >> Excellent. Okay, let's bring up back up the predictions. Dave Menninger, this one's really thought-provoking and interesting. We're hearing things like data as code, new data applications, machines actually generating plans with no human involvement. And your prediction is the definition of data is expanding. What do you mean by that? >> So, I think, for too long, we've thought about data as the, I would say facts that we collect the readings off of devices and things like that, but data on its own is really insufficient. Organizations need to manipulate that data and examine derivatives of the data to really understand what's happening in their organization, why has it happened, and to project what might happen in the future. And my comment is that these data derivatives need to be supported and managed just like the data needs to be managed. We can't treat this as entirely separate. Think about all the governance discussions we've had. Think about the metadata discussions we've had. If you separate these things, now you've got more moving parts. We're talking about simplicity and simplifying the stack. So, if these things are treated separately, it creates much more complexity. I also think it creates a little bit of a myopic view on the part of the IT organizations that are acquiring these technologies. They need to think more broadly. So, for instance, metrics. Metric stores are becoming much more common part of the tooling that's part of a data platform. Similarly, feature stores are gaining traction. So, those are designed to promote the reuse and consistency across the AI and ML initiatives. The elements that are used in developing an AI or ML model. And let me go back to metrics and just clarify what I mean by that. So, any type of formula involving the data points. I'm distinguishing metrics from features that are used in AI and ML models. And the data platforms themselves are increasingly managing the models as an element of data. So, just like figuring out how to calculate a metric. Well, if you're going to have the features associated with an AI and ML model, you probably need to be managing the model that's associated with those features. The other element where I see expansion is around external data. Organizations for decades have been focused on the data that they generate within their own organization. We see more and more of these platforms acquiring and publishing data to external third-party sources, whether they're within some sort of a partner ecosystem or whether it's a commercial distribution of that information. And our research shows that when organizations use external data, they derive even more benefits from the various analyses that they're conducting. And the last great frontier in my opinion on this expanding world of data is the world of driver-based planning. Very few of the major data platform providers provide these capabilities today. These are the types of things you would do in a spreadsheet. And we all know the issues associated with spreadsheets. They're hard to govern, they're error-prone. And so, if we can take that type of analysis, collecting the occupancy of a rental property, the projected rise in rental rates, the fluctuations perhaps in occupancy, the interest rates associated with financing that property, we can project forward. And that's a very common thing to do. What the income might look like from that property income, the expenses, we can plan and purchase things appropriately. So, I think, we need this broader purview and I'm beginning to see some of those things happen. And the evidence today I would say, is more focused around the metric stores and the feature stores starting to see vendors offer those capabilities. And we're starting to see the ML ops elements of managing the AI and ML models find their way closer to the data platforms as well. >> Very interesting. When I hear metrics, I think of KPIs, I think of data apps, orchestrate people and places and things to optimize around a set of KPIs. It sounds like a metadata challenge more... Somebody once predicted they'll have more metadata than data. Carl, what are your thoughts on this prediction? >> Yeah, I think that what Dave is describing as data derivatives is in a way, another word for what I was calling operational metadata, which not about the data itself, but how it's used, where it came from, what the rules are governing it, and that kind of thing. If you have a rich enough set of those things, then not only can you do a model of how well your vacation property rental may do in terms of income, but also how well your application that's measuring that is doing for you. In other words, how many times have I used it, how much data have I used and what is the relationship between the data that I've used and the benefits that I've derived from using it? Well, we don't have ways of doing that. What's interesting to me is that folks in the content world are way ahead of us here, because they have always tracked their content using these kinds of attributes. Where did it come from? When was it created, when was it modified? Who modified it? And so on and so forth. We need to do more of that with the structure data that we have, so that we can track what it's used. And also, it tells us how well we're doing with it. Is it really benefiting us? Are we being efficient? Are there improvements in processes that we need to consider? Because maybe data gets created and then it isn't used or it gets used, but it gets altered in some way that actually misleads people. (laughs) So, we need the mechanisms to be able to do that. So, I would say that that's... And I'd say that it's true that we need that stuff. I think, that starting to expand is probably the right way to put it. It's going to be expanding for some time. I think, we're still a distance from having all that stuff really working together. >> Maybe we should say it's gestating. (Dave and Carl laughing) >> Sorry, if I may- >> Sanjeev, yeah, I was going to say this... Sanjeev, please comment. This sounds to me like it supports Zhamak Dehghani's principles, but please. >> Absolutely. So, whether we call it data mesh or not, I'm not getting into that conversation, (Dave chuckles) but data (audio breaking) (Tony laughing) everything that I'm hearing what Dave is saying, Carl, this is the year when data products will start to take off. I'm not saying they'll become mainstream. They may take a couple of years to become so, but this is data products, all this thing about vacation rentals and how is it doing, that data is coming from different sources. I'm packaging it into our data product. And to Carl's point, there's a whole operational metadata associated with it. The idea is for organizations to see things like developer productivity, how many releases am I doing of this? What data products are most popular? I'm actually in right now in the process of formulating this concept that just like we had data catalogs, we are very soon going to be requiring data products catalog. So, I can discover these data products. I'm not just creating data products left, right, and center. I need to know, do they already exist? What is the usage? If no one is using a data product, maybe I want to retire and save cost. But this is a data product. Now, there's a associated thing that is also getting debated quite a bit called data contracts. And a data contract to me is literally just formalization of all these aspects of a product. How do you use it? What is the SLA on it, what is the quality that I am prescribing? So, data product, in my opinion, shifts the conversation to the consumers or to the business people. Up to this point when, Dave, you're talking about data and all of data discovery curation is a very data producer-centric. So, I think, we'll see a shift more into the consumer space. >> Yeah. Dave, can I just jump in there just very quickly there, which is that what Sanjeev has been saying there, this is really central to what Zhamak has been talking about. It's basically about making, one, data products are about the lifecycle management of data. Metadata is just elemental to that. And essentially, one of the things that she calls for is making data products discoverable. That's exactly what Sanjeev was talking about. >> By the way, did everyone just no notice how Sanjeev just snuck in another prediction there? So, we've got- >> Yeah. (group laughing) >> But you- >> Can we also say that he snuck in, I think, the term that we'll remember today, which is metadata museums. >> Yeah, but- >> Yeah. >> And also comment to, Tony, to your last year's prediction, you're really talking about it's not something that you're going to buy from a vendor. >> No. >> It's very specific >> Mm-hmm. >> to an organization, their own data product. So, touche on that one. Okay, last prediction. Let's bring them up. Doug Henschen, BI analytics is headed to embedding. What does that mean? >> Well, we all know that conventional BI dashboarding reporting is really commoditized from a vendor perspective. It never enjoyed truly mainstream adoption. Always that 25% of employees are really using these things. I'm seeing rising interest in embedding concise analytics at the point of decision or better still, using analytics as triggers for automation and workflows, and not even necessitating human interaction with visualizations, for example, if we have confidence in the analytics. So, leading companies are pushing for next generation applications, part of this low-code, no-code movement we've seen. And they want to build that decision support right into the app. So, the analytic is right there. Leading enterprise apps vendors, Salesforce, SAP, Microsoft, Oracle, they're all building smart apps with the analytics predictions, even recommendations built into these applications. And I think, the progressive BI analytics vendors are supporting this idea of driving insight to action, not necessarily necessitating humans interacting with it if there's confidence. So, we want prediction, we want embedding, we want automation. This low-code, no-code development movement is very important to bringing the analytics to where people are doing their work. We got to move beyond the, what I call swivel chair integration, between where people do their work and going off to separate reports and dashboards, and having to interpret and analyze before you can go back and do take action. >> And Dave Menninger, today, if you want, analytics or you want to absorb what's happening in the business, you typically got to go ask an expert, and then wait. So, what are your thoughts on Doug's prediction? >> I'm in total agreement with Doug. I'm going to say that collectively... So, how did we get here? I'm going to say collectively as an industry, we made a mistake. We made BI and analytics separate from the operational systems. Now, okay, it wasn't really a mistake. We were limited by the technology available at the time. Decades ago, we had to separate these two systems, so that the analytics didn't impact the operations. You don't want the operations preventing you from being able to do a transaction. But we've gone beyond that now. We can bring these two systems and worlds together and organizations recognize that need to change. As Doug said, the majority of the workforce and the majority of organizations doesn't have access to analytics. That's wrong. (chuckles) We've got to change that. And one of the ways that's going to change is with embedded analytics. 2/3 of organizations recognize that embedded analytics are important and it even ranks higher in importance than AI and ML in those organizations. So, it's interesting. This is a really important topic to the organizations that are consuming these technologies. The good news is it works. Organizations that have embraced embedded analytics are more comfortable with self-service than those that have not, as opposed to turning somebody loose, in the wild with the data. They're given a guided path to the data. And the research shows that 65% of organizations that have adopted embedded analytics are comfortable with self-service compared with just 40% of organizations that are turning people loose in an ad hoc way with the data. So, totally behind Doug's predictions. >> Can I just break in with something here, a comment on what Dave said about what Doug said, which (laughs) is that I totally agree with what you said about embedded analytics. And at IDC, we made a prediction in our future intelligence, future of intelligence service three years ago that this was going to happen. And the thing that we're waiting for is for developers to build... You have to write the applications to work that way. It just doesn't happen automagically. Developers have to write applications that reference analytic data and apply it while they're running. And that could involve simple things like complex queries against the live data, which is through something that I've been calling analytic transaction processing. Or it could be through something more sophisticated that involves AI operations as Doug has been suggesting, where the result is enacted pretty much automatically unless the scores are too low and you need to have a human being look at it. So, I think that that is definitely something we've been watching for. I'm not sure how soon it will come, because it seems to take a long time for people to change their thinking. But I think, as Dave was saying, once they do and they apply these principles in their application development, the rewards are great. >> Yeah, this is very much, I would say, very consistent with what we were talking about, I was talking about before, about basically rethinking the modern data stack and going into more of an end-to-end solution solution. I think, that what we're talking about clearly here is operational analytics. There'll still be a need for your data scientists to go offline just in their data lakes to do all that very exploratory and that deep modeling. But clearly, it just makes sense to bring operational analytics into where people work into their workspace and further flatten that modern data stack. >> But with all this metadata and all this intelligence, we're talking about injecting AI into applications, it does seem like we're entering a new era of not only data, but new era of apps. Today, most applications are about filling forms out or codifying processes and require a human input. And it seems like there's enough data now and enough intelligence in the system that the system can actually pull data from, whether it's the transaction system, e-commerce, the supply chain, ERP, and actually do something with that data without human involvement, present it to humans. Do you guys see this as a new frontier? >> I think, that's certainly- >> Very much so, but it's going to take a while, as Carl said. You have to design it, you have to get the prediction into the system, you have to get the analytics at the point of decision has to be relevant to that decision point. >> And I also recall basically a lot of the ERP vendors back like 10 years ago, we're promising that. And the fact that we're still looking at the promises shows just how difficult, how much of a challenge it is to get to what Doug's saying. >> One element that could be applied in this case is (indistinct) architecture. If applications are developed that are event-driven rather than following the script or sequence that some programmer or designer had preconceived, then you'll have much more flexible applications. You can inject decisions at various points using this technology much more easily. It's a completely different way of writing applications. And it actually involves a lot more data, which is why we should all like it. (laughs) But in the end (Tony laughing) it's more stable, it's easier to manage, easier to maintain, and it's actually more efficient, which is the result of an MIT study from about 10 years ago, and still, we are not seeing this come to fruition in most business applications. >> And do you think it's going to require a new type of data platform database? Today, data's all far-flung. We see that's all over the clouds and at the edge. Today, you cache- >> We need a super cloud. >> You cache that data, you're throwing into memory. I mentioned, MySQL heat wave. There are other examples where it's a brute force approach, but maybe we need new ways of laying data out on disk and new database architectures, and just when we thought we had it all figured out. >> Well, without referring to disk, which to my mind, is almost like talking about cave painting. I think, that (Dave laughing) all the things that have been mentioned by all of us today are elements of what I'm talking about. In other words, the whole improvement of the data mesh, the improvement of metadata across the board and improvement of the ability to track data and judge its freshness the way we judge the freshness of a melon or something like that, to determine whether we can still use it. Is it still good? That kind of thing. Bringing together data from multiple sources dynamically and real-time requires all the things we've been talking about. All the predictions that we've talked about today add up to elements that can make this happen. >> Well, guys, it's always tremendous to get these wonderful minds together and get your insights, and I love how it shapes the outcome here of the predictions, and let's see how we did. We're going to leave it there. I want to thank Sanjeev, Tony, Carl, David, and Doug. Really appreciate the collaboration and thought that you guys put into these sessions. Really, thank you. >> Thank you. >> Thanks, Dave. >> Thank you for having us. >> Thanks. >> Thank you. >> All right, this is Dave Valente for theCUBE, signing off for now. Follow these guys on social media. Look for coverage on siliconangle.com, theCUBE.net. Thank you for watching. (upbeat music)
SUMMARY :
and pleased to tell you (Tony and Dave faintly speaks) that led them to their conclusion. down, the funding in VC IPO market. And I like how the fact And I happened to have tripped across I talked to Walmart in the prediction of graph databases. But I stand by the idea and maybe to the edge. You can apply graphs to great And so, it's going to streaming data permeates the landscape. and to be honest, I like the tough grading the next 20 to 25% of and of course, the degree of difficulty. that sits on the side, Thank you for that. And I have to disagree. So, the catalog becomes Do you have any stats for just the reasons that And a lot of those catalogs about the modern data stack. and more, the data lakehouse. and the application stack, So, the alternative is to have metadata that SQL is the killer app for big data. but in the perception of the marketplace, and I had to take the NoSQL, being up on stage with Curt Monash. (group laughing) is that the core need in the data lake, And your prediction is the and examine derivatives of the data to optimize around a set of KPIs. that folks in the content world (Dave and Carl laughing) going to say this... shifts the conversation to the consumers And essentially, one of the things (group laughing) the term that we'll remember today, to your last year's prediction, is headed to embedding. and going off to separate happening in the business, so that the analytics didn't And the thing that we're waiting for and that deep modeling. that the system can of decision has to be relevant And the fact that we're But in the end We see that's all over the You cache that data, and improvement of the and I love how it shapes the outcome here Thank you for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Doug Henschen | PERSON | 0.99+ |
Dave Menninger | PERSON | 0.99+ |
Doug | PERSON | 0.99+ |
Carl | PERSON | 0.99+ |
Carl Olofson | PERSON | 0.99+ |
Dave Menninger | PERSON | 0.99+ |
Tony Baer | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
Dave Valente | PERSON | 0.99+ |
Collibra | ORGANIZATION | 0.99+ |
Curt Monash | PERSON | 0.99+ |
Sanjeev Mohan | PERSON | 0.99+ |
Christian Kleinerman | PERSON | 0.99+ |
Dave Valente | PERSON | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Sanjeev | PERSON | 0.99+ |
Constellation Research | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Ventana Research | ORGANIZATION | 0.99+ |
2022 | DATE | 0.99+ |
Hazelcast | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Tony Bear | PERSON | 0.99+ |
25% | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
last year | DATE | 0.99+ |
65% | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
today | DATE | 0.99+ |
five-year | QUANTITY | 0.99+ |
TigerGraph | ORGANIZATION | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
two services | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
David | PERSON | 0.99+ |
RisingWave Labs | ORGANIZATION | 0.99+ |
Deepu Kumar, Tony Abrozie, Ashlee Lane | AWS Executive Summit 2022
>>Now welcome back to the Cube as we continue our coverage here. AWS Reinvent 2022, going out here at the Venetian in Las Vegas. Tens of thousands of attendees. That exhibit Hall is full. Let me tell you, it's been something else. Well, here in the executive summit, sponsored by Accenture. Accenture rather. We're gonna talk about Baptist Health, what's going on with that organization down in South Florida with me. To do that, I have Tony Abro, who's the SVP and Chief Digital and Information Officer. I have Ashley Lane, the managing director of the Accenture Healthcare Practice, and on the far end Poop Kumar, who is the VP and cto Baptist Health Florida won and all. Welcome. Thank you. First off, let's just talk about Baptist Health, the size of your footprint. One and a half million patient visits a year, not a small number. >>That was probably last year's number, but okay. >>Right. But not a small number about your footprint and, and what, I guess the client base basically that you guys are serving in it. >>Absolutely. So we are the largest organization in South Florida system provider and the 11 hospitals soon to be 12, as you said, it's probably about 1.8 million by now. People were, were, were supporting a lot of other units and you know, we're focusing on the four southern counties of South Florida. Okay. >>So got day Broward. Broward, yep. Down that way. Got it. So now let's get to your migration or your cloud transformation. As we're talking about a lot this week, what's been your, I guess, overarching goal, you know, as you worked with Accenture and, and developed a game plan going forward, you know, what was on the front end of that? What was the motivation to say this is the direction we're going to go and this is how we're gonna get there? >>Perfect. So Baptist started a digital transformation initiative before I came about three years ago. The board, the executive steering committee, decided that this is gonna be very important for us to support us, to help our patients and, and consumers. So I was brought in for that digital transformation. And by the way, digital transformation is kind of an umbrella. It's really business transformation with technology, digital technologies. So that's, that's basically where we started in terms of consumer focused and, and, and patient focus. And digital is a big word that really encompasses a lot of things. Cloud is one of, of course. And, you know, AI and ML and all the things that we are here for this, this event, you know, and, and we've started that journey about two years ago. And obviously cloud is very important. AWS is our main cloud provider and clearly in AWS or any club providers is not just the infrastructure they're providing, it's the whole ecosystem that provides us back value into, into our transformation. And then somebody, I think Adam this morning at the keynote said, this is a team sport. So with this big transformation, we need all the help and that we can get to mines and, and, and hands. And that's where Accenture has been invaluable over the last two years. >>Yeah, so as a team sport then depu, you, you've got external stakeholders, otherwise we talked about patience, right? Internal, right. You've, you've got a whole different set of constituents there, basically, but it takes that team, right? You all have to work together. What kind of conversations or what kind of actions, I guess have you had with different departments and what different of sectors of, of the healthcare business as Baptist Health sees it in order to bring them along too, because this is, you know, kind of a shocking turn for them too, right? And how they're gonna be doing business >>Mostly from an end user perspective. This is something that they don't care much about where the infrastructure is hosted or how the services are provided from that perspective. As long as the capabilities function in a better way, they are seemingly not worried about where the hosting is. So what we focus on is in terms of how it's going to be a better experience for, from them, from, from their perspective, right? How is it going to be better responsiveness, availability, or stability overall? So that's been the mode of communication from that perspective. Other than that, from a, from a hosting and service perspective, the clientele doesn't care as much as the infrastructure or the security or the, the technology and digital teams themselves. >>But you know, some of us are resistant to change, right? We're, we're just, we are old dogs. We don't like new tricks and, and change can be a little daunting sometimes. So even though it is about my ease of use and my efficiency and why I can then save my time on so and so forth, if I'm used to doing something a certain way, and that's worked fine for me and here comes Tony and Depo and here comes a, >>They're troublemaker >>And they're stir my pot. Yeah. So, so how do you, the work, you were giving advice maybe to somebody watching this and say, okay, you've got internal, I wouldn't say battles, but discussions to be held. How did you navigate through that? >>Yeah, no, absolutely. And Baptist has been a very well run system, very successful for 60 something odd years. Clearly that conversation did come, why should we change? But you always start with, this is what we think is gonna happen in the future. These are the changes that very likely will happen in the future. One is the consumer expectations are the consumer expectations in terms of their ability to have access to information, get access to care, being control of the process and their, their health and well-being. Everything else that happens in the market. And so you start with the, with that, and that's where clearly there are, there are a lot of signs that point to quite a lot of change in the ecosystem. And therefore, from there, the conversation is how do we now meet that challenge, so to speak, that we all face in, in, in healthcare. >>And then from there, you kind of designed the, a vision of where we want to be in terms of that digital transformation and how do we get there. And then once that is well explained and evangelized, and that's part of our jobs with the help of our colleagues who have, have been doing this with others, then is the, what I call a tell end show. We're gonna say, okay, in this, in this road, we're gonna start with this. It's a small thing and we're gonna show you how it works in terms of, in terms of the process, right? And then as, as you go along and you deliver some things, people understand more, they're on board more and they're ready for for more. So it's iterative from small to larger. >>The proof is always in the place, right? If you can show somebody, so actually I, I obviously we know about Accenture's role, but in terms of almost, almost what Tony was just saying, that you have to show people that it works. How, how do you interface with a client? And when you're talking about these new approaches and you're suggesting changes and, and making these maybe rather dramatic proposals, you know, to how they do things internally, from Accenture's perspective, how do you make it happen? How, how do you bring the client along in this case, batches >>Down? Well, in this case, with Tony and Depu, I mean, they have been on this journey already at another client, right? So they came to Baptist where they had done a similar journey previously. And so it wasn't really about convincing >>Also with Accenture's >>Health, also with Accenture's Health, correct. But it wasn't about telling Tony Dupe, how do we do this? Or anything like that. Cuz they were by far the experts and have, you know, the experience behind it. Well, it's really like, how do we make sure that we're providing the right, right team, the right skills to match, you know, what they wanted to do and their aspirations. So we had brought the, the healthcare knowledge along with the AWS knowledge and the architects and you know, we said that we gotta, you know, let's look at the roadmap and let's make sure that we have the right team and moving at the right pace and, you know, testing everything out and working with all the different vendors in the provider world specifically, there's a lot of different vendors and applications that are, you know, that are provided to them. It's not a lot of custom activity, you know, applications or anything like that. So it was a lot of, you know, working with other third party that we really had to align with them and with Baptist to make sure that, you know, we were moving together at speed. >>Yeah, we've heard about transformation quite a bit. Tony, you brought it up a little bit ago, depu, just, if you had to define transformation in this case, I mean, how big of a, of a, of a change is that? I mean, how, how would you describe it when you say we're gonna transform our, you know, our healthcare business? I mean, I think there are a lot of things that come to my mind, but, but how do you define it and, and when you're, when you're talking to the folks with whom you've got to bring along on this journey? >>So there's the transformation umbrella and compos two or three things. As Tony said, there is this big digital transformation that everybody's talking about. Then there is this technology transformation that powers the digital transformation and business transformation. That's the outcome of the digital transformation. So I think we, we started focusing on all three areas to get the right digital experience for the consumers. We have to transform the way we operate healthcare in its current state or, or in the existing state. It's a lot of manual processes, a lot of antiquated processes, so to speak. So we had to go and reassess some of that and work with the respective business stakeholders to streamline those because in, it's not about putting a digital solution out there with the anti cured processes because the outcome is not what you expect when you do that. So from that perspective, it has been a heavy lifting in terms of how we transform the operations or the processes that facilitates some of the outcomes. >>How do you know it's working >>Well? So I I, to add to what Deep was saying is I think we are fortunate and that, you know, there are a lot of folks inside Baptist who have been wanting this and they're instrumental to this. So this is not a two man plus, you know, show is really a, you know, a, a team sport. Again, that same. So in, in that, that in terms of how do we know it works well when, when we define what we want to do, there is some level of precision along the way. In those iterations, what is it that we want to do next, right? So whatever we introduce, let's say a, a proper fluid check in for a patient into a, for an appointment, we measure that and then we measure the next one, and then we kind of zoom out and we look at the, the journey and say, is this better? >>Is this better for the consumer? Do they like it better? We measure that and it's better for the operations in terms of, but this is the interesting thing is it's always a balance of how much you can change. We want to improve the consumer experience, but as deeply said, there's lot to be changed in, in the operations, how much you do at the same time. And that's where we have to do the prioritization. But you know, the, the interesting thing is that a lot of times, especially on the self servicing for consumers, there are a lot of benefits for the operations as well. And that's, that's where we're in, we're in it together and we measure. Yeah, >>Don't gimme too much control though. I don't, I'm gonna leave the hard lifting for you. >>Absolutely, absolutely right. Thank you. >>So, and, and just real quick, Ashley, maybe you can shine some light on this, about the relationship, about, about next steps, about, you know, you, you're on this, this path and things are going well and, and you've got expansion plans, you want, you know, bring in other services, other systems. Where do you want to take 'em in the big picture in terms of capabilities? >>Well, I, I mean, they've been doing a fantastic job just being one of the first to actually say, Hey, we're gonna go and make an investment in the cloud and digital transformation. And so it's really looking at like, what are the next problems that we need to solve, whether it's patient care diagnosis or how we're doing research or, you know, the next kind of realm of, of how we're gonna use data and to improve patient care. So I think it's, you know, we're getting the foundation, the basics and everything kind of laid out right now. And then it's really, it's like what's the next thing and how can we really improve the patient care and the access that they have. >>Well, it sure sounds like you have a winning accommodation, so I I keep the team together. >>Absolutely. >>Teamwork makes the dream >>Work. Absolutely. It is, as you know. So there's a certain amount of, if you look at the healthcare industry as a whole, and not, not just Baptist, Baptist is, you know, fourth for thinking, but entire industry, there's a lot of catching up to do compared to whatever else is doing, whatever else the consumers are expecting of, of an entity, right? But then once we catch up, there's a lot of other things that we were gonna have to move on, innovate for, for problems that we maybe we don't know we have will have right now. So plenty of work to do. Right. >>Which is job security for everybody, right? >>Yes. >>Listen, thanks for sharing the story. Yeah, yeah. Continued success. I wish you that and I appreciate the time and expertise here today. Thank you. Thanks for being with us. Thank you. Thank you. We'll be back with more. You're watching the Cube here. It's the Executive Summit sponsored by Accenture. And the cube, as I love to remind you, is the leader in tech coverage.
SUMMARY :
I have Ashley Lane, the managing director of the Accenture Healthcare Practice, and on the far end Poop and what, I guess the client base basically that you guys are serving in it. units and you know, we're focusing on the four southern you know, as you worked with Accenture and, and developed a game plan going forward, And, you know, AI and ML and all the things that we are here them along too, because this is, you know, kind of a shocking turn for them too, So that's been the mode of communication But you know, some of us are resistant to change, right? you were giving advice maybe to somebody watching this and say, okay, you've got internal, And so you start with the, with that, and that's where clearly And then as, as you go along and you deliver some things, people and making these maybe rather dramatic proposals, you know, So they came to Baptist where they had done a similar journey previously. the healthcare knowledge along with the AWS knowledge and the architects and you know, come to my mind, but, but how do you define it and, and when you're, when you're talking to the folks with whom you've there with the anti cured processes because the outcome is not what you expect when and that, you know, there are a lot of folks inside Baptist who have been wanting this and But you know, the, the interesting thing is that a lot of times, especially on the self I don't, I'm gonna leave the hard lifting for you. Thank you. about next steps, about, you know, you, you're on this, this path and things are going well So I think it's, you know, we're getting the foundation, the basics and everything kind of laid out right now. So there's a certain amount of, if you look at the healthcare industry And the cube, as I love to remind you, is the leader in tech coverage.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tony | PERSON | 0.99+ |
Tony Abrozie | PERSON | 0.99+ |
Ashley Lane | PERSON | 0.99+ |
Tony Abro | PERSON | 0.99+ |
Ashlee Lane | PERSON | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
Deepu Kumar | PERSON | 0.99+ |
Poop Kumar | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Ashley | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
South Florida | LOCATION | 0.99+ |
11 hospitals | QUANTITY | 0.99+ |
Baptist Health | ORGANIZATION | 0.99+ |
Tony Dupe | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
12 | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
60 | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
fourth | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
today | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
Venetian | LOCATION | 0.97+ |
Accenture Healthcare Practice | ORGANIZATION | 0.97+ |
One and a half million patient | QUANTITY | 0.97+ |
a year | QUANTITY | 0.96+ |
Depu | PERSON | 0.96+ |
two man | QUANTITY | 0.95+ |
Baptist | ORGANIZATION | 0.95+ |
three things | QUANTITY | 0.95+ |
about 1.8 million | QUANTITY | 0.93+ |
One | QUANTITY | 0.9+ |
Tens of thousands | QUANTITY | 0.9+ |
Depo | PERSON | 0.9+ |
three years ago | DATE | 0.89+ |
cto Baptist Health Florida | ORGANIZATION | 0.87+ |
this morning | DATE | 0.86+ |
one | QUANTITY | 0.85+ |
three | QUANTITY | 0.84+ |
AWS | EVENT | 0.83+ |
last two years | DATE | 0.82+ |
Executive Summit | EVENT | 0.77+ |
Broward | ORGANIZATION | 0.76+ |
about two years ago | DATE | 0.73+ |
Cube | ORGANIZATION | 0.69+ |
Deep | PERSON | 0.69+ |
four southern counties | QUANTITY | 0.67+ |
Executive | EVENT | 0.59+ |
Reinvent 2022 | EVENT | 0.55+ |
Cube | PERSON | 0.5+ |
2022 | DATE | 0.49+ |
Snehal Antani, Horizon3.ai Market Deepdive
foreign welcome back everyone to our special presentation here at thecube with Horizon 3.a I'm John Furrier host thecube here in Palo Alto back it's niho and Tony CEO and co-founder of horizon 3 for deep dive on going under the hood around the big news and also the platform autonomous pen testing changing the game and security great to see you welcome back thank you John I love what you guys have been doing with the cube huge fan been here a bunch of times and yeah looking forward to the conversation let's get into it all right so what what's the market look like and how do you see it evolving we're in a down Market relative to startups some say our data we're reporting on siliconangle in the cube that yeah there might be a bit of downturn in the economy with inflation but the tech Market is booming because the hyperscalers are still pumping out massive scale and still innovating so so you know for the first time in history this is a recession or downturn where there's now Cloud scale players that are an economic engine what's your view on this where's the market heading relative to the downturn and how are you guys navigating that so um I think about it one the there's a lot of belief out there that we're going to hit a downturn and we started to see that we started to see deals get longer and longer to close back in May across the board in the industry we continue to see deals get at least backloaded in the quarter as people understand their procurement how much money they really have to spend what their earnings are going to be so we're seeing this across the board one is quarters becoming lumpier for tech companies and we think that that's going to become kind of the norm over the next over the next year but what's interesting in our space of security testing is a very basic supply and demand problem the demand for security testing has skyrocketed when I was a CIO eight years ago I only had to worry about my on-prem attack surface my perimeter and Insider threat those are my primary threat vectors now if I was a CIO I have to include multiple clouds all of the data in my SAS offerings my Salesforce account and so on as well as work from home threat vectors and other pieces and I've got Regulatory Compliance in Europe in Asia in in the U.S tons of demand for testing and there's just not enough Supply there's only 5 000 certified pen testers in the United States so I think for starters you have a fundamental supply and demand problem that plays to our strength because we're able to bring a tremendous amount of pen testing supply to the table but now let's flip to if you are the CEO of a large security company or whether it's a Consulting shop or so on you've got a whole bunch of deferred revenue in your business model around security testing services and what we've done in our past in previous companies I worked at is if we didn't think we were going to make the money the quarter with product Revenue we would start to unlock some of that deferred Services Revenue to make the number to hit what we expected Wall Street to hit what Wall Street expected of us in testing that's not possible because there's not enough Supply except us so if I'm the CEO of an mssp or a large security company and I need I see a huge backlog of security testing revenue on the table the easy button to convert that to recognized revenue is Horizon 3. and when I think about the next six months and the amount of Revenue misses we're going to see in security shops especially those that can't fulfill their orders I think there's a ripe opportunity for us to win yeah one of the few opportunities where on any Market you win because the forces will drive your flywheel that's exactly right very basic supply and demand forces that are only increasing with pressure and there's no way it takes 10 years just to build a master hacker just it's a very hard complex space we become the easy button to address that supply problem yeah and this and the autonomous aspect makes appsec reviews as new things get pushed with Cloud native developers they're shifting left but still the security policies need to stay Pace as these new vectors threat vectors appear yeah I mean because that's what's happening a new new thing makes a vector possible that's exactly right I think there's two aspects one is the as you in increase change in your environment you need to increase testing they are absolutely correlated the second thing though is you know for 20 years we focused on remote code execution or rces as an industry what was the latest rce that gave an attacker access to my environment but if you look over the past few years that entire mindset has shifted credentials are the new code execution what I mean by that is if I have a large organization with a hundred a thousand ten thousand employees all it takes is one of them to have a password I can crack in credential spray and gain access to as an attacker and once I've gained access to a single user I'm going to systematically snowball that into something of consequence and so I think that the attackers have shifted away from looking for code execution and looked more towards harvesting credentials and cascading credentials from a regular domain user into an admin this brings up the conversation I would like to do it more Deep dive now shift into more of like the real kind of landscape of the market and your positioning and value proposition in that and that is managed services are becoming really popular as we move into this next next wave of super cloud and multi-cloud and hybrid Cloud because I mean multi-cloud and hybrid hybrid than multi-cloud sounds good on paper but the security Ops become big and one of the things we're reporting with here on the cube and siliconangle the past six months is devops has made the developer the IT team because they've essentially run it now in CI CD pipeline as they say that means it's replaced by data Ops or AI Ops or security Ops and data and security kind of go hand in hand so I can see that playing out do you believe that to be true that that's kind of the new operational kind of beach head that's critical and if so secure if data is part of security that makes security the new it yeah I I think that if you think about organizations hell even for Horizon 3 right now I don't need to hire a CIO I'll have a CSO and that CSO will own it and governance risk and compliance and security operations because at the end of the day the most pressing question for me to answer as a CEO is my security posture IIT is a supporting function of that security posture and we see that at say or a growth stage company like Horizon 3 but when I thought about my time at GE Capital we really shifted to this mindset of security by Design architecture as code and it was very much security driven conversation and I think that is the norm going forward and how do you view the idea that you have to enable a managed service provider with security also managing comp and which then manages the company to enable them to have agile security um security is code because what you're getting at is this autonomous layer that's going to be automated away to make the next talented layer whether it's coder or architect scale so the question is what is abstracted away at at automation seems to be the conversation that's coming out of this big cloud native or super cloud next wave of cloud scale I think there's uh there's two Dimensions to that and honestly I think the more interesting Dimension is not the technical side of it but rather think of the Equifax hack a bunch of years ago had Equifax used a managed security services provider would the CEO have been fired after the breach and the answer is probably not I think the CEO would have transferred enough reputational risk in operational risk to the third party mssp to save his job from being you know from him being fired you can look at that across the board I think that if if I were a CIO again I would be hard-pressed to build my own internal security function because I'm accepting that risk as an executive and we saw what just happened at Uber there's a ton of risk coming with that with the with accepting that as a security person so I think in the future the role of the mssp becomes more significant as a mechanism for transferring enough reputational and operational and legal risk to a third party so that you as the Core Company are able to protect yourself and your people now then what you think is a super cloud printables and Concepts being applied at mssp scale and I think that becomes really interesting talk about the talent opportunity because I think the managed service providers point to markets that are growing and changing also having managed service means that the customers can't always hire Talent hence they go to a Channel or a partner this seems to be a key part of the growth in your area talk about the talent aspect of it yeah um think back to what we saw in Cloud so as as Cloud picked up we saw IBM HP other Hardware companies sell more servers but to fewer customers Amazon Google and others right and so I think something similar is going to happen in the security space where I think you're going to see security tools providers selling more volume but to fewer customers that are just really big mssps so that is the the path forward and I think that the underlying Talent issue gives us economies at scale and that's what we saw this with Cloud we're going to see the same thing in the mssp space I've got a density of Talent Plus a density of automation plus a density of of relationships and ecosystem that give mssps a huge economies of scale advantage over everybody else I mean I want to get into the mssp business sounds like I make a lot of money yeah definitely it's profitable no doubt about it like that I got to ask more on the more of the burden side of it because if you're a partner I don't need another training class I don't need another tool I don't need someone saying this is the highest margin product I need to actually downsize my tools so right now there's hundreds of tools that mssps have all the time dealing with and does the customer so tools platforms we've kind of teased this out in previous conversations together but more more relevant to the mssp is what they do to the customers so talk about this uh burden of tools and the socks out there in the in in the landscape how do you how do you view that and what's the conversation like on average an organization has 130 different cyber security tools installed none of those tools were designed to work together none of those tools are from the same vendor and in fact oftentimes they're from vendors that have competing products and so what we don't have and they're still getting breached in the industry we don't have a tools problem we have an Effectiveness problem we have to reduce the number of tools we have get more out of out of the the effectiveness out of the existing infrastructure build muscle memory you know how to detect and respond to a breach and continuously verify that posture I think that's what the the most successful security organizations have mastered the fundamentals and they mastered that by making sure they were effective in detection and response not mastering it by buying the next shiny AI tool on the defensive side okay so you mentioned supply and demand early since you're brought up economics we'll get into the economic equations here when you have great profits that's going to attract more entrance into the marketplace so as more mssps enter the market you're going to start to see a little bit of competition maybe some fud maybe some price competitive price penetration all kinds of different Tactics get out go on there um how does that impact you because now does that impact your price or are you now part of them just competing on their own value what's that mean for the channel as more entrants come in hey you know I can compete against that other one does that create conflict is that an opportunity does are you neutral on that what's the position it's a great question actually I think the way it plays out is one we are neutral two the mssp has to stand on their own with their own unique value proposition otherwise they're going to become commoditized we saw this in the early cloud provider days the cloud providers that were just basically wrapping existing Hardware with with a race to the bottom pricing model didn't survive those that use the the cloud infrastructure as a starting point to build higher value capabilities they're the ones that have succeeded to this day the same Mo I think will occur in mssps which is there's a base level of capability that they've got to be able to deliver and it is the burden of the mssp to innovate effectively to elevate their value problem it's interesting Dynamic and I brought it up mainly because if you believe that this is going to be a growing New Market price erosion is more in mature markets so it's interesting to see that Dynamic come up and we'll see how that handles on the on the economics and just the macro side of it getting more into kind of like the next gen autonomous pen testing is a leading indicator that a new kind of security assessment is here um if I said that to you how do you respond to that what is this new security assessment mean what does that mean for the customer and to the partner and that that relationship down that whole chain yeah um back to I'm wearing a CIO hat right now don't tell me we're secure in PowerPoint show me we're secure Today Show me where we're secure tomorrow and then show me we're secure again next week because that's what matters to me if you can show me we're secure I can understand the risk I'm accepting and articulate it up to my board to my Regulators up until now we've had a PowerPoint tell me where secure culture and security and I just don't think that's going to last all that much longer so I think the future of security testing and assessment is this shift from a PowerPoint report to truly showing me that my I'm secure enough you guys auto-generate those statements now you mentioned that earlier that's exactly right because the other part is you know the classic way to do security reports was garbage in garbage out you had a human kind of theoretically fill out a spreadsheet that magically came up with the risk score or security posture that doesn't work that's a check the box mentality what you want to have is an accurate High Fidelity understanding of your blind spots your threat vectors what data is at risk what credentials are at risk you want to look at those results over time how quickly did I find problems how quickly did I fix them how often did they reoccur and that is how you get to a show me where secure culture whether I'm a company or I'm a channel partner working with Horizon 3.ai I have to put my name on the line and say Here's a service level agreement I'm going to stand behind there's levels of compliance you mentioned that earlier how do you guys help that area because that becomes I call the you know below the line I got to do it anyway usually it's you know they grind out the work but it has to be fundamental because if the threats vectors are increasing and you're handling it like you say you are the way it is real time today tomorrow the next day you got to have that other stuff flow into it can you describe how that works under the hood yeah there's there's two parts to it the first part is that attackers don't have to hack in with zero days they log in with credentials that they found but often what attackers are doing is chaining together different types of problems so if you have 10 different tactics you can chain those together a number of different ways it's not just 10 to the 10th it's it's actually because you don't you don't have to use all the tactics at once this is a very large number of combinations that an attacker can apply upon you is what it comes down to and so at the base level what you want to have is what are the the primary tactics that are being used and those tactics are always being added to and evolving what are the primary outcomes that an attacker is trying to achieve steal your data disrupt your systems become a domain admin and borrow and now what you have is it actually looks more like a chess game algorithm than it does any sort of hard-coded automation or anything else which is based on the pieces on the board the the it infrastructure I've discovered what is the next best action to become a domain admin or steal your data and that's the underlying innovation in IP we've created which is next best action Knowledge Graph analytics and adaptiveness to figure out how to combine different problems together to achieve an objective that an attacker cares about so the 3D chess players out there I'd say that's more like 3D chess are the practitioners implementing it but when I think about compliance managers I don't see 3D chess players I see back office accountants in my mind like okay are they actually even understand what comes out of that so how do you handle the compliance side do you guys just check the boxes there is it not part of it is it yeah I I know I don't Envision the compliance guys on the front lines identifying vectors do you know what it doesn't even know what it means yeah it's a great question when you think about uh the market segmentation I think there are we've seen are three basic types of users you've got the the really mature high frequency security testing purple team type folks and for them we are the the force multiplier for them to secure the environment you then have the middle group where the IT person and the security person are the same individual they are barely Treading Water they don't know what their attack surface is and they don't know what to focus on we end up that's actually where we started with the barely Treading Water Persona and that's why we had a product that helped those Network Engineers become superheroes the third segment are those that view security and compliance as synonymous and they don't really care about continuous they care about running and checking the box for PCI and forever else and those customers while they use us they are better served by our partner ecosystem and that's really so the the first two categories tend to use us directly self-service pen tests as often as they want that compliance-minded folks end up going through our partners because they're better served there steel great to have you on thanks for this deep dive on um under the hood section of the interview appreciate it and I think autonomous is is an indicator Beyond pen testing pen testing has become like okay penetration security but this is not going away where do you see this evolving what's next what's next for Horizon take a minute to give a plug for what's going on with copy how do you see it I know you got good margins you're raising Capital always raising money you're not yet public um looking good right now as they say yeah yeah well I think the first thing is our company strategy is in three chapters chapter one is become the best security testing platform in the industry period that's it and be very good at helping you find and fix your security blind spots that's chapter one we've been crushing it there with great customer attraction great partner traction chapter two which we've started to enter is look at our results over time to help that that GRC officer or auditor accurately assess the security posture of an organization and we're going to enter that chapter about this time next year longer term though the big Vision I have is how do I use offense to inform defense so for me chapter three is how do I get away from just security testing towards autonomous security overall where you can use our security testing platform to identify ways to attack that informs defensive tools exactly where to focus how to adjust and so on and now you've got offset and integrated learning Loop between attack and defense that's the future never been done before Master the art of attack to become a better Defender is the bigger vision of the company love the new paradigm security congratulations been following you guys we will continue to follow you thanks for coming on the Special Report congratulations on the new Market expansion International going indirect that a big way congratulations thank you John appreciate it okay this is a special presentation with the cube and Horizon 3.ai I'm John Furrier your host thanks for watching thank you
SUMMARY :
the game and security great to see you
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
10 years | QUANTITY | 0.99+ |
Snehal Antani | PERSON | 0.99+ |
Equifax | ORGANIZATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
GE Capital | ORGANIZATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
next week | DATE | 0.99+ |
Tony | PERSON | 0.99+ |
PowerPoint | TITLE | 0.99+ |
two parts | QUANTITY | 0.99+ |
10 different tactics | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
U.S | LOCATION | 0.99+ |
first part | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
GRC | ORGANIZATION | 0.99+ |
third segment | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
two aspects | QUANTITY | 0.99+ |
10th | QUANTITY | 0.99+ |
Asia | LOCATION | 0.99+ |
first two categories | QUANTITY | 0.99+ |
three basic types | QUANTITY | 0.99+ |
May | DATE | 0.99+ |
10 | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
second thing | QUANTITY | 0.98+ |
Cloud | TITLE | 0.97+ |
eight years ago | DATE | 0.97+ |
Horizon 3 | TITLE | 0.96+ |
hundreds of tools | QUANTITY | 0.95+ |
next year | DATE | 0.95+ |
single user | QUANTITY | 0.95+ |
horizon | ORGANIZATION | 0.94+ |
Horizon 3.ai | TITLE | 0.93+ |
one | QUANTITY | 0.93+ |
past six months | DATE | 0.93+ |
hundred a thousand ten thousand employees | QUANTITY | 0.92+ |
5 000 certified pen testers | QUANTITY | 0.92+ |
zero days | QUANTITY | 0.92+ |
130 different cyber security tools | QUANTITY | 0.91+ |
next day | DATE | 0.9+ |
wave | EVENT | 0.89+ |
Horizon 3.a | ORGANIZATION | 0.88+ |
three | QUANTITY | 0.87+ |
next six months | DATE | 0.87+ |
SAS | ORGANIZATION | 0.87+ |
chapter three | OTHER | 0.86+ |
Horizon 3 | ORGANIZATION | 0.85+ |
lot of money | QUANTITY | 0.82+ |
first thing | QUANTITY | 0.77+ |
CEO | PERSON | 0.74+ |
niho | PERSON | 0.72+ |
chapter one | OTHER | 0.71+ |
of years ago | DATE | 0.7+ |
chapter two | OTHER | 0.7+ |
two Dimensions | QUANTITY | 0.7+ |
past few years | DATE | 0.7+ |
Street | LOCATION | 0.7+ |
Horizon | ORGANIZATION | 0.7+ |
3 | TITLE | 0.65+ |
Salesforce | TITLE | 0.64+ |
Wall Street | ORGANIZATION | 0.63+ |
two | QUANTITY | 0.61+ |
ORGANIZATION | 0.61+ | |
HP | ORGANIZATION | 0.61+ |
3.ai | TITLE | 0.6+ |
CSO | TITLE | 0.59+ |
users | QUANTITY | 0.5+ |
Wall | ORGANIZATION | 0.5+ |
Today | DATE | 0.47+ |
Horizon3.ai Signal | Horizon3.ai Partner Program Expands Internationally
hello I'm John Furrier with thecube and welcome to this special presentation of the cube and Horizon 3.ai they're announcing a global partner first approach expanding their successful pen testing product Net Zero you're going to hear from leading experts in their staff their CEO positioning themselves for a successful Channel distribution expansion internationally in Europe Middle East Africa and Asia Pacific in this Cube special presentation you'll hear about the expansion the expanse partner program giving Partners a unique opportunity to offer Net Zero to their customers Innovation and Pen testing is going International with Horizon 3.ai enjoy the program [Music] welcome back everyone to the cube and Horizon 3.ai special presentation I'm John Furrier host of thecube we're here with Jennifer Lee head of Channel sales at Horizon 3.ai Jennifer welcome to the cube thanks for coming on great well thank you for having me so big news around Horizon 3.aa driving Channel first commitment you guys are expanding the channel partner program to include all kinds of new rewards incentives training programs help educate you know Partners really drive more recurring Revenue certainly cloud and Cloud scale has done that you got a great product that fits into that kind of Channel model great Services you can wrap around it good stuff so let's get into it what are you guys doing what are what are you guys doing with this news why is this so important yeah for sure so um yeah we like you said we recently expanded our Channel partner program um the driving force behind it was really just um to align our like you said our Channel first commitment um and creating awareness around the importance of our partner ecosystems um so that's it's really how we go to market is is through the channel and a great International Focus I've talked with the CEO so you know about the solution and he broke down all the action on why it's important on the product side but why now on the go to market change what's the what's the why behind this big this news on the channel yeah for sure so um we are doing this now really to align our business strategy which is built on the concept of enabling our partners to create a high value high margin business on top of our platform and so um we offer a solution called node zero it provides autonomous pen testing as a service and it allows organizations to continuously verify their security posture um so we our company vision we have this tagline that states that our pen testing enables organizations to see themselves Through The Eyes of an attacker and um we use the like the attacker's perspective to identify exploitable weaknesses and vulnerabilities so we created this partner program from a perspective of the partner so the partner's perspective and we've built It Through The Eyes of our partner right so we're prioritizing really what the partner is looking for and uh will ensure like Mutual success for us yeah the partners always want to get in front of the customers and bring new stuff to them pen tests have traditionally been really expensive uh and so bringing it down in one to a service level that's one affordable and has flexibility to it allows a lot of capability so I imagine people getting excited by it so I have to ask you about the program What specifically are you guys doing can you share any details around what it means for the partners what they get what's in it for them can you just break down some of the mechanics and mechanisms or or details yeah yep um you know we're really looking to create business alignment um and like I said establish Mutual success with our partners so we've got two um two key elements that we were really focused on um that we bring to the partners so the opportunity the profit margin expansion is one of them and um a way for our partners to really differentiate themselves and stay relevant in the market so um we've restructured our discount model really um you know highlighting profitability and maximizing profitability and uh this includes our deal registration we've we've created deal registration program we've increased discount for partners who take part in our partner certification uh trainings and we've we have some other partner incentives uh that we we've created that that's going to help out there we've we put this all so we've recently Gone live with our partner portal um it's a Consolidated experience for our partners where they can access our our sales tools and we really view our partners as an extension of our sales and Technical teams and so we've extended all of our our training material that we use internally we've made it available to our partners through our partner portal um we've um I'm trying I'm thinking now back what else is in that partner portal here we've got our partner certification information so all the content that's delivered during that training can be found in the portal we've got deal registration uh um co-branded marketing materials pipeline management and so um this this portal gives our partners a One-Stop place to to go to find all that information um and then just really quickly on the second part of that that I mentioned is our technology really is um really disruptive to the market so you know like you said autonomous pen testing it's um it's still it's well it's still still relatively new topic uh for security practitioners and um it's proven to be really disruptive so um that on top of um just well recently we found an article that um that mentioned by markets and markets that reports that the global pen testing markets really expanding and so it's expected to grow to like 2.7 billion um by 2027. so the Market's there right the Market's expanding it's growing and so for our partners it's just really allows them to grow their revenue um across their customer base expand their customer base and offering this High profit margin while you know getting in early to Market on this just disruptive technology big Market a lot of opportunities to make some money people love to put more margin on on those deals especially when you can bring a great solution that everyone knows is hard to do so I think that's going to provide a lot of value is there is there a type of partner that you guys see emerging or you aligning with you mentioned the alignment with the partners I can see how that the training and the incentives are all there sounds like it's all going well is there a type of partner that's resonating the most or is there categories of partners that can take advantage of this yeah absolutely so we work with all different kinds of Partners we work with our traditional resale Partners um we've worked we're working with systems integrators we have a really strong MSP mssp program um we've got Consulting partners and the Consulting Partners especially with the ones that offer pen test services so we they use us as a as we act as a force multiplier just really offering them profit margin expansion um opportunity there we've got some technology partner partners that we really work with for co-cell opportunities and then we've got our Cloud Partners um you'd mentioned that earlier and so we are in AWS Marketplace so our ccpo partners we're part of the ISP accelerate program um so we we're doing a lot there with our Cloud partners and um of course we uh we go to market with uh distribution Partners as well gotta love the opportunity for more margin expansion every kind of partner wants to put more gross profit on their deals is there a certification involved I have to ask is there like do you get do people get certified or is it just you get trained is it self-paced training is it in person how are you guys doing the whole training certification thing because is that is that a requirement yeah absolutely so we do offer a certification program and um it's been very popular this includes a a seller's portion and an operator portion and and so um this is at no cost to our partners and um we operate both virtually it's it's law it's virtually but live it's not self-paced and we also have in person um you know sessions as well and we also can customize these to any partners that have a large group of people and we can just we can do one in person or virtual just specifically for that partner well any kind of incentive opportunities and marketing opportunities everyone loves to get the uh get the deals just kind of rolling in leads from what we can see if our early reporting this looks like a hot product price wise service level wise what incentive do you guys thinking about and and Joint marketing you mentioned co-sell earlier in pipeline so I was kind of kind of honing in on that piece sure and yes and then to follow along with our partner certification program we do incentivize our partners there if they have a certain number certified their discount increases so that's part of it we have our deal registration program that increases discount as well um and then we do have some um some partner incentives that are wrapped around meeting setting and um moving moving opportunities along to uh proof of value gotta love the education driving value I have to ask you so you've been around the industry you've seen the channel relationships out there you're seeing companies old school new school you know uh Horizon 3.ai is kind of like that new school very cloud specific a lot of Leverage with we mentioned AWS and all the clouds um why is the company so hot right now why did you join them and what's why are people attracted to this company what's the what's the attraction what's the vibe what do you what do you see and what what do you use what did you see in in this company well this is just you know like I said it's very disruptive um it's really in high demand right now and um and and just because because it's new to Market and uh a newer technology so we are we can collaborate with a manual pen tester um we can you know we can allow our customers to run their pen test um with with no specialty teams and um and and then so we and like you know like I said we can allow our partners can actually build businesses profitable businesses so we can they can use our product to increase their services revenue and um and build their business model you know around around our services what's interesting about the pen test thing is that it's very expensive and time consuming the people who do them are very talented people that could be working on really bigger things in the in absolutely customers so bringing this into the channel allows them if you look at the price Delta between a pen test and then what you guys are offering I mean that's a huge margin Gap between street price of say today's pen test and what you guys offer when you show people that they follow do they say too good to be true I mean what are some of the things that people say when you kind of show them that are they like scratch their head like come on what's the what's the catch here right so the cost savings is a huge is huge for us um and then also you know like I said working as a force multiplier with a pen testing company that offers the services and so they can they can do their their annual manual pen tests that may be required around compliance regulations and then we can we can act as the continuous verification of their security um um you know that that they can run um weekly and so it's just um you know it's just an addition to to what they're offering already and an expansion so Jennifer thanks for coming on thecube really appreciate you uh coming on sharing the insights on the channel uh what's next what can we expect from the channel group what are you thinking what's going on right so we're really looking to expand our our Channel um footprint and um very strategically uh we've got um we've got some big plans um for for Horizon 3.ai awesome well thanks for coming on really appreciate it you're watching thecube the leader in high tech Enterprise coverage [Music] [Music] hello and welcome to the Cube's special presentation with Horizon 3.ai with Raina Richter vice president of emea Europe Middle East and Africa and Asia Pacific APAC for Horizon 3 today welcome to this special Cube presentation thanks for joining us thank you for the invitation so Horizon 3 a guy driving Global expansion big international news with a partner first approach you guys are expanding internationally let's get into it you guys are driving this new expanse partner program to new heights tell us about it what are you seeing in the momentum why the expansion what's all the news about well I would say uh yeah in in international we have I would say a similar similar situation like in the US um there is a global shortage of well-educated penetration testers on the one hand side on the other side um we have a raising demand of uh network and infrastructure security and with our approach of an uh autonomous penetration testing I I believe we are totally on top of the game um especially as we have also now uh starting with an international instance that means for example if a customer in Europe is using uh our service node zero he will be connected to a node zero instance which is located inside the European Union and therefore he has doesn't have to worry about the conflict between the European the gdpr regulations versus the US Cloud act and I would say there we have a total good package for our partners that they can provide differentiators to their customers you know we've had great conversations here on thecube with the CEO and the founder of the company around the leverage of the cloud and how successful that's been for the company and honestly I can just Connect the Dots here but I'd like you to weigh in more on how that translates into the go to market here because you got great Cloud scale with with the security product you guys are having success with great leverage there I've seen a lot of success there what's the momentum on the channel partner program internationally why is it so important to you is it just the regional segmentation is it the economics why the momentum well there are it's there are multiple issues first of all there is a raising demand in penetration testing um and don't forget that uh in international we have a much higher level in number a number or percentage in SMB and mid-market customers so these customers typically most of them even didn't have a pen test done once a year so for them pen testing was just too expensive now with our offering together with our partners we can provide different uh ways how customers could get an autonomous pen testing done more than once a year with even lower costs than they had with with a traditional manual paint test so and that is because we have our uh Consulting plus package which is for typically pain testers they can go out and can do a much faster much quicker and their pain test at many customers once in after each other so they can do more pain tests on a lower more attractive price on the other side there are others what even the same ones who are providing um node zero as an mssp service so they can go after s p customers saying okay well you only have a couple of hundred uh IP addresses no worries we have the perfect package for you and then you have let's say the mid Market let's say the thousands and more employees then they might even have an annual subscription very traditional but for all of them it's all the same the customer or the service provider doesn't need a piece of Hardware they only need to install a small piece of a Docker container and that's it and that makes it so so smooth to go in and say okay Mr customer we just put in this this virtual attacker into your network and that's it and and all the rest is done and within within three clicks they are they can act like a pen tester with 20 years of experience and that's going to be very Channel friendly and partner friendly I can almost imagine so I have to ask you and thank you for calling the break calling out that breakdown and and segmentation that was good that was very helpful for me to understand but I want to follow up if you don't mind um what type of partners are you seeing the most traction with and why well I would say at the beginning typically you have the the innovators the early adapters typically Boutique size of Partners they start because they they are always looking for Innovation and those are the ones you they start in the beginning so we have a wide range of Partners having mostly even um managed by the owner of the company so uh they immediately understand okay there is the value and they can change their offering they're changing their offering in terms of penetration testing because they can do more pen tests and they can then add other ones or we have those ones who offer 10 tests services but they did not have their own pen testers so they had to go out on the open market and Source paint testing experts um to get the pen test at a particular customer done and now with node zero they're totally independent they can't go out and say okay Mr customer here's the here's the service that's it we turn it on and within an hour you're up and running totally yeah and those pen tests are usually expensive and hard to do now it's right in line with the sales delivery pretty interesting for a partner absolutely but on the other hand side we are not killing the pain testers business we do something we're providing with no tiers I would call something like the foundation work the foundational work of having an an ongoing penetration testing of the infrastructure the operating system and the pen testers by themselves they can concentrate in the future on things like application pen testing for example so those Services which we we're not touching so we're not killing the paint tester Market we're just taking away the ongoing um let's say foundation work call it that way yeah yeah that was one of my questions I was going to ask is there's a lot of interest in this autonomous pen testing one because it's expensive to do because those skills are required are in need and they're expensive so you kind of cover the entry level and the blockers that are in there I've seen people say to me this pen test becomes a blocker for getting things done so there's been a lot of interest in the autonomous pen testing and for organizations to have that posture and it's an overseas issue too because now you have that that ongoing thing so can you explain that particular benefit for an organization to have that continuously verifying an organization's posture yep certainly so I would say um typically you are you you have to do your patches you have to bring in new versions of operating systems of different Services of uh um operating systems of some components and and they are always bringing new vulnerabilities the difference here is that with node zero we are telling the customer or the partner package we're telling them which are the executable vulnerabilities because previously they might have had um a vulnerability scanner so this vulnerability scanner brought up hundreds or even thousands of cves but didn't say anything about which of them are vulnerable really executable and then you need an expert digging in one cve after the other finding out is it is it really executable yes or no and that is where you need highly paid experts which we have a shortage so with notes here now we can say okay we tell you exactly which ones are the ones you should work on because those are the ones which are executable we rank them accordingly to the risk level how easily they can be used and by a sudden and then the good thing is convert it or indifference to the traditional penetration test they don't have to wait for a year for the next pain test to find out if the fixing was effective they weren't just the next scan and say Yes closed vulnerability is gone the time is really valuable and if you're doing any devops Cloud native you're always pushing new things so pen test ongoing pen testing is actually a benefit just in general as a kind of hygiene so really really interesting solution really bring that global scale is going to be a new new coverage area for us for sure I have to ask you if you don't mind answering what particular region are you focused on or plan to Target for this next phase of growth well at this moment we are concentrating on the countries inside the European Union Plus the United Kingdom um but we are and they are of course logically I'm based into Frankfurt area that means we cover more or less the countries just around so it's like the total dark region Germany Switzerland Austria plus the Netherlands but we also already have Partners in the nordics like in Finland or in Sweden um so it's it's it it's rapidly we have Partners already in the UK and it's rapidly growing so I'm for example we are now starting with some activities in Singapore um um and also in the in the Middle East area um very important we uh depending on let's say the the way how to do business currently we try to concentrate on those countries where we can have um let's say um at least English as an accepted business language great is there any particular region you're having the most success with right now is it sounds like European Union's um kind of first wave what's them yes that's the first definitely that's the first wave and now we're also getting the uh the European instance up and running it's clearly our commitment also to the market saying okay we know there are certain dedicated uh requirements and we take care of this and and we're just launching it we're building up this one uh the instance um in the AWS uh service center here in Frankfurt also with some dedicated Hardware internet in a data center in Frankfurt where we have with the date six by the way uh the highest internet interconnection bandwidth on the planet so we have very short latency to wherever you are on on the globe that's a great that's a great call outfit benefit too I was going to ask that what are some of the benefits your partners are seeing in emea and Asia Pacific well I would say um the the benefits is for them it's clearly they can they can uh talk with customers and can offer customers penetration testing which they before and even didn't think about because it penetrates penetration testing in a traditional way was simply too expensive for them too complex the preparation time was too long um they didn't have even have the capacity uh to um to support a pain an external pain tester now with this service you can go in and say even if they Mr customer we can do a test with you in a couple of minutes within we have installed the docker container within 10 minutes we have the pen test started that's it and then we just wait and and I would say that is we'll we are we are seeing so many aha moments then now because on the partner side when they see node zero the first time working it's like this wow that is great and then they work out to customers and and show it to their typically at the beginning mostly the friendly customers like wow that's great I need that and and I would say um the feedback from the partners is that is a service where I do not have to evangelize the customer everybody understands penetration testing I don't have to say describe what it is they understand the customer understanding immediately yes penetration testing good about that I know I should do it but uh too complex too expensive now with the name is for example as an mssp service provided from one of our partners but it's getting easy yeah it's great and it's great great benefit there I mean I gotta say I'm a huge fan of what you guys are doing I like this continuous automation that's a major benefit to anyone doing devops or any kind of modern application development this is just a godsend for them this is really good and like you said the pen testers that are doing it they were kind of coming down from their expertise to kind of do things that should have been automated they get to focus on the bigger ticket items that's a really big point so we free them we free the pain testers for the higher level elements of the penetration testing segment and that is typically the application testing which is currently far away from being automated yeah and that's where the most critical workloads are and I think this is the nice balance congratulations on the international expansion of the program and thanks for coming on this special presentation really I really appreciate it thank you you're welcome okay this is thecube special presentation you know check out pen test automation International expansion Horizon 3 dot AI uh really Innovative solution in our next segment Chris Hill sector head for strategic accounts will discuss the power of Horizon 3.ai and Splunk in action you're watching the cube the leader in high tech Enterprise coverage foreign [Music] [Music] welcome back everyone to the cube and Horizon 3.ai special presentation I'm John Furrier host of thecube we're with Chris Hill sector head for strategic accounts and federal at Horizon 3.ai a great Innovative company Chris great to see you thanks for coming on thecube yeah like I said uh you know great to meet you John long time listener first time caller so excited to be here with you guys yeah we were talking before camera you had Splunk back in 2013 and I think 2012 was our first splunk.com and boy man you know talk about being in the right place at the right time now we're at another inflection point and Splunk continues to be relevant um and continuing to have that data driving Security in that interplay and your CEO former CTO of his plug as well at Horizon who's been on before really Innovative product you guys have but you know yeah don't wait for a breach to find out if you're logging the right data this is the topic of this thread Splunk is very much part of this new international expansion announcement uh with you guys tell us what are some of the challenges that you see where this is relevant for the Splunk and Horizon AI as you guys expand uh node zero out internationally yeah well so across so you know my role uh within Splunk it was uh working with our most strategic accounts and so I looked back to 2013 and I think about the sales process like working with with our small customers you know it was um it was still very siled back then like I was selling to an I.T team that was either using this for it operations um we generally would always even say yeah although we do security we weren't really designed for it we're a log management tool and we I'm sure you remember back then John we were like sort of stepping into the security space and and the public sector domain that I was in you know security was 70 of what we did when I look back to sort of uh the transformation that I was witnessing in that digital transformation um you know when I look at like 2019 to today you look at how uh the IT team and the security teams are being have been forced to break down those barriers that they used to sort of be silent away would not commute communicate one you know the security guys would be like oh this is my box I.T you're not allowed in today you can't get away with that and I think that the value that we bring to you know and of course Splunk has been a huge leader in that space and continues to do Innovation across the board but I think what we've we're seeing in the space and I was talking with Patrick Coughlin the SVP of uh security markets about this is that you know what we've been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data so Splunk itself is ulk know it's an ingest engine right the great reason people bought it was you could build these really fast dashboards and grab intelligence out of it but without data it doesn't do anything right so how do you drive and how do you bring more data in and most importantly from a customer perspective how do you bring the right data in and so if you think about what node zero and what we're doing in a horizon 3 is that sure we do pen testing but because we're an autonomous pen testing tool we do it continuously so this whole thought I'd be like oh crud like my customers oh yeah we got a pen test coming up it's gonna be six weeks the week oh yeah you know and everyone's gonna sit on their hands call me back in two months Chris we'll talk to you then right not not a real efficient way to test your environment and shoot we saw that with Uber this week right um you know and that's a case where we could have helped oh just right we could explain the Uber thing because it was a contractor just give a quick highlight of what happened so you can connect the doctor yeah no problem so um it was uh I got I think it was yeah one of those uh you know games where they would try and test an environment um and with the uh pen tester did was he kept on calling them MFA guys being like I need to reset my password we need to set my right password and eventually the um the customer service guy said okay I'm resetting it once he had reset and bypassed the multi-factor authentication he then was able to get in and get access to the building area that he was in or I think not the domain but he was able to gain access to a partial part of that Network he then paralleled over to what I would assume is like a VA VMware or some virtual machine that had notes that had all of the credentials for logging into various domains and So within minutes they had access and that's the sort of stuff that we do you know a lot of these tools like um you know you think about the cacophony of tools that are out there in a GTA architect architecture right I'm gonna get like a z-scale or I'm going to have uh octum and I have a Splunk I've been into the solar system I mean I don't mean to name names we have crowdstriker or Sentinel one in there it's just it's a cacophony of things that don't work together they weren't designed work together and so we have seen so many times in our business through our customer support and just working with customers when we do their pen tests that there will be 5 000 servers out there three are misconfigured those three misconfigurations will create the open door because remember the hacker only needs to be right once the defender needs to be right all the time and that's the challenge and so that's what I'm really passionate about what we're doing uh here at Horizon three I see this my digital transformation migration and security going on which uh we're at the tip of the spear it's why I joined sey Hall coming on this journey uh and just super excited about where the path's going and super excited about the relationship with Splunk I get into more details on some of the specifics of that but um you know well you're nailing I mean we've been doing a lot of things on super cloud and this next gen environment we're calling it next gen you're really seeing devops obviously devsecops has already won the it role has moved to the developer shift left is an indicator of that it's one of the many examples higher velocity code software supply chain you hear these things that means that it is now in the developer hands it is replaced by the new Ops data Ops teams and security where there's a lot of horizontal thinking to your point about access there's no more perimeter huge 100 right is really right on things one time you know to get in there once you're in then you can hang out move around move laterally big problem okay so we get that now the challenges for these teams as they are transitioning organizationally how do they figure out what to do okay this is the next step they already have Splunk so now they're kind of in transition while protecting for a hundred percent ratio of success so how would you look at that and describe the challenge is what do they do what is it what are the teams facing with their data and what's next what are they what are they what action do they take so let's use some vernacular that folks will know so if I think about devsecops right we both know what that means that I'm going to build security into the app it normally talks about sec devops right how am I building security around the perimeter of what's going inside my ecosystem and what are they doing and so if you think about what we're able to do with somebody like Splunk is we can pen test the entire environment from Soup To Nuts right so I'm going to test the end points through to its I'm going to look for misconfigurations I'm going to I'm going to look for um uh credential exposed credentials you know I'm going to look for anything I can in the environment again I'm going to do it at light speed and and what what we're doing for that SEC devops space is to you know did you detect that we were in your environment so did we alert Splunk or the Sim that there's someone in the environment laterally moving around did they more importantly did they log us into their environment and when do they detect that log to trigger that log did they alert on us and then finally most importantly for every CSO out there is going to be did they stop us and so that's how we we do this and I think you when speaking with um stay Hall before you know we've come up with this um boils but we call it fine fix verifying so what we do is we go in is we act as the attacker right we act in a production environment so we're not going to be we're a passive attacker but we will go in on credentialed on agents but we have to assume to have an assumed breach model which means we're going to put a Docker container in your environment and then we're going to fingerprint the environment so we're going to go out and do an asset survey now that's something that's not something that Splunk does super well you know so can Splunk see all the assets do the same assets marry up we're going to log all that data and think and then put load that into this long Sim or the smoke logging tools just to have it in Enterprise right that's an immediate future ad that they've got um and then we've got the fix so once we've completed our pen test um we are then going to generate a report and we can talk about these in a little bit later but the reports will show an executive summary the assets that we found which would be your asset Discovery aspect of that a fix report and the fixed report I think is probably the most important one it will go down and identify what we did how we did it and then how to fix that and then from that the pen tester or the organization should fix those then they go back and run another test and then they validate like a change detection environment to see hey did those fixes taste play take place and you know snehaw when he was the CTO of jsoc he shared with me a number of times about it's like man there would be 15 more items on next week's punch sheet that we didn't know about and it's and it has to do with how we you know how they were uh prioritizing the cves and whatnot because they would take all CBDs it was critical or non-critical and it's like we are able to create context in that environment that feeds better information into Splunk and whatnot that brings that brings up the efficiency for Splunk specifically the teams out there by the way the burnout thing is real I mean this whole I just finished my list and I got 15 more or whatever the list just can keeps growing how did node zero specifically help Splunk teams be more efficient like that's the question I want to get at because this seems like a very scale way for Splunk customers and teams service teams to be more so the question is how does node zero help make Splunk specifically their service teams be more efficient so so today in our early interactions we're building customers we've seen are five things um and I'll start with sort of identifying the blind spots right so kind of what I just talked about with you did we detect did we log did we alert did they stop node zero right and so I would I put that you know a more Layman's third grade term and if I was going to beat a fifth grader at this game would be we can be the sparring partner for a Splunk Enterprise customer a Splunk Essentials customer someone using Splunk soar or even just an Enterprise Splunk customer that may be a small shop with three people and just wants to know where am I exposed so by creating and generating these reports and then having um the API that actually generates the dashboard they can take all of these events that we've logged and log them in and then where that then comes in is number two is how do we prioritize those logs right so how do we create visibility to logs that that um are have critical impacts and again as I mentioned earlier not all cves are high impact regard and also not all or low right so if you daisy chain a bunch of low cves together boom I've got a mission critical AP uh CPE that needs to be fixed now such as a credential moving to an NT box that's got a text file with a bunch of passwords on it that would be very bad um and then third would be uh verifying that you have all of the hosts so one of the things that splunk's not particularly great at and they'll literate themselves they don't do asset Discovery so dude what assets do we see and what are they logging from that um and then for from um for every event that they are able to identify one of the cool things that we can do is actually create this low code no code environment so they could let you know Splunk customers can use Splunk sword to actually triage events and prioritize that event so where they're being routed within it to optimize the Sox team time to Market or time to triage any given event obviously reducing MTR and then finally I think one of the neatest things that we'll be seeing us develop is um our ability to build glass cables so behind me you'll see one of our triage events and how we build uh a Lockheed Martin kill chain on that with a glass table which is very familiar to the community we're going to have the ability and not too distant future to allow people to search observe on those iocs and if people aren't familiar with it ioc it's an instant of a compromise so that's a vector that we want to drill into and of course who's better at Drilling in the data and smoke yeah this is a critter this is an awesome Synergy there I mean I can see a Splunk customer going man this just gives me so much more capability action actionability and also real understanding and I think this is what I want to dig into if you don't mind understanding that critical impact okay is kind of where I see this coming got the data data ingest now data's data but the question is what not to log you know where are things misconfigured these are critical questions so can you talk about what it means to understand critical impact yeah so I think you know going back to the things that I just spoke about a lot of those cves where you'll see um uh low low low and then you daisy chain together and they're suddenly like oh this is high now but then your other impact of like if you're if you're a Splunk customer you know and I had it I had several of them I had one customer that you know terabytes of McAfee data being brought in and it was like all right there's a lot of other data that you probably also want to bring but they could only afford wanted to do certain data sets because that's and they didn't know how to prioritize or filter those data sets and so we provide that opportunity to say hey these are the critical ones to bring in but there's also the ones that you don't necessarily need to bring in because low cve in this case really does mean low cve like an ILO server would be one that um that's the print server uh where the uh your admin credentials are on on like a printer and so there will be credentials on that that's something that a hacker might go in to look at so although the cve on it is low is if you daisy chain with somebody that's able to get into that you might say Ah that's high and we would then potentially rank it giving our AI logic to say that's a moderate so put it on the scale and we prioritize those versus uh of all of these scanners just going to give you a bunch of CDs and good luck and translating that if I if I can and tell me if I'm wrong that kind of speaks to that whole lateral movement that's it challenge right print serve a great example looks stupid low end who's going to want to deal with the print server oh but it's connected into a critical system there's a path is that kind of what you're getting at yeah I use Daisy Chain I think that's from the community they came from uh but it's just a lateral movement it's exactly what they're doing in those low level low critical lateral movements is where the hackers are getting in right so that's the beauty thing about the uh the Uber example is that who would have thought you know I've got my monthly Factor authentication going in a human made a mistake we can't we can't not expect humans to make mistakes we're fallible right the reality is is once they were in the environment they could have protected themselves by running enough pen tests to know that they had certain uh exposed credentials that would have stopped the breach and they did not had not done that in their environment and I'm not poking yeah but it's an interesting Trend though I mean it's obvious if sometimes those low end items are also not protected well so it's easy to get at from a hacker standpoint but also the people in charge of them can be fished easily or spearfished because they're not paying attention because they don't have to no one ever told them hey be careful yeah for the community that I came from John that's exactly how they they would uh meet you at a uh an International Event um introduce themselves as a graduate student these are National actor States uh would you mind reviewing my thesis on such and such and I was at Adobe at the time that I was working on this instead of having to get the PDF they opened the PDF and whoever that customer was launches and I don't know if you remember back in like 2008 time frame there was a lot of issues around IP being by a nation state being stolen from the United States and that's exactly how they did it and John that's or LinkedIn hey I want to get a joke we want to hire you double the salary oh I'm gonna click on that for sure you know yeah right exactly yeah the one thing I would say to you is like uh when we look at like sort of you know because I think we did 10 000 pen tests last year is it's probably over that now you know we have these sort of top 10 ways that we think and find people coming into the environment the funniest thing is that only one of them is a cve related vulnerability like uh you know you guys know what they are right so it's it but it's it's like two percent of the attacks are occurring through the cves but yeah there's all that attention spent to that and very little attention spent to this pen testing side which is sort of this continuous threat you know monitoring space and and this vulnerability space where I think we play a such an important role and I'm so excited to be a part of the tip of the spear on this one yeah I'm old enough to know the movie sneakers which I loved as a you know watching that movie you know professional hackers are testing testing always testing the environment I love this I got to ask you as we kind of wrap up here Chris if you don't mind the the benefits to Professional Services from this Alliance big news Splunk and you guys work well together we see that clearly what are what other benefits do Professional Services teams see from the Splunk and Horizon 3.ai Alliance so if you're I think for from our our from both of our uh Partners uh as we bring these guys together and many of them already are the same partner right uh is that uh first off the licensing model is probably one of the key areas that we really excel at so if you're an end user you can buy uh for the Enterprise by the number of IP addresses you're using um but uh if you're a partner working with this there's solution ways that you can go in and we'll license as to msps and what that business model on msps looks like but the unique thing that we do here is this C plus license and so the Consulting plus license allows like a uh somebody a small to mid-sized to some very large uh you know Fortune 100 uh consulting firms use this uh by buying into a license called um Consulting plus where they can have unlimited uh access to as many IPS as they want but you can only run one test at a time and as you can imagine when we're going and hacking passwords and um checking hashes and decrypting hashes that can take a while so but for the right customer it's it's a perfect tool and so I I'm so excited about our ability to go to market with uh our partners so that we understand ourselves understand how not to just sell to or not tell just to sell through but we know how to sell with them as a good vendor partner I think that that's one thing that we've done a really good job building bring it into the market yeah I think also the Splunk has had great success how they've enabled uh partners and Professional Services absolutely you know the services that layer on top of Splunk are multi-fold tons of great benefits so you guys Vector right into that ride that way with friction and and the cool thing is that in you know in one of our reports which could be totally customized uh with someone else's logo we're going to generate you know so I I used to work in another organization it wasn't Splunk but we we did uh you know pen testing as for for customers and my pen testers would come on site they'd do the engagement and they would leave and then another release someone would be oh shoot we got another sector that was breached and they'd call you back you know four weeks later and so by August our entire pen testings teams would be sold out and it would be like well even in March maybe and they're like no no I gotta breach now and and and then when they do go in they go through do the pen test and they hand over a PDF and they pack on the back and say there's where your problems are you need to fix it and the reality is that what we're going to generate completely autonomously with no human interaction is we're going to go and find all the permutations of anything we found and the fix for those permutations and then once you've fixed everything you just go back and run another pen test it's you know for what people pay for one pen test they can have a tool that does that every every Pat patch on Tuesday and that's on Wednesday you know triage throughout the week green yellow red I wanted to see the colors show me green green is good right not red and one CIO doesn't want who doesn't want that dashboard right it's it's exactly it and we can help bring I think that you know I'm really excited about helping drive this with the Splunk team because they get that they understand that it's the green yellow red dashboard and and how do we help them find more green uh so that the other guys are in red yeah and get in the data and do the right thing and be efficient with how you use the data know what to look at so many things to pay attention to you know the combination of both and then go to market strategy real brilliant congratulations Chris thanks for coming on and sharing um this news with the detail around the Splunk in action around the alliance thanks for sharing John my pleasure thanks look forward to seeing you soon all right great we'll follow up and do another segment on devops and I.T and security teams as the new new Ops but and super cloud a bunch of other stuff so thanks for coming on and our next segment the CEO of horizon 3.aa will break down all the new news for us here on thecube you're watching thecube the leader in high tech Enterprise coverage [Music] yeah the partner program for us has been fantastic you know I think prior to that you know as most organizations most uh uh most Farmers most mssps might not necessarily have a a bench at all for penetration testing uh maybe they subcontract this work out or maybe they do it themselves but trying to staff that kind of position can be incredibly difficult for us this was a differentiator a a new a new partner a new partnership that allowed us to uh not only perform services for our customers but be able to provide a product by which that they can do it themselves so we work with our customers in a variety of ways some of them want more routine testing and perform this themselves but we're also a certified service provider of horizon 3 being able to perform uh penetration tests uh help review the the data provide color provide analysis for our customers in a broader sense right not necessarily the the black and white elements of you know what was uh what's critical what's high what's medium what's low what you need to fix but are there systemic issues this has allowed us to onboard new customers this has allowed us to migrate some penetration testing services to us from from competitors in the marketplace But ultimately this is occurring because the the product and the outcome are special they're unique and they're effective our customers like what they're seeing they like the routineness of it many of them you know again like doing this themselves you know being able to kind of pen test themselves parts of their networks um and the the new use cases right I'm a large organization I have eight to ten Acquisitions per year wouldn't it be great to have a tool to be able to perform a penetration test both internal and external of that acquisition before we integrate the two companies and maybe bringing on some risk it's a very effective partnership uh one that really is uh kind of taken our our Engineers our account Executives by storm um you know this this is a a partnership that's been very valuable to us [Music] a key part of the value and business model at Horizon 3 is enabling Partners to leverage node zero to make more revenue for themselves our goal is that for sixty percent of our Revenue this year will be originated by partners and that 95 of our Revenue next year will be originated by partners and so a key to that strategy is making us an integral part of your business models as a partner a key quote from one of our partners is that we enable every one of their business units to generate Revenue so let's talk about that in a little bit more detail first is that if you have a pen test Consulting business take Deloitte as an example what was six weeks of human labor at Deloitte per pen test has been cut down to four days of Labor using node zero to conduct reconnaissance find all the juicy interesting areas of the of the Enterprise that are exploitable and being able to go assess the entire organization and then all of those details get served up to the human to be able to look at understand and determine where to probe deeper so what you see in that pen test Consulting business is that node zero becomes a force multiplier where those Consulting teams were able to cover way more accounts and way more IPS within those accounts with the same or fewer consultants and so that directly leads to profit margin expansion for the Penn testing business itself because node 0 is a force multiplier the second business model here is if you're an mssp as an mssp you're already making money providing defensive cyber security operations for a large volume of customers and so what they do is they'll license node zero and use us as an upsell to their mssb business to start to deliver either continuous red teaming continuous verification or purple teaming as a service and so in that particular business model they've got an additional line of Revenue where they can increase the spend of their existing customers by bolting on node 0 as a purple team as a service offering the third business model or customer type is if you're an I.T services provider so as an I.T services provider you make money installing and configuring security products like Splunk or crowdstrike or hemio you also make money reselling those products and you also make money generating follow-on services to continue to harden your customer environments and so for them what what those it service providers will do is use us to verify that they've installed Splunk correctly improved to their customer that Splunk was installed correctly or crowdstrike was installed correctly using our results and then use our results to drive follow-on services and revenue and then finally we've got the value-added reseller which is just a straight up reseller because of how fast our sales Cycles are these vars are able to typically go from cold email to deal close in six to eight weeks at Horizon 3 at least a single sales engineer is able to run 30 to 50 pocs concurrently because our pocs are very lightweight and don't require any on-prem customization or heavy pre-sales post sales activity so as a result we're able to have a few amount of sellers driving a lot of Revenue and volume for us well the same thing applies to bars there isn't a lot of effort to sell the product or prove its value so vars are able to sell a lot more Horizon 3 node zero product without having to build up a huge specialist sales organization so what I'm going to do is talk through uh scenario three here as an I.T service provider and just how powerful node zero can be in driving additional Revenue so in here think of for every one dollar of node zero license purchased by the IT service provider to do their business it'll generate ten dollars of additional revenue for that partner so in this example kidney group uses node 0 to verify that they have installed and deployed Splunk correctly so Kitty group is a Splunk partner they they sell it services to install configure deploy and maintain Splunk and as they deploy Splunk they're going to use node 0 to attack the environment and make sure that the right logs and alerts and monitoring are being handled within the Splunk deployment so it's a way of doing QA or verifying that Splunk has been configured correctly and that's going to be internally used by kidney group to prove the quality of their services that they've just delivered then what they're going to do is they're going to show and leave behind that node zero Report with their client and that creates a resell opportunity for for kidney group to resell node 0 to their client because their client is seeing the reports and the results and saying wow this is pretty amazing and those reports can be co-branded where it's a pen testing report branded with kidney group but it says powered by Horizon three under it from there kidney group is able to take the fixed actions report that's automatically generated with every pen test through node zero and they're able to use that as the starting point for a statement of work to sell follow-on services to fix all of the problems that node zero identified fixing l11r misconfigurations fixing or patching VMware or updating credentials policies and so on so what happens is node 0 has found a bunch of problems the client often lacks the capacity to fix and so kidney group can use that lack of capacity by the client as a follow-on sales opportunity for follow-on services and finally based on the findings from node zero kidney group can look at that report and say to the customer you know customer if you bought crowdstrike you'd be able to uh prevent node Zero from attacking and succeeding in the way that it did for if you bought humano or if you bought Palo Alto networks or if you bought uh some privileged access management solution because of what node 0 was able to do with credential harvesting and attacks and so as a result kidney group is able to resell other security products within their portfolio crowdstrike Falcon humano Polito networks demisto Phantom and so on based on the gaps that were identified by node zero and that pen test and what that creates is another feedback loop where kidney group will then go use node 0 to verify that crowdstrike product has actually been installed and configured correctly and then this becomes the cycle of using node 0 to verify a deployment using that verification to drive a bunch of follow-on services and resell opportunities which then further drives more usage of the product now the way that we licensed is that it's a usage-based license licensing model so that the partner will grow their node zero Consulting plus license as they grow their business so for example if you're a kidney group then week one you've got you're going to use node zero to verify your Splunk install in week two if you have a pen testing business you're going to go off and use node zero to be a force multiplier for your pen testing uh client opportunity and then if you have an mssp business then in week three you're going to use node zero to go execute a purple team mssp offering for your clients so not necessarily a kidney group but if you're a Deloitte or ATT these larger companies and you've got multiple lines of business if you're Optive for instance you all you have to do is buy one Consulting plus license and you're going to be able to run as many pen tests as you want sequentially so now you can buy a single license and use that one license to meet your week one client commitments and then meet your week two and then meet your week three and as you grow your business you start to run multiple pen tests concurrently so in week one you've got to do a Splunk verify uh verify Splunk install and you've got to run a pen test and you've got to do a purple team opportunity you just simply expand the number of Consulting plus licenses from one license to three licenses and so now as you systematically grow your business you're able to grow your node zero capacity with you giving you predictable cogs predictable margins and once again 10x additional Revenue opportunity for that investment in the node zero Consulting plus license my name is Saint I'm the co-founder and CEO here at Horizon 3. I'm going to talk to you today about why it's important to look at your Enterprise Through The Eyes of an attacker the challenge I had when I was a CIO in banking the CTO at Splunk and serving within the Department of Defense is that I had no idea I was Secure until the bad guys had showed up am I logging the right data am I fixing the right vulnerabilities are my security tools that I've paid millions of dollars for actually working together to defend me and the answer is I don't know does my team actually know how to respond to a breach in the middle of an incident I don't know I've got to wait for the bad guys to show up and so the challenge I had was how do we proactively verify our security posture I tried a variety of techniques the first was the use of vulnerability scanners and the challenge with vulnerability scanners is being vulnerable doesn't mean you're exploitable I might have a hundred thousand findings from my scanner of which maybe five or ten can actually be exploited in my environment the other big problem with scanners is that they can't chain weaknesses together from machine to machine so if you've got a thousand machines in your environment or more what a vulnerability scanner will do is tell you you have a problem on machine one and separately a problem on machine two but what they can tell you is that an attacker could use a load from machine one plus a low from machine two to equal to critical in your environment and what attackers do in their tactics is they chain together misconfigurations dangerous product defaults harvested credentials and exploitable vulnerabilities into attack paths across different machines so to address the attack pads across different machines I tried layering in consulting-based pen testing and the issue is when you've got thousands of hosts or hundreds of thousands of hosts in your environment human-based pen testing simply doesn't scale to test an infrastructure of that size moreover when they actually do execute a pen test and you get the report oftentimes you lack the expertise within your team to quickly retest to verify that you've actually fixed the problem and so what happens is you end up with these pen test reports that are incomplete snapshots and quickly going stale and then to mitigate that problem I tried using breach and attack simulation tools and the struggle with these tools is one I had to install credentialed agents everywhere two I had to write my own custom attack scripts that I didn't have much talent for but also I had to maintain as my environment changed and then three these types of tools were not safe to run against production systems which was the the majority of my attack surface so that's why we went off to start Horizon 3. so Tony and I met when we were in Special Operations together and the challenge we wanted to solve was how do we do infrastructure security testing at scale by giving the the power of a 20-year pen testing veteran into the hands of an I.T admin a network engineer in just three clicks and the whole idea is we enable these fixers The Blue Team to be able to run node Zero Hour pen testing product to quickly find problems in their environment that blue team will then then go off and fix the issues that were found and then they can quickly rerun the attack to verify that they fixed the problem and the whole idea is delivering this without requiring custom scripts be developed without requiring credential agents be installed and without requiring the use of external third-party consulting services or Professional Services self-service pen testing to quickly Drive find fix verify there are three primary use cases that our customers use us for the first is the sock manager that uses us to verify that their security tools are actually effective to verify that they're logging the right data in Splunk or in their Sim to verify that their managed security services provider is able to quickly detect and respond to an attack and hold them accountable for their slas or that the sock understands how to quickly detect and respond and measuring and verifying that or that the variety of tools that you have in your stack most organizations have 130 plus cyber security tools none of which are designed to work together are actually working together the second primary use case is proactively hardening and verifying your systems this is when the I that it admin that network engineer they're able to run self-service pen tests to verify that their Cisco environment is installed in hardened and configured correctly or that their credential policies are set up right or that their vcenter or web sphere or kubernetes environments are actually designed to be secure and what this allows the it admins and network Engineers to do is shift from running one or two pen tests a year to 30 40 or more pen tests a month and you can actually wire those pen tests into your devops process or into your detection engineering and the change management processes to automatically trigger pen tests every time there's a change in your environment the third primary use case is for those organizations lucky enough to have their own internal red team they'll use node zero to do reconnaissance and exploitation at scale and then use the output as a starting point for the humans to step in and focus on the really hard juicy stuff that gets them on stage at Defcon and so these are the three primary use cases and what we'll do is zoom into the find fix verify Loop because what I've found in my experience is find fix verify is the future operating model for cyber security organizations and what I mean here is in the find using continuous pen testing what you want to enable is on-demand self-service pen tests you want those pen tests to find attack pads at scale spanning your on-prem infrastructure your Cloud infrastructure and your perimeter because attackers don't only state in one place they will find ways to chain together a perimeter breach a credential from your on-prem to gain access to your cloud or some other permutation and then the third part in continuous pen testing is attackers don't focus on critical vulnerabilities anymore they know we've built vulnerability Management Programs to reduce those vulnerabilities so attackers have adapted and what they do is chain together misconfigurations in your infrastructure and software and applications with dangerous product defaults with exploitable vulnerabilities and through the collection of credentials through a mix of techniques at scale once you've found those problems the next question is what do you do about it well you want to be able to prioritize fixing problems that are actually exploitable in your environment that truly matter meaning they're going to lead to domain compromise or domain user compromise or access your sensitive data the second thing you want to fix is making sure you understand what risk your crown jewels data is exposed to where is your crown jewels data is in the cloud is it on-prem has it been copied to a share drive that you weren't aware of if a domain user was compromised could they access that crown jewels data you want to be able to use the attacker's perspective to secure the critical data you have in your infrastructure and then finally as you fix these problems you want to quickly remediate and retest that you've actually fixed the issue and this fine fix verify cycle becomes that accelerator that drives purple team culture the third part here is verify and what you want to be able to do in the verify step is verify that your security tools and processes in people can effectively detect and respond to a breach you want to be able to integrate that into your detection engineering processes so that you know you're catching the right security rules or that you've deployed the right configurations you also want to make sure that your environment is adhering to the best practices around systems hardening in cyber resilience and finally you want to be able to prove your security posture over a time to your board to your leadership into your regulators so what I'll do now is zoom into each of these three steps so when we zoom in to find here's the first example using node 0 and autonomous pen testing and what an attacker will do is find a way to break through the perimeter in this example it's very easy to misconfigure kubernetes to allow an attacker to gain remote code execution into your on-prem kubernetes environment and break through the perimeter and from there what the attacker is going to do is conduct Network reconnaissance and then find ways to gain code execution on other machines in the environment and as they get code execution they start to dump credentials collect a bunch of ntlm hashes crack those hashes using open source and dark web available data as part of those attacks and then reuse those credentials to log in and laterally maneuver throughout the environment and then as they loudly maneuver they can reuse those credentials and use credential spraying techniques and so on to compromise your business email to log in as admin into your cloud and this is a very common attack and rarely is a CV actually needed to execute this attack often it's just a misconfiguration in kubernetes with a bad credential policy or password policy combined with bad practices of credential reuse across the organization here's another example of an internal pen test and this is from an actual customer they had 5 000 hosts within their environment they had EDR and uba tools installed and they initiated in an internal pen test on a single machine from that single initial access point node zero enumerated the network conducted reconnaissance and found five thousand hosts were accessible what node 0 will do under the covers is organize all of that reconnaissance data into a knowledge graph that we call the Cyber terrain map and that cyber Terrain map becomes the key data structure that we use to efficiently maneuver and attack and compromise your environment so what node zero will do is they'll try to find ways to get code execution reuse credentials and so on in this customer example they had Fortinet installed as their EDR but node 0 was still able to get code execution on a Windows machine from there it was able to successfully dump credentials including sensitive credentials from the lsas process on the Windows box and then reuse those credentials to log in as domain admin in the network and once an attacker becomes domain admin they have the keys to the kingdom they can do anything they want so what happened here well it turns out Fortinet was misconfigured on three out of 5000 machines bad automation the customer had no idea this had happened they would have had to wait for an attacker to show up to realize that it was misconfigured the second thing is well why didn't Fortinet stop the credential pivot in the lateral movement and it turned out the customer didn't buy the right modules or turn on the right services within that particular product and we see this not only with Ford in it but we see this with Trend Micro and all the other defensive tools where it's very easy to miss a checkbox in the configuration that will do things like prevent credential dumping the next story I'll tell you is attackers don't have to hack in they log in so another infrastructure pen test a typical technique attackers will take is man in the middle uh attacks that will collect hashes so in this case what an attacker will do is leverage a tool or technique called responder to collect ntlm hashes that are being passed around the network and there's a variety of reasons why these hashes are passed around and it's a pretty common misconfiguration but as an attacker collects those hashes then they start to apply techniques to crack those hashes so they'll pass the hash and from there they will use open source intelligence common password structures and patterns and other types of techniques to try to crack those hashes into clear text passwords so here node 0 automatically collected hashes it automatically passed the hashes to crack those credentials and then from there it starts to take the domain user user ID passwords that it's collected and tries to access different services and systems in your Enterprise in this case node 0 is able to successfully gain access to the Office 365 email environment because three employees didn't have MFA configured so now what happens is node 0 has a placement and access in the business email system which sets up the conditions for fraud lateral phishing and other techniques but what's especially insightful here is that 80 of the hashes that were collected in this pen test were cracked in 15 minutes or less 80 percent 26 of the user accounts had a password that followed a pretty obvious pattern first initial last initial and four random digits the other thing that was interesting is 10 percent of service accounts had their user ID the same as their password so VMware admin VMware admin web sphere admin web Square admin so on and so forth and so attackers don't have to hack in they just log in with credentials that they've collected the next story here is becoming WS AWS admin so in this example once again internal pen test node zero gets initial access it discovers 2 000 hosts are network reachable from that environment if fingerprints and organizes all of that data into a cyber Terrain map from there it it fingerprints that hpilo the integrated lights out service was running on a subset of hosts hpilo is a service that is often not instrumented or observed by security teams nor is it easy to patch as a result attackers know this and immediately go after those types of services so in this case that ILO service was exploitable and were able to get code execution on it ILO stores all the user IDs and passwords in clear text in a particular set of processes so once we gain code execution we were able to dump all of the credentials and then from there laterally maneuver to log in to the windows box next door as admin and then on that admin box we're able to gain access to the share drives and we found a credentials file saved on a share Drive from there it turned out that credentials file was the AWS admin credentials file giving us full admin authority to their AWS accounts not a single security alert was triggered in this attack because the customer wasn't observing the ILO service and every step thereafter was a valid login in the environment and so what do you do step one patch the server step two delete the credentials file from the share drive and then step three is get better instrumentation on privileged access users and login the final story I'll tell is a typical pattern that we see across the board with that combines the various techniques I've described together where an attacker is going to go off and use open source intelligence to find all of the employees that work at your company from there they're going to look up those employees on dark web breach databases and other forms of information and then use that as a starting point to password spray to compromise a domain user all it takes is one employee to reuse a breached password for their Corporate email or all it takes is a single employee to have a weak password that's easily guessable all it takes is one and once the attacker is able to gain domain user access in most shops domain user is also the local admin on their laptop and once your local admin you can dump Sam and get local admin until M hashes you can use that to reuse credentials again local admin on neighboring machines and attackers will start to rinse and repeat then eventually they're able to get to a point where they can dump lsas or by unhooking the anti-virus defeating the EDR or finding a misconfigured EDR as we've talked about earlier to compromise the domain and what's consistent is that the fundamentals are broken at these shops they have poor password policies they don't have least access privilege implemented active directory groups are too permissive where domain admin or domain user is also the local admin uh AV or EDR Solutions are misconfigured or easily unhooked and so on and what we found in 10 000 pen tests is that user Behavior analytics tools never caught us in that lateral movement in part because those tools require pristine logging data in order to work and also it becomes very difficult to find that Baseline of normal usage versus abnormal usage of credential login another interesting Insight is there were several Marquee brand name mssps that were defending our customers environment and for them it took seven hours to detect and respond to the pen test seven hours the pen test was over in less than two hours and so what you had was an egregious violation of the service level agreements that that mssp had in place and the customer was able to use us to get service credit and drive accountability of their sock and of their provider the third interesting thing is in one case it took us seven minutes to become domain admin in a bank that bank had every Gucci security tool you could buy yet in 7 minutes and 19 seconds node zero started as an unauthenticated member of the network and was able to escalate privileges through chaining and misconfigurations in lateral movement and so on to become domain admin if it's seven minutes today we should assume it'll be less than a minute a year or two from now making it very difficult for humans to be able to detect and respond to that type of Blitzkrieg attack so that's in the find it's not just about finding problems though the bulk of the effort should be what to do about it the fix and the verify so as you find those problems back to kubernetes as an example we will show you the path here is the kill chain we took to compromise that environment we'll show you the impact here is the impact or here's the the proof of exploitation that we were able to use to be able to compromise it and there's the actual command that we executed so you could copy and paste that command and compromise that cubelet yourself if you want and then the impact is we got code execution and we'll actually show you here is the impact this is a critical here's why it enabled perimeter breach affected applications will tell you the specific IPS where you've got the problem how it maps to the miter attack framework and then we'll tell you exactly how to fix it we'll also show you what this problem enabled so you can accurately prioritize why this is important or why it's not important the next part is accurate prioritization the hardest part of my job as a CIO was deciding what not to fix so if you take SMB signing not required as an example by default that CVSs score is a one out of 10. but this misconfiguration is not a cve it's a misconfig enable an attacker to gain access to 19 credentials including one domain admin two local admins and access to a ton of data because of that context this is really a 10 out of 10. you better fix this as soon as possible however of the seven occurrences that we found it's only a critical in three out of the seven and these are the three specific machines and we'll tell you the exact way to fix it and you better fix these as soon as possible for these four machines over here these didn't allow us to do anything of consequence so that because the hardest part is deciding what not to fix you can justifiably choose not to fix these four issues right now and just add them to your backlog and surge your team to fix these three as quickly as possible and then once you fix these three you don't have to re-run the entire pen test you can select these three and then one click verify and run a very narrowly scoped pen test that is only testing this specific issue and what that creates is a much faster cycle of finding and fixing problems the other part of fixing is verifying that you don't have sensitive data at risk so once we become a domain user we're able to use those domain user credentials and try to gain access to databases file shares S3 buckets git repos and so on and help you understand what sensitive data you have at risk so in this example a green checkbox means we logged in as a valid domain user we're able to get read write access on the database this is how many records we could have accessed and we don't actually look at the values in the database but we'll show you the schema so you can quickly characterize that pii data was at risk here and we'll do that for your file shares and other sources of data so now you can accurately articulate the data you have at risk and prioritize cleaning that data up especially data that will lead to a fine or a big news issue so that's the find that's the fix now we're going to talk about the verify the key part in verify is embracing and integrating with detection engineering practices so when you think about your layers of security tools you've got lots of tools in place on average 130 tools at any given customer but these tools were not designed to work together so when you run a pen test what you want to do is say did you detect us did you log us did you alert on us did you stop us and from there what you want to see is okay what are the techniques that are commonly used to defeat an environment to actually compromise if you look at the top 10 techniques we use and there's far more than just these 10 but these are the most often executed nine out of ten have nothing to do with cves it has to do with misconfigurations dangerous product defaults bad credential policies and it's how we chain those together to become a domain admin or compromise a host so what what customers will do is every single attacker command we executed is provided to you as an attackivity log so you can actually see every single attacker command we ran the time stamp it was executed the hosts it executed on and how it Maps the minor attack tactics so our customers will have are these attacker logs on one screen and then they'll go look into Splunk or exabeam or Sentinel one or crowdstrike and say did you detect us did you log us did you alert on us or not and to make that even easier if you take this example hey Splunk what logs did you see at this time on the VMware host because that's when node 0 is able to dump credentials and that allows you to identify and fix your logging blind spots to make that easier we've got app integration so this is an actual Splunk app in the Splunk App Store and what you can come is inside the Splunk console itself you can fire up the Horizon 3 node 0 app all of the pen test results are here so that you can see all of the results in one place and you don't have to jump out of the tool and what you'll show you as I skip forward is hey there's a pen test here are the critical issues that we've identified for that weaker default issue here are the exact commands we executed and then we will automatically query into Splunk all all terms on between these times on that endpoint that relate to this attack so you can now quickly within the Splunk environment itself figure out that you're missing logs or that you're appropriately catching this issue and that becomes incredibly important in that detection engineering cycle that I mentioned earlier so how do our customers end up using us they shift from running one pen test a year to 30 40 pen tests a month oftentimes wiring us into their deployment automation to automatically run pen tests the other part that they'll do is as they run more pen tests they find more issues but eventually they hit this inflection point where they're able to rapidly clean up their environment and that inflection point is because the red and the blue teams start working together in a purple team culture and now they're working together to proactively harden their environment the other thing our customers will do is run us from different perspectives they'll first start running an RFC 1918 scope to see once the attacker gained initial access in a part of the network that had wide access what could they do and then from there they'll run us within a specific Network segment okay from within that segment could the attacker break out and gain access to another segment then they'll run us from their work from home environment could they Traverse the VPN and do something damaging and once they're in could they Traverse the VPN and get into my cloud then they'll break in from the outside all of these perspectives are available to you in Horizon 3 and node zero as a single SKU and you can run as many pen tests as you want if you run a phishing campaign and find that an intern in the finance department had the worst phishing behavior you can then inject their credentials and actually show the end-to-end story of how an attacker fished gained credentials of an intern and use that to gain access to sensitive financial data so what our customers end up doing is running multiple attacks from multiple perspectives and looking at those results over time I'll leave you two things one is what is the AI in Horizon 3 AI those knowledge graphs are the heart and soul of everything that we do and we use machine learning reinforcement techniques reinforcement learning techniques Markov decision models and so on to be able to efficiently maneuver and analyze the paths in those really large graphs we also use context-based scoring to prioritize weaknesses and we're also able to drive collective intelligence across all of the operations so the more pen tests we run the smarter we get and all of that is based on our knowledge graph analytics infrastructure that we have finally I'll leave you with this was my decision criteria when I was a buyer for my security testing strategy what I cared about was coverage I wanted to be able to assess my on-prem cloud perimeter and work from home and be safe to run in production I want to be able to do that as often as I wanted I want to be able to run pen tests in hours or days not weeks or months so I could accelerate that fine fix verify loop I wanted my it admins and network Engineers with limited offensive experience to be able to run a pen test in a few clicks through a self-service experience and not have to install agent and not have to write custom scripts and finally I didn't want to get nickeled and dimed on having to buy different types of attack modules or different types of attacks I wanted a single annual subscription that allowed me to run any type of attack as often as I wanted so I could look at my Trends in directions over time so I hope you found this talk valuable uh we're easy to find and I look forward to seeing seeing you use a product and letting our results do the talking when you look at uh you know kind of the way no our pen testing algorithms work is we dynamically select uh how to compromise an environment based on what we've discovered and the goal is to become a domain admin compromise a host compromise domain users find ways to encrypt data steal sensitive data and so on but when you look at the the top 10 techniques that we ended up uh using to compromise environments the first nine have nothing to do with cves and that's the reality cves are yes a vector but less than two percent of cves are actually used in a compromise oftentimes it's some sort of credential collection credential cracking uh credential pivoting and using that to become an admin and then uh compromising environments from that point on so I'll leave this up for you to kind of read through and you'll have the slides available for you but I found it very insightful that organizations and ourselves when I was a GE included invested heavily in just standard vulnerability Management Programs when I was at DOD that's all disa cared about asking us about was our our kind of our cve posture but the attackers have adapted to not rely on cves to get in because they know that organizations are actively looking at and patching those cves and instead they're chaining together credentials from one place with misconfigurations and dangerous product defaults in another to take over an environment a concrete example is by default vcenter backups are not encrypted and so as if an attacker finds vcenter what they'll do is find the backup location and there are specific V sender MTD files where the admin credentials are parsippled in the binaries so you can actually as an attacker find the right MTD file parse out the binary and now you've got the admin credentials for the vcenter environment and now start to log in as admin there's a bad habit by signal officers and Signal practitioners in the in the Army and elsewhere where the the VM notes section of a virtual image has the password for the VM well those VM notes are not stored encrypted and attackers know this and they're able to go off and find the VMS that are unencrypted find the note section and pull out the passwords for those images and then reuse those credentials across the board so I'll pause here and uh you know Patrick love you get some some commentary on on these techniques and other things that you've seen and what we'll do in the last say 10 to 15 minutes is uh is rolled through a little bit more on what do you do about it yeah yeah no I love it I think um I think this is pretty exhaustive what I like about what you've done here is uh you know we've seen we've seen double-digit increases in the number of organizations that are reporting actual breaches year over year for the last um for the last three years and it's often we kind of in the Zeitgeist we pegged that on ransomware which of course is like incredibly important and very top of mind um but what I like about what you have here is you know we're reminding the audience that the the attack surface area the vectors the matter um you know has to be more comprehensive than just thinking about ransomware scenarios yeah right on um so let's build on this when you think about your defense in depth you've got multiple security controls that you've purchased and integrated and you've got that redundancy if a control fails but the reality is that these security tools aren't designed to work together so when you run a pen test what you want to ask yourself is did you detect node zero did you log node zero did you alert on node zero and did you stop node zero and when you think about how to do that every single attacker command executed by node zero is available in an attacker log so you can now see you know at the bottom here vcenter um exploit at that time on that IP how it aligns to minor attack what you want to be able to do is go figure out did your security tools catch this or not and that becomes very important in using the attacker's perspective to improve your defensive security controls and so the way we've tried to make this easier back to like my my my the you know I bleed Green in many ways still from my smoke background is you want to be able to and what our customers do is hey we'll look at the attacker logs on one screen and they'll look at what did Splunk see or Miss in another screen and then they'll use that to figure out what their logging blind spots are and what that where that becomes really interesting is we've actually built out an integration into Splunk where there's a Splunk app you can download off of Splunk base and you'll get all of the pen test results right there in the Splunk console and from that Splunk console you're gonna be able to see these are all the pen tests that were run these are the issues that were found um so you can look at that particular pen test here are all of the weaknesses that were identified for that particular pen test and how they categorize out for each of those weaknesses you can click on any one of them that are critical in this case and then we'll tell you for that weakness and this is where where the the punch line comes in so I'll pause the video here for that weakness these are the commands that were executed on these endpoints at this time and then we'll actually query Splunk for that um for that IP address or containing that IP and these are the source types that surface any sort of activity so what we try to do is help you as quickly and efficiently as possible identify the logging blind spots in your Splunk environment based on the attacker's perspective so as this video kind of plays through you can see it Patrick I'd love to get your thoughts um just seeing so many Splunk deployments and the effectiveness of those deployments and and how this is going to help really Elevate the effectiveness of all of your Splunk customers yeah I'm super excited about this I mean I think this these kinds of purpose-built integration snail really move the needle for our customers I mean at the end of the day when I think about the power of Splunk I think about a product I was first introduced to 12 years ago that was an on-prem piece of software you know and at the time it sold on sort of Perpetual and term licenses but one made it special was that it could it could it could eat data at a speed that nothing else that I'd have ever seen you can ingest massively scalable amounts of data uh did cool things like schema on read which facilitated that there was this language called SPL that you could nerd out about uh and you went to a conference once a year and you talked about all the cool things you were splunking right but now as we think about the next phase of our growth um we live in a heterogeneous environment where our customers have so many different tools and data sources that are ever expanding and as you look at the as you look at the role of the ciso it's mind-blowing to me the amount of sources Services apps that are coming into the ciso span of let's just call it a span of influence in the last three years uh you know we're seeing things like infrastructure service level visibility application performance monitoring stuff that just never made sense for the security team to have visibility into you um at least not at the size and scale which we're demanding today um and and that's different and this isn't this is why it's so important that we have these joint purpose-built Integrations that um really provide more prescription to our customers about how do they walk on that Journey towards maturity what does zero to one look like what does one to two look like whereas you know 10 years ago customers were happy with platforms today they want integration they want Solutions and they want to drive outcomes and I think this is a great example of how together we are stepping to the evolving nature of the market and also the ever-evolving nature of the threat landscape and what I would say is the maturing needs of the customer in that environment yeah for sure I think especially if if we all anticipate budget pressure over the next 18 months due to the economy and elsewhere while the security budgets are not going to ever I don't think they're going to get cut they're not going to grow as fast and there's a lot more pressure on organizations to extract more value from their existing Investments as well as extracting more value and more impact from their existing teams and so security Effectiveness Fierce prioritization and automation I think become the three key themes of security uh over the next 18 months so I'll do very quickly is run through a few other use cases um every host that we identified in the pen test were able to score and say this host allowed us to do something significant therefore it's it's really critical you should be increasing your logging here hey these hosts down here we couldn't really do anything as an attacker so if you do have to make trade-offs you can make some trade-offs of your logging resolution at the lower end in order to increase logging resolution on the upper end so you've got that level of of um justification for where to increase or or adjust your logging resolution another example is every host we've discovered as an attacker we Expose and you can export and we want to make sure is every host we found as an attacker is being ingested from a Splunk standpoint a big issue I had as a CIO and user of Splunk and other tools is I had no idea if there were Rogue Raspberry Pi's on the network or if a new box was installed and whether Splunk was installed on it or not so now you can quickly start to correlate what hosts did we see and how does that reconcile with what you're logging from uh finally or second to last use case here on the Splunk integration side is for every single problem we've found we give multiple options for how to fix it this becomes a great way to prioritize what fixed actions to automate in your soar platform and what we want to get to eventually is being able to automatically trigger soar actions to fix well-known problems like automatically invalidating passwords for for poor poor passwords in our credentials amongst a whole bunch of other things we could go off and do and then finally if there is a well-known kill chain or attack path one of the things I really wish I could have done when I was a Splunk customer was take this type of kill chain that actually shows a path to domain admin that I'm sincerely worried about and use it as a glass table over which I could start to layer possible indicators of compromise and now you've got a great starting point for glass tables and iocs for actual kill chains that we know are exploitable in your environment and that becomes some super cool Integrations that we've got on the roadmap between us and the Splunk security side of the house so what I'll leave with actually Patrick before I do that you know um love to get your comments and then I'll I'll kind of leave with one last slide on this wartime security mindset uh pending you know assuming there's no other questions no I love it I mean I think this kind of um it's kind of glass table's approach to how do you how do you sort of visualize these workflows and then use things like sore and orchestration and automation to operationalize them is exactly where we see all of our customers going and getting away from I think an over engineered approach to soar with where it has to be super technical heavy with you know python programmers and getting more to this visual view of workflow creation um that really demystifies the power of Automation and also democratizes it so you don't have to have these programming languages in your resume in order to start really moving the needle on workflow creation policy enforcement and ultimately driving automation coverage across more and more of the workflows that your team is seeing yeah I think that between us being able to visualize the actual kill chain or attack path with you know think of a of uh the soar Market I think going towards this no code low code um you know configurable sore versus coded sore that's going to really be a game changer in improve or giving security teams a force multiplier so what I'll leave you with is this peacetime mindset of security no longer is sustainable we really have to get out of checking the box and then waiting for the bad guys to show up to verify that security tools are are working or not and the reason why we've got to really do that quickly is there are over a thousand companies that withdrew from the Russian economy over the past uh nine months due to the Ukrainian War there you should expect every one of them to be punished by the Russians for leaving and punished from a cyber standpoint and this is no longer about financial extortion that is ransomware this is about punishing and destroying companies and you can punish any one of these companies by going after them directly or by going after their suppliers and their Distributors so suddenly your attack surface is no more no longer just your own Enterprise it's how you bring your goods to Market and it's how you get your goods created because while I may not be able to disrupt your ability to harvest fruit if I can get those trucks stuck at the border I can increase spoilage and have the same effect and what we should expect to see is this idea of cyber-enabled economic Warfare where if we issue a sanction like Banning the Russians from traveling there is a cyber-enabled counter punch which is corrupt and destroy the American Airlines database that is below the threshold of War that's not going to trigger the 82nd Airborne to be mobilized but it's going to achieve the right effect ban the sale of luxury goods disrupt the supply chain and create shortages banned Russian oil and gas attack refineries to call a 10x spike in gas prices three days before the election this is the future and therefore I think what we have to do is shift towards a wartime mindset which is don't trust your security posture verify it see yourself Through The Eyes of the attacker build that incident response muscle memory and drive better collaboration between the red and the blue teams your suppliers and Distributors and your information uh sharing organization they have in place and what's really valuable for me as a Splunk customer was when a router crashes at that moment you don't know if it's due to an I.T Administration problem or an attacker and what you want to have are different people asking different questions of the same data and you want to have that integrated triage process of an I.T lens to that problem a security lens to that problem and then from there figuring out is is this an IT workflow to execute or a security incident to execute and you want to have all of that as an integrated team integrated process integrated technology stack and this is something that I very care I cared very deeply about as both a Splunk customer and a Splunk CTO that I see time and time again across the board so Patrick I'll leave you with the last word the final three minutes here and I don't see any open questions so please take us home oh man see how you think we spent hours and hours prepping for this together that that last uh uh 40 seconds of your talk track is probably one of the things I'm most passionate about in this industry right now uh and I think nist has done some really interesting work here around building cyber resilient organizations that have that has really I think helped help the industry see that um incidents can come from adverse conditions you know stress is uh uh performance taxations in the infrastructure service or app layer and they can come from malicious compromises uh Insider threats external threat actors and the more that we look at this from the perspective of of a broader cyber resilience Mission uh in a wartime mindset uh I I think we're going to be much better off and and will you talk about with operationally minded ice hacks information sharing intelligence sharing becomes so important in these wartime uh um situations and you know we know not all ice acts are created equal but we're also seeing a lot of um more ad hoc information sharing groups popping up so look I think I think you framed it really really well I love the concept of wartime mindset and um I I like the idea of applying a cyber resilience lens like if you have one more layer on top of that bottom right cake you know I think the it lens and the security lens they roll up to this concept of cyber resilience and I think this has done some great work there for us yeah you're you're spot on and that that is app and that's gonna I think be the the next um terrain that that uh that you're gonna see vendors try to get after but that I think Splunk is best position to win okay that's a wrap for this special Cube presentation you heard all about the global expansion of horizon 3.ai's partner program for their Partners have a unique opportunity to take advantage of their node zero product uh International go to Market expansion North America channel Partnerships and just overall relationships with companies like Splunk to make things more comprehensive in this disruptive cyber security world we live in and hope you enjoyed this program all the videos are available on thecube.net as well as check out Horizon 3 dot AI for their pen test Automation and ultimately their defense system that they use for testing always the environment that you're in great Innovative product and I hope you enjoyed the program again I'm John Furrier host of the cube thanks for watching
SUMMARY :
that's the sort of stuff that we do you
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Patrick Coughlin | PERSON | 0.99+ |
Jennifer Lee | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
Raina Richter | PERSON | 0.99+ |
Singapore | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Patrick | PERSON | 0.99+ |
Frankfurt | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
20-year | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
seven minutes | QUANTITY | 0.99+ |
95 | QUANTITY | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
2.7 billion | QUANTITY | 0.99+ |
March | DATE | 0.99+ |
Finland | LOCATION | 0.99+ |
seven hours | QUANTITY | 0.99+ |
sixty percent | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Sweden | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
six weeks | QUANTITY | 0.99+ |
seven hours | QUANTITY | 0.99+ |
19 credentials | QUANTITY | 0.99+ |
ten dollars | QUANTITY | 0.99+ |
Jennifer | PERSON | 0.99+ |
5 000 hosts | QUANTITY | 0.99+ |
Horizon 3 | TITLE | 0.99+ |
Wednesday | DATE | 0.99+ |
30 | QUANTITY | 0.99+ |
eight | QUANTITY | 0.99+ |
Asia Pacific | LOCATION | 0.99+ |
American Airlines | ORGANIZATION | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
three licenses | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
2019 | DATE | 0.99+ |
European Union | ORGANIZATION | 0.99+ |
six | QUANTITY | 0.99+ |
seven occurrences | QUANTITY | 0.99+ |
70 | QUANTITY | 0.99+ |
three people | QUANTITY | 0.99+ |
Horizon 3.ai | TITLE | 0.99+ |
ATT | ORGANIZATION | 0.99+ |
Net Zero | ORGANIZATION | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.99+ |
less than two percent | QUANTITY | 0.99+ |
less than two hours | QUANTITY | 0.99+ |
2012 | DATE | 0.99+ |
UK | LOCATION | 0.99+ |
Adobe | ORGANIZATION | 0.99+ |
four issues | QUANTITY | 0.99+ |
Department of Defense | ORGANIZATION | 0.99+ |
next year | DATE | 0.99+ |
three steps | QUANTITY | 0.99+ |
node 0 | TITLE | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
hundred percent | QUANTITY | 0.99+ |
node zero | TITLE | 0.99+ |
10x | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
7 minutes | QUANTITY | 0.99+ |
one license | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.99+ |
thousands of hosts | QUANTITY | 0.99+ |
five thousand hosts | QUANTITY | 0.99+ |
next week | DATE | 0.99+ |
Rob Emsley, Dell Technologies
>>Welcome back to a blueprint for trusted infrastructure. We're here with Rob Emsley. Who's the director of product marketing for data protection and cyber security. Rob. Good to see you a new role. >>Yeah. Good to be back, Dave. Good to see you. Yeah, it's been a while since we chatted last and you know, one of the changes in, in my world is that I've expanded my responsibilities beyond data protection marketing, to also focus on cyber security marketing specifically for our infrastructure solutions group. So certainly that's, you know, something that really has driven us to, you know, to come and have this conversation with you today. >>So data protection obviously has become an increasingly important component of the cyber security space. I, I don't think necessarily of, you know, traditional backup and recovery as security it's to me, it's an adjacency. I know some companies have said, oh yeah, now we're a security company. They're kind of chasing the valuation for sure. Bubble. Dell's interesting because you, you have, you know, data protection in the form of backup and recovery and data management, but you also have security, you know, direct security capabilities. So you're sort of bringing those two worlds together and it sounds like your responsibility is to, to connect those, those dots. Is that right? >>Absolutely. Yeah. I mean, I think that the reality is, is that security is a, a multi-layer discipline. I think the, the days of thinking that it's one or another technology that you can use or process that you can use to make your organization secure long gone. I mean, certainly you actually correct. If you think about the backup and recovery space, I mean, people have been doing that for years, you know, certainly backup and recovery. It's all about the recovery. It's all about getting yourself backup and running when bad things happen. And one of the realities, unfortunately today is that one of the worst things that can happen is cyber attacks. You know, ransomware, malware are all things that are top of mind for all organizations today. And that's why you see a lot of technology and a lot of innovation going into the backup and recovery space, because if you have a copy, a good copy of your data, then that is really the, the first place you go to recover from a cyber attack. >>And that's why it's so important. The reality is is that unfortunately the cyber criminals keep on getting smarter. I don't know how it happens, but one of the things that is happening is that the days of them just going after your production data are no longer the only challenge that you have, they go after your, your backup data as well. So over the last half a decade, Dell technologies with its backup and recovery portfolio has introduced the concept of isolated cyber recovery volts. And that is really the, you know, we've had many conversations about that over the years. Yeah. And that's really a big tenant of what we do in the debt protection portfolio. >>So this idea of, of cybersecurity resilience, that definition is evolving. What does it mean to you? >>Yeah, I think the, the analyst team over at Gartner, they wrote a, a very insightful paper called you will be hacked, embraced the breach. And the whole basis of this analysis is so much money's been spent on prevention is that what's outta balance is the amount of budget that companies have spent on cyber resilience and cyber resilience is based upon the premise that you will be hacked. You have to embrace that fact and be ready and prepared to bring yourself back into business. You know, and that's really where cyber resiliency is very, very different than cyber security and prevention, you know, and I think that balance of get your security disciplines well funded, get your defenses as good as you can get them, but make sure that if the inevitable happens and you find yourself compromised that you have a great recovery plan and certainly a great recovery plan, it's really the basis of any good solid data protection backup from recovery philosophy. >>So if I had to do a SWOT analysis, we don't have to do the w OT, but let's focus on the S what would you say are Dell's strengths in this, you know, cybersecurity space, as it relates to data protection. >>One is we've been doing it a long time. You know, we talk a lot about Dell's data protection being proven and modern. You know, certainly the experience that we've had over literally three decades of providing enterprise scale data protection solutions to our customers has really allowed us to have a lot of insight into what works and what doesn't, as I mentioned to you, one of the unique differentiators of our solution is the cyber recovery vaulting solution that we introduce a little over five years ago, five, six years, power protect cyber recovery is something which has become a unique capability for customers to adopt on top of their investment in Dell technologies, data protection, you know, the, the unique elements of our solution already threefold, and it's, we call them the three eyes. It's isolation, it's a mutability and its intelligence. And the, the isolation part is really so important because you need to reduce the attack surface of your good known copies of data. >>You know, you need to put it in a location that the bad actors can't get to it. And that really is the, the, the, the essence of a cyber recovery vault. Interestingly enough, you're starting to see the market throw out that word, you know, from many other places, but really it comes down to having a real discipline that you don't allow the security of your cyber recovery vault to be compromised insofar as allowing it to be controlled from outside of the vault, you know, allowing it to be controlled by your backup application. Our cyber recovery vaulting technology is independent of the backup infrastructure. It uses it, but it controls its own security. And that is so, so important. It's like having a, a vault that the only way to open it is from the inside, you know, and think about that. If you think about, you know, vaults in banks or vaults in your home, normally you have a keypad on the outside, think of our cyber recovery vault as having its security controlled from inside of the vault. >>So nobody can get in, nothing can get in unless it's already in. And if it's already in, then it's trusted. Exactly. Yeah, exactly. Yeah. So isolation's the key. And then you, you mentioned immutability is the second piece. >>Yeah, so I, mutability is, is also something which has been around for a long time. People talk about backup mutability or immutable backup copies. So immutability is just the, the, the additional technology that allows the data that's inside of the vault to be unchangeable, you know, but again, that immutability, you know, your mileage varies, you know, when you look across the, the different offers that are out there in the market, especially in the backup industry, you make a very valid point earlier that the backup vendors in the market seem to be security, washing their marketing messages. I mean, everybody is leaning into the ever present danger of cyber security, not a bad thing, but the reality is is that you have to have the technology to back it up, you know, quite literally, >>Yeah, yeah, no pun intended. Right. And then actually pun intended. Now what about the intelligence piece of it? That's that's AI ML, where does that fit >>For sure. So the intelligence piece is delivered by a solution called cyber sense. And cyber sense for us is what really gives you the confidence that what you have in your cyber recovery volt is a good clean copy of data. So it's looking at the backup copies that get driven into the cyber volt, and it's looking for anomalies. So it's not looking for signatures of malware. You know, that's what your antivirus software does. That's what your endpoint protection software does. That's on the prevention side of the equation. But what we're looking for is we're looking to ensure that the data that you need when all hell breaks loose is good, and that when you get a request to restore and recover your business, you go right, let's go and do it. And you don't have any concern that what you have in the vault has been compromised. >>So cyber sense is really a, a unique analytics solution in the market, based upon the fact that it, it, isn't looking at at cursory indicators of, of, of, of, of malware infection or, or, or ransomware introduction it's doing full content analytics, you know, looking at, you know, has the data in any way changed, has it suddenly become encrypted? Has it suddenly become different to how it was in the previous scan? So that anomaly detection is very, very different. It's looking for, you know, like different characteristics that really are an indicator that something is going on. And of course, if it sees it, you immediately get flagged. But the good news is, is that you always have in the vault, the previous copy of good known data, which now becomes your restore point. >>So we're talking to Rob Emsley about how data protection fits into what Dell calls DT, I, Dell trusted infrastructure. And, and I'm, I want to come back Rob to this notion of, and, or cuz I think a lot of people are skeptical. Like how can I have great security and not introduce friction into my organization? Is that an automation play? How, how does Dell tackle that problem? >>I mean, I think a lot of it is across our infrastructure is, is security has to be built in, I mean, intrinsic security within our servers, within our storage devices, within our elements of our backup infrastructure. I mean, security, multifactor authentication, you know, elements that make the overall infrastructure secure. You know, we have capabilities that, you know, allow us to identify whether or not configurations have changed. You know, we'll probably be talking about that a little bit more to you later in the segment, but the, the essence is, is security is not, not a Bolton. It has to be part of the overall infrastructure. And that's so true, certainly in the data protection space, >>Give us the, the, the bottom line on, on how you see Dell's key differentiators. Maybe you could talk about Dell, of course always talks about its portfolio, but, but why should customers, you know, lead in to Dell in, in this whole cyber resilience space, >>You know, staying on the data protection space. As I mentioned, the, the, the work we've been doing to introduce this cyber resiliency solution for debt protection is in our opinion, as good as it gets, you know, the, you know, you've spoken to a number of our, of our best customers, whether it be Bob bender from founders, federal, or more recently at Delta arches world, you spoke to Tony Bryson yep. From the town of Gilbert. And these are customers that we've had for many years that have implemented cyber recovery volts. And at the end of the day, they can now sleep at night. You know, that's really the, the peace of mind that they have is that the insurance that a data protection from Dell cyber recovery vault a para protect cyber recovery solution, gives them, you know, really allows them to, you know, just have the assurance that they don't have to pay a ransom if they have a, an insider threat issue. And you know, all the way down to data deletion is they know that what's in the cyber recovery vault is good and ready for them to recover from. >>Great, well, Rob, congratulations on the new scope of responsibility. I like how you know, your organization is expanding as the threat surface is expanding. As we said, data protection becoming an adjacency to, to security, not security in and of itself. A key component of a comprehensive security strategy. Rob Emsley. Thank you for coming back in the cube. Good to see you again. >>You too, Dave. Thanks. >>All right. In a moment, I'll be back to wrap up a blueprint for trusted infrastructure. You watching the cube.
SUMMARY :
Good to see you a new role. something that really has driven us to, you know, to come and have this conversation with you today. but you also have security, you know, direct security capabilities. recovery space, I mean, people have been doing that for years, you know, certainly backup and recovery. And that is really the, you know, What does it mean to you? that if the inevitable happens and you find yourself you say are Dell's strengths in this, you know, cybersecurity space, And the, the isolation part is really so important because you need is from the inside, you know, and think about that. you mentioned immutability is the second piece. you know, but again, that immutability, you know, your mileage varies, And then actually pun intended. And you don't have any concern that what you have in the vault has been compromised. you know, looking at, you know, has the data in any way So we're talking to Rob Emsley about how data protection fits into what Dell calls DT, You know, we have capabilities that, you know, allow us to identify whether or not you know, lead in to Dell in, in this whole cyber resilience space, as good as it gets, you know, the, you know, you've spoken to a number of I like how you know, In a moment, I'll be back to wrap up a blueprint for trusted infrastructure.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Tony Bryson | PERSON | 0.99+ |
Rob Emsley | PERSON | 0.99+ |
Rob | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
second piece | QUANTITY | 0.99+ |
Bob bender | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Gilbert | LOCATION | 0.99+ |
three decades | QUANTITY | 0.98+ |
five | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
two worlds | QUANTITY | 0.96+ |
three eyes | QUANTITY | 0.96+ |
One | QUANTITY | 0.94+ |
last half a decade | DATE | 0.92+ |
Delta | ORGANIZATION | 0.89+ |
five years ago | DATE | 0.83+ |
years | QUANTITY | 0.82+ |
over | DATE | 0.72+ |
six years | QUANTITY | 0.54+ |
Dell A Blueprint for Trusted Infrastructure
the cyber security landscape has changed dramatically over the past 24 to 36 months rapid cloud migration has created a new layer of security defense sure but that doesn't mean csos can relax in many respects it further complicates or at least changes the ciso's scope of responsibilities in particular the threat surface has expanded and that creates more seams and cisos have to make sure their teams pick up where the hyperscaler clouds leave off application developers have become a critical execution point for cyber assurance shift left is the kind of new buzz phrase for devs but organizations still have to shield right meaning the operational teams must continue to partner with secops to make sure infrastructure is resilient so it's no wonder that in etr's latest survey of nearly 1500 cios and it buyers that business technology executives cite security as their number one priority well ahead of other critical technology initiatives including collaboration software cloud computing and analytics rounding out the top four but budgets are under pressure and csos have to prioritize it's not like they have an open checkbook they have to contend with other key initiatives like those just mentioned to secure the funding and what about zero trust can you go out and buy xero trust or is it a framework a mindset in a series of best practices applied to create a security consciousness throughout the organization can you implement zero trust in other words if a machine or human is not explicitly allowed access then access is denied can you implement that policy without constricting organizational agility the question is what's the most practical way to apply that premise and what role does infrastructure play as the enforcer how does automation play in the equation the fact is that today's approach to cyber resilient type resilience can't be an either or it has to be an and conversation meaning you have to ensure data protection while at the same time advancing the mission of the organization with as little friction as possible and don't even talk to me about the edge that's really going to keep you up at night hello and welcome to the special cube presentation a blueprint for trusted infrastructure made possible by dell technologies in this program we explore the critical role that trusted infrastructure plays in cyber security strategies how organizations should think about the infrastructure side of the cyber security equation and how dell specifically approaches securing infrastructure for your business we'll dig into what it means to transform and evolve toward a modern security infrastructure that's both trusted and agile first up are pete gear and steve kenniston they're both senior cyber security consultants at dell technologies and they're going to talk about the company's philosophy and approach to trusted infrastructure and then we're going to speak to paris arcadi who's a senior consultant for storage at dell technologies to understand where and how storage plays in this trusted infrastructure world and then finally rob emsley who heads product marketing for data protection and cyber security he's going to take a deeper dive with rob into data protection and explain how it has become a critical component of a comprehensive cyber security strategy okay let's get started pete gear steve kenniston welcome to the cube thanks for coming into the marlboro studios today great to be here dave thanks dave good to see you great to see you guys pete start by talking about the security landscape you heard my little rap up front what are you seeing i thought you wrapped it up really well and you touched on all the key points right technology is ubiquitous today it's everywhere it's no longer confined to a monolithic data center it lives at the edge it lives in front of us it lives in our pockets and smartphones along with that is data and as you said organizations are managing sometimes 10 to 20 times the amount of data that they were just five years ago and along with that cyber crime has become a very profitable enterprise in fact it's been more than 10 years since uh the nsa chief actually called cyber crime the biggest transfer of wealth in history that was 10 years ago and we've seen nothing but accelerating cyber crime and really sophistication of how those attacks are perpetrated and so the new security landscape is really more of an evolution we're finally seeing security catch up with all of the technology adoption all the build out the work from home and work from anywhere that we've seen over the last couple of years we're finally seeing organizations and really it goes beyond the i t directors it's a board level discussion today security's become a board level discussion yeah i think that's true as well it's like it used to be the security was okay the secops team you're responsible for security now you've got the developers are involved the business lines are involved it's part of onboarding for most companies you know steve this concept of zero trust it was kind of a buzzword before the pandemic and i feel like i've often said it's now become a mandate but it's it's it's still fuzzy to a lot of people how do you guys think about zero trust what does it mean to you how does it fit yeah i thought again i thought your opening was fantastic in in this whole lead into to what is zero trust it had been a buzzword for a long time and now ever since the federal government came out with their implementation or or desire to drive zero trust a lot more people are taking a lot more seriously because i don't think they've seen the government do this but ultimately let's see ultimately it's just like you said right if if you don't have trust to those particular devices uh applications or data you can't get at it the question is and and you phrase it perfectly can you implement that as well as allow the business to be as agile as it needs to be in order to be competitive because we're seeing with your whole notion around devops and the ability to kind of build make deploy build make deploy right they still need that functionality but it also needs to be trusted it needs to be secure and things can't get away from you yeah so it's interesting we attended every uh reinforce since 2019 and the narrative there is hey everything in this in the cloud is great you know and this narrative around oh security is a big problem is you know doesn't help the industry the fact is that the big hyperscalers they're not strapped for talent but csos are they don't have the the capabilities to really apply all these best practices they're they're playing whack-a-mole so they look to companies like yours to take their r your r d and bake it into security products and solutions so what are the critical aspects of the so-called dell trusted infrastructure that we should be thinking about yeah well dell trusted infrastructure for us is a way for us to describe uh the the work that we do through design development and even delivery of our it system so dell trusted infrastructure includes our storage it includes our servers our networking our data protection our hyper converged everything that infrastructure always has been it's just that today customers consume that infrastructure at the edge as a service in a multi-cloud environment i mean i view the cloud as really a way for organizations to become more agile and to become more flexible and also to control costs i don't think organizations move to the cloud or move to a multi-cloud environment to enhance security so i don't see cloud computing as a panacea for security i see it as another attack surface and another uh aspect in front that organizations and and security organizations and departments have to manage it's part of their infrastructure today whether it's in their data center in a cloud or at the edge i mean i think it's a huge point because a lot of people think oh data's in the cloud i'm good it's like steve we've talked about oh why do i have to back up my data it's in the cloud well you might have to recover it someday so i don't know if you have anything to add to that or any additional thoughts on it no i mean i think i think like what pete was saying when it comes to when it comes to all these new vectors for attack surfaces you know people did choose the cloud in order to be more agile more flexible and all that did was open up to the csos who need to pay attention to now okay where can i possibly be attacked i need to be thinking about is that secure and part of the part of that is dell now also understands and thinks about as we're building solutions is it is it a trusted development life cycle so we have our own trusted development life cycle how many times in the past did you used to hear about vendors saying you got to patch your software because of this we think about what changes to our software and what implementations and what enhancements we deliver can actually cause from a security perspective and make sure we don't give up or or have security become a whole just in order to implement a feature we got to think about those things yeah and as pete alluded to our secure supply chain so all the way through knowing what you're going to get when you actually receive it is going to be secure and not be tampered with becomes vitally important and pete and i were talking earlier when you have tens of thousands of devices that need to be delivered whether it be storage or laptops or pcs or or whatever it is you want to be you want to know that that that those devices are can be trusted okay guys maybe pete you could talk about the how dell thinks about it's its framework and its philosophy of cyber security and then specifically what dell's advantages are relative to the competition yeah definitely dave thank you so we've talked a lot about dell as a technology provider but one thing dell also is is a partner in this larger ecosystem we realize that security whether it's a zero trust paradigm or any other kind of security environment is an ecosystem uh with a lot of different vendors so we look at three areas one is protecting data in systems we know that it starts with and ends with data that helps organizations combat threats across their entire infrastructure and what it means is dell's embedding security features consistently across our portfolios of storage servers networking the second is enhancing cyber resiliency over the last decade a lot of the funding and spending has been in protecting or trying to prevent cyber threats not necessarily in responding to and recovering from threats right we call that resiliency organizations need to build resiliency across their organization so not only can they withstand a threat but they can respond recover and continue with their operations and the third is overcoming security complexity security is hard it's more difficult because of the things we've talked about about distributed data distributed technology and and attack surfaces everywhere and so we're enabling organizations to scale confidently to continue their business but know that all all the i.t decisions that they're making um have these intrinsic security features and are built and delivered in a consistent security so those are kind of the three pillars maybe we could end on what you guys see as the key differentiators that people should know about that that dell brings to the table maybe each of you could take take a shot at that yeah i think first of all from from a holistic portfolio perspective right the uh secure supply chain and the secure development life cycle permeate through everything dell does when building things so we build things with security in mind all the way from as pete mentioned from from creation to delivery we want to make sure you have that that secure device or or asset that permeates everything from servers networking storage data protection through hyper converge through everything that to me is really a key asset because that means you can you understand when you receive something it's a trusted piece of your infrastructure i think the other core component to think about and pete mentioned as dell being a partner for making sure you can deliver these things is that even though those are that's part of our framework these pillars are our framework of how we want to deliver security it's also important to understand that we are partners and that you don't need to rip and replace but as you start to put in new components you can be you can be assured that the components that you're replacing as you're evolving as you're growing as you're moving to the cloud as you're moving to a more on-prem type services or whatever that your environment is secure i think those are two key things got it okay pete bring us home yeah i think one of one of the big advantages of dell is our scope and our scale right we're a large technology vendor that's been around for decades and we develop and sell almost every piece of technology we also know that organizations are might make different decisions and so we have a large services organization with a lot of experienced services people that can help customers along their security journey depending on whatever type of infrastructure or solutions that they're looking at the other thing we do is make it very easy to consume our technology whether that's traditional on-premise in a multi-cloud environment uh or as a service and so the best of breed technology can be consumed in any variety of fashion and know that you're getting that consistent secure infrastructure that dell provides well and dell's forgot the probably top supply chain not only in the tech business but probably any business and so you can actually take take your dog food and then and allow other billionaire champagne sorry allow other people to you know share share best practices with your with your customers all right guys thanks so much for coming thank you appreciate it okay keep it right there after this short break we'll be back to drill into the storage domain you're watching a blueprint for trusted infrastructure on the cube the leader in enterprise and emerging tech coverage be right back concern over cyber attacks is now the norm for organizations of all sizes the impact of these attacks can be operationally crippling expensive and have long-term ramifications organizations have accepted the reality of not if but when from boardrooms to i.t departments and are now moving to increase their cyber security preparedness they know that security transformation is foundational to digital transformation and while no one can do it alone dell technologies can help you fortify with modern security modern security is built on three pillars protect your data and systems by modernizing your security approach with intrinsic features and hardware and processes from a provider with a holistic presence across the entire it ecosystem enhance your cyber resiliency by understanding your current level of resiliency for defending your data and preparing for business continuity and availability in the face of attacks overcome security complexity by simplifying and automating your security operations to enable scale insights and extend resources through service partnerships from advanced capabilities that intelligently scale a holistic presence throughout it and decades as a leading global technology provider we'll stop at nothing to help keep you secure okay we're back digging into trusted infrastructure with paris sarcadi he's a senior consultant for product marketing and storage at dell technologies parasaur welcome to the cube good to see you great to be with you dave yeah coming from hyderabad awesome so i really appreciate you uh coming on the program let's start with talking about your point of view on what cyber security resilience means to to dell generally but storage specifically yeah so for something like storage you know we are talking about the data layer name and if you look at cyber security it's all about securing your data applications and infrastructure it has been a very mature field at the network and application layers and there are a lot of great technologies right from you know enabling zero trust advanced authentications uh identity management systems and so on and and in fact you know with the advent of you know the the use of artificial intelligence and machine learning really these detection tools for cyber securities have really evolved in the network and the application spaces so for storage what it means is how can you bring them to the data layer right how can you bring you know the principles of zero trust to the data layer uh how can you leverage artificial intelligence and machine learning to look at you know access patterns and make intelligent decisions about maybe an indicator of a compromise and identify them ahead of time just like you know how it's happening and other ways of applications and when it comes to cyber resilience it's it's basically a strategy which assumes that a threat is imminent and it's a good assumption with the severity of the frequency of the attacks that are happening and the question is how do we fortify the infrastructure in the switch infrastructure to withstand those attacks and have a plan a response plan where we can recover the data and make sure the business continuity is not affected so that's uh really cyber security and cyber resiliency and storage layer and of course there are technologies like you know network isolation immutability and all these principles need to be applied at the storage level as well let me have a follow up on that if i may the intelligence that you talked about that ai and machine learning is that do you do you build that into the infrastructure or is that sort of a separate software module that that points at various you know infrastructure components how does that work both dave right at the data storage level um we have come with various data characteristics depending on the nature of data we developed a lot of signals to see what could be a good indicator of a compromise um and there are also additional applications like cloud iq is the best example which is like an infrastructure wide health monitoring system for dell infrastructure and now we have elevated that to include cyber security as well so these signals are being gathered at cloud iq level and other applications as well so that we can make those decisions about compromise and we can either cascade that intelligence and alert stream upstream for uh security teams um so that they can take actions in platforms like sign systems xtr systems and so on but when it comes to which layer the intelligence is it has to be at every layer where it makes sense where we have the information to make a decision and being closest to the data we have we are basically monitoring you know the various parallels data access who is accessing um are they crossing across any geo fencing uh is there any mass deletion that is happening or a mass encryption that is happening and we are able to uh detect uh those uh patterns and flag them as indicators of compromise and in allowing automated response manual control and so on for it teams yeah thank you for that explanation so at dell technologies world we were there in may it was one of the first you know live shows that that we did in the spring certainly one of the largest and i interviewed shannon champion and a huge takeaway from the storage side was the degree to which you guys emphasized security uh within the operating systems i mean really i mean powermax more than half i think of the features were security related but also the rest of the portfolio so can you talk about the the security aspects of the dell storage portfolio specifically yeah yeah so when it comes to data security and broadly data availability right in the context of cyber resiliency dell storage this you know these elements have been at the core of our um a core strength for the portfolio and the source of differentiation for the storage portfolio you know with almost decades of collective experience of building highly resilient architectures for mission critical data something like power max system which is the most secure storage platform for high-end enterprises and now with the increased focus on cyber security we are extending those core technologies of high availability and adding modern detection systems modern data isolation techniques to offer a comprehensive solution to the customer so that they don't have to piece together multiple things to ensure data security or data resiliency but a well-designed and well-architected solution by design is delivered to them to ensure cyber protection at the data layer got it um you know we were talking earlier to steve kenniston and pete gear about this notion of dell trusted infrastructure how does storage fit into that as a component of that sort of overall you know theme yeah and you know and let me say this if you could adjust because a lot of people might be skeptical that i can actually have security and at the same time not constrict my organizational agility that's old you know not an ore it's an end how do you actually do that if you could address both of those that would be great definitely so for dell trusted infrastructure cyber resiliency is a key component of that and just as i mentioned you know uh air gap isolation it really started with you know power protect cyber recovery you know that was the solution more than three years ago we launched and that was first in the industry which paved way to you know kind of data isolation being a core element of data management and uh for data infrastructure and since then we have implemented these technologies within different storage platforms as well so that customers have the flexibility depending on their data landscape they can approach they can do the right data isolation architecture right either natively from the storage platform or consolidate things into the backup platform and isolate from there and and the other key thing we focus in trusted infrastructure dell infra dell trusted infrastructure is you know the goal of simplifying security for the customers so one good example here is uh you know being able to respond to these cyber threats or indicators of compromise is one thing but an i.t security team may not be looking at the dashboard of the storage systems constantly right storage administration admins may be looking at it so how can we build this intelligence and provide this upstream platforms so that they have a single pane of glass to understand security landscape across applications across networks firewalls as well as storage infrastructure and in compute infrastructure so that's one of the key ways where how we are helping simplify the um kind of the ability to uh respond ability to detect and respond these threads uh in real time for security teams and you mentioned you know about zero trust and how it's a balance of you know not uh kind of restricting users or put heavy burden on you know multi-factor authentication and so on and this really starts with you know what we're doing is provide all the tools you know when it comes to advanced authentication uh supporting external identity management systems multi-factor authentication encryption all these things are intrinsically built into these platforms now the question is the customers are actually one of the key steps is to identify uh what are the most critical parts of their business or what are the applications uh that the most critical business operations depend on and similarly identify uh mission critical data where part of your response plan where it cannot be compromised where you need to have a way to recover once you do this identification then the level of security can be really determined uh by uh by the security teams by the infrastructure teams and you know another you know intelligence that gives a lot of flexibility uh for for even developers to do this is today we have apis um that so you can not only track these alerts at the data infrastructure level but you can use our apis to take concrete actions like blocking a certain user or increasing the level of authentication based on the threat level that has been perceived at the application layer or at the network layer so there is a lot of flexibility that is built into this by design so that depending on the criticality of the data criticality of the application number of users affected these decisions have to be made from time to time and it's as you mentioned it's it's a balance right and sometimes you know if if an organization had a recent attack you know the level of awareness is very high against cyber attacks so for a time you know these these settings may be a bit difficult to deal with but then it's a decision that has to be made by security teams as well got it so you're surfacing what may be hidden kpis that are being buried inside for instance the storage system through apis upstream into a dashboard so that somebody could you know dig into the storage tunnel extract that data and then somehow you know populate that dashboard you're saying you're automating that that that workflow that's a great example and you may have others but is that the correct understanding absolutely and it's a two-way integration let's say a detector an attack has been detected at a completely different layer right in the application layer or at a firewall we can respond to those as well so it's a two-way integration we can cascade things up as well as respond to threats that have been detected elsewhere um uh through the api that's great all right hey api for power skill is the best example for that uh excellent so thank you appreciate that give us the last word put a bow on this and and bring this segment home please absolutely so a dell storage portfolio um using advanced data isolation um with air gap having machine learning based algorithms to detect uh indicators of compromise and having rigor mechanisms with granular snapshots being able to recover data and restore applications to maintain business continuity is what we deliver to customers uh and these are areas where a lot of innovation is happening a lot of product focus as well as you know if you look at the professional services all the way from engineering to professional services the way we build these systems the way we we configure and architect these systems um cyber security and protection is a key focus uh for all these activities and dell.com securities is where you can learn a lot about these initiatives that's great thank you you know at the recent uh reinforce uh event in in boston we heard a lot uh from aws about you know detent and response and devops and machine learning and some really cool stuff we heard a little bit about ransomware but i'm glad you brought up air gaps because we heard virtually nothing in the keynotes about air gaps that's an example of where you know this the cso has to pick up from where the cloud leaves off but that was in front and so number one and number two we didn't hear a ton about how the cloud is making the life of the cso simpler and that's really my takeaway is is in part anyway your job and companies like dell so paris i really appreciate the insights thank you for coming on thecube thank you very much dave it's always great to be in these uh conversations all right keep it right there we'll be right back with rob emsley to talk about data protection strategies and what's in the dell portfolio you're watching thecube data is the currency of the global economy it has value to your organization and cyber criminals in the age of ransomware attacks companies need secure and resilient it infrastructure to safeguard their data from aggressive cyber attacks [Music] as part of the dell technologies infrastructure portfolio powerstor and powermax combine storage innovation with advanced security that adheres to stringent government regulations and corporate compliance requirements security starts with multi-factor authentication enabling only authorized admins to access your system using assigned roles tamper-proof audit logs track system usage and changes so it admins can identify suspicious activity and act with snapshot policies you can quickly automate the protection and recovery process for your data powermax secure snapshots cannot be deleted by any user prior to the retention time expiration dell technologies also make sure your data at rest stays safe with power store and powermax data encryption protects your flash drive media from unauthorized access if it's removed from the data center while adhering to stringent fips 140-2 security requirements cloud iq brings together predictive analytics anomaly detection and machine learning with proactive policy-based security assessments monitoring and alerting the result intelligent insights that help you maintain the security health status of your storage environment and if a security breach does occur power protect cyber recovery isolates critical data identifies suspicious activity and accelerates data recovery using the automated data copy feature unchangeable data is duplicated in a secure digital vault then an operational air gap isolates the vault from the production and backup environments [Music] architected with security in mind dell emc power store and powermax provides storage innovation so your data is always available and always secure wherever and whenever you need it [Music] welcome back to a blueprint for trusted infrastructure we're here with rob emsley who's the director of product marketing for data protection and cyber security rob good to see a new role yeah good to be back dave good to see you yeah it's been a while since we chatted last and you know one of the changes in in my world is that i've expanded my responsibilities beyond data protection marketing to also focus on uh cyber security marketing specifically for our infrastructure solutions group so certainly that's you know something that really has driven us to you know to come and have this conversation with you today so data protection obviously has become an increasingly important component of the cyber security space i i don't think necessarily of you know traditional backup and recovery as security it's to me it's an adjacency i know some companies have said oh yeah now we're a security company they're kind of chasing the valuation for sure bubble um dell's interesting because you you have you know data protection in the form of backup and recovery and data management but you also have security you know direct security capability so you're sort of bringing those two worlds together and it sounds like your responsibility is to to connect those those dots is that right absolutely yeah i mean i think that uh the reality is is that security is a a multi-layer discipline um i think the the days of thinking that it's one uh or another um technology that you can use or process that you can use to make your organization secure uh are long gone i mean certainly um you actually correct if you think about the backup and recovery space i mean people have been doing that for years you know certainly backup and recovery is all about the recovery it's all about getting yourself back up and running when bad things happen and one of the realities unfortunately today is that one of the worst things that can happen is cyber attacks you know ransomware malware are all things that are top of mind for all organizations today and that's why you see a lot of technology and a lot of innovation going into the backup and recovery space because if you have a copy a good copy of your data then that is really the the first place you go to recover from a cyber attack and that's why it's so important the reality is is that unfortunately the cyber criminals keep on getting smarter i don't know how it happens but one of the things that is happening is that the days of them just going after your production data are no longer the only challenge that you have they go after your your backup data as well so over the last half a decade dell technologies with its backup and recovery portfolio has introduced the concept of isolated cyber recovery vaults and that is really the you know we've had many conversations about that over the years um and that's really a big tenant of what we do in the data protection portfolio so this idea of of cyber security resilience that definition is evolving what does it mean to you yeah i think the the analyst team over at gartner they wrote a very insightful paper called you will be hacked embrace the breach and the whole basis of this analysis is so much money has been spent on prevention is that what's out of balance is the amount of budget that companies have spent on cyber resilience and cyber resilience is based upon the premise that you will be hacked you have to embrace that fact and be ready and prepared to bring yourself back into business you know and that's really where cyber resiliency is very very different than cyber security and prevention you know and i think that balance of get your security disciplines well-funded get your defenses as good as you can get them but make sure that if the inevitable happens and you find yourself compromised that you have a great recovery plan and certainly a great recovery plan is really the basis of any good solid data protection backup and recovery uh philosophy so if i had to do a swot analysis we don't have to do the wot but let's focus on the s um what would you say are dell's strengths in this you know cyber security space as it relates to data protection um one is we've been doing it a long time you know we talk a lot about dell's data protection being proven and modern you know certainly the experience that we've had over literally three decades of providing enterprise scale data protection solutions to our customers has really allowed us to have a lot of insight into what works and what doesn't as i mentioned to you one of the unique differentiators of our solution is the cyber recovery vaulting solution that we introduced a little over five years ago five six years parapatek cyber recovery is something which has become a unique capability for customers to adopt uh on top of their investment in dell technologies data protection you know the the unique elements of our solution already threefold and it's we call them the three eyes it's isolation it's immutability and it's intelligence and the the isolation part is really so important because you need to reduce the attack surface of your good known copies of data you know you need to put it in a location that the bad actors can't get to it and that really is the the the the essence of a cyber recovery vault interestingly enough you're starting to see the market throw out that word um you know from many other places but really it comes down to having a real discipline that you don't allow the security of your cyber recovery vault to be compromised insofar as allowing it to be controlled from outside of the vault you know allowing it to be controlled by your backup application our cyber recovery vaulting technology is independent of the backup infrastructure it uses it but it controls its own security and that is so so important it's like having a vault that the only way to open it is from the inside you know and think about that if you think about you know volts in banks or volts in your home normally you have a keypad on the outside think of our cyber recovery vault as having its security controlled from inside of the vault so nobody can get in nothing can get in unless it's already in and if it's already in then it's trusted exactly yeah exactly yeah so isolation is the key and then you mentioned immutability is the second piece yeah so immutability is is also something which has been around for a long time people talk about uh backup immunoability or immutable backup copies so immutability is just the the the additional um technology that allows the data that's inside of the vault to be unchangeable you know but again that immutability you know your mileage varies you know when you look across the uh the different offers that are out there in the market especially in the backup industry you make a very valid point earlier that the backup vendors in the market seems to be security washing their marketing messages i mean everybody is leaning into the ever-present danger of cyber security not a bad thing but the reality is is that you have to have the technology to back it up you know quite literally yeah no pun intended and then actually pun intended now what about the intelligence piece of it uh that's that's ai ml where does that fit for sure so the intelligence piece is delivered by um a solution called cybersense and cybersense for us is what really gives you the confidence that what you have in your cyber recovery vault is a good clean copy of data so it's looking at the backup copies that get driven into the cyber vault and it's looking for anomalies so it's not looking for signatures of malware you know that's what your antivirus software does that's what your endpoint protection software does that's on the prevention side of the equation but what we're looking for is we're looking to ensure that the data that you need when all hell breaks loose is good and that when you get a request to restore and recover your business you go right let's go and do it and you don't have any concern that what you have in the vault has been compromised so cyber sense is really a unique analytic solution in the market based upon the fact that it isn't looking at cursory indicators of of um of of of malware infection or or ransomware introduction it's doing full content analytics you know looking at you know has the data um in any way changed has it suddenly become encrypted has it suddenly become different to how it was in the previous scan so that anomaly detection is very very different it's looking for um you know like different characteristics that really are an indicator that something is going on and of course if it sees it you immediately get flagged but the good news is is that you always have in the vault the previous copy of good known data which now becomes your restore point so we're talking to rob emsley about how data protection fits into what dell calls dti dell trusted infrastructure and and i want to come back rob to this notion of and not or because i think a lot of people are skeptical like how can i have great security and not introduce friction into my organization is that an automation play how does dell tackle that problem i mean i think a lot of it is across our infrastructure is is security has to be built in i mean intrinsic security within our servers within our storage devices uh within our elements of our backup infrastructure i mean security multi-factor authentication you know elements that make the overall infrastructure secure you know we have capabilities that you know allow us to identify whether or not configurations have changed you know we'll probably be talking about that a little bit more to you later in the segment but the the essence is is um security is not a bolt-on it has to be part of the overall infrastructure and that's so true um certainly in the data protection space give us the the bottom line on on how you see dell's key differentiators maybe you could talk about dell of course always talks about its portfolio but but why should customers you know lead in to dell in in this whole cyber resilience space um you know staying on the data protection space as i mentioned the the the work we've been doing um to introduce this cyber resiliency solution for data protection is in our opinion as good as it gets you know the you know you've spoken to a number of our of our best customers whether it be bob bender from founders federal or more recently at delton allergies world you spoke to tony bryson from the town of gilbert and these are customers that we've had for many years that have implemented cyber recovery vaults and at the end of the day they can now sleep at night you know that's really the the peace of mind that they have is that the insurance that a data protection from dell cyber recovery vault a parapatex cyber recovery solution gives them you know really allows them to you know just have the assurance that they don't have to pay a ransom if they have a an insider threat issue and you know all the way down to data deletion is they know that what's in the cyber recovery vault is good and ready for them to recover from great well rob congratulations on the new scope of responsibility i like how you know your organization is expanding as the threat surface is expanding as we said data protection becoming an adjacency to security not security in and of itself a key component of a comprehensive security strategy rob emsley thank you for coming back in the cube good to see you again you too dave thanks all right in a moment i'll be back to wrap up a blueprint for trusted infrastructure you're watching the cube every day it seems there's a new headline about the devastating financial impacts or trust that's lost due to ransomware or other sophisticated cyber attacks but with our help dell technologies customers are taking action by becoming more cyber resilient and deterring attacks so they can greet students daily with a smile they're ensuring that a range of essential government services remain available 24 7 to citizens wherever they're needed from swiftly dispatching public safety personnel or sending an inspector to sign off on a homeowner's dream to protecting restoring and sustaining our precious natural resources for future generations with ever-changing cyber attacks targeting organizations in every industry our cyber resiliency solutions are right on the money providing the security and controls you need we help customers protect and isolate critical data from ransomware and other cyber threats delivering the highest data integrity to keep your doors open and ensuring that hospitals and healthcare providers have access to the data they need so patients get life-saving treatment without fail if a cyber incident does occur our intelligence analytics and responsive team are in a class by themselves helping you reliably recover your data and applications so you can quickly get your organization back up and running with dell technologies behind you you can stay ahead of cybercrime safeguarding your business and your customers vital information learn more about how dell technology's cyber resiliency solutions can provide true peace of mind for you the adversary is highly capable motivated and well equipped and is not standing still your job is to partner with technology vendors and increase the cost of the bad guys getting to your data so that their roi is reduced and they go elsewhere the growing issues around cyber security will continue to drive forward thinking in cyber resilience we heard today that it is actually possible to achieve infrastructure security while at the same time minimizing friction to enable organizations to move quickly in their digital transformations a xero trust framework must include vendor r d and innovation that builds security designs it into infrastructure products and services from the start not as a bolt-on but as a fundamental ingredient of the cloud hybrid cloud private cloud to edge operational model the bottom line is if you can't trust your infrastructure your security posture is weakened remember this program is available on demand in its entirety at thecube.net and the individual interviews are also available and you can go to dell security solutions landing page for for more information go to dell.com security solutions that's dell.com security solutions this is dave vellante thecube thanks for watching a blueprint for trusted infrastructure made possible by dell we'll see you next time
SUMMARY :
the degree to which you guys
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
tony bryson | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
boston | LOCATION | 0.99+ |
hyderabad | LOCATION | 0.99+ |
steve kenniston | PERSON | 0.99+ |
second piece | QUANTITY | 0.99+ |
rob emsley | PERSON | 0.99+ |
two-way | QUANTITY | 0.99+ |
rob emsley | PERSON | 0.99+ |
dell technologies | ORGANIZATION | 0.99+ |
pete | PERSON | 0.99+ |
today | DATE | 0.99+ |
thecube.net | OTHER | 0.99+ |
dell.com | ORGANIZATION | 0.99+ |
gartner | ORGANIZATION | 0.98+ |
three eyes | QUANTITY | 0.98+ |
dave | PERSON | 0.98+ |
more than 10 years | QUANTITY | 0.98+ |
dell | ORGANIZATION | 0.98+ |
three areas | QUANTITY | 0.98+ |
five years ago | DATE | 0.98+ |
two key | QUANTITY | 0.98+ |
10 years ago | DATE | 0.98+ |
dell technologies | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.97+ |
steve kenniston | PERSON | 0.97+ |
20 times | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
third | QUANTITY | 0.97+ |
cybersense | ORGANIZATION | 0.97+ |
nearly 1500 cios | QUANTITY | 0.96+ |
a lot more people | QUANTITY | 0.95+ |
one thing | QUANTITY | 0.95+ |
second | QUANTITY | 0.95+ |
steve | PERSON | 0.94+ |
cloud iq | TITLE | 0.94+ |
tens of thousands of devices | QUANTITY | 0.94+ |
pete gear | PERSON | 0.94+ |
more than three years ago | DATE | 0.93+ |
one | QUANTITY | 0.93+ |
powermax | ORGANIZATION | 0.93+ |
two worlds | QUANTITY | 0.93+ |
2019 | DATE | 0.92+ |
gilbert | LOCATION | 0.92+ |
one of the key ways | QUANTITY | 0.91+ |
Dell | ORGANIZATION | 0.91+ |
pandemic | EVENT | 0.91+ |
more than half | QUANTITY | 0.9+ |
each | QUANTITY | 0.9+ |
first place | QUANTITY | 0.89+ |
bender | PERSON | 0.89+ |
a lot of people | QUANTITY | 0.89+ |
zero trust | QUANTITY | 0.89+ |
last decade | DATE | 0.88+ |
Tony Taylor, HPE | CUBE Conversation, August 2022
>>Hey everyone. Lisa Martin here with you. I'm with HPE right now. Tony Taylor joins me the director global test and supply chain cybersecurity at HPE. Tony. It's great to have you on the cube. >>Hi, thank you. Lisa's please, please, to be here. >>Tell me a little bit about your role and your background. >>I've been in the computer industry for about 33 years. Done a variety of roles throughout operations, fulfillment, R and D doing different things. My current role here at HPE is to lead in the organization, responsible for developing test solutions and our PCA manufacturing process and our systems integration team. And then we implement a supply chain cybersecurity process. That's focused on internal aspects of development, activities, and strategies, and then how we will drive our supply chain, our suppliers, to make sure that they adhere to these guidelines. >>And your background is engineering. I saw LinkedIn a little bit of science in there. Tell me a little bit about your background and how you got to where you are now. >>Oh, that's a, that's a long story going through school and doing that type of work. I, I, I got a phone call too many years ago and got involved in the computer industry, going from a, a user and working on those processes and then changing that to building product, introducing new product, developing new solutions and ideas, working on innovation and design of new products, new, new hardware, working on new software processes did heuristics level customer testing. So it's just a wide variety of activities. I've spanned a lot of different things over the years, been very fortunate to travel the world live in different parts of the world to bring up these activities. >>I always love to hear people's back stories on how they got to where they were. If it was a zigzaggy path or kind a path >>Was get a phone call from buddy one day, Hey, we're doing this. You wanna do it. Then that's where I ended up. >>And the rest is history. So a lot of dynamics in the last couple of years, obviously we've been hearing so much about the supply chain in the news for various reasons, but what are you seeing in the marketplace where with regards to security and the trusted supply chain, obviously a big focus there. What are you seeing? >>A lot of changes that have been occurring over time and especially in the last couple years with the things that we're seeing geopolitically is changing our, our environments, the threat vectors that we're seeing in, in cybersecurity are changing. They're becoming more sophisticated. They're coming in in different areas. What we're seeing is greater penetration and our customers. We're seeing a greater number of incidences in the, in the field where that, that I told you I'd stumble. The we're seeing a greater number of instances in the field and it's becoming a bigger impact for our customers and, and the supply chain. So we we've seen a tax at the root of the cause where neon gas, we're no longer having those activities that are coming into the, into the space. You're seeing greater ransomware processes and additional challenges associated with the cost associated with these programs. The, the infiltration from a hardware perspective, we've looked at those types of processes going through the supply chain processes are getting hacked more with that increased sophistication, even at the user level with phishing and Sping, those kind of things. And then you're seeing the, the changes in the geopolitical market. That's beginning to drive, you know, governmental aspects and things like that are coming in. So we we've seen roughly about what 10 and a half trillion worth of cybersecurity estimated in 2025, our loss on an annual perspective across the globe is right around a hundred billion, 45% of organizations have experienced or will be experiencing an attack. And by, so it it's just on the rise and it's creating a lot of concern with, with our customers. >>Yeah, it's really not a matter of these days. If we get hit it's when, when, so organizations right across every industry have to be prepared, what is HPE? What is HPE C as opportunities, obviously the threat landscape changing dramatically, but there's opportunities there for your customers truly tighten security. What are some of those opportunities through the HPE lens? >>The, the opportunities as we're looking at it is from an internal perspective, we need to begin focusing on all the activities and work that we're doing. How do we at hard in our environments, how do we, how do we grow those things? And then begin to investigate the things that we need to do at, at the, in the supply base, as those customers are beginning to look at things, hardening their environments, looking at their it systems, where are the areas for penetration within their environments? When you look at the process, we, we think cyber security a lot of times is just about it. Attacks counterfeit is a big aspect associated with this, and that can impact many of the different types of organizations. So what we've done is we created a, a heat map, looking at the different places where we believe those penetrations can take place internal. And that's our, our communication back out to our customers, look at the areas where you can be penetrated. And then where do you think are the, the areas that you really need to focus on? And then look for that remediation plan? I think that's the opportunity for our, our customers is to harden, you know, have a zero trust, but verified type process, >>Right? That's critical these days, as we know that threat landscape has changed so much recently and is only going to continue to change. As we said, it's not a matter of if it's now a matter of when an organizations need to be ready for that. So then you talked about the heat map from a technology. What is to help organizations really achieve a 360 degree approach to security >>From an H focus starts with our chief technology office, right? So we're looking at all the strategies as are coming down. They, we look at designing our hardware solutions to be able to support those activities. We're designing our systems and, and the integration programs around like GreenLake as services that we're able to provide to our customers to support that. And, and then, you know, as we continue to do that, we, we will, you know, look at, look within the supply chain and what are the things that we can do there to help, you know, drive, you know, the, the improvements there to really ensure that the products that are being delivered will make those customers requirements. >>And I understand you might have a teaser for me in terms of what we can expect going forward with HPE, with respect to cybersecurity in the supply chain, >>Lots of really good things that are coming up. And from a supply chain perspective, look for an announcement coming up in October for cybersecurity month, about what our next steps are and how we're really going to attack this problem. >>Excellent. And we'll be waiting for cybersecurity month in October. And to hear that announced from, from HPE, Tony, thanks so much for joining me on the queue today. Talking a little bit about your background, how you got to where you are now, the trusted supply chain and what HPE is doing there to really help customers mitigate the risk. We appreciate your insights and your time. >>Thank you. I appreciate your time. >>All Tony Taylor. I'm Lisa Martin. Thank you so much for watching this conversation. We'll see you next time.
SUMMARY :
It's great to have you on the cube. Lisa's please, please, to be here. And then we implement a supply And your background is engineering. on those processes and then changing that to building product, I always love to hear people's back stories on how they got to where they were. Then that's where I ended So a lot of dynamics in the last couple of years, That's beginning to drive, you know, governmental aspects and things like that are coming in. What is HPE C as opportunities, obviously the threat landscape changing dramatically, our customers, look at the areas where you can be penetrated. So then you talked about the heat map from a technology. We're designing our systems and, and the integration programs around like GreenLake And from a supply chain perspective, look for And to hear that announced from, I appreciate your time. Thank you so much for watching this conversation.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Tony | PERSON | 0.99+ |
Tony Taylor | PERSON | 0.99+ |
August 2022 | DATE | 0.99+ |
2025 | DATE | 0.99+ |
October | DATE | 0.99+ |
360 degree | QUANTITY | 0.99+ |
10 and a half trillion | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
about 33 years | QUANTITY | 0.98+ |
Lisa | PERSON | 0.97+ |
around a hundred billion | QUANTITY | 0.95+ |
GreenLake | ORGANIZATION | 0.94+ |
last couple of years | DATE | 0.91+ |
45% | QUANTITY | 0.91+ |
zero trust | QUANTITY | 0.87+ |
last couple years | DATE | 0.86+ |
years | DATE | 0.72+ |
day | QUANTITY | 0.61+ |
Breaking Analysis: What Black Hat '22 tells us about securing the Supercloud
>> From theCUBE Studios in Palo Alto in Boston, bringing you data driven insights from theCUBE and ETR, This is "Breaking Analysis with Dave Vellante". >> Black Hat 22 was held in Las Vegas last week, the same time as theCUBE Supercloud event. Unlike AWS re:Inforce where words are carefully chosen to put a positive spin on security, Black Hat exposes all the warts of cyber and openly discusses its hard truths. It's a conference that's attended by technical experts who proudly share some of the vulnerabilities they've discovered, and, of course, by numerous vendors marketing their products and services. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this "Breaking Analysis", we summarize what we learned from discussions with several people who attended Black Hat and our analysis from reviewing dozens of keynotes, articles, sessions, and data from a recent Black Hat Attendees Survey conducted by Black Hat and Informa, and we'll end with the discussion of what it all means for the challenges around securing the supercloud. Now, I personally did not attend, but as I said at the top, we reviewed a lot of content from the event which is renowned for its hundreds of sessions, breakouts, and strong technical content that is, as they say, unvarnished. Chris Krebs, the former director of Us cybersecurity and infrastructure security agency, CISA, he gave the keynote, and he spoke about the increasing complexity of tech stacks and the ripple effects that that has on organizational risk. Risk was a big theme at the event. Where re:Inforce tends to emphasize, again, the positive state of cybersecurity, it could be said that Black Hat, as the name implies, focuses on the other end of the spectrum. Risk, as a major theme of the event at the show, got a lot of attention. Now, there was a lot of talk, as always, about the expanded threat service, you hear that at any event that's focused on cybersecurity, and tons of emphasis on supply chain risk as a relatively new threat that's come to the CISO's minds. Now, there was also plenty of discussion about hybrid work and how remote work has dramatically increased business risk. According to data from in Intel 471's Mark Arena, the previously mentioned Black Hat Attendee Survey showed that compromise credentials posed the number one source of risk followed by infrastructure vulnerabilities and supply chain risks, so a couple of surveys here that we're citing, and we'll come back to that in a moment. At an MIT cybersecurity conference earlier last decade, theCUBE had a hypothetical conversation with former Boston Globe war correspondent, Charles Sennott, about the future of war and the role of cyber. We had similar discussions with Dr. Robert Gates on theCUBE at a ServiceNow event in 2016. At Black Hat, these discussions went well beyond the theoretical with actual data from the war in Ukraine. It's clear that modern wars are and will be supported by cyber, but the takeaways are that they will be highly situational, targeted, and unpredictable because in combat scenarios, anything can happen. People aren't necessarily at their keyboards. Now, the role of AI was certainly discussed as it is at every conference, and particularly cyber conferences. You know, it was somewhat dissed as over hyped, not surprisingly, but while AI is not a panacea to cyber exposure, automation and machine intelligence can definitely augment, what appear to be and have been stressed out, security teams can do this by recommending actions and taking other helpful types of data and presenting it in a curated form that can streamline the job of the SecOps team. Now, most cyber defenses are still going to be based on tried and true monitoring and telemetry data and log analysis and curating known signatures and analyzing consolidated data, but increasingly, AI will help with the unknowns, i.e. zero-day threats and threat actor behaviors after infiltration. Now, finally, while much lip service was given to collaboration and public-private partnerships, especially after Stuxsnet was revealed early last decade, the real truth is that threat intelligence in the private sector is still evolving. In particular, the industry, mid decade, really tried to commercially exploit proprietary intelligence and, you know, do private things like private reporting and monetize that, but attitudes toward collaboration are trending in a positive direction was one of the sort of outcomes that we heard at Black Hat. Public-private partnerships are being both mandated by government, and there seems to be a willingness to work together to fight an increasingly capable adversary. These things are definitely on the rise. Now, without this type of collaboration, securing the supercloud is going to become much more challenging and confined to narrow solutions. and we're going to talk about that little later in the segment. Okay, let's look at some of the attendees survey data from Black Hat. Just under 200 really serious security pros took the survey, so not enough to slice and dice by hair color, eye color, height, weight, and favorite movie genre, but enough to extract high level takeaways. You know, these strongly agree or disagree survey responses can sometimes give vanilla outputs, but let's look for the ones where very few respondents strongly agree or disagree with a statement or those that overwhelmingly strongly agree or somewhat agree. So it's clear from this that the respondents believe the following, one, your credentials are out there and available to criminals. Very few people thought that that was, you know, unavoidable. Second, remote work is here to stay, and third, nobody was willing to really jinx their firms and say that they strongly disagree that they'll have to respond to a major cybersecurity incident within the next 12 months. Now, as we've reported extensively, COVID has permanently changed the cybersecurity landscape and the CISO's priorities and playbook. Check out this data that queries respondents on the pandemic's impact on cybersecurity, new requirements to secure remote workers, more cloud, more threats from remote systems and remote users, and a shift away from perimeter defenses that are no longer as effective, e.g. firewall appliances. Note, however, the fifth response that's down there highlighted in green. It shows a meaningful drop in the percentage of remote workers that are disregarding corporate security policy, still too many, but 10 percentage points down from 2021 survey. Now, as we've said many times, bad user behavior will trump good security technology virtually every time. Consistent with the commentary from Mark Arena's Intel 471 threat report, fishing for credentials is the number one concern cited in the Black Hat Attendees Survey. This is a people and process problem more than a technology issue. Yes, using multifactor authentication, changing passwords, you know, using unique passwords, using password managers, et cetera, they're all great things, but if it's too hard for users to implement these things, they won't do it, they'll remain exposed, and their organizations will remain exposed. Number two in the graphic, sophisticated attacks that could expose vulnerabilities in the security infrastructure, again, consistent with the Intel 471 data, and three, supply chain risks, again, consistent with Mark Arena's commentary. Ask most CISOs their number one problem, and they'll tell you, "It's a lack of talent." That'll be on the top of their list. So it's no surprise that 63% of survey respondents believe they don't have the security staff necessary to defend against cyber threats. This speaks to the rise of managed security service providers that we've talked about previously on "Breaking Analysis". We've seen estimates that less than 50% of organizations in the US have a SOC, and we see those firms as ripe for MSSP support as well as larger firms augmenting staff with managed service providers. Now, after re:Invent, we put forth this conceptual model that discussed how the cloud was becoming the first line of defense for CISOs, and DevOps was being asked to do more, things like securing the runtime, the containers, the platform, et cetera, and audit was kind of that last line of defense. So a couple things we picked up from Black Hat which are consistent with this shift and some that are somewhat new, first, is getting visibility across the expanded threat surface was a big theme at Black Hat. This makes it even harder to identify risk, of course, this being the expanded threat surface. It's one thing to know that there's a vulnerability somewhere. It's another thing to determine the severity of the risk, but understanding how easy or difficult it is to exploit that vulnerability and how to prioritize action around that. Vulnerability is increasingly complex for CISOs as the security landscape gets complexified. So what's happening is the SOC, if there even is one at the organization, is becoming federated. No longer can there be one ivory tower that's the magic god room of data and threat detection and analysis. Rather, the SOC is becoming distributed following the data, and as we just mentioned, the SOC is being augmented by the cloud provider and the managed service providers, the MSSPs. So there's a lot of critical security data that is decentralized and this will necessitate a new cyber data model where data can be synchronized and shared across a federation of SOCs, if you will, or mini SOCs or SOC capabilities that live in and/or embedded in an organization's ecosystem. Now, to this point about cloud being the first line of defense, let's turn to a story from ETR that came out of our colleague Eric Bradley's insight in a one-on-one he did with a senior IR person at a manufacturing firm. In a piece that ETR published called "Saved by Zscaler", check out this comment. Quote, "As the last layer, we are filtering all the outgoing internet traffic through Zscaler. And when an attacker is already on your network, and they're trying to communicate with the outside to exchange encryption keys, Zscaler is already blocking the traffic. It happened to us. It happened and we were saved by Zscaler." So that's pretty cool. So not only is the cloud the first line of defense, as we sort of depicted in that previous graphic, here's an example where it's also the last line of defense. Now, let's end on what this all means to securing the supercloud. At our Supercloud 22 event last week in our Palo Alto CUBE Studios, we had a session on this topic on supercloud, securing the supercloud. Security, in our view, is going to be one of the most important and difficult challenges for the idea of supercloud to become real. We reviewed in last week's "Breaking Analysis" a detailed discussion with Snowflake co-founder and president of products, Benoit Dageville, how his company approaches security in their data cloud, what we call a superdata cloud. Snowflake doesn't use the term supercloud. They use the term datacloud, but what if you don't have the focus, the engineering depth, and the bank roll that Snowflake has? Does that mean superclouds will only be developed by those companies with deep pockets and enormous resources? Well, that's certainly possible, but on the securing the supercloud panel, we had three technical experts, Gee Rittenhouse of Skyhigh Security, Piyush Sharrma who's the founder of Accurics who sold to Tenable, and Tony Kueh, who's the former Head of Product at VMware. Now, John Furrier asked each of them, "What is missing? What's it going to take to secure the supercloud? What has to happen?" Here's what they said. Play the clip. >> This is the final question. We have one minute left. I wish we had more time. This is a great panel. We'll bring you guys back for sure after the event. What one thing needs to happen to unify or get through the other side of this fragmentation and then the challenges for supercloud? Because remember, the enterprise equation is solve complexity with more complexity. Well, that's not what the market wants. They want simplicity. They want SaaS. They want ease of use. They want infrastructure risk code. What has to happen? What do you think, each of you? >> So I can start, and extending to the previous conversation, I think we need a consortium. We need a framework that defines that if you really want to operate on supercloud, these are the 10 things that you must follow. It doesn't matter whether you take AWS, Slash, or TCP or you have all, and you will have the on-prem also, which means that it has to follow a pattern, and that pattern is what is required for supercloud, in my opinion. Otherwise, security is going everywhere. They're like they have to fix everything, find everything, and so on and so forth. It's not going to be possible. So they need a framework. They need a consortium, and this consortium needs to be, I think, needs to led by the cloud providers because they're the ones who have these foundational infrastructure elements, and the security vendor should contribute on providing more severe detections or severe findings. So that's, in my opinion, should be the model. >> Great, well, thank you, Gee. >> Yeah, I would think it's more along the lines of a business model. We've seen in cloud that the scale matters, and once you're big, you get bigger. We haven't seen that coalesce around either a vendor, a business model, or whatnot to bring all of this and connect it all together yet. So that value proposition in the industry, I think, is missing, but there's elements of it already available. >> I think there needs to be a mindset. If you look, again, history repeating itself. The internet sort of came together around set of IETF, RSC standards. Everybody embraced and extended it, right? But still, there was, at least, a baseline, and I think at that time, the largest and most innovative vendors understood that they couldn't do it by themselves, right? And so I think what we need is a mindset where these big guys, like Google, let's take an example. They're not going to win at all, but they can have a substantial share. So how do they collaborate with the ecosystem around a set of standards so that they can bring their differentiation and then embrace everybody together. >> Okay, so Gee's point about a business model is, you know, business model being missing, it's broadly true, but perhaps Snowflake serves as a business model where they've just gone out and and done it, setting or trying to set a de facto standard by which data can be shared and monetized. They're certainly setting that standard and mandating that standard within the Snowflake ecosystem with its proprietary framework. You know, perhaps that is one answer, but Tony lays out a scenario where there's a collaboration mindset around a set of standards with an ecosystem. You know, intriguing is this idea of a consortium or a framework that Piyush was talking about, and that speaks to the collaboration or lack thereof that we spoke of earlier, and his and Tony's proposal that the cloud providers should lead with the security vendor ecosystem playing a supporting role is pretty compelling, but can you see AWS and Azure and Google in a kumbaya moment getting together to make that happen? It seems unlikely, but maybe a better partnership between the US government and big tech could be a starting point. Okay, that's it for today. I want to thank the many people who attended Black Hat, reported on it, wrote about it, gave talks, did videos, and some that spoke to me that had attended the event, Becky Bracken, who is the EIC at Dark Reading. They do a phenomenal job and the entire team at Dark Reading, the news desk there, Mark Arena, whom I mentioned, Garrett O'Hara, Nash Borges, Kelly Jackson, sorry, Kelly Jackson Higgins, Roya Gordon, Robert Lipovsky, Chris Krebs, and many others, thanks for the great, great commentary and the content that you put out there, and thanks to Alex Myerson, who's on production, and Alex manages the podcasts for us. Ken Schiffman is also in our Marlborough studio as well, outside of Boston. Kristen Martin and Cheryl Knight, they help get the word out on social media and in our newsletters, and Rob Hoff is our Editor-in-Chief at SiliconANGLE and does some great editing and helps with the titles of "Breaking Analysis" quite often. Remember these episodes, they're all available as podcasts, wherever you listen, just search for "Breaking Analysis Podcasts". I publish each on wikibon.com and siliconangle.com, and you could email me, get in touch with me at david.vellante@siliconangle.com or you can DM me @dvellante or comment on my LinkedIn posts, and please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on "Breaking Analysis". (upbeat music)
SUMMARY :
with Dave Vellante". and the ripple effects that This is the final question. and the security vendor should contribute that the scale matters, the largest and most innovative and the content that you put out there,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Cheryl Knight | PERSON | 0.99+ |
Alex Myerson | PERSON | 0.99+ |
Robert Lipovsky | PERSON | 0.99+ |
Eric Bradley | PERSON | 0.99+ |
Chris Krebs | PERSON | 0.99+ |
Charles Sennott | PERSON | 0.99+ |
Becky Bracken | PERSON | 0.99+ |
Rob Hoff | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
Ken Schiffman | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Kelly Jackson | PERSON | 0.99+ |
Gee Rittenhouse | PERSON | 0.99+ |
Benoit Dageville | PERSON | 0.99+ |
Tony Kueh | PERSON | 0.99+ |
Mark Arena | PERSON | 0.99+ |
Piyush Sharrma | PERSON | 0.99+ |
Kristen Martin | PERSON | 0.99+ |
Roya Gordon | PERSON | 0.99+ |
CISA | ORGANIZATION | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Palo Alto | LOCATION | 0.99+ |
Garrett O'Hara | PERSON | 0.99+ |
Accurics | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
2021 | DATE | 0.99+ |
Skyhigh Security | ORGANIZATION | 0.99+ |
Black Hat | ORGANIZATION | 0.99+ |
10 things | QUANTITY | 0.99+ |
Tenable | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
Nash Borges | PERSON | 0.99+ |
last week | DATE | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Robert Gates | PERSON | 0.99+ |
one minute | QUANTITY | 0.99+ |
63% | QUANTITY | 0.99+ |
less than 50% | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
SiliconANGLE | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
each | QUANTITY | 0.99+ |
Kelly Jackson Higgins | PERSON | 0.99+ |
Alex | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
Black Hat 22 | EVENT | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
third | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Black Hat | EVENT | 0.98+ |
three technical experts | QUANTITY | 0.98+ |
first line | QUANTITY | 0.98+ |
fifth response | QUANTITY | 0.98+ |
supercloud | ORGANIZATION | 0.98+ |
ETR | ORGANIZATION | 0.98+ |
Ukraine | LOCATION | 0.98+ |
Boston Globe | ORGANIZATION | 0.98+ |
Dr. | PERSON | 0.98+ |
one answer | QUANTITY | 0.97+ |
wikibon.com | OTHER | 0.97+ |
first line | QUANTITY | 0.97+ |
this week | DATE | 0.96+ |
first | QUANTITY | 0.96+ |
Marlborough | LOCATION | 0.96+ |
siliconangle.com | OTHER | 0.95+ |
Saved by Zscaler | TITLE | 0.95+ |
Palo Alto CUBE Studios | LOCATION | 0.95+ |
hundreds of sessions | QUANTITY | 0.95+ |
ORGANIZATION | 0.94+ | |
both | QUANTITY | 0.94+ |
one | QUANTITY | 0.94+ |
dozens of keynotes | QUANTITY | 0.93+ |
today | DATE | 0.93+ |
Breaking Analysis Further defining Supercloud W/ tech leaders VMware, Snowflake, Databricks & others
from the cube studios in palo alto in boston bringing you data driven insights from the cube and etr this is breaking analysis with dave vellante at our inaugural super cloud 22 event we further refined the concept of a super cloud iterating on the definition the salient attributes and some examples of what is and what is not a super cloud welcome to this week's wikibon cube insights powered by etr you know snowflake has always been what we feel is one of the strongest examples of a super cloud and in this breaking analysis from our studios in palo alto we unpack our interview with benoit de javille co-founder and president of products at snowflake and we test our super cloud definition on the company's data cloud platform and we're really looking forward to your feedback first let's examine how we defl find super cloudant very importantly one of the goals of super cloud 22 was to get the community's input on the definition and iterate on previous work super cloud is an emerging computing architecture that comprises a set of services which are abstracted from the underlying primitives of hyperscale clouds we're talking about services such as compute storage networking security and other native tooling like machine learning and developer tools to create a global system that spans more than one cloud super cloud as shown on this slide has five essential properties x number of deployment models and y number of service models we're looking for community input on x and y and on the first point as well so please weigh in and contribute now we've identified these five essential elements of a super cloud let's talk about these first the super cloud has to run its services on more than one cloud leveraging the cloud native tools offered by each of the cloud providers the builder of the super cloud platform is responsible for optimizing the underlying primitives of each cloud and optimizing for the specific needs be it cost or performance or latency or governance data sharing security etc but those primitives must be abstracted such that a common experience is delivered across the clouds for both users and developers the super cloud has a metadata intelligence layer that can maximize efficiency for the specific purpose of the super cloud i.e the purpose that the super cloud is intended for and it does so in a federated model and it includes what we call a super pass this is a prerequisite that is a purpose-built component and enables ecosystem partners to customize and monetize incremental services while at the same time ensuring that the common experiences exist across clouds now in terms of deployment models we'd really like to get more feedback on this piece but here's where we are so far based on the feedback we got at super cloud 22. we see three deployment models the first is one where a control plane may run on one cloud but supports data plane interactions with more than one other cloud the second model instantiates the super cloud services on each individual cloud and within regions and can support interactions across more than one cloud with a unified interface connecting those instantiations those instances to create a common experience and the third model superimposes its services as a layer or in the case of snowflake they call it a mesh on top of the cloud on top of the cloud providers region or regions with a single global instantiation a single global instantiation of those services which spans multiple cloud providers this is our understanding from a comfort the conversation with benoit dejaville as to how snowflake approaches its solutions and for now we're going to park the service models we need to more time to flesh that out and we'll propose something shortly for you to comment on now we peppered benoit dejaville at super cloud 22 to test how the snowflake data cloud aligns to our concepts and our definition let me also say that snowflake doesn't use the term data cloud they really want to respect and they want to denigrate the importance of their hyperscale partners nor do we but we do think the hyperscalers today anyway are building or not building what we call super clouds but they are but but people who bar are building super clouds are building on top of hyperscale clouds that is a prerequisite so here are the questions that we tested with snowflake first question how does snowflake architect its data cloud and what is its deployment model listen to deja ville talk about how snowflake has architected a single system play the clip there are several ways to do this you know uh super cloud as as you name them the way we we we picked is is to create you know one single system and that's very important right the the the um [Music] there are several ways right you can instantiate you know your solution uh in every region of a cloud and and you know potentially that region could be a ws that region could be gcp so you are indeed a multi-cloud solution but snowflake we did it differently we are really creating cloud regions which are superposed on top of the cloud provider you know region infrastructure region so we are building our regions but but where where it's very different is that each region of snowflake is not one in instantiation of our service our service is global by nature we can move data from one region to the other when you land in snowflake you land into one region but but you can grow from there and you can you know exist in multiple clouds at the same time and that's very important right it's not one single i mean different instantiation of a system is one single instantiation which covers many cloud regions and many cloud providers snowflake chose the most advanced level of our three deployment models dodgeville talked about too presumably so it could maintain maximum control and ensure that common experience like the iphone model next we probed about the technical enablers of the data cloud listen to deja ville talk about snow grid he uses the term mesh and then this can get confusing with the jamaicani's data mesh concept but listen to benoit's explanation well as i said you know first we start by building you know snowflake regions we have today furry region that spawn you know the world so it's a worldwide worldwide system with many regions but all these regions are connected together they are you know meshed together with our technology we name it snow grid and that makes it hard because you know regions you know azure region can talk to a ws region or gcp regions and and as a as a user of our cloud you you don't see really these regional differences that you know regions are in different you know potentially clown when you use snowflake you can exist your your presence as an organization can be in several regions several clouds if you want geographic and and and both geographic and cloud provider so i can share data irrespective of the the cloud and i'm in the snowflake data cloud is that correct i can do that today exactly and and that's very critical right what we wanted is to remove data silos and and when you instantiate a system in one single region and that system is locked in that region you cannot communicate with other parts of the world you are locking the data in one region right and we didn't want to do that we wanted you know data to be distributed the way customer wants it to be distributed across the world and potentially sharing data at world scale now maybe there are many ways to skin the other cat meaning perhaps if a platform does instantiate in multiple places there are ways to share data but this is how snowflake chose to approach the problem next question how do you deal with latency in this big global system this is really important to us because while snowflake has some really smart people working as engineers and and the like we don't think they've solved for the speed of light problem the best people working on it as we often joke listen to benoit deja ville's comments on this topic so yes and no the the way we do it it's very expensive to do that because generally if you want to join you know data which is in which are in different regions and different cloud it's going to be very expensive because you need to move you know data every time you join it so the way we do it is that you replicate the subset of data that you want to access from one region from other regions so you can create this data mesh but data is replicated to make it very cheap and very performant too and is the snow grid does that have the metadata intelligence yes to actually can you describe that a little bit yeah snow grid is both uh a way to to exchange you know metadata about so each region of snowflake knows about all the other regions of snowflake every time we create a new region diary you know the metadata is distributed over our data cloud not only you know region knows all the regions but knows you know every organization that exists in our clouds where this organization is where data can be replicated by this organization and then of course it's it's also used as a way to uh uh exchange data right so you can exchange you know beta by scale of data size and we just had i was just receiving an email from one of our customers who moved more than four petabytes of data cross-region cross you know cloud providers in you know few days and you know it's a lot of data so it takes you know some time to move but they were able to do that online completely online and and switch over you know to the diff to the other region which is failover is very important also so yes and no probably means typically no he says yes and no probably means no so it sounds like snowflake is selectively pulling small amounts of data and replicating it where necessary but you also heard him talk about the metadata layer which is one of the essential aspects of super cloud okay next we dug into security it's one of the most important issues and we think one of the hardest parts related to deploying super cloud so we've talked about how the cloud has become the first line of defense for the cso but now with multi-cloud you have multiple first lines of defense and that means multiple shared responsibility models and multiple tool sets from different cloud providers and an expanded threat surface so listen to benoit's explanation here please play the clip this is a great question uh security has always been the most important aspect of snowflake since day one right this is the question that every customer of ours has you know how you can you guarantee the security of my data and so we secure data really tightly in region we have several layers of security it starts by by encrypting it every data at rest and that's very important a lot of customers are not doing that right you hear these attacks for example on on cloud you know where someone left you know their buckets uh uh open and then you know you can access the data because it's a non-encrypted uh so we are encrypting everything at rest we are encrypting everything in transit so a region is very secure now you know you never from one region you never access data from another region in snowflake that's why also we replicate data now the replication of that data across region or the metadata for that matter is is really highly secure so snow grits ensure that everything is encrypted everything is you know we have multiple you know encryption keys and it's you know stored in hardware you know secure modules so we we we built you know snow grids such that it's secure and it allows very secure movement of data so when we heard this explanation we immediately went to the lowest common denominator question meaning when you think about how aws for instance deals with data in motion or data and rest it might be different from how another cloud provider deals with it so how does aws uh uh uh differences for example in the aws maturity model for various you know cloud capabilities you know let's say they've got a faster nitro or graviton does it do do you have to how does snowflake deal with that do they have to slow everything else down like imagine a caravan cruising you know across the desert so you know every truck can keep up let's listen it's a great question i mean of course our software is abstracting you know all the cloud providers you know infrastructure so that when you run in one region let's say aws or azure it doesn't make any difference as far as the applications are concerned and and this abstraction of course is a lot of work i mean really really a lot of work because it needs to be secure it needs to be performance and you know every cloud and it has you know to expose apis which are uniform and and you know cloud providers even though they have potentially the same concept let's say blob storage apis are completely different the way you know these systems are secure it's completely different the errors that you can get and and the retry you know mechanism is very different from one cloud to the other performance is also different we discovered that when we were starting to port our software and and and you know we had to completely rethink how to leverage blob storage in that cloud versus that cloud because just of performance too so we had you know for example to you know stripe data so all this work is work that's you know you don't need as an application because our vision really is that applications which are running in our data cloud can you know be abstracted of all this difference and and we provide all the services all the workload that this application need whether it's transactional access to data analytical access to data you know managing you know logs managing you know metrics all of these is abstracted too such that they are not you know tied to one you know particular service of one cloud and and distributing this application across you know many regions many cloud is very seamless so from that answer we know that snowflake takes care of everything but we really don't understand the performance implications in you know in that specific case but we feel pretty certain that the promises that snowflake makes around governance and security within their data sharing construct construct will be kept now another criterion that we've proposed for super cloud is a super pass layer to create a common developer experience and an enabler for ecosystem partners to monetize please play the clip let's listen we build it you know a custom build because because as you said you know what exists in one cloud might not exist in another cloud provider right so so we have to build you know on this all these this components that modern application mode and that application need and and and and that you know goes to machine learning as i say transactional uh analytical system and the entire thing so such that they can run in isolation basically and the objective is the developer experience will be identical across those clouds yes right the developers doesn't need to worry about cloud provider and actually our system we have we didn't talk about it but the marketplace that we have which allows actually to deliver we're getting there yeah okay now we're not going to go deep into ecosystem today we've talked about snowflakes strengths in this regard but snowflake they pretty much ticked all the boxes on our super cloud attributes and definition we asked benoit dejaville to confirm that this is all shipping and available today and he also gave us a glimpse of the future play the clip and we are still developing it you know the transactional you know unistore as we call it was announced in last summit so so they are still you know working properly but but but that's the vision right and and and that's important because we talk about the infrastructure right you mentioned a lot about storage and compute but it's not only that right when you think about application they need to use the transactional database they need to use an analytical system they need to use you know machine learning so you need to provide also all these services which are consistent across all the cloud providers so you can hear deja ville talking about expanding beyond taking advantage of the core infrastructure storage and networking et cetera and bringing intelligence to the data through machine learning and ai so of course there's more to come and there better be at this company's valuation despite the recent sharp pullback in a tightening fed environment okay so i know it's cliche but everyone's comparing snowflakes and data bricks databricks has been pretty vocal about its open source posture compared to snowflakes and it just so happens that we had aligotsy on at super cloud 22 as well he wasn't in studio he had to do remote because i guess he's presenting at an investor conference this week so we had to bring him in remotely now i didn't get to do this interview john furrier did but i listened to it and captured this clip about how data bricks sees super cloud and the importance of open source take a listen to goatzee yeah i mean let me start by saying we just we're big fans of open source we think that open source is a force in software that's going to continue for you know decades hundreds of years and it's going to slowly replace all proprietary code in its way we saw that you know it could do that with the most advanced technology windows you know proprietary operating system very complicated got replaced with linux so open source can pretty much do anything and what we're seeing with the data lake house is that slowly the open source community is building a replacement for the proprietary data warehouse you know data lake machine learning real-time stack in open source and we're excited to be part of it for us delta lake is a very important project that really helps you standardize how you lay out your data in the cloud and with it comes a really important protocol called delta sharing that enables you in an open way actually for the first time ever share large data sets between organizations but it uses an open protocol so the great thing about that is you don't need to be a database customer you don't even like databricks you just need to use this open source project and you can now securely share data sets between organizations across clouds and it actually does so really efficiently just one copy of the data so you don't have to copy it if you're within the same cloud so the implication of ellie gotzi's comments is that databricks with delta sharing as john implied is playing a long game now i don't know if enough about the databricks architecture to comment in detail i got to do more research there so i reached out to my two analyst friends tony bear and sanji mohan to see what they thought because they cover these companies pretty closely here's what tony bear said quote i've viewed the divergent lake house strategies of data bricks and snowflake in the context of their roots prior to delta lake databrick's prime focus was the compute not the storage layer and more specifically they were a compute engine not a database snowflake approached from the opposite end of the pool as they originally fit the mold of the classic database company rather than a specific compute engine per se the lake house pushes both companies outside of their original comfort zones data bricks to storage snowflake to compute engine so it makes perfect sense for databricks to embrace the open source narrative at the storage layer and for snowflake to continue its walled garden approach but in the long run their strategies are already overlapping databricks is not a 100 open source company its practitioner experience has always been proprietary and now so is its sql query engine likewise snowflake has had to open up with the support of iceberg for open data lake format the question really becomes how serious snowflake will be in making iceberg a first-class citizen in its environment that is not necessarily officially branding a lake house but effectively is and likewise can databricks deliver the service levels associated with walled gardens through a more brute force approach that relies heavily on the query engine at the end of the day those are the key requirements that will matter to data bricks and snowflake customers end quote that was some deep thought by by tony thank you for that sanjay mohan added the following quote open source is a slippery slope people buy mobile phones based on open source android but it's not fully open similarly databricks delta lake was not originally fully open source and even today its photon execution engine is not we are always going to live in a hybrid world snowflake and databricks will support whatever model works best for them and their customers the big question is do customers care as deeply about which vendor has a higher degree of openness as we technology people do i believe customers evaluation criteria is far more nuanced than just to decipher each vendor's open source claims end quote okay so i had to ask dodgeville about their so-called wall garden approach and what their strategy is with apache iceberg here's what he said iceberg is is very important so just to to give some context iceberg is an open you know table format right which was you know first you know developed by netflix and netflix you know put it open source in the apache community so we embrace that's that open source standard because because it's widely used by by many um many you know companies and also many companies have you know really invested a lot of effort in building you know big data hadoop solution or data like solution and they want to use snowflake and they couldn't really use snowflake because all their data were in open you know formats so we are embracing icebergs to help these companies move through the cloud but why we have been relentless with direct access to data direct access to data is a little bit of a problem for us and and the reason is when you direct access to data now you have direct access to storage now you have to understand for example the specificity of one cloud versus the other so as soon as you start to have direct access to data you lose your you know your cloud diagnostic layer you don't access data with api when you have direct access to data it's very hard to secure data because you need to grant access direct access to tools which are not you know protected and you see a lot of you know hacking of of data you know because of that so so that was not you know direct access to data is not serving well our customers and that's why we have been relented to do that because it's it's cr it's it's not cloud diagnostic it's it's you you have to code that you have to you you you need a lot of intelligence while apis access so we want open apis that's that's i guess the way we embrace you know openness is is by open api versus you know you access directly data here's my take snowflake is hedging its bets because enough people care about open source that they have to have some open data format options and it's good optics and you heard benoit deja ville talk about the risks of directly accessing the data and the complexities it brings now is that maybe a little fud against databricks maybe but same can be said for ollie's comments maybe flooding the proprietaryness of snowflake but as both analysts pointed out open is a spectrum hey i remember unix used to equal open systems okay let's end with some etr spending data and why not compare snowflake and data bricks spending profiles this is an xy graph with net score or spending momentum on the y-axis and pervasiveness or overlap in the data set on the x-axis this is data from the january survey when snowflake was holding above 80 percent net score off the charts databricks was also very strong in the upper 60s now let's fast forward to this next chart and show you the july etr survey data and you can see snowflake has come back down to earth now remember anything above 40 net score is highly elevated so both companies are doing well but snowflake is well off its highs and data bricks has come down somewhat as well databricks is inching to the right snowflake rocketed to the right post its ipo and as we know databricks wasn't able to get to ipo during the covet bubble ali gotzi is at the morgan stanley ceo conference this week they got plenty of cash to withstand a long-term recession i'm told and they've started the message that they're a billion dollars in annualized revenue i'm not sure exactly what that means i've seen some numbers on their gross margins i'm not sure what that means i've seen some numbers on their net retention revenue or net revenue retention again i'll reserve judgment until we see an s1 but it's clear both of these companies have momentum and they're out competing in the market well as always be the ultimate arbiter different philosophies perhaps is it like democrats and republicans well it could be but they're both going after a solving data problem both companies are trying to help customers get more value out of their data and both companies are highly valued so they have to perform for their investors to paraphrase ralph nader the similarities may be greater than the differences okay that's it for today thanks to the team from palo alto for this awesome super cloud studio build alex myerson and ken shiffman are on production in the palo alto studios today kristin martin and sheryl knight get the word out to our community rob hoff is our editor-in-chief over at siliconangle thanks to all please check out etr.ai for all the survey data remember these episodes are all available as podcasts wherever you listen just search breaking analysis podcasts i publish each week on wikibon.com and siliconangle.com and you can email me at david.vellante at siliconangle.com or dm me at devellante or comment on my linkedin posts and please as i say etr has got some of the best survey data in the business we track it every quarter and really excited to be partners with them this is dave vellante for the cube insights powered by etr thanks for watching and we'll see you next time on breaking analysis [Music] you
SUMMARY :
and and the retry you know mechanism is
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
netflix | ORGANIZATION | 0.99+ |
john furrier | PERSON | 0.99+ |
palo alto | ORGANIZATION | 0.99+ |
tony bear | PERSON | 0.99+ |
boston | LOCATION | 0.99+ |
sanji mohan | PERSON | 0.99+ |
ken shiffman | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
ellie gotzi | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
more than four petabytes | QUANTITY | 0.99+ |
first point | QUANTITY | 0.99+ |
kristin martin | PERSON | 0.99+ |
both companies | QUANTITY | 0.99+ |
first question | QUANTITY | 0.99+ |
rob hoff | PERSON | 0.99+ |
more than one | QUANTITY | 0.99+ |
second model | QUANTITY | 0.98+ |
alex myerson | PERSON | 0.98+ |
third model | QUANTITY | 0.98+ |
one region | QUANTITY | 0.98+ |
one copy | QUANTITY | 0.98+ |
one region | QUANTITY | 0.98+ |
five essential elements | QUANTITY | 0.98+ |
android | TITLE | 0.98+ |
100 | QUANTITY | 0.98+ |
first line | QUANTITY | 0.98+ |
Databricks | ORGANIZATION | 0.98+ |
sheryl | PERSON | 0.98+ |
more than one cloud | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
iphone | COMMERCIAL_ITEM | 0.98+ |
super cloud 22 | EVENT | 0.98+ |
each cloud | QUANTITY | 0.98+ |
each | QUANTITY | 0.97+ |
sanjay mohan | PERSON | 0.97+ |
john | PERSON | 0.97+ |
republicans | ORGANIZATION | 0.97+ |
this week | DATE | 0.97+ |
hundreds of years | QUANTITY | 0.97+ |
siliconangle | ORGANIZATION | 0.97+ |
each week | QUANTITY | 0.97+ |
data lake house | ORGANIZATION | 0.97+ |
one single region | QUANTITY | 0.97+ |
january | DATE | 0.97+ |
dave vellante | PERSON | 0.96+ |
each region | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
dave vellante | PERSON | 0.96+ |
tony | PERSON | 0.96+ |
above 80 percent | QUANTITY | 0.95+ |
more than one cloud | QUANTITY | 0.95+ |
more than one cloud | QUANTITY | 0.95+ |
data lake | ORGANIZATION | 0.95+ |
five essential properties | QUANTITY | 0.95+ |
democrats | ORGANIZATION | 0.95+ |
first time | QUANTITY | 0.95+ |
july | DATE | 0.94+ |
linux | TITLE | 0.94+ |
etr | ORGANIZATION | 0.94+ |
devellante | ORGANIZATION | 0.93+ |
dodgeville | ORGANIZATION | 0.93+ |
each vendor | QUANTITY | 0.93+ |
super cloud 22 | ORGANIZATION | 0.93+ |
delta lake | ORGANIZATION | 0.92+ |
three deployment models | QUANTITY | 0.92+ |
first lines | QUANTITY | 0.92+ |
dejaville | LOCATION | 0.92+ |
day one | QUANTITY | 0.92+ |
Closing Remarks | Supercloud22
(gentle upbeat music) >> Welcome back everyone, to "theCUBE"'s live stage performance here in Palo Alto, California at "theCUBE" Studios. I'm John Furrier with Dave Vellante, kicking off our first inaugural Supercloud event. It's an editorial event, we wanted to bring together the best in the business, the smartest, the biggest, the up-and-coming startups, venture capitalists, everybody, to weigh in on this new Supercloud trend, this structural change in the cloud computing business. We're about to run the Ecosystem Speaks, which is a bunch of pre-recorded companies that wanted to get their voices on the record, so stay tuned for the rest of the day. We'll be replaying all that content and they're going to be having some really good commentary and hear what they have to say. I had a chance to interview and so did Dave. Dave, this is our closing segment where we kind of unpack everything or kind of digest and report. So much to kind of digest from the conversations today, a wide range of commentary from Supercloud operating system to developers who are in charge to maybe it's an ops problem or maybe Oracle's a Supercloud. I mean, that was debated. So so much discussion, lot to unpack. What was your favorite moments? >> Well, before I get to that, I think, I go back to something that happened at re:Invent last year. Nick Sturiale came up, Steve Mullaney from Aviatrix; we're going to hear from him shortly in the Ecosystem Speaks. Nick Sturiale's VC said "it's happening"! And what he was talking about is this ecosystem is exploding. They're building infrastructure or capabilities on top of the CapEx infrastructure. So, I think it is happening. I think we confirmed today that Supercloud is a thing. It's a very immature thing. And I think the other thing, John is that, it seems to me that the further you go up the stack, the weaker the business case gets for doing Supercloud. We heard from Marianna Tessel, it's like, "Eh, you know, we can- it was easier to just do it all on one cloud." This is a point that, Adrian Cockcroft just made on the panel and so I think that when you break out the pieces of the stack, I think very clearly the infrastructure layer, what we heard from Confluent and HashiCorp, and certainly VMware, there's a real problem there. There's a real need at the infrastructure layer and then even at the data layer, I think Benoit Dageville did a great job of- You know, I was peppering him with all my questions, which I basically was going through, the Supercloud definition and they ticked the box on pretty much every one of 'em as did, by the way Ali Ghodsi you know, the big difference there is the philosophy of Republicans and Democrats- got open versus closed, not to apply that to either one side, but you know what I mean! >> And the similarities are probably greater than differences. >> Berkely, I would probably put them on the- >> Yeah, we'll put them on the Democrat side we'll make Snowflake the Republicans. But so- but as we say there's a lot of similarities as well in terms of what their objectives are. So, I mean, I thought it was a great program and a really good start to, you know, an industry- You brought up the point about the industry consortium, asked Kit Colbert- >> Yep. >> If he thought that was something that was viable and what'd they say? That hyperscale should lead it? >> Yeah, they said hyperscale should lead it and there also should be an industry consortium to get the voices out there. And I think VMware is very humble in how they're putting out their white paper because I think they know that they can't do it all and that they do not have a great track record relative to cloud. And I think, but they have a great track record of loyal installed base ops people using VMware vSphere all the time. >> Yeah. >> So I think they need a catapult moment where they can catapult to the cloud native which they've been working on for years under Raghu and the team. So the question on VMware is in the light of Broadcom, okay, acquisition of VMware, this is an opportunity or it might not be an opportunity or it might be a spin-out or something, I just think VMware's got way too much engineering culture to be ignored, Dave. And I think- well, I'm going to watch this very closely because they can pull off some sort of rallying moment. I think they could. And then you hear the upstarts like Platform9, Rafay Systems and others they're all like, "Yes, we need to unify behind something. There needs to be some sort of standard". You know, we heard the argument of you know, more standards bodies type thing. So, it's interesting, maybe "theCUBE" could be that but we're going to certainly keep the conversation going. >> I thought one of the most memorable statements was Vittorio who said we- for VMware, we want our cake, we want to eat it too and we want to lose weight. So they have a lot of that aspirations there! (John laughs) >> And then I thought, Adrian Cockcroft said you know, the devs, they want to get married. They were marrying everybody, and then the ops team, they have to deal with the divorce. >> Yeah. >> And I thought that was poignant. It's like, they want consistency, they want standards, they got to be able to scale And Lori MacVittie, I'm not sure you agree with this, I'd have to think about it, but she was basically saying, all we've talked about is devs devs devs for the last 10 years, going forward we're going to be talking about ops. >> Yeah, and I think one of the things I learned from this day and looking back, and some kind of- I've been sauteing through all the interviews. If you zoom out, for me it was the epiphany of developers are still in charge. And I've said, you know, the developers are doing great, it's an ops security thing. Not sure I see that the way I was seeing before. I think what I learned was the refactoring pattern that's emerging, In Sik Rhee brought this up from Vertex Ventures with Marianna Tessel, it's a nuanced point but I think he's right on which is the pattern that's emerging is developers want ease-of-use tooling, they're driving the change and I think the developers in the devs ops ethos- it's never going to be separate. It's going to be DevOps. That means developers are driving operations and then security. So what I learned was it's not ops teams leveling up, it's devs redefining what ops is. >> Mm. And I think that to me is where Supercloud's going to be interesting- >> Forcing that. >> Yeah. >> Forcing the change because the structural change is open sources thriving, devs are still in charge and they still want more developers, Vittorio "we need more developers", right? So the developers are in charge and that's clear. Now, if that happens- if you believe that to be true the domino effect of that is going to be amazing because then everyone who gets on the wrong side of history, on the ops and security side, is going to be fighting a trend that may not be fight-able, you know, it might be inevitable. And so the winners are the ones that are refactoring their business like Snowflake. Snowflake is a data warehouse that had nothing to do with Amazon at first. It was the developers who said "I'm going to refactor data warehouse on AWS". That is a developer-driven refactorization and a business model. So I think that's the pattern I'm seeing is that this concept refactoring, patterns and the developer trajectory is critical. >> I thought there was another great comment. Maribel Lopez, her Lord of the Rings comment: "there will be no one ring to rule them all". Now at the same time, Kit Colbert, you know what we asked him straight out, "are you the- do you want to be the, the Supercloud OS?" and he basically said, "yeah, we do". Now, of course they're confined to their world, which is a pretty substantial world. I think, John, the reason why Maribel is so correct is security. I think security's a really hard problem to solve. You've got cloud as the first layer of defense and now you've got multiple clouds, multiple layers of defense, multiple shared responsibility models. You've got different tools for XDR, for identity, for governance, for privacy all within those different clouds. I mean, that really is a confusing picture. And I think the hardest- one of the hardest parts of Supercloud to solve. >> Yeah, and I thought the security founder Gee Rittenhouse, Piyush Sharrma from Accurics, which sold to Tenable, and Tony Kueh, former head of product at VMware. >> Right. >> Who's now an investor kind of looking for his next gig or what he is going to do next. He's obviously been extremely successful. They brought up the, the OS factor. Another point that they made I thought was interesting is that a lot of the things to do to solve the complexity is not doable. >> Yeah. >> It's too much work. So managed services might field the bit. So, and Chris Hoff mentioned on the Clouderati segment that the higher level services being a managed service and differentiating around the service could be the key competitive advantage for whoever does it. >> I think the other thing is Chris Hoff said "yeah, well, Web 3, metaverse, you know, DAO, Superclouds" you know, "Stupercloud" he called it and this bring up- It resonates because one of the criticisms that Charles Fitzgerald laid on us was, well, it doesn't help to throw out another term. I actually think it does help. And I think the reason it does help is because it's getting people to think. When you ask people about Supercloud, they automatically- it resonates with them. They play back what they think is the future of cloud. So Supercloud really talks to the future of cloud. There's a lot of aspects to it that need to be further defined, further thought out and we're getting to the point now where we- we can start- begin to say, okay that is Supercloud or that isn't Supercloud. >> I think that's really right on. I think Supercloud at the end of the day, for me from the simplest way to describe it is making sure that the developer experience is so good that the operations just happen. And Marianna Tessel said, she's investing in making their developer experience high velocity, very easy. So if you do that, you have to run on premise and on the cloud. So hybrid really is where Supercloud is going right now. It's not multi-cloud. Multi-cloud was- that was debunked on this session today. I thought that was clear. >> Yeah. Yeah, I mean I think- >> It's not about multi-cloud. It's about operationally seamless operations across environments, public cloud to on-premise, basically. >> I think we got consensus across the board that multi-cloud, you know, is a symptom Chuck Whitten's thing of multi-cloud by default versus multi- multi-cloud has not been a strategy, Kit Colbert said, up until the last couple of years. Yeah, because people said, "oh we got all these multiple clouds, what do we do with it?" and we got this mess that we have to solve. Whereas, I think Supercloud is something that is a strategy and then the other nuance that I keep bringing up is it's industries that are- as part of their digital transformation, are building clouds. Now, whether or not they become superclouds, I'm not convinced. I mean, what Goldman Sachs is doing, you know, with AWS, what Walmart's doing with Azure connecting their on-prem tools to those public clouds, you know, is that a supercloud? I mean, we're going to have to go back and really look at that definition. Or is it just kind of a SAS that spans on-prem and cloud. So, as I said, the further you go up the stack, the business case seems to wane a little bit but there's no question in my mind that from an infrastructure standpoint, to your point about operations, there's a real requirement for super- what we call Supercloud. >> Well, we're going to keep the conversation going, Dave. I want to put a shout out to our founding supporters of this initiative. Again, we put this together really fast kind of like a pilot series, an inaugural event. We want to have a face-to-face event as an industry event. Want to thank the founding supporters. These are the people who donated their time, their resource to contribute content, ideas and some cash, not everyone has committed some financial contribution but we want to recognize the names here. VMware, Intuit, Red Hat, Snowflake, Aisera, Alteryx, Confluent, Couchbase, Nutanix, Rafay Systems, Skyhigh Security, Aviatrix, Zscaler, Platform9, HashiCorp, F5 and all the media partners. Without their support, this wouldn't have happened. And there are more people that wanted to weigh in. There was more demand than we could pull off. We'll certainly continue the Supercloud conversation series here on "theCUBE" and we'll add more people in. And now, after this session, the Ecosystem Speaks session, we're going to run all the videos of the big name companies. We have the Nutanix CEOs weighing in, Aviatrix to name a few. >> Yeah. Let me, let me chime in, I mean you got Couchbase talking about Edge, Platform 9's going to be on, you know, everybody, you know Insig was poopoo-ing Oracle, but you know, Oracle and Azure, what they did, two technical guys, developers are coming on, we dig into what they did. Howie Xu from Zscaler, Paula Hansen is going to talk about going to market in the multi-cloud world. You mentioned Rajiv, the CEO of Nutanix, Ramesh is going to talk about multi-cloud infrastructure. So that's going to run now for, you know, quite some time here and some of the pre-record so super excited about that and I just want to thank the crew. I hope guys, I hope you have a list of credits there's too many of you to mention, but you know, awesome jobs really appreciate the work that you did in a very short amount of time. >> Well, I'm excited. I learned a lot and my takeaway was that Supercloud's a thing, there's a kind of sense that people want to talk about it and have real conversations, not BS or FUD. They want to have real substantive conversations and we're going to enable that on "theCUBE". Dave, final thoughts for you. >> Well, I mean, as I say, we put this together very quickly. It was really a phenomenal, you know, enlightening experience. I think it confirmed a lot of the concepts and the premises that we've put forth, that David Floyer helped evolve, that a lot of these analysts have helped evolve, that even Charles Fitzgerald with his antagonism helped to really sharpen our knives. So, you know, thank you Charles. And- >> I like his blog, by the I'm a reader- >> Yeah, absolutely. And it was great to be back in Palo Alto. It was my first time back since pre-COVID, so, you know, great job. >> All right. I want to thank all the crew and everyone. Thanks for watching this first, inaugural Supercloud event. We are definitely going to be doing more of these. So stay tuned, maybe face-to-face in person. I'm John Furrier with Dave Vellante now for the Ecosystem chiming in, and they're going to speak and share their thoughts here with "theCUBE" our first live stage performance event in our studio. Thanks for watching. (gentle upbeat music)
SUMMARY :
and they're going to be having as did, by the way Ali Ghodsi you know, And the similarities on the Democrat side And I think VMware is very humble So the question on VMware is and we want to lose weight. they have to deal with the divorce. And I thought that was poignant. Not sure I see that the Mm. And I think that to me is where And so the winners are the ones that are of the Rings comment: the security founder Gee Rittenhouse, a lot of the things to do So, and Chris Hoff mentioned on the is the future of cloud. is so good that the public cloud to on-premise, basically. So, as I said, the further and all the media partners. So that's going to run now for, you know, I learned a lot and my takeaway was and the premises that we've put forth, since pre-COVID, so, you know, great job. and they're going to speak
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tristan | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
John | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Steve Mullaney | PERSON | 0.99+ |
Katie | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Charles | PERSON | 0.99+ |
Mike Dooley | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Tristan Handy | PERSON | 0.99+ |
Bob | PERSON | 0.99+ |
Maribel Lopez | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Mike Wolf | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Merim | PERSON | 0.99+ |
Adrian Cockcroft | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Brian | PERSON | 0.99+ |
Brian Rossi | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Chris Wegmann | PERSON | 0.99+ |
Whole Foods | ORGANIZATION | 0.99+ |
Eric | PERSON | 0.99+ |
Chris Hoff | PERSON | 0.99+ |
Jamak Dagani | PERSON | 0.99+ |
Jerry Chen | PERSON | 0.99+ |
Caterpillar | ORGANIZATION | 0.99+ |
John Walls | PERSON | 0.99+ |
Marianna Tessel | PERSON | 0.99+ |
Josh | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Jerome | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Lori MacVittie | PERSON | 0.99+ |
2007 | DATE | 0.99+ |
Seattle | LOCATION | 0.99+ |
10 | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
Ali Ghodsi | PERSON | 0.99+ |
Peter McKee | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Eric Herzog | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Mike | PERSON | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
five years | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Kit Colbert | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Tanuja Randery | PERSON | 0.99+ |
Securing the Supercloud | Supercloud22
>>Okay, welcome back everyone to Supercloud 22, this is the cube studio's live performance. We streaming virtually@siliconangledotcomandthecube.net. I'm John for host the cube at Dave Alane with a distinguished panel talking about securing the Supercloud all cube alumni G written house was the CEO of Skyhigh security, Peter Sharma founder of, of QX sold to tenable and Tony qua who's investor. Co-founder former head of product at VMware chance. Thanks for coming on and to our, in all girls super cloud pilot event. >>Good to see you guys big topic. >>Okay. So before we get into secure in the cloud, one of the things that we were discussing before we came on camera was how cloud, the relationship between cloud and on premise and multi-cloud and how Supercloud fits into that. At the end of the day, security's driving a lot of the conversations at the op side and dev shift left is happening. We see that out there. So before we get into it, how do you guys see super cloud Tony? We'll start with you. We'll go down the line. What is Supercloud to you? >>Well, to me, super cloud is really the next evolution, the culmination of the services coming all together, right? As a application developer today, you really don't need to worry about where this thing is. Sit sitting or what's the latency cuz cuz the internet is fast enough. Now I really wanna know what services something provides. What, how do I get access to it now? Security. We'll talk about that later. That that becomes a, a big issue because of the fragmentation of how security is implemented across all the different vendors. So to me it's an IP address I program to it and you know, off we go, but there's a lot of >>You like that pipe happens >>Iceberg chart, right? Like I'm the developer touching the APIs up there. There's a bunch of other things. BU service. >>Okay. Looking forward again. Gee, what's your take? Obviously we've had many conversations on the cube. What's your super cloud update. >>Yeah, so I, I view it as just an extension of what we see today before like maybe 10 years ago we were mashing up applications built on other SAS applications and whatnot. Now we're just extending that down to further primitives, not, we don't really care where our mashup resides, what cloud platform, where it sits to Tony's point, as long as you have an IP address. But beyond that, we're just gonna start to get little micro services and deeper into the applications. >>BP, what should you take? >>I think, I think super cloud to me is something that don't don't exist. It exists only on my laptop. That's the super cloud means to me. I know it takes a lot behind the scene to get that working of and running. But, but essentially, essentially that the everything having be able to touch physically versus not being able to touch anything is super cloud to me. >>So we, what Victoria was saying. Yeah, we see serverless out there, all these cool things happening. Exactly. And you look at the, some of the successful companies that have come in, I call V two cloud. Some are, some are saying the next gen, they're all building on top of the CapEx. I mean, if, why would you not wanna leverage all that work AWS is doing and now Azure, and obviously Google's out there and you got other, other, other clouds out there. But in terms of AWS as a hyperscaler, they're spending all the money and they're getting better. They're getting lower level. We're talking about some of that yesterday, data bricks, snowflake, Goldman Sachs there's industry clouds that could be powerhouse service providers to themselves and their vertical. Then you got specialty clouds. Like there could be a data cloud, there could be an identity cloud. So yeah. How does this sort itself out? How do you guys see that? Because can they coexist? >>But I think they have to right, because I, I think, you know, eventually organizations will get big enough where they can be strong and really market leading in multiple segments. But if you think about what it takes to really build a massive scaled out database company that, that DNA doesn't just overnight translate to identity or translate to video, it takes years to build that up. So in the meantime, all these guys have to understand that they are one part of the service stack to power the next gen solutions. And if they don't play well with each other, then you're gonna have a problem. >>So security, I think is one of the hardest problems of, of super cloud. And not only do you have too many tools and a lack of talent, but you've now got this new first line of defense, which is the cloud. And the problem is you've got multiple clouds. So you've got multiple first lines of defense with multiple cloud provider tools. And then the CISO, I guess, is the next line of defense with the application development team. You know, there to be the pivot point between strategy and execution. And I guess audit is the third line of the defense. So it's an even more complicated environment. So gee, how do you see that CSO role changing and, and can there actually be a unified security layer in Supercloud? >>Yeah, so I believe that that they can be, the role is definitely changing because now a CSO actually has to have a basic understanding of how clouds work, the dependency of clouds on the, on the business that they serve. And, and this is to your point, not only do we have these new lines and opening up in a tax surface, but they're coupled together. So we have supply chain type connections between this. So there's a coherence across these systems that a CISO has to kind of think about not only these Bo cloud boundaries, but the trust boundaries between them. So classic example visibility, wh what, where are these things and what are the dependencies in my business then of course you mentioned compliance. Am I regulatory? And then of course protecting and responding to this, >>You know? Yeah. The, the, the supply chain piece that you just mentioned. I mean, I feel like there's like these milestones stocks, net was a milestone, you know, obvious obviously log four J was another one, the supply chain hack with solar winds. Yep. You know, it's just, the adversary just keeps getting stronger and stronger and, and, and more agile. So, so is this a data? Do we solve this as a data problem? Is it, you know, you can't just throw more infrastructure at it. What are your thoughts >>For it? I think, you know, great, great point that you're brought up. We need to look at things very fundamentally. What is happening is security has the most difficult job in the cloud, especially super cloud. The poor guys are managing some, managing something or securing something that they can't govern, right? Your, your custodian of the cloud as your developers and DevOps, they are the ones who are defining, creating, destroying things in the cloud. And that guy sitting at the end of the tunnel, looking at things that what he gets and he has to immediately respond. That's why it has to be fundamentally solve. Number one, we talked about supply chain. We talked about the, the, the stuck net to wanna cry, to sort of wins, to know the most recent one on the pipeline. Once the interesting phenomena is that the way industry has moved super cloud, the attackers are also moving them super attackers, right? They have stopped. They have not stopped, but they have started slowly moving to the left, which is the governance part. So they have started attacking your source code, you know, impersonating the codes, replacing the binary, finding one is there. So if they can, if the cloud is built so early, why can't I go early and, and, and inject myself. >>So super hackers is coming to super thinking Hollywood right now. I mean, that brings up a good point. I mean, this whole trust thing is huge. I mean, I hear zero trust. I think, wait a minute, that's not the conference I was just at, we went to, we managed, we work with DockerCon and they were talking about trust services. Yeah. So supply chain source code has trust brokering going on. And yet you got zero trust, which is which are they contextually different? I mean, what, what, >>What, from my perspective, though, the same in that zero trust is a framework that starts with minimum privileges and then build up those privileges over time. Normally in today's dialogue, zero trust is around access. I'm not having a broad access. I'm having a narrow access around an application, but you can also extend those principles to usage. What can, how much privilege do I have within an application? I have to build up my trust to enhance and, and get extended privileges within an application. Of course you can then extend this naturally to applications, APIs, applications, talking with each other. And so by you, you have to restrict the attack surface that is based on a trust model fundamentally. And then to your point, I mean, there's always this residual that you have to deal with afterwards. >>So, so super cloud implies more surface area. You're talking about private. So here we go. So how, and by the way, the AWS was supposed to be at this conference. They said they couldn't make it. They had a schedule issue, but they wanted to be here, but I would ask them, how do you differentiate AWS going forward? Do you go IAS all the way? Do you release the pass layer up? How does this solve? Because you have native clouds that are doing great, the complexity on super cloud, and multi-cloud has to be solved. >>Let me offer maybe a different argument. So if you think about we're all old enough to see the history sort of re pendulum shift and it shifting back in a way, if you're arguing that this culmination of all these services in the form of cloud today, essentially moving up stack, then really this is a architectural pattern that's emerging, right? And therefore there needs to be a super cloud, almost operating system. So operating systems, if you build one before you need a scheduler, you need process handler, you need process isolation, you need memory storage, compute all that together. Now that is our sitting in different parts of the internet. And, and there is no operating system. Yes. And that's the gap, right? And so if you don't even have an operating system, how do you implement security? And that's the pain. Yeah, because today it's one off, directly from service to service. Like how many times can you set up SAML orchestration? You can have an entire team doing that, right. If that's, that's what you have to do. So I think that's ultimately the gap and, and we're sort of just revolving around this concept that there's missing an operating system for superpower. >>It's like Maribel Lopez said in the previous panel that Lord of the rings, there will be no one ring rule the ball. Right. Probably there is needs one. Oh yeah. But, but, but, so what happens? So again, security's the hardest problem. So Snowflake's gotta implement its security, you know, data bricks with an open source model has to implement its security. So there's these multiple security models. You talk about zero trust, which I, if, if I infer what you said, gee, it's essentially, if you don't have privilege access, you don't get access. Yeah. Right. If you, okay. Okay. So that's the framework. Fine. And then you gotta earn it over time. Yeah. Now companies like Amazon, they have the, the talent and the skills to implement that zero trust framework. Exactly. So, so the, the industry, you, you guys with the R and D have to actually ultimately build that, that super cloud framework, don't you? >>Yeah. But I would just look all of the major cloud providers, the ones you mentioned and more will have their own framework within their own environment. Right? Yeah. The problem is with super cloud, you're extending it across multiple ones. There's no standards. There's no easy way to integrate that. So now all of that is left to the developer who is like throwing out code as fast as they can >>Is their, their job is to abstract that, I mean, they've gotta secure the, the run time, they gotta secure the container. >>You have to >>Abstract it. Right. Okay. But, but they're not security pros or ops. >>Exactly. They're haves. >>But to, but to G's point, right. If everyone's implementing their own little Z TNA, then inherently, there's a blind trust between two vendors. Right. That has to >>Be, >>That has to be >>Established. That's implicit. You're saying, >>Yeah. But, but it's, it's contractual, it's not technology. Right. Because I'm turning something out in my cloud, you're turning out something in your cloud that says we've got something, some token exchange, which gives us trust. But what happens if that breaks down and whatever happens to the third party comes in? I think that's the problem. >>Yeah. In fact, in fact, the, if I put the, you know, combine one of those commons, the zero trust was build, keeping identity authentication, then authorization in mind, right? Yeah. This needs to be extended because the zero test definition now probably go into integrity. Yeah, exactly. Right. Yeah. I authenticated. I worked well with Tony in the past, but how do I know that something has changed on the Tony's side? Yeah, exactly. Right, right. That, that integrity is going to be very, very foundational. Given developers are building those third party libraries, those source code pumping stuff. The only way I can validate is, Hey, what has changed? >>And then throw edge into the equation, John and IOT and machine to machine. Exactly. It's just, >>Well, >>Yeah. I think, I think we have another example to build on Tony's operating system model. Okay. And that is the cloud access service broker model for SAS. So we, we have these services sitting out there, we've brokered them together. They're normally on user policies. What I can have access to what I can do, what I can't do, but that can be extended down to services and have the same kind of broker arrangement all through APIs. You have to establish that trust and the, and the policies there, and they can be dynamic and all of this stuff. But you can from an, either an operating system or a SAS interaction and integration model come to these same kind of points. So who >>Builds the, the, the secure Supercloud? Is it new guys like you? Is it your old company giants like Palo Alto? Who, who actually builds the and secures the Supercloud it sounds like it's an ecosystem. >>Yeah. It is an ecosystem. Absolutely. It's an ecosystem. >>Yeah. There's no one security Supercloud >>As well. No, but I, I do think there's one, there's one difference in that historically security has always focused on that shiny object. The, the, the, a particular solution to a particular threat when you're dealing with a, a cloud or super cloud, like the number of that is incalculable. So you have to come into some sort of platform. And so you will see if it's not one, you know, a finite number of platform type solutions that are trying to solve this on behalf of the >>Customer. That to your point, then get connected. >>I think it's gonna be like Unix, right? Like how many flavors of Unix were there out there? All of them 'em had a scheduler. All of them had these processes. All of them had their little compilers. You can compile to that system, target to that system. And for a while, it's gonna be very fragmented until multiple parties decide to converge. >>Right? Well, this is, this is the final question we have one minute left. I wish we had more time. This is a great panel. We'll we'll bring you guys back for sure. After the event, what one thing needs to happen to unify or get through the other side of this fragmentation than the challenges for Supercloud. Because remember the enterprise equation is solve complexity with more complexity. Well, that's not what the market wants. They want simplicity. They want SA they want ease of use. They want infrastructure risk code. What has to happen? What do you think each of you? >>So I, I can start and extending to the previous conversation. I think we need a consortium. We need, we need a framework that defines that if you really want to operate in super cloud, these are the 10 things that you must follow. It doesn't matter whether you take AWS slash or GCP, or you have all, and you will have the on-prem also, which means that it has to follow a pattern. And that pattern is what is required for super cloud. In my opinion, otherwise security is going everywhere. They're like they have to fix everything, find everything and so on. So forth, it's not gonna be possible. So they need a, they need a framework. They need a consortium. And it, this consortium needs to be, I think, needs to led by the cloud providers, because they're the ones who have these foundational infrastructure elements and the security vendor should contribute on providing more severe detections or findings. So that's, in my opinion is, should be the model. >>Well, thank you G >>Yeah, I would think it's more along the lines of a business model we've seen in cloud that the scale matters. And once you're big, you get bigger. We haven't seen that coals around either a vendor, a business model, whatnot, to bring all of this and connect it all together yet. So that value proposition in the industry I think is missing, but there's elements of it already available. >>I, I think there needs to be a mindset. If you look again, history repeating itself, the internet sort of came together around set of I ETF, RSC standards, everybody embraced and extended it. Right. But still there was at least a baseline. Yeah. And I think at that time, the, the largest and most innovative vendors understood that they couldn't do it by themselves. Right. And so I think what we need is a mindset where these big guys like Google, let's take an example. They're not gonna win at all, but they can have a substantial share. So how do they collaborate with the ecosystem around a set of standards so that they can bring, bring their differentiation and then embrace everybody >>Together. Guys, this has been fantastic. I mean, I would just chime in back in the day, those was proprietary nosis proprietary network protocols. You had kind of an enemy to rally around. I'm not sure. I see an enemy out here right now. So the clouds are doing great. Right? So it's a tough one, but I think super OS super consortiums, super business models are gonna emerge. Thanks so much for spending the time. Great conversation. Thank you for having us to bring, keep going hour superclouds here in Palo Alto, live coverage stream virtually I'm John with Dave. Thanks for watching. Stay with us for more coverage. This break.
SUMMARY :
I'm John for host the cube at Dave Alane with So before we get into it, how do you guys see super cloud Tony? So to me it's an IP address I program to it Like I'm the developer touching the APIs up there. Gee, what's your take? where it sits to Tony's point, as long as you have an IP address. I know it takes a lot behind the scene to get I mean, if, why would you not wanna leverage all that work But I think they have to right, because I, I think, you know, eventually organizations And I guess audit is the third line of the defense. And then of course protecting and responding to this, Is it, you know, you can't just throw more infrastructure at it. I think, you know, great, great point that you're brought up. So super hackers is coming to super thinking Hollywood right now. And then to your point, I mean, there's always this residual that you have to deal with afterwards. the complexity on super cloud, and multi-cloud has to be solved. So if you think about we're the talent and the skills to implement that zero trust framework. So now all of that is left to the developer They're haves. That has to You're saying, happens to the third party comes in? This needs to be extended because the zero And then throw edge into the equation, John and IOT and machine to machine. And that is the cloud access service broker model for SAS. Is it your old company It's an ecosystem. So you have to come into some sort of platform. That to your point, then get connected. to that system, target to that system. Because remember the enterprise equation is solve complexity with more complexity. So I, I can start and extending to the previous conversation. So So how do they collaborate with the ecosystem around a So the clouds are doing great.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Maribel Lopez | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Tony | PERSON | 0.99+ |
Tony qua | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Peter Sharma | PERSON | 0.99+ |
Goldman Sachs | ORGANIZATION | 0.99+ |
two vendors | QUANTITY | 0.99+ |
Victoria | PERSON | 0.99+ |
10 things | QUANTITY | 0.99+ |
third line | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
DockerCon | ORGANIZATION | 0.99+ |
first line | QUANTITY | 0.99+ |
10 years ago | DATE | 0.99+ |
today | DATE | 0.99+ |
one minute | QUANTITY | 0.99+ |
Skyhigh security | ORGANIZATION | 0.98+ |
first lines | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
QX | ORGANIZATION | 0.98+ |
Supercloud | ORGANIZATION | 0.98+ |
yesterday | DATE | 0.98+ |
one part | QUANTITY | 0.97+ |
zero trust | QUANTITY | 0.97+ |
super cloud | EVENT | 0.97+ |
Supercloud 22 | EVENT | 0.96+ |
each | QUANTITY | 0.96+ |
Palo Alto | ORGANIZATION | 0.95+ |
Dave Alane | PERSON | 0.93+ |
virtually@siliconangledotcomandthecube.net | OTHER | 0.91+ |
Unix | TITLE | 0.91+ |
super cloud | ORGANIZATION | 0.89+ |
VMware | ORGANIZATION | 0.89+ |
Azure | TITLE | 0.88+ |
CapEx | ORGANIZATION | 0.85+ |
SAS | ORGANIZATION | 0.85+ |
one difference | QUANTITY | 0.83+ |
Supercloud22 | ORGANIZATION | 0.79+ |
V two cloud | ORGANIZATION | 0.74+ |
super OS | ORGANIZATION | 0.71+ |
one thing | QUANTITY | 0.7+ |
zero test | QUANTITY | 0.67+ |
ETF | OTHER | 0.6+ |
Iceberg | TITLE | 0.59+ |
CISO | ORGANIZATION | 0.57+ |
superclouds | ORGANIZATION | 0.54+ |
agile | TITLE | 0.52+ |
Snowflake | TITLE | 0.52+ |
Hollywood | ORGANIZATION | 0.51+ |
minute | QUANTITY | 0.49+ |
hardest | QUANTITY | 0.48+ |
GCP | ORGANIZATION | 0.42+ |
Supercloud | TITLE | 0.41+ |
DevOps | TITLE | 0.4+ |
slash | TITLE | 0.34+ |
Rob Emsley, Dell Technologies
(upbeat music) >> Welcome back to a Blueprint For Trusted Infrastructure. We're here with Rob Emsley. Who's the director of product marketing for data protection and cyber security. Rob, good to see you. A new role. >> Yeah. Good to be back, Dave. Good to see you. Yeah, it's been a while since we chatted last and, you know, one of the changes in my world is that I've expanded my responsibilities beyond data protection marketing to also focus on cybersecurity marketing specifically for our infrastructure solutions group. So certainly that's, you know, something that really has driven us, you know, to come and have this conversation with you today. >> So data protection obviously has become an increasingly important component of the cyber security space. I don't think necessarily of, you know, traditional backup and recovery as security, to me, it's an adjacency. I know some companies have said, oh, yeah. Now we're a security company. They're kind of chasing the valuation bubble. >> For sure. >> Dell's interesting because you have, you know, data protection in the form of backup and recovery and data management, but you also have security, you know, direct security capabilities. So you're sort of bringing those two worlds together and it sounds like your responsibility is to connect those dots. Is that right? >> Absolutely. Yeah. I mean, I think that the reality is is that security is a multi-layer discipline. I think the days of thinking that it's one or another technology that you can use or process that you can use to make your organization secure are long gone. I mean, certainly you actually correct. If you think about the backup and recovery space, I mean, people have been doing that for years, you know, certainly backup and recovery, it's all about the recovery. It's all about getting yourself backup and running when bad things happen. And one of the realities, unfortunately today is that one of the worst things that can happen is cyber attacks. You know, ransomware, malware are all things that are top of mind for all organizations today. And that's why you see a lot of technology and a lot of innovation going into the backup and recovery space because if you have a copy, a good copy of your data, then that is really the first place you go to recover from a cyber attack. And that's why it's so important. The reality is is that unfortunately the cyber criminals keep on getting smarter. I don't know how it happens, but one of the things that is happening is that the days of them just going after your production data are no longer the only challenge that you have, they go after your backup data as well. So over the last half a decade, Dell Technologies with its backup and recovery portfolio has introduced the concept of isolated cyber recovery vaults. We've had many conversations about that over the years and that's really a big tenant of what we do in the data protection portfolio. >> So this idea of cybersecurity resilience that definition is evolving. What does it mean to you? >> Yeah, I think the analyst team over at Gartner, they wrote a very insightful paper called you will be hacked embrace the breach. And the whole basis of this analysis is so much money's been spent on prevention is that what's out of balance is the amount of budget that companies have spent on cyber resilience and cyber resilience is based upon the premise that you will be hacked. You have to embrace that fact and be ready and prepared to bring yourself back into business. You know, and that's really where cyber resiliency is very, very different than cyber security and prevention, you know, and I think that balance of get your security disciplines well funded, get your defenses as good as you can get them but make sure that if the inevitable happens and you find yourself compromised that you have a great recovery plan and certainly a great recovery plan, it's really the basis of any good, solid data protection backup from recovery philosophy. >> So if I had to do a SWOT analysis, we don't have to do the WOT, but let's focus on the S. What would you say are Dell's strengths in this, you know, cyber security space as it relates to data protection? >> One is we've been doing it a long time. You know, we talk a lot about Dell's data protection being proven and modern. You know, certainly the experience that we've had over literally three decades of providing enterprise scale data protection solutions to our customers has really allowed us to have a lot of insight into what works and what doesn't. As I mentioned to you, one of the unique differentiators of our solution is the cyber recovery vaulting solution that we introduce a little over five years ago, five, six years. Power protect cyber recovery is something which has become a unique capability for customers to adopt on top of their investment in Dell Technologies data protection, you know, the unique elements of our solution already threefold, and we call them the three Is. It's isolation, it's a immutability and it's intelligence. And the, the isolation part is really so important because you need to reduce the attack surface of your good known copies of data. You know, you need to put it in a location that the bad actors can't get to it. And that really is the essence of a cyber recovery vault. Interestingly enough, you're starting to see the market throw out that word, you know, from many other places, but really it comes down to having a real discipline that you don't allow the security of your cyber recovery vault to be compromised insofar as allowing it to be controlled from outside of the vault, you know, allowing it to be controlled by your backup application. Our cyber recovery vaulting technology is independent of the backup infrastructure. It uses it, but it controls its own security. And that is so, so important. It's like having a vault that the only way to open it is from the inside, you know, and think about that. If you think about, you know, vaults in banks or vaults in your home, normally you have a key pad on the outside. Think of our cyber recovery vault as having its security controlled from inside of the vault. >> So nobody can get in, nothing can get in unless it's already in. And if it's already in, then it's trusted. >> Exactly, exactly. >> Yeah. So isolation's the key. And then you mentioned immutability is the second piece. >> Yeah, so immutability is also something which has been around for a long time. People talk about backup mutability or immutable backup copies. So I mutability is just the additional technology that allows the data that's inside of the vault to be unchangeable, you know, but again that immutability, you know, your mileage varies, you know, when you look across the different offers that are out there in the market especially in the backup industry. You made a very valid point earlier that the backup vendors in the market seem to be security washing their marketing messages. I mean, everybody is leaning into the ever present danger of cybersecurity, not a bad thing, but the reality is is that you have to have the technology to back it up, you know, quite literally >> Yeah, no pun intended. Right. Actually pun intended. Now what about the intelligence piece of it? That's that's AI, ML, where does that fit? >> For sure. So the intelligence piece is delivered by a solution called CyberSense. And CyberSense for us is what really gives you the confidence that what you have in your cyber recovery vault is a good clean copy of data. So it's looking at the backup copies that get driven into the cyber vault, and it's looking for anomalies. So it's not looking for signatures of malware. You know, that's what your antivirus software does. That's what your endpoint protection software does. That's on the prevention side of the equation. But what we're looking for is we're looking to ensure that the data that you need when all hell breaks loose is good and that when you get a request to restore and recover your business, you go, right, let's go and do it. And you don't have any concern that what you have in the vault has been compromised. So cyber sense is really a unique analytic solution in the market based upon the fact that it isn't looking at at cursory indicators of malware infection or ransomware introduction, it's doing full content analytics, you know, looking at, you know, has the data in any way changed, has it suddenly become encrypted? Has it suddenly become different to how it was in the previous scan? So that anomaly detection is very, very different. It's looking for, you know, like different characteristics that really are an indicator that something is going on. And, of course, if it sees it, you immediately get flagged. But the good news is is that you always have in the vault the previous copy of good known data which now becomes your restore point. >> So we're talking to Rob Emsley about how data protection fits into what Dell calls DTI, Dell Trusted Infrastructure. And I want to come back, Rob, to this notion of, and not or cause I think a lot of people are skeptical. Like how can I have great security and not introduce friction into my organization? Is that an automation play? How does Dell tackle that problem? >> I mean, I think a lot of it is across our infrastructure is is security has to be built in, I mean, intrinsic security within our servers, within our storage devices, within our elements of our backup infrastructure. I mean, security, multifactor authentication, you know, elements that make the overall infrastructure secure. You know, we have capabilities that, you know, allow us to identify whether or not configurations have changed. You know, we'll probably be talking about that a little bit more to you later in the segment, but the essence is security is not a Bolton. It has to be part of the overall infrastructure. And that's so true, certainly in the data protection space >> Give us the bottom line on how you see Dell's key differentiators. Maybe you could talk about Dell, of course, always talks about its portfolio, but why should customers, you know, lead in to Dell in this whole cyber resilience space? >> You know, staying on the data protection space as I mentioned, the work we've been doing to introduce this cyber resiliency solution for data protection is in our opinion, as good as it gets. You know, you've spoken to a number of our best customers whether it be Bob Bender from Founders Federal or more recently at (indistinct) you spoke to Tony Bryson from the Town of Gilbert. And these are customers that we've had for many years that have implemented cyber recovery vaults. And at the end of the day, they can now sleep at night. You know, that's really the peace of mind that they have is that the insurance that a data protection from Dell cyber recovery vault, a power protect cyber recovery solution gives them, you know, really allows them to, you know, just have the assurance that they don't have to pay a ransom. If they have an insider threat issue and, you know, all the way down to data deletion is they know that what's in the cyber recovery vault is good and ready for them to recover from. >> Great. Well, Rob, congratulations on the new scope of responsibility. I like how, you know, your organization is expanding as the threat surface is expanding. As we said, data protection becoming an adjacency to security, not security in and of itself. A key component of a comprehensive security strategy. Rob Emsley, thank you for coming back in theCUBE. Good to see you again. >> You too, Dave. Thanks. >> All right, in a moment, I'll be back to wrap up a blueprint for trusted infrastructure. You are watching theCUBE. (upbeat music)
SUMMARY :
Who's the director of product So certainly that's, you know, of the cyber security space. also have security, you know, is that the days of them that definition is evolving. that you have a great recovery plan in this, you know, cyber security space from outside of the vault, you know, And if it's already in, then it's trusted. immutability is the second piece. is that you have to have the That's that's AI, ML, where does that fit? that the data that you need Is that an automation play? elements that make the you know, lead in to Dell is that the insurance I like how, you know, your You too, Dave. I'll be back to wrap up a blueprint
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tony Bryson | PERSON | 0.99+ |
Rob Emsley | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Rob | PERSON | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
second piece | QUANTITY | 0.99+ |
Bob Bender | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
CyberSense | ORGANIZATION | 0.98+ |
Gilbert | LOCATION | 0.97+ |
three | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
One | QUANTITY | 0.97+ |
DTI | ORGANIZATION | 0.96+ |
two worlds | QUANTITY | 0.95+ |
last half a decade | DATE | 0.94+ |
three decades | QUANTITY | 0.92+ |
over | DATE | 0.86+ |
five years ago | DATE | 0.81+ |
Founders Federal | ORGANIZATION | 0.77+ |
first place | QUANTITY | 0.77+ |
things | QUANTITY | 0.72+ |
six years | DATE | 0.54+ |
threefold | QUANTITY | 0.5+ |
five | QUANTITY | 0.5+ |
worst things | QUANTITY | 0.5+ |
Blueprint For Trusted Infrastructure | TITLE | 0.43+ |
Day One Wrap | HPE Discover 2022
>>The cube presents HPE discover 2022 brought to you by HPE. >>Hey everyone. Welcome back to the Cube's day one coverage of HPE discover 22 live from the Venetian in Las Vegas. I got a power panel here, Lisa Martin, with Dave Valante, John furrier, Holger Mueller also joins us. We are gonna wrap this, like you've never seen a rap before guys. Lot of momentum today, lot, lot of excitement, about 8,000 or so customers, partners, HPE leaders here. Holger. Let's go ahead and start with you. What are some of the things that you heard felt saw observed today on day one? >>Yeah, it's great to be back in person. Right? 8,000 people events are rare. Uh, I'm not sure. Have you been to more than 8,000? <laugh> yeah, yeah. Okay. This year, this year. I mean, historically, yes, but, um, >>Snowflake was 10. Yeah. >>So, oh, wow. Okay. So 8,000 was my, >>Cisco was, they said 15, >>But is my, my 8,000, my record, I let us down with 7,000 kind of like, but it's in the Florida swarm. It's not nicely. Like, and there's >>Usually what SFI, there's usually >>20, 20, 30, 40, 50. I remember 50 in the nineties. Right. That was a different time. But yeah. Interesting. Yeah. Interesting what people do and it depends how much time there is to come. Right. And know that it happens. Right. But yeah, no, I think it's interesting. We, we had a good two analyst track today. Um, interesting. Like HPE is kind of like back not being your grandfather's HPE to a certain point. One of the key stats. I know Dave always for the stats, right. Is what I found really interesting that over two third of GreenLake revenue is software and services. Now a love to know how much of that services, how much of that software. But I mean, I, I, I, provocate some, one to ones, the HP executives saying, Hey, you're a hardware company. Right. And they didn't even come back. Right. But Antonio said, no, two thirds is, uh, software and services. Right. That's interesting. They passed the one exabyte, uh, being managed, uh, as a, as a hallmark. Right. I was surprised only 120,000 users if I had to remember the number. Right, right. So that doesn't seem a terrible high amount of number of users. Right. So, but that's, that's, that's promising. >>So what software is in there, cuz it's gotta be mostly services. >>Right? Well it's the 70 plus cloud services, right. That everybody's talking about where the added eight of them shockingly back up and recovery, I thought that was done at launch. Right. >>Still who >>Keep recycling storage and you back. But now it's real. Yeah. >>But the company who knows the enterprise, right. HPE, what I've been doing before with no backup and recovery GreenLake. So that was kind of like, okay, we really want to do this now and nearly, and then say like, oh, by the way, we've been doing this all the time. Yeah. >>Oh, what's your take on the installed base of HP. We had that conversation, the, uh, kickoff or on who's their target, what's the target audience environment look like. It certainly is changing. Right? If it's software and services, GreenLake is resonating. Yeah. Um, ecosystems responding. What's their customers cuz managed services are up too Kubernetes, all the managed services what's what's it like what's their it transformation base look like >>Much of it is of course install base, right? The trusted 20, 30 plus year old HP customer. Who's keeping doing stuff of HP. Right. And call it GreenLake. They've been for so many name changes. It doesn't really matter. And it's kind of like nice that you get the consume pain only what you consume. Right. I get the cloud broad to me then the general markets, of course, people who still need to run stuff on premises. Right. And there's three reasons of doing this performance, right. Because we know the speed of light is relative. If you're in the Southern hemisphere and even your email servers in Northern hemisphere, it takes a moment for your email to arrive. It's a very different user experience. Um, local legislation for data, residency privacy. And then, I mean Charles Phillips who we all know, right. Former president of uh, info nicely always said, Hey, if the CIOs over 50, I don't have to sell qu. Right. So there is not invented. I'm not gonna do cloud here. And now I've kind of like clouded with something like HP GreenLake. That's the customers. And then of course procurement is a big friend, right? Yeah. Because when you do hardware refresh, right. You have to have two or three competitors who are the two or three competitors left. Right. There's Dell. Yeah. And then maybe Lenovo. Right? So, so like a >>Little bit channels, the strength, the procurement physicians of strength, of course install base question. Do you think they have a Microsoft opportunity where, what 365 was Microsoft had office before 365, but they brought in the cloud and then everything changed. Does HP have that same opportunity with kind of the GreenLake, you know, model with their existing stuff. >>It has a GreenLake opportunity, but there's not much software left. It's a very different situation like Microsoft. Right? So, uh, which green, which HP could bring along to say, now run it with us better in the cloud because they've been selling much of it. Most of it, of their software portfolio, which they bought as an HP in the past. Right. So I don't see that happening so much, but GreenLake as a platform itself course interesting because enterprise need a modern container based platform. >>I want, I want to double click on this a little bit because the way I see it is HP is going to its installed base. I think you guys are right on say, this is how we're doing business now. Yeah. You know, come on along. But my sense is, some customers don't want to do the consumption model. There are actually some customers that say, Hey, of course I got, I don't have a cash port problem. I wanna pay for it up front and leave me alone. >>I've been doing this since 50 years. Nice. As I changed it, now <laugh> two know >>Money's wants to do it. And I don't wanna rent because rental's more expensive and blah, blah, blah. So do you see that in the customer base that, that some are pushing back? >>Of course, look, I have a German accent, right? So I go there regularly and uh, the Germans are like worried about doing anything in the cloud. And if you go to a board in Germany and say, Hey, we can pay our usual hardware, refresh, CapEx as usual, or should we bug consumption? And they might know what we are running. <laugh> so not whole, no offense against the Germans out. The German parts are there, but many of them will say, Hey, so this is change with COVID. Right. Which is super interesting. Right? So the, the traditional boards non-technical have been hearing about this cloud variable cost OPEX to CapEx and all of a sudden there's so much CapEx, right. Office buildings, which are not being used truck fleets. So there's a whole new sensitivity by traditional non-technical boards towards CapEx, which now the light bulb went on and say, oh, that's the cloud thing about also. So we have to find a way to get our cost structure, to ramp up and ramp down as our business might be ramping up through COVID through now inflation fears, recession, fears, and so on. >>So, okay. HP's, HP's made the statement that anything you can do in the cloud you can do in GreenLake. Yes. And I've said you can't run on snowflake. You can't run Mongo Atlas, you can't run data bricks, but that's okay. That's fine. Let's be, I think they're talking about, there's >>A short list of things. I think they're talking about the, their >>Stuff, their, >>The operating experience. So we've got single sign on through a URL, right. Uh, you've got, you know, some level of consistency in terms of policy. It's unclear exactly what that is. You've got storage backup. Dr. What, some other services, seven other services. If you had to sort of take your best guess as to where HP is now and peg it toward where Amazon was in which year? >>20 14, 20 14. >>Yeah. Where they had their first conference or the second we invent here with 3000 people and they were thinking, Hey, we're big. Yeah. >>Yeah. And I think GreenLake is the building blocks. So they quite that's the >>Building. Right? I mean similar. >>Okay. Well, I mean they had E C, Q and S3 and SQS, right. That was the core. And then the rest of those services were, I mean, base stock was one of that first came in behind and >>In fairness, the industry has advanced since then, Kubernetes is further along. And so HPE can take advantage of that. But in terms of just the basic platform, I, I would agree. I think it's >>Well, I mean, I think, I mean the software, question's a big one. I wanna bring up because the question is, is that software is getting the world. Hardware is really software scales, everything, data, the edge story. I love their story. I think HP story is wonderful Aruba, you know, hybrid cloud, good story, edge edge. But if you look under the covers, it's weak, right? It's like, it's not software. They don't have enough software juice, but the ecosystem opportunity to me is where you plug and play. So HP knows that game. But if you look historically over the past 25 years, HP now HPE, they understand plug and play interoperability. So the question is, can they thread the needle >>Right. >>Between filling the gaps on the software? Yeah. With partners, >>Can they get the partners? Right. And which have been long, long time. Right. For a long time, HP has been the number one platform under ICP, right? Same thing. You get certified for running this. Right. I know from my own history, uh, I joined Oracle last century and the big thing was, let's get your eBusiness suite certified on HP. Right? Like as if somebody would buy H Oracle work for them, right. This 20 years ago, server >>The original exit data was HP. Oracle. >>Exactly. Exactly. So there's this thinking that's there. But I think the key thing is we know that all modern forget about the hardware form in the platforms, right? All modern software has to move to containers and snowflake runs in containers. You mentioned that, right? Yeah. If customers force snowflake and HPE to the table, right, there will be a way to make it work. Right. And which will help HPE to be the partner open part will bring the software. >>I, I think it's, I think that's an opportunity because that changes the game and agility and speed. If HP plays their differentiation, right. Which we asked on their opening segment, what's their differentiation. They got size scale channel, >>What to the enterprise. And then the big benefit is this workload portability thing. Right? You understand what is run in the public cloud? I need to run it local. For whatever reason, performance, local residency of data. I can move that. There that's the big benefit to the ISVs, the sales vendors as well. >>But they have to have a stronger data platform story in my that's right. Opinion. I mean, you can run Oracle and HPE, but there's no reason they shouldn't be able to do a deal with, with snowflake. I mean, we saw it with Dell. Yep. We saw it with, with, with pure and I, if our HPE I'd be saying, Hey, because the way the snowflake deal worked, you probably know this is your reading data into the cloud. The compute actually occurs in the cloud viral HB going snowflake saying we can separate compute and storage. Right. And we have GreenLake. We have on demand. Why don't we run the compute on-prem and make it a full class, first class citizen, right. For all of our customers data. And that would be really innovative. And I think Mongo would be another, they've got OnPrem. >>And the question is, how many, how many snowflake customers are telling snowflake? Can I run you on premise? And how much defo open years will they hear from that? Right? This is >>Why would they deal Dell? That >>Deal though, with that, they did a deal. >>I think they did that deal because the customer came to them and said, you don't exactly that deal. We're gonna spend the >>Snowflake >>Customers think crazy things happen, right? Even, even put an Oracle database in a Microsoft Azure data center, right. Would off who, what as >>Possible snowflake, >>Oracle. So on, Aw, the >>Snow, the snowflakes in the world have to make a decision. Dave on, is it all snowflake all the time? Because what the reality is, and I think, again, this comes back down to the, the track that HP could go up or down is gonna be about software. Open source is now the software industry. There's no such thing as proprietary software, in my opinion, relatively speaking, cloud scale and integrated, integrated integration software is proprietary. The workflows are proprietary. So if they can get that right with the partners, I would focus on that. I think they can tap open source, look at Amazon with open source. They sucked it up and they integrated it in. No, no. So integration is the deal, not >>Software first, but Snowflake's made the call. You were there, Lisa. They basically saying it's we have, you have to be in snowflake in order to get the governance and the scalability, all that other wonderful stuff. Oh, but we we'll do Apache iceberg. We'll we'll open it up. We'll do Python. Yeah. >>But you can't do it data clean room unless you are in snowflake. Exactly. Snowflake on snowflake. >>Exactly. >>But got it. Isn't that? What you heard from AWS all the time till they came out outposts, right? I mean, snowflake is a market leader for what they're doing. Right. So that they want to change their platform. I mean, kudos to them. They don't need to change the platform. They will be the last to change their platform to a ne to anything on premises. Right. But I think the trend already shows that it's going that way. >>Well, if you look at outpost is an signal, Dave, the success of outpost launched what four years ago, they announced it. >>What >>EKS is beating, what outpost is doing. Outpost is there. There's not a lot of buzz and talk to the insiders and the open source community, uh, EKS and containers. To your point mm-hmm <affirmative> is moving faster on, I won't say commodity hardware, but like could be white box or HP, Dell, whatever it's gonna be that scale differentiation and the edge story is, is a good one. And I think with what we're seeing in the market now it's the industrial edge. The back office was gen one cloud back office data center. Now it's hybrid. The focus will be industrial edge machine learning and AI, and they have it here. And there's some, some early conversations with, uh, I heard it from, uh, this morning, you guys interviewed, uh, uh, John Schultz, right? With the world economic 4k birth Butterfield. She was amazing. And then you had Justin bring up a Hoar, bring up quantum. Yes. That is a differentiator. >>HP. >>Yes. Yeah. You, they have the computing shops. They had the R and D can they bring it to the table >>As, as HPC, right. To what they Schultz for of uh, the frontier system. Right. So very impressed. >>So the ecosystem is the key for them is because that's how they're gonna fill the gaps. They can't, they can't only, >>They could, they could high HPC edge piece. I wouldn't count 'em out of that game yet. If you co-locate a box, I'll use the word box, particularly at a telco tower. That's a data center. Yep. Right. If done properly. Yep. So, you know, what outpost was supposed to do actually is a hybrid opportunity. Aruba >>Gives them a unique, >>But the key thing is right. It's a yin and yang, right? It's the ecosystem it's partners to bring those software workload. Absolutely. Right. But HPE has to keep the platform attractive enough. Right. And the key thing there is that you have this workload capability thing that you can bring things, which you've built yourself. I mean, look at the telcos right. Network function, visualization, thousands of man, years into these projects. Right. So if I can't bring it to your edge box, no, I'm not trying to get to your Xbox. Right. >>Hold I gotta ask you since in the Dave too, since you guys both here and Lisa, you know, I said on the opening, they have serious customers and those customers have serious problems, cyber security, ransomware. So yeah. I teach transformation now. Industrial transformation machine learning, check, check, check. Oh, sounds good. But at the end of the day, their customers have some serious problems. Right? Cyber, this is, this is high stakes poker. Yeah. What do you think HP's position for in the security? You mentioned containers, you got all this stuff, you got open source, supply chain, you have to left supply chain issues. What is their position with security? Cuz that's the big one. >>I, I think they have to have a mature attitude that customers expect from HPE. Right? I don't have to educate HP on security. So they have to have the partner offerings again. We're back at the ecosystem to have what probably you have. So bring your own security apart from what they have to have out of the box to do business with them. This is why the shocker this morning was back up in recovery coming. <laugh> it's kind like important for that. Right? Well >>That's, that's, that's more ransomware and the >>More skeleton skeletons in the closet there, which customers should check of course. But I think the expectations HP understands that and brings it along either from partner or natively. >>I, I think it's, I think it's services. I think point next is the point of integration for their security. That's why two thirds is software and services. A lot of that is services, right? You know, you need security, we'll help you get there. We people trust HP >>Here, but we have nothing against point next or any professional service. They're all hardworking. But if I will have to rely on humans for my cyber security strategy on a daily level, I'm getting gray hair and I little gray hair >>Red. Okay. I that's, >>But >>I think, but I do think that's the camera strategy. I mean, I'm sure there's a lot of that stuff that's beginning to be designed in, but I, my guess is a lot of it is services. >>Well, you got the Aruba. Part of the booth was packed. Aruba's there. You mentioned that earlier. Is that good enough? Because the word zero trust is kicked around a lot. On one hand, on the other hand, other conversations, it's all about trust. So supply chain and software is trusting trust, trust and verified. So you got this whole mentality of perimeter gone mentality. It's zero trust. And if you've got software trust, interesting thoughts there, how do you reconcile zero trust? And then I need trust. What's what's you? What are you seeing older on that? Because I ask people all the time, they're like, uh, I'm zero trust or is it trust? >>Yeah. The middle ground. Right? Trusted. The meantime people are man manipulating what's happening in your runtime containers. Right? So, uh, drift control is a new password there that you check what's in your runtime containers, which supposedly impenetrable, but people finding ways to hack them. So we'll see this cat and mouse game going on all the time. Yeah. Yeah. There's always gonna be the need for being in a secure, good environment from that perspective. Absolutely. But the key is edge has to be more than Aruba, right? If yeah. HV goes away and says, oh yeah, we can manage your edge with our Aruba devices. That's not enough. It's the virtual probability. And you said the important thing before it's about the data, right? Because the dirty secret of containers is yeah, I move the code, but what enterprise code works without data, right? You can't say as enterprise, okay, we're done for the day check tomorrow. We didn't persist your data, auditor customer. We don't have your data anymore. So filling a way to transport the data. And there just one last thought, right? They have a super interesting asset. They want break lands for the venerable map R right. Which wrote their own storage drivers and gives you the chance to potentially do something in that area, which I'm personally excited about. But we'll see what happens. >>I mean, I think the holy grail is can I, can I put my data into a cloud who's ever, you know, call it a super cloud and can I, is it secure? Is it governed? Can I share it and be confident that it's discoverable and that the, the person I give it to has the right to use it. Yeah. And, and it's the correct data. There's not like a zillion copies running. That's the holy grail. And I, I think the answer today is no, you can, you can do that maybe inside of AWS or maybe inside of Azure, look maybe certainly inside of snowflake, can you do that inside a GreenLake? Well, you probably can inside a GreenLake, but then when you put it into the cloud, is it cross cloud? Is it really out to the edge? And that's where it starts to break down, but that's where the work is to be done. That's >>The one Exide is in there already. Right. So men being men. Yeah. >>But okay. But it it's in there. Yeah. Okay. What do you do with it? Can you share that data? What can you actually automate governance? Right? Uh, is that data discoverable? Are there multiple copies of that data? What's the, you know, master copy. Here's >>A question. You guys, here's a question for you guys analyst, what do you think the psychology is of the CIO or CSO when HP comes into town with GreenLake, uh, and they say, what's your relationship with the hyperscalers? Cause I'm a CIO. I got my environment. I might be CapEx centric or Hey, I'm open model. Open-minded to an operating model. Every one of these enterprises has a cloud relationship. Yeah. Yeah. What's the dynamic. What do you think the psychology is of the CIO when they're rationalizing their, their trajectory, their architecture, cloud, native scale integration with HPE GreenLake or >>HP service. I think she or he hears defensiveness from HPE. I think she hears HPE or he hears HPE coming in and saying, you don't need to go to the cloud. You know, you could keep it right here. I, I don't think that's the right posture. I think it should be. We are your cloud. And we can manage whether it's OnPrem hybrid in AWS, Azure, Google, across those clouds. And we have an edge story that should be the vision that they put forth. That's the super cloud vision, but I don't hear it >>From these guys. What do you think psycho, do you agree with that? >>I'm totally to make, sorry to be boring, but I totally agree with, uh, Dave on that. Right? So the, the, the multi-cloud capability from a trusted large company has worked for anybody up and down the stack. Right? You can look historically for, uh, past layers with cloud Foundry, right? It's history vulnerable. You can look for DevOps of Hashi coop. You can look for database with MongoDB right now. So if HPE provides that data access, right, with all the problems of data gravity and egres cost and the workability, they will be doing really, really well, but we need to hear it more, right. We didn't hear much software today in the keynote. Right. >>Do they have a competitive offering vis-a-vis or Azure? >>The question is, will it be an HPE offering or will, or the software platform, one of the offerings and you as customer can plug and play, right. Will software be a differentiator for HP, right. And will be close, proprietary to the point to again, be open enough for it, or will they get that R and D format that, or will they just say, okay, ES MES here on the side, your choice, and you can use OpenShift or whatever, we don't matter. That's >>The, that's the key question. That's the key question. Is it because it is a competitive strategy? Is it highly differentiated? Oracle is a highly differentiated strategy, right? Is Dell highly differentiated? Eh, Dell differentiates based on its breadth. What? >>Right. Well, let's try for the control plane too. Dell wants to be an, >>Their, their vision is differentiated. Okay. But their execution today is not >>High. All right. Let me throw, let me throw this out at you then. I'm I'm, I'm sorry. I'm I'm HPE. I wanna be the glue layer. Is that, does that fly? >>What >>Do you mean? The group glue layer? I'll I wanna be, you can do Amazon, but I wanna be the glue layer between the clouds and our GreenLake will. >>What's the, what's the incremental value that, that glue provides, >>Provides comfort and reliability and control for the single pane of glass for AWS >>And comes back to the data. In my opinion. Yeah. >>There, there there's glue levels on the data level. Yeah. And there's glue levels on API level. Right. And there's different vendors in the different spaces. Right. Um, I think HPE will want to play on the data side. We heard lots of data stuff. We >>Hear that, >>But we have to see it. Exactly. >>Yeah. But it's, it's lacking today. And so, Hey, you know, you guys know better than I APIs can be fragile and they can be, there's a lot of diversity in terms of the quality of APIs and the documentation, how they work, how mature they are, what, how, what kind of performance they can provide and recoverability. And so just saying, oh wow. We are living the API economy. You know, the it's gonna take time to brew, chime in here. Hi. >><laugh> oh, so guys, you've all been covering HPE for a long time. You know, when Antonio stood up on stage three years ago and said by 2022, and here we are, we're gonna be delivering everything as a service. He's saying we've, we've done it, but, and we're a new company. Do you guys agree with that? >>Definitely. >>I, yes. Yes. With the caveat, I think, yes. The COVID pandemic slowed them down a lot because, um, that gave a tailwind to the hyperscalers, um, because of the, the force of massive O under forecasting working at home. I mean, everyone I talked to was like, no one forecasted a hundred percent work at home, the, um, the CapEx investments. So I think that was an opportunity that they'd be much farther along if there's no COVID people >>Thought it wasn't impossible. Yeah. But so we had the old work from home thing right. Where people trying to get people fired at IBM and Yahoo. Right. So I would've this question covering the HR side and my other hat on. Right. And I would ask CHS let's assume, because I didn't know about COVID shame on me. Right. I said, big California, earthquake breaks. Right. Nobody gets hurt, but all the buildings have to be retrofitted and checked for seism logic down. So everybody's working from home, ask CHS, what kind of productivity gap hit would you get by forcing everybody working from home with the office unsafe? So one, one gentleman, I won't know him, his name, he said 20% and the other one's going ha you're smoking. It's 40 50%. We need to be in the office. We need to meet it first night. And now we went for this exercise. Luckily not with the California. Right. Well, through the price of COVID and we've seen what it can do to, to productivity well, >>The productivity, but also the impact. So like with all the, um, stories we've done over two years, the people that want came out ahead were the ones that had good cloud action. They were already in the cloud. So I, I think they're definitely in different company in the sense of they, I give 'em a pass. I think they're definitely a new company and I'm not gonna judge 'em on. I think they're doing great. But I think pandemic definitely slowed 'em down that about >>It. So I have a different take on this. I think. So we've go back a little history. I mean, you' said this, I steal your line. Meg Whitman took one for the Silicon valley team. Right. She came in. I don't think she ever was excited that I, that you said, you said that, and I think you wrote >>Up, get tape on that one. She >>Had to figure out how do I deal with this mess? I have EDS. I got PC. >>She never should have spun off the PC, but >>Okay. But >>Me, >>Yeah, you can, you certainly could listen. Maybe, maybe Gerstner never should have gone all in on services and IBM would dominate something other than mainframes. They had think pads even for a while, but, but, but so she had that mess to deal with. She dealt with it and however, they dealt with it, Antonio came in, he, he, and he said, all right, we're gonna focus the company. And we're gonna focus the mission on not the machine. Remember those yeah. Presentations, but you just make your eyes glaze over. We're going all in on Azure service >>And edge. He was all on. >>We're gonna build our own cloud. We acquired Aruba. He made some acquisitions in HPC to help differentiate. Yep. And they are definitely a much more focused company now. And unfortunately I wish Antonio would CEO in 2015, cuz that's really when this should have started. >>Yeah. And then, and if you remember back then, Dave, we were interviewing Docker with DevOps teams. They had composability, they were on hybrid really early. I think they might have even coined the term hybrid before VMware tri-state credit for it. But they were first on hybrid. They had DevOps, they had infrastructure risk code. >>HPE had an HP had an awesome cloud team. Yeah. But, and then, and then they tried to go public cloud. Yeah. You know, and then, you know, just made them, I mean, it was just a mess. The focus >>Is there. I give them huge props. And I think, I think the GreenLake to me is exciting here because it's much better than it was two years ago. When, when we talked to, when we started, it's >>Starting to get real. >>It's, it's a real thing. And I think the, the tell will be partners. If they make that right, can pull their different >>Ecosystem, >>Their scale and their customers and fill the software gas with partners mm-hmm <affirmative> and then create that integration opportunity. It's gonna be a home run if they don't do that, they're gonna miss the operating, >>But they have to have their own to your point. They have to have their own software innovation. >>They have to good infrastructure ways to build applications. I don't wanna build with somebody else. I don't wanna take a Microsoft stack on open source stack. I'm not sure if it's gonna work with HP. So they have to have an app dev answer. I absolutely agree with that. And the, the big thing for the partners is, which is a good thing, right? Yep. HPE will not move into applications. Right? You don't have to have the fear of where Microsoft is with their vocal large. Right. If AWS kind of like comes up with APIs and manufacturing, right. Google the same thing with their vertical push. Right. So HPE will not have the CapEx, but >>Application, >>As I SV making them, the partner, the bonus of being able to on premise is an attractive >>Part. That's a great point. >>Hold. So that's an inflection point for next 12 months to watch what we see absolutely running on GreenLake. >>Yeah. And I think one of the things that came out of the, the last couple events this past year, and I'll bring this up, we'll table it and we'll watch it. And it's early in this, I think this is like even, not even the first inning, the machine learning AI impact to the industrial piece. I think we're gonna see a, a brand new era of accelerated digital transformation on the industrial physical world, back office, cloud data center, accounting, all the stuff. That's applications, the app, the real world from space to like robotics. I think that HP edge opportunity is gonna be visible and different. >>So guys, Antonio Neri is on tomorrow. This is only day one. If you can imagine this power panel on day one, can you imagine tomorrow? What is your last question for each of you? What is your, what, what question would you want to ask him tomorrow? Hold start with you. >>How is HPE winning in the long run? Because we know their on premise market will shrink, right? And they can out execute Dell. They can out execute Lenovo. They can out Cisco and get a bigger share of the shrinking market. But that's the long term strategy, right? So why should I buy HPE stock now and have a good return put in the, in the safe and forget about it and have a great return 20 years from now? What's the really long term strategy might be unfair because they, they ran in survival mode to a certain point out of the mass post equipment situation. But what is really the long term strategy? Is it more on the hardware side? Is it gonna go on the HPE, the frontier side? It's gonna be a DNA question, which I would ask Antonio. >>John, >>I would ask him what relative to the macro conditions relative to their customer base, I'd say, cuz the customers are the scoreboard. Can they create a value proposition with their, I use the Microsoft 365 example how they kind of went to the cloud. So my question would be Antonio, what is your core value proposition to CIOs out there who want to transform and take a step function, increase for value with HPE? Tell me that story. I wanna hear. And I don't want to hear, oh, we got a portfolio and no, what value are you enabling your customers to do? >>What and what should that value be? >>I think it's gonna be what we were kind of riffing on, which is you have to provide either what their product market fit needs are, which is, are you solving a problem? Is it a pain point is a growth driver. Uh, and what's the, what's that tailwind. And it's obviously we know at cloud we know edge. The story is great, but what's the value proposition. But by going with HPE, you get X, Y, and Z. If they can explain that clearly with real, so qualitative and quantitative data it's home >>Run. He had a great line of the analyst summit today where somebody asking questions, I'm just listening to the customer. So be ready for this Steve jobs photo, listening to the customer. You can't build something great listening to the customer. You'll be good for the next quarter. The next exponential >>Say, what are the customers saying? <laugh> >>So I would make an observation. And my question would, so my observation would be cloud is growing collectively at 35%. It's, you know, it's approaching 200 billion with a big, big four. If you include Alibaba, IBM has actually said, Hey, we're gonna gr they've promised 6% growth. Uh, Cisco I think is at eight or 9% growth. Dow's growing in double digits. Antonio and HPE have promised three to 4% growth. So what do you have to do to actually accelerate growth? Because three to 4%, my view, not enough to answer Holger's question is why should I buy HPE stock? Well, >>If they have product, if they have customer and there's demand and traction to me, that's going to drive the growth numbers. And I think the weak side of the forecast means that they don't have that fit yet. >>Yeah. So what has to happen for them to get above five, 6% growth? >>That's what we're gonna analyze. I mean, I, I mean, I don't have an answer for that. I wish I had a better answer. I'd tell them <laugh> but I feel, it feels, it feels like, you know, HP has an opportunity to say here's the new HPE. Yeah. Okay. And this is what we stand for. And here's the one thing that we're going to do that consistently drives value for you, the customer. And that's gonna have to come into some, either architectural cloud shift or a data thing, or we are your store for blank. >>All of the above. >>I guess the other question is, would, would you know, he won't answer a rude question, would suspending things like dividends and stock buybacks and putting it into R and D. I would definitely, if you have confidence in the market and you know what to do, why wouldn't you just accelerate R and D and put the money there? IBM, since 2007, IBM spent is the last stat. And I'm looking go in 2007, IBM way, outspent, Google, and Amazon and R and D and, and CapEx two, by the way. Yep. Subsequent to that, they've spent, I believe it's the numbers close to 200 billion on stock buyback and dividends. They could have owned cloud. And so look at this business, the technology business by and large is driven by innovation. Yeah. And so how do you innovate if >>You have I'm buying, I'm buying HP because they're reliable high quality and they have the outcomes that I want. Oh, >>Buy their products and services. I'm not sure I'd buy the stock. Yeah. >>Yeah. But she has to answer ultimately, because a public company. Right. So >>Right. It's this job. Yeah. >>Never a dull moment with the three of you around <laugh> guys. Thank you so much for sharing your insights, your, an analysis from day one. I can't imagine what day two is gonna bring tomorrow. Debut and I are gonna be anchoring here. We've got a jam packed day, lots going on, hearing from the ecosystem from leadership. As we mentioned, Antonio is gonna be Tony >>Alma Russo. I'm dying. Dr. >>EDMA as well as on the CTO gonna be another action pack day. I'm excited for it, guys. Thanks so much for sharing your insights and for letting me join this power panel. >>Great. Great to be here. >>Power panel plus me. All right. For Holger, John and Dave, I'm Lisa, you're watching the cube our day one coverage of HPE discover wraps right now. Don't go anywhere, cuz we'll see you tomorrow for day two, live from Vegas, have a good night.
SUMMARY :
What are some of the things that you heard I mean, So, oh, wow. but it's in the Florida swarm. I know Dave always for the stats, right. Well it's the 70 plus cloud services, right. Keep recycling storage and you back. But the company who knows the enterprise, right. We had that conversation, the, uh, kickoff or on who's their target, I get the cloud broad to me then the general markets, of course, people who still need to run stuff on premises. with kind of the GreenLake, you know, model with their existing stuff. So I don't see that happening so much, but GreenLake as a platform itself course interesting because enterprise I think you guys are right on say, this is how we're doing business now. As I changed it, now <laugh> two know And I don't wanna rent because rental's more expensive and blah, And if you go to a board in Germany and say, Hey, we can pay our usual hardware, refresh, HP's, HP's made the statement that anything you can do in the cloud you I think they're talking about the, their If you had to sort of take your best guess as to where Yeah. So they quite that's the I mean similar. And then the rest of those services But in terms of just the basic platform, I, I would agree. I think HP story is wonderful Aruba, you know, hybrid cloud, Between filling the gaps on the software? I know from my own history, The original exit data was HP. But I think the key thing is we know that all modern I, I think it's, I think that's an opportunity because that changes the game and agility and There that's the big benefit to the ISVs, if our HPE I'd be saying, Hey, because the way the snowflake deal worked, you probably know this is I think they did that deal because the customer came to them and said, you don't exactly that deal. Customers think crazy things happen, right? So if they can get that right with you have to be in snowflake in order to get the governance and the scalability, But you can't do it data clean room unless you are in snowflake. But I think the trend already shows that it's going that way. Well, if you look at outpost is an signal, Dave, the success of outpost launched what four years ago, And I think with what we're seeing in the market now it's They had the R and D can they bring it to the table So very impressed. So the ecosystem is the key for them is because that's how they're gonna fill the gaps. So, you know, I mean, look at the telcos right. I said on the opening, they have serious customers and those customers have serious problems, We're back at the ecosystem to have what probably But I think the expectations I think point next is the point of integration for their security. But if I will have to rely on humans for I mean, I'm sure there's a lot of that stuff that's beginning Because I ask people all the time, they're like, uh, I'm zero trust or is it trust? I move the code, but what enterprise code works without data, I mean, I think the holy grail is can I, can I put my data into a cloud who's ever, So men being men. What do you do with it? You guys, here's a question for you guys analyst, what do you think the psychology is of the CIO or I think she hears HPE or he hears HPE coming in and saying, you don't need to go to the What do you think psycho, do you agree with that? So if HPE provides that data access, right, with all the problems of data gravity and egres one of the offerings and you as customer can plug and play, right. That's the key question. Right. But their execution today is not I wanna be the glue layer. I'll I wanna be, you can do Amazon, but I wanna be the glue layer between the clouds and And comes back to the data. And there's glue levels on API level. But we have to see it. And so, Hey, you know, you guys know better than I APIs can be fragile and Do you guys agree with that? I mean, everyone I talked to was like, no one forecasted a hundred percent work but all the buildings have to be retrofitted and checked for seism logic down. But I think pandemic definitely slowed I don't think she ever was excited that I, that you said, you said that, Up, get tape on that one. I have EDS. Presentations, but you just make your eyes glaze over. And edge. I wish Antonio would CEO in 2015, cuz that's really when this should have started. I think they might have even coined the term You know, and then, you know, just made them, I mean, And I think, I think the GreenLake to me is And I think the, the tell will be partners. It's gonna be a home run if they don't do that, they're gonna miss the operating, But they have to have their own to your point. You don't have to have the fear of where Microsoft is with their vocal large. the machine learning AI impact to the industrial piece. If you can imagine this power panel But that's the long term strategy, And I don't want to hear, oh, we got a portfolio and no, what value are you enabling I think it's gonna be what we were kind of riffing on, which is you have to provide either what their product So be ready for this Steve jobs photo, listening to the customer. So what do you have to do to actually accelerate growth? And I think the weak side of the forecast means that they don't I feel, it feels, it feels like, you know, HP has an opportunity to say here's I guess the other question is, would, would you know, he won't answer a rude question, You have I'm buying, I'm buying HP because they're reliable high quality and they have the outcomes that I want. I'm not sure I'd buy the stock. So Yeah. Never a dull moment with the three of you around <laugh> guys. Thanks so much for sharing your insights and for letting me join this power panel. Great to be here. Don't go anywhere, cuz we'll see you tomorrow for day two, live from Vegas,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Dave Valante | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Germany | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
2015 | DATE | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Charles Phillips | PERSON | 0.99+ |
Meg Whitman | PERSON | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Antonio | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
20 | QUANTITY | 0.99+ |
Steve | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
2007 | DATE | 0.99+ |
John Schultz | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
40 | QUANTITY | 0.99+ |
Vegas | LOCATION | 0.99+ |
Holger | PERSON | 0.99+ |
CapEx | ORGANIZATION | 0.99+ |
20% | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Antonio Neri | PERSON | 0.99+ |
three | QUANTITY | 0.99+ |
eight | QUANTITY | 0.99+ |
35% | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Holger Mueller | PERSON | 0.99+ |
Alma Russo | PERSON | 0.99+ |
6% | QUANTITY | 0.99+ |
Justin | PERSON | 0.99+ |
200 billion | QUANTITY | 0.99+ |
John furrier | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
this year | DATE | 0.99+ |
This year | DATE | 0.99+ |
Breaking Analysis: Snowflake Summit 2022...All About Apps & Monetization
>> From theCUBE studios in Palo Alto in Boston, bringing you data driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> Snowflake Summit 2022 underscored that the ecosystem excitement which was once forming around Hadoop is being reborn, escalated and coalescing around Snowflake's data cloud. What was once seen as a simpler cloud data warehouse and good marketing with the data cloud is evolving rapidly with new workloads of vertical industry focus, data applications, monetization, and more. The question is, will the promise of data be fulfilled this time around, or is it same wine, new bottle? Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this "Breaking Analysis," we'll talk about the event, the announcements that Snowflake made that are of greatest interest, the major themes of the show, what was hype and what was real, the competition, and some concerns that remain in many parts of the ecosystem and pockets of customers. First let's look at the overall event. It was held at Caesars Forum. Not my favorite venue, but I'll tell you it was packed. Fire Marshall Full, as we sometimes say. Nearly 10,000 people attended the event. Here's Snowflake's CMO Denise Persson on theCUBE describing how this event has evolved. >> Yeah, two, three years ago, we were about 1800 people at a Hilton in San Francisco. We had about 40 partners attending. This week we're close to 10,000 attendees here. Almost 10,000 people online as well, and over over 200 partners here on the show floor. >> Now, those numbers from 2019 remind me of the early days of Hadoop World, which was put on by Cloudera but then Cloudera handed off the event to O'Reilly as this article that we've inserted, if you bring back that slide would say. The headline it almost got it right. Hadoop World was a failure, but it didn't have to be. Snowflake has filled the void created by O'Reilly when it first killed Hadoop World, and killed the name and then killed Strata. Now, ironically, the momentum and excitement from Hadoop's early days, it probably could have stayed with Cloudera but the beginning of the end was when they gave the conference over to O'Reilly. We can't imagine Frank Slootman handing the keys to the kingdom to a third party. Serious business was done at this event. I'm talking substantive deals. Salespeople from a host sponsor and the ecosystems that support these events, they love physical. They really don't like virtual because physical belly to belly means relationship building, pipeline, and deals. And that was blatantly obvious at this show. And in fairness, all theCUBE events that we've done year but this one was more vibrant because of its attendance and the action in the ecosystem. Ecosystem is a hallmark of a cloud company, and that's what Snowflake is. We asked Frank Slootman on theCUBE, was this ecosystem evolution by design or did Snowflake just kind of stumble into it? Here's what he said. >> Well, when you are a data clouding, you have data, people want to do things with that data. They don't want just run data operations, populate dashboards, run reports. Pretty soon they want to build applications and after they build applications, they want build businesses on it. So it goes on and on and on. So it drives your development to enable more and more functionality on that data cloud. Didn't start out that way, you know, we were very, very much focused on data operations. Then it becomes application development and then it becomes, hey, we're developing whole businesses on this platform. So similar to what happened to Facebook in many ways. >> So it sounds like it was maybe a little bit of both. The Facebook analogy is interesting because Facebook is a walled garden, as is Snowflake, but when you come into that garden, you have assurances that things are going to work in a very specific way because a set of standards and protocols is being enforced by a steward, i.e. Snowflake. This means things run better inside of Snowflake than if you try to do all the integration yourself. Now, maybe over time, an open source version of that will come out but if you wait for that, you're going to be left behind. That said, Snowflake has made moves to make its platform more accommodating to open source tooling in many of its announcements this week. Now, I'm not going to do a deep dive on the announcements. Matt Sulkins from Monte Carlo wrote a decent summary of the keynotes and a number of analysts like Sanjeev Mohan, Tony Bear and others are posting some deeper analysis on these innovations, and so we'll point to those. I'll say a few things though. Unistore extends the type of data that can live in the Snowflake data cloud. It's enabled by a new feature called hybrid tables, a new table type in Snowflake. One of the big knocks against Snowflake was it couldn't handle and transaction data. Several database companies are creating this notion of a hybrid where both analytic and transactional workloads can live in the same data store. Oracle's doing this for example, with MySQL HeatWave and there are many others. We saw Mongo earlier this month add an analytics capability to its transaction system. Mongo also added sequel, which was kind of interesting. Here's what Constellation Research analyst Doug Henschen said about Snowflake's moves into transaction data. Play the clip. >> Well with Unistore, they're reaching out and trying to bring transactional data in. Hey, don't limit this to analytical information and there's other ways to do that like CDC and streaming but they're very closely tying that again to that marketplace, with the idea of bring your data over here and you can monetize it. Don't just leave it in that transactional database. So another reach to a broader play across a big community that they're building. >> And you're also seeing Snowflake expand its workload types in its unique way and through Snowpark and its stream lit acquisition, enabling Python so that native apps can be built in the data cloud and benefit from all that structure and the features that Snowflake is built in. Hence that Facebook analogy, or maybe the App Store, the Apple App Store as I propose as well. Python support also widens the aperture for machine intelligence workloads. We asked Snowflake senior VP of product, Christian Kleinerman which announcements he thought were the most impactful. And despite the who's your favorite child nature of the question, he did answer. Here's what he said. >> I think the native applications is the one that looks like, eh, I don't know about it on the surface but he has the biggest potential to change everything. That's create an entire ecosystem of solutions for within a company or across companies that I don't know that we know what's possible. >> Snowflake also announced support for Apache Iceberg, which is a new open table format standard that's emerging. So you're seeing Snowflake respond to these concerns about its lack of openness, and they're building optionality into their cloud. They also showed some cost op optimization tools both from Snowflake itself and from the ecosystem, notably Capital One which launched a software business on top of Snowflake focused on optimizing cost and eventually the rollout data management capabilities, and all kinds of features that Snowflake announced that the show around governance, cross cloud, what we call super cloud, a new security workload, and they reemphasize their ability to read non-native on-prem data into Snowflake through partnerships with Dell and Pure and a lot more. Let's hear from some of the analysts that came on theCUBE this week at Snowflake Summit to see what they said about the announcements and their takeaways from the event. This is Dave Menninger, Sanjeev Mohan, and Tony Bear, roll the clip. >> Our research shows that the majority of organizations, the majority of people do not have access to analytics. And so a couple of the things they've announced I think address those or help to address those issues very directly. So Snowpark and support for Python and other languages is a way for organizations to embed analytics into different business processes. And so I think that'll be really beneficial to try and get analytics into more people's hands. And I also think that the native applications as part of the marketplace is another way to get applications into people's hands rather than just analytical tools. Because most people in the organization are not analysts. They're doing some line of business function. They're HR managers, they're marketing people, they're sales people, they're finance people, right? They're not sitting there mucking around in the data, they're doing a job and they need analytics in that job. >> Primarily, I think it is to contract this whole notion that once you move data into Snowflake, it's a proprietary format. So I think that's how it started but it's usually beneficial to the customers, to the users because now if you have large amount of data in paket files you can leave it on S3, but then you using the Apache Iceberg table format in Snowflake, you get all the benefits of Snowflake's optimizer. So for example, you get the micro partitioning, you get the metadata. And in a single query, you can join, you can do select from a Snowflake table union and select from an iceberg table and you can do store procedure, user defined function. So I think what they've done is extremely interesting. Iceberg by itself still does not have multi-table transactional capabilities. So if I'm running a workload, I might be touching 10 different tables. So if I use Apache Iceberg in a raw format, they don't have it, but Snowflake does. So the way I see it is Snowflake is adding more and more capabilities right into the database. So for example, they've gone ahead and added security and privacy. So you can now create policies and do even cell level masking, dynamic masking, but most organizations have more than Snowflake. So what we are starting to see all around here is that there's a whole series of data catalog companies, a bunch of companies that are doing dynamic data masking, security and governance, data observability which is not a space Snowflake has gone into. So there's a whole ecosystem of companies that is mushrooming. Although, you know, so they're using the native capabilities of Snowflake but they are at a level higher. So if you have a data lake and a cloud data warehouse and you have other like relational databases, you can run these cross platform capabilities in that layer. So that way, you know, Snowflake's done a great job of enabling that ecosystem. >> I think it's like the last mile, essentially. In other words, it's like, okay, you have folks that are basically that are very comfortable with Tableau but you do have developers who don't want to have to shell out to a separate tool. And so this is where Snowflake is essentially working to address that constituency. To Sanjeev's point, and I think part of it, this kind of plays into it is what makes this different from the Hadoop era is the fact that all these capabilities, you know, a lot of vendors are taking it very seriously to put this native. Now, obviously Snowflake acquired Streamlit. So we can expect that the Streamlit capabilities are going to be native. >> I want to share a little bit about the higher level thinking at Snowflake, here's a chart from Frank Slootman's keynote. It's his version of the modern data stack, if you will. Now, Snowflake of course, was built on the public cloud. If there were no AWS, there would be no Snowflake. Now, they're all about bringing data and live data and expanding the types of data, including structured, we just heard about that, unstructured, geospatial, and the list is going to continue on and on. Eventually I think it's going to bleed into the edge if we can figure out what to do with that edge data. Executing on new workloads is a big deal. They started with data sharing and they recently added security and they've essentially created a PaaS layer. We call it a SuperPaaS layer, if you will, to attract application developers. Snowflake has a developer-focused event coming up in November and they've extended the marketplace with 1300 native apps listings. And at the top, that's the holy grail, monetization. We always talk about building data products and we saw a lot of that at this event, very, very impressive and unique. Now here's the thing. There's a lot of talk in the press, in the Wall Street and the broader community about consumption-based pricing and concerns over Snowflake's visibility and its forecast and how analytics may be discretionary. But if you're a company building apps in Snowflake and monetizing like Capital One intends to do, and you're now selling in the marketplace, that is not discretionary, unless of course your costs are greater than your revenue for that service, in which case is going to fail anyway. But the point is we're entering a new error where data apps and data products are beginning to be built and Snowflake is attempting to make the data cloud the defacto place as to where you're going to build them. In our view they're well ahead in that journey. Okay, let's talk about some of the bigger themes that we heard at the event. Bringing apps to the data instead of moving the data to the apps, this was a constant refrain and one that certainly makes sense from a physics point of view. But having a single source of data that is discoverable, sharable and governed with increasingly robust ecosystem options, it doesn't have to be moved. Sometimes it may have to be moved if you're going across regions, but that's unique and a differentiator for Snowflake in our view. I mean, I'm yet to see a data ecosystem that is as rich and growing as fast as the Snowflake ecosystem. Monetization, we talked about that, industry clouds, financial services, healthcare, retail, and media, all front and center at the event. My understanding is that Frank Slootman was a major force behind this shift, this development and go to market focus on verticals. It's really an attempt, and he talked about this in his keynote to align with the customer mission ultimately align with their objectives which not surprisingly, are increasingly monetizing with data as a differentiating ingredient. We heard a ton about data mesh, there were numerous presentations about the topic. And I'll say this, if you map the seven pillars Snowflake talks about, Benoit Dageville talked about this in his keynote, but if you map those into Zhamak Dehghani's data mesh framework and the four principles, they align better than most of the data mesh washing that I've seen. The seven pillars, all data, all workloads, global architecture, self-managed, programmable, marketplace and governance. Those are the seven pillars that he talked about in his keynote. All data, well, maybe with hybrid tables that becomes more of a reality. Global architecture means the data is globally distributed. It's not necessarily physically in one place. Self-managed is key. Self-service infrastructure is one of Zhamak's four principles. And then inherent governance. Zhamak talks about computational, what I'll call automated governance, built in. And with all the talk about monetization, that aligns with the second principle which is data as product. So while it's not a pure hit and to its credit, by the way, Snowflake doesn't use data mesh in its messaging anymore. But by the way, its customers do, several customers talked about it. Geico, JPMC, and a number of other customers and partners are using the term and using it pretty closely to the concepts put forth by Zhamak Dehghani. But back to the point, they essentially, Snowflake that is, is building a proprietary system that substantially addresses some, if not many of the goals of data mesh. Okay, back to the list, supercloud, that's our term. We saw lots of examples of clouds on top of clouds that are architected to spin multiple clouds, not just run on individual clouds as separate services. And this includes Snowflake's data cloud itself but a number of ecosystem partners that are headed in a very similar direction. Snowflake still talks about data sharing but now it uses the term collaboration in its high level messaging, which is I think smart. Data sharing is kind of a geeky term. And also this is an attempt by Snowflake to differentiate from everyone else that's saying, hey, we do data sharing too. And finally Snowflake doesn't say data marketplace anymore. It's now marketplace, accounting for its application market. Okay, let's take a quick look at the competitive landscape via this ETR X-Y graph. Vertical access remembers net score or spending momentum and the x-axis is penetration, pervasiveness in the data center. That's what ETR calls overlap. Snowflake continues to lead on the vertical axis. They guide it conservatively last quarter, remember, so I wouldn't be surprised if that lofty height, even though it's well down from its earlier levels but I wouldn't be surprised if it ticks down again a bit in the July survey, which will be in the field shortly. Databricks is a key competitor obviously at a strong spending momentum, as you can see. We didn't draw it here but we usually draw that 40% line or red line at 40%, anything above that is considered elevated. So you can see Databricks is quite elevated. But it doesn't have the market presence of Snowflake. It didn't get to IPO during the bubble and it doesn't have nearly as deep and capable go-to market machinery. Now, they're getting better and they're getting some attention in the market, nonetheless. But as a private company, you just naturally, more people are aware of Snowflake. Some analysts, Tony Bear in particular, believe Mongo and Snowflake are on a bit of a collision course long term. I actually can see his point. You know, I mean, they're both platforms, they're both about data. It's long ways off, but you can see them sort of in a similar path. They talk about kind of similar aspirations and visions even though they're quite in different markets today but they're definitely participating in similar tam. The cloud players are probably the biggest or definitely the biggest partners and probably the biggest competitors to Snowflake. And then there's always Oracle. Doesn't have the spending velocity of the others but it's got strong market presence. It owns a cloud and it knows a thing about data and it definitely is a go-to market machine. Okay, we're going to end on some of the things that we heard in the ecosystem. 'Cause look, we've heard before how particular technology, enterprise data warehouse, data hubs, MDM, data lakes, Hadoop, et cetera. We're going to solve all of our data problems and of course they didn't. And in fact, sometimes they create more problems that allow vendors to push more incremental technology to solve the problems that they created. Like tools and platforms to clean up the no schema on right nature of data lakes or data swamps. But here are some of the things that I heard firsthand from some customers and partners. First thing is, they said to me that they're having a hard time keeping up sometimes with the pace of Snowflake. It reminds me of AWS in 2014, 2015 timeframe. You remember that fire hose of announcements which causes increased complexity for customers and partners. I talked to several customers that said, well, yeah this is all well and good but I still need skilled people to understand all these tools that I'm integrated in the ecosystem, the catalogs, the machine learning observability. A number of customers said, I just can't use one governance tool, I need multiple governance tools and a lot of other technologies as well, and they're concerned that that's going to drive up their cost and their complexity. I heard other concerns from the ecosystem that it used to be sort of clear as to where they could add value you know, when Snowflake was just a better data warehouse. But to point number one, they're either concerned that they'll be left behind or they're concerned that they'll be subsumed. Look, I mean, just like we tell AWS customers and partners, you got to move fast, you got to keep innovating. If you don't, you're going to be left. Either if your customer you're going to be left behind your competitor, or if you're a partner, somebody else is going to get there or AWS is going to solve the problem for you. Okay, and there were a number of skeptical practitioners, really thoughtful and experienced data pros that suggested that they've seen this movie before. That's hence the same wine, new bottle. Well, this time around I certainly hope not given all the energy and investment that is going into this ecosystem. And the fact is Snowflake is unquestionably making it easier to put data to work. They built on AWS so you didn't have to worry about provisioning, compute and storage and networking and scaling. Snowflake is optimizing its platform to take advantage of things like Graviton so you don't have to, and they're doing some of their own optimization tools. The ecosystem is building optimization tools so that's all good. And firm belief is the less expensive it is, the more data will get brought into the data cloud. And they're building a data platform on which their ecosystem can build and run data applications, aka data products without having to worry about all the hard work that needs to get done to make data discoverable, shareable, and governed. And unlike the last 10 years, you don't have to be a keeper and integrate all the animals in the Hadoop zoo. Okay, that's it for today, thanks for watching. Thanks to my colleague, Stephanie Chan who helps research "Breaking Analysis" topics. Sometimes Alex Myerson is on production and manages the podcasts. Kristin Martin and Cheryl Knight help get the word out on social and in our newsletters, and Rob Hof is our editor in chief over at Silicon, and Hailey does some wonderful editing, thanks to all. Remember, all these episodes are available as podcasts wherever you listen. All you got to do is search Breaking Analysis Podcasts. I publish each week on wikibon.com and siliconangle.com and you can email me at David.Vellante@siliconangle.com or DM me @DVellante. If you got something interesting, I'll respond. If you don't, I'm sorry I won't. Or comment on my LinkedIn post. Please check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time. (upbeat music)
SUMMARY :
bringing you data driven that the ecosystem excitement here on the show floor. and the action in the ecosystem. Didn't start out that way, you know, One of the big knocks against Snowflake the idea of bring your data of the question, he did answer. is the one that looks like, and from the ecosystem, And so a couple of the So that way, you know, from the Hadoop era is the fact the defacto place as to where
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Frank Slootman | PERSON | 0.99+ |
Frank Slootman | PERSON | 0.99+ |
Doug Henschen | PERSON | 0.99+ |
Stephanie Chan | PERSON | 0.99+ |
Christian Kleinerman | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Rob Hof | PERSON | 0.99+ |
Benoit Dageville | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Matt Sulkins | PERSON | 0.99+ |
JPMC | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
Cheryl Knight | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Denise Persson | PERSON | 0.99+ |
Alex Myerson | PERSON | 0.99+ |
Tony Bear | PERSON | 0.99+ |
Dave Menninger | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
July | DATE | 0.99+ |
Geico | ORGANIZATION | 0.99+ |
November | DATE | 0.99+ |
Snowflake | TITLE | 0.99+ |
40% | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
App Store | TITLE | 0.99+ |
Capital One | ORGANIZATION | 0.99+ |
second principle | QUANTITY | 0.99+ |
Sanjeev Mohan | PERSON | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
1300 native apps | QUANTITY | 0.99+ |
Tony Bear | PERSON | 0.99+ |
David.Vellante@siliconangle.com | OTHER | 0.99+ |
Kristin Martin | PERSON | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
Snowflake Summit 2022 | EVENT | 0.99+ |
First | QUANTITY | 0.99+ |
two | DATE | 0.99+ |
Python | TITLE | 0.99+ |
10 different tables | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
ETR | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Snowflake | EVENT | 0.98+ |
one place | QUANTITY | 0.98+ |
each week | QUANTITY | 0.98+ |
O'Reilly | ORGANIZATION | 0.98+ |
This week | DATE | 0.98+ |
Hadoop World | EVENT | 0.98+ |
this week | DATE | 0.98+ |
Pure | ORGANIZATION | 0.98+ |
about 40 partners | QUANTITY | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
last quarter | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
S3 | TITLE | 0.97+ |
Hadoop | LOCATION | 0.97+ |
single | QUANTITY | 0.97+ |
Caesars Forum | LOCATION | 0.97+ |
Iceberg | TITLE | 0.97+ |
single source | QUANTITY | 0.97+ |
Silicon | ORGANIZATION | 0.97+ |
Nearly 10,000 people | QUANTITY | 0.97+ |
Apache Iceberg | ORGANIZATION | 0.97+ |
theCUBE Insights with Industry Analysts | Snowflake Summit 2022
>>Okay. Okay. We're back at Caesar's Forum. The Snowflake summit 2022. The cubes. Continuous coverage this day to wall to wall coverage. We're so excited to have the analyst panel here, some of my colleagues that we've done a number. You've probably seen some power panels that we've done. David McGregor is here. He's the senior vice president and research director at Ventana Research. To his left is Tony Blair, principal at DB Inside and my in the co host seat. Sanjeev Mohan Sanremo. Guys, thanks so much for coming on. I'm glad we can. Thank you. You're very welcome. I wasn't able to attend the analyst action because I've been doing this all all day, every day. But let me start with you, Dave. What have you seen? That's kind of interested you. Pluses, minuses. Concerns. >>Well, how about if I focus on what I think valuable to the customers of snowflakes and our research shows that the majority of organisations, the majority of people, do not have access to analytics. And so a couple of things they've announced I think address those are helped to address those issues very directly. So Snow Park and support for Python and other languages is a way for organisations to embed analytics into different business processes. And so I think that will be really beneficial to try and get analytics into more people's hands. And I also think that the native applications as part of the marketplace is another way to get applications into people's hands rather than just analytical tools. Because most most people in the organisation or not, analysts, they're doing some line of business function. Their HR managers, their marketing people, their salespeople, their finance people right there, not sitting there mucking around in the data. They're doing a job and they need analytics in that job. So, >>Tony, I thank you. I've heard a lot of data mesh talk this week. It's kind of funny. Can't >>seem to get away from it. You >>can't see. It seems to be gathering momentum, but But what have you seen? That's been interesting. >>What I have noticed. Unfortunately, you know, because the rooms are too small, you just can't get into the data mesh sessions, so there's a lot of interest in it. Um, it's still very I don't think there's very much understanding of it, but I think the idea that you can put all the data in one place which, you know, to me, stuff like it seems to be kind of sort of in a way, it sounds like almost like the Enterprise Data warehouse, you know, Clouded Cloud Native Edition, you know, bring it all in one place again. Um, I think it's providing, sort of, You know, it's I think, for these folks that think this might be kind of like a a linchpin for that. I think there are several other things that actually that really have made a bigger impression on me. Actually, at this event, one is is basically is, um we watch their move with Eunice store. Um, and it's kind of interesting coming, you know, coming from mongo db last week. And I see it's like these two companies seem to be going converging towards the same place at different speeds. I think it's not like it's going to get there faster than Mongo for a number of different reasons, but I see like a number of common threads here. I mean, one is that Mongo was was was a company. It's always been towards developers. They need you know, start cultivating data, people, >>these guys going the other way. >>Exactly. Bingo. And the thing is that but they I think where they're converging is the idea of operational analytics and trying to serve all constituencies. The other thing, which which also in terms of serving, you know, multiple constituencies is how snowflake is laid out Snow Park and what I'm finding like. There's an interesting I economy. On one hand, you have this very ingrained integration of Anaconda, which I think is pretty ingenious. On the other hand, you speak, let's say, like, let's say the data robot folks and say, You know something our folks wanna work data signs us. We want to work in our environment and use snowflake in the background. So I see those kind of some interesting sort of cross cutting trends. >>So, Sandy, I mean, Frank Sullivan, we'll talk about there's definitely benefits into going into the walled garden. Yeah, I don't think we dispute that, but we see them making moves and adding more and more open source capabilities like Apache iceberg. Is that a Is that a move to sort of counteract the narrative that the data breaks is put out there. Is that customer driven? What's your take on that? >>Uh, primarily I think it is to contract this whole notion that once you move data into snowflake, it's a proprietary format. So I think that's how it started. But it's hugely beneficial to the customers to the users, because now, if you have large amounts of data in parquet files, you can leave it on s three. But then you using the the Apache iceberg table format. In a snowflake, you get all the benefits of snowflakes. Optimizer. So, for example, you get the, you know, the micro partitioning. You get the meta data. So, uh, in a single query, you can join. You can do select from a snowflake table union and select from iceberg table, and you can do store procedures, user defined functions. So I think they what they've done is extremely interesting. Uh, iceberg by itself still does not have multi table transactional capabilities. So if I'm running a workload, I might be touching 10 different tables. So if I use Apache iceberg in a raw format, they don't have it. But snowflake does, >>right? There's hence the delta. And maybe that maybe that closes over time. I want to ask you as you look around this I mean the ecosystems pretty vibrant. I mean, it reminds me of, like reinvent in 2013, you know? But then I'm struck by the complexity of the last big data era and a dupe and all the different tools. And is this different, or is it the sort of same wine new new bottle? You guys have any thoughts on that? >>I think it's different and I'll tell you why. I think it's different because it's based around sequel. So if back to Tony's point, these vendors are coming at this from different angles, right? You've got data warehouse vendors and you've got data lake vendors and they're all going to meet in the middle. So in your case, you're taught operational analytical. But the same thing is true with Data Lake and Data Warehouse and Snowflake no longer wants to be known as the Data Warehouse. There a data cloud and our research again. I like to base everything off of that. >>I love what our >>research shows that organisation Two thirds of organisations have sequel skills and one third have big data skills, so >>you >>know they're going to meet in the middle. But it sure is a lot easier to bring along those people who know sequel already to that midpoint than it is to bring big data people to remember. >>Mrr Odula, one of the founders of Cloudera, said to me one time, John Kerry and the Cube, that, uh, sequel is the killer app for a Yeah, >>the difference at this, you know, with with snowflake, is that you don't have to worry about taming the zoo. Animals really have thought out the ease of use, you know? I mean, they thought about I mean, from the get go, they thought of too thin to polls. One is ease of use, and the other is scale. And they've had. And that's basically, you know, I think very much differentiates it. I mean, who do have the scale, but it didn't have the ease of use. But don't I >>still need? Like, if I have, you know, governance from this vendor or, you know, data prep from, you know, don't I still have to have expertise? That's sort of distributed in those those worlds, right? I mean, go ahead. Yeah. >>So the way I see it is snowflake is adding more and more capabilities right into the database. So, for example, they've they've gone ahead and added security and privacy so you can now create policies and do even set level masking, dynamic masking. But most organisations have more than snowflake. So what we are starting to see all around here is that there's a whole series of data catalogue companies, a bunch of companies that are doing dynamic data masking security and governance data observe ability, which is not a space snowflake has gone into. So there's a whole ecosystem of companies that that is mushrooming, although, you know so they're using the native capabilities of snowflake, but they are at a level higher. So if you have a data lake and a cloud data warehouse and you have other, like relational databases, you can run these cross platform capabilities in that layer. So so that way, you know, snowflakes done a great job of enabling that ecosystem about >>the stream lit acquisition. Did you see anything here that indicated there making strong progress there? Are you excited about that? You're sceptical. Go ahead. >>And I think it's like the last mile. Essentially. In other words, it's like, Okay, you have folks that are basically that are very, very comfortable with tableau. But you do have developers who don't want to have to shell out to a separate tool. And so this is where Snowflake is essentially working to address that constituency, um, to San James Point. I think part of it, this kind of plays into it is what makes this different from the ado Pere is the fact that this all these capabilities, you know, a lot of vendors are taking it very seriously to make put this native obviously snowflake acquired stream. Let's so we can expect that's extremely capabilities are going to be native. >>And the other thing, too, about the Hadoop ecosystem is Claudia had to help fund all those different projects and got really, really spread thin. I want to ask you guys about this super cloud we use. Super Cloud is this sort of metaphor for the next wave of cloud. You've got infrastructure aws, azure, Google. It's not multi cloud, but you've got that infrastructure you're building a layer on top of it that hides the underlying complexities of the primitives and the a p I s. And you're adding new value in this case, the data cloud or super data cloud. And now we're seeing now is that snowflake putting forth the notion that they're adding a super path layer. You can now build applications that you can monetise, which to me is kind of exciting. It makes makes this platform even less discretionary. We had a lot of talk on Wall Street about discretionary spending, and that's not discretionary. If you're monetising it, um, what do you guys think about that? Is this something that's that's real? Is it just a figment of my imagination, or do you see a different way of coming any thoughts on that? >>So, in effect, they're trying to become a data operating system, right? And I think that's wonderful. It's ambitious. I think they'll experience some success with that. As I said, applications are important. That's a great way to deliver information. You can monetise them, so you know there's there's a good economic model around it. I think they will still struggle, however, with bringing everything together onto one platform. That's always the challenge. Can you become the platform that's hard, hard to predict? You know, I think this is This is pretty exciting, right? A lot of energy, a lot of large ecosystem. There is a network effect already. Can they succeed in being the only place where data exists? You know, I think that's going to be a challenge. >>I mean, the fact is, I mean, this is a classic best of breed versus the umbrella play. The thing is, this is nothing new. I mean, this is like the you know, the old days with enterprise applications were basically oracle and ASAP vacuumed up all these. You know, all these applications in their in their ecosystem, whereas with snowflake is. And if you look at the cloud, folks, the hyper scale is still building out their own portfolios as well. Some are, You know, some hyper skills are more partner friendly than others. What? What Snowflake is saying is that we're going to give all of you folks who basically are competing against the hyper skills in various areas like data catalogue and pipelines and all that sort of wonderful stuff will make you basically, you know, all equal citizens. You know the burden is on you to basically we will leave. We will lay out the A P. I s Well, we'll allow you to basically, you know, integrate natively to us so you can provide as good experience. But the but the onus is on your back. >>Should the ecosystem be concerned, as they were back to reinvent 2014 that Amazon was going to nibble away at them or or is it different? >>I find what they're doing is different. Uh, for example, data sharing. They were the first ones out the door were data sharing at a large scale. And then everybody has jumped in and said, Oh, we also do data sharing. All the hyper scholars came in. But now what snowflake has done is they've taken it to the next level. Now they're saying it's not just data sharing. It's up sharing and not only up sharing. You can stream the thing you can build, test deploy, and then monetise it. Make it discoverable through, you know, through your marketplace >>you can monetise it. >>Yes. Yeah, so So I I think what they're doing is they are taking it a step further than what hyper scale as they are doing. And because it's like what they said is becoming like the data operating system You log in and you have all of these different functionalities you can do in machine learning. Now you can do data quality. You can do data preparation and you can do Monetisation. Who do you >>think is snowflakes? Biggest competitor? What do you guys think? It's a hard question, isn't it? Because you're like because we all get the we separate computer from storage. We have a cloud data and you go, Okay, that's nice, >>but there's, like, a crack. I think >>there's uniqueness. I >>mean, put it this way. In the old days, it would have been you know, how you know the prime household names. I think today is the hyper scholars and the idea what I mean again, this comes down to the best of breed versus by, you know, get it all from one source. So where is your comfort level? Um, so I think they're kind. They're their co op a Titian the hyper scale. >>Okay, so it's not data bricks, because why they're smaller. >>Well, there is some okay now within the best of breed area. Yes, there is competition. The obvious is data bricks coming in from the data engineering angle. You know, basically the snowflake coming from, you know, from the from the data analyst angle. I think what? Another potential competitor. And I think Snowflake, basically, you know, admitted as such potentially is mongo >>DB. Yeah, >>Exactly. So I mean, yes, there are two different levels of sort >>of a on a longer term collision course. >>Exactly. Exactly. >>Sort of service now and in salesforce >>thing that was that we actually get when I say that a lot of people just laughed. I was like, No, you're kidding. There's no way. I said Excuse me, >>But then you see Mongo last week. We're adding some analytics capabilities and always been developers, as you say, and >>they trashed sequel. But yet they finally have started to write their first real sequel. >>We have M c M Q. Well, now we have a sequel. So what >>were those numbers, >>Dave? Two thirds. One third. >>So the hyper scale is but the hyper scale urz are you going to trust your hyper scale is to do your cross cloud. I mean, maybe Google may be I mean, Microsoft, perhaps aws not there yet. Right? I mean, how important is cross cloud, multi cloud Super cloud Whatever you want to call it What is your data? >>Shows? Cloud is important if I remember correctly. Our research shows that three quarters of organisations are operating in the cloud and 52% are operating across more than one cloud. So, uh, two thirds of the organisations are in the cloud are doing multi cloud, so that's pretty significant. And now they may be operating across clouds for different reasons. Maybe one application runs in one cloud provider. Another application runs another cloud provider. But I do think organisations want that leverage over the hyper scholars right they want they want to be able to tell the hyper scale. I'm gonna move my workloads over here if you don't give us a better rate. Uh, >>I mean, I I think you know, from a database standpoint, I think you're right. I mean, they are competing against some really well funded and you look at big Query barely, you know, solid platform Red shift, for all its faults, has really done an amazing job of moving forward. But to David's point, you know those to me in any way. Those hyper skills aren't going to solve that cross cloud cloud problem, right? >>Right. No, I'm certainly >>not as quickly. No. >>Or with as much zeal, >>right? Yeah, right across cloud. But we're gonna operate better on our >>Exactly. Yes. >>Yes. Even when we talk about multi cloud, the many, many definitions, like, you know, you can mean anything. So the way snowflake does multi cloud and the way mongo db two are very different. So a snowflake says we run on all the hyper scalar, but you have to replicate your data. What Mongo DB is claiming is that one cluster can have notes in multiple different clouds. That is right, you know, quite something. >>Yeah, right. I mean, again, you hit that. We got to go. But, uh, last question, um, snowflake undervalued, overvalued or just about right >>in the stock market or in customers. Yeah. Yeah, well, but, you know, I'm not sure that's the right question. >>That's the question I'm asking. You know, >>I'll say the question is undervalued or overvalued for customers, right? That's really what matters. Um, there's a different audience. Who cares about the investor side? Some of those are watching, but But I believe I believe that the from the customer's perspective, it's probably valued about right, because >>the reason I I ask it, is because it has so hyped. You had $100 billion value. It's the past service now is value, which is crazy for this student Now. It's obviously come back quite a bit below its IPO price. So But you guys are at the financial analyst meeting. Scarpelli laid out 2029 projections signed up for $10 billion.25 percent free time for 20% operating profit. I mean, they better be worth more than they are today. If they do >>that. If I If I see the momentum here this week, I think they are undervalued. But before this week, I probably would have thought there at the right evaluation, >>I would say they're probably more at the right valuation employed because the IPO valuation is just such a false valuation. So hyped >>guys, I could go on for another 45 minutes. Thanks so much. David. Tony Sanjeev. Always great to have you on. We'll have you back for sure. Having us. All right. Thank you. Keep it right there. Were wrapping up Day two and the Cube. Snowflake. Summit 2022. Right back. Mm. Mhm.
SUMMARY :
What have you seen? And I also think that the native applications as part of the I've heard a lot of data mesh talk this week. seem to get away from it. It seems to be gathering momentum, but But what have you seen? but I think the idea that you can put all the data in one place which, And the thing is that but they I think where they're converging is the idea of operational that the data breaks is put out there. So, for example, you get the, you know, the micro partitioning. I want to ask you as you look around this I mean the ecosystems pretty vibrant. I think it's different and I'll tell you why. But it sure is a lot easier to bring along those people who know sequel already the difference at this, you know, with with snowflake, is that you don't have to worry about taming the zoo. you know, data prep from, you know, don't I still have to have expertise? So so that way, you know, snowflakes done a great job of Did you see anything here that indicated there making strong is the fact that this all these capabilities, you know, a lot of vendors are taking it very seriously I want to ask you guys about this super cloud we Can you become the platform that's hard, hard to predict? I mean, this is like the you know, the old days with enterprise applications You can stream the thing you can build, test deploy, You can do data preparation and you can do We have a cloud data and you go, Okay, that's nice, I think I In the old days, it would have been you know, how you know the prime household names. You know, basically the snowflake coming from, you know, from the from the data analyst angle. Exactly. I was like, No, But then you see Mongo last week. But yet they finally have started to write their first real sequel. So what One third. So the hyper scale is but the hyper scale urz are you going to trust your hyper scale But I do think organisations want that leverage I mean, I I think you know, from a database standpoint, I think you're right. not as quickly. But we're gonna operate better on our Exactly. the hyper scalar, but you have to replicate your data. I mean, again, you hit that. but, you know, I'm not sure that's the right question. That's the question I'm asking. that the from the customer's perspective, it's probably valued about right, So But you guys are at the financial analyst meeting. But before this week, I probably would have thought there at the right evaluation, I would say they're probably more at the right valuation employed because the IPO valuation is just such Always great to have you on.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Frank Sullivan | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Tony Blair | PERSON | 0.99+ |
Tony Sanjeev | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Sandy | PERSON | 0.99+ |
David McGregor | PERSON | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
20% | QUANTITY | 0.99+ |
$100 billion | QUANTITY | 0.99+ |
Ventana Research | ORGANIZATION | 0.99+ |
2013 | DATE | 0.99+ |
last week | DATE | 0.99+ |
52% | QUANTITY | 0.99+ |
Sanjeev Mohan Sanremo | PERSON | 0.99+ |
more than one cloud | QUANTITY | 0.99+ |
2014 | DATE | 0.99+ |
2029 projections | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
45 minutes | QUANTITY | 0.99+ |
San James Point | LOCATION | 0.99+ |
$10 billion.25 percent | QUANTITY | 0.99+ |
one application | QUANTITY | 0.99+ |
Odula | PERSON | 0.99+ |
John Kerry | PERSON | 0.99+ |
Python | TITLE | 0.99+ |
Summit 2022 | EVENT | 0.99+ |
Data Warehouse | ORGANIZATION | 0.99+ |
Snowflake | EVENT | 0.98+ |
Scarpelli | PERSON | 0.98+ |
Data Lake | ORGANIZATION | 0.98+ |
one platform | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
today | DATE | 0.98+ |
10 different tables | QUANTITY | 0.98+ |
three quarters | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
Apache | ORGANIZATION | 0.97+ |
Day two | QUANTITY | 0.97+ |
DB Inside | ORGANIZATION | 0.96+ |
one place | QUANTITY | 0.96+ |
one source | QUANTITY | 0.96+ |
one third | QUANTITY | 0.96+ |
Snowflake Summit 2022 | EVENT | 0.96+ |
One third | QUANTITY | 0.95+ |
two thirds | QUANTITY | 0.95+ |
Claudia | PERSON | 0.94+ |
one time | QUANTITY | 0.94+ |
one cloud provider | QUANTITY | 0.94+ |
Two thirds | QUANTITY | 0.93+ |
theCUBE | ORGANIZATION | 0.93+ |
data lake | ORGANIZATION | 0.92+ |
Snow Park | LOCATION | 0.92+ |
Cloudera | ORGANIZATION | 0.91+ |
two different levels | QUANTITY | 0.91+ |
three | QUANTITY | 0.91+ |
one cluster | QUANTITY | 0.89+ |
single query | QUANTITY | 0.87+ |
aws | ORGANIZATION | 0.84+ |
first ones | QUANTITY | 0.83+ |
Snowflake summit 2022 | EVENT | 0.83+ |
azure | ORGANIZATION | 0.82+ |
mongo db | ORGANIZATION | 0.82+ |
One | QUANTITY | 0.81+ |
Eunice store | ORGANIZATION | 0.8+ |
wave of | EVENT | 0.78+ |
cloud | ORGANIZATION | 0.77+ |
first real sequel | QUANTITY | 0.77+ |
M c M Q. | PERSON | 0.76+ |
Red shift | ORGANIZATION | 0.74+ |
Anaconda | ORGANIZATION | 0.73+ |
Snowflake | ORGANIZATION | 0.72+ |
ASAP | ORGANIZATION | 0.71+ |
Snow | ORGANIZATION | 0.68+ |
snowflake | TITLE | 0.66+ |
Park | TITLE | 0.64+ |
Cube | COMMERCIAL_ITEM | 0.63+ |
Apache | TITLE | 0.63+ |
Mrr | PERSON | 0.63+ |
senior vice president | PERSON | 0.62+ |
Wall Street | ORGANIZATION | 0.6+ |
Tony Baer, dbInsight | MongoDB World 2022
>>Welcome back to the big apple, everybody. The Cube's continuous coverage here of MongoDB world 2022. We're at the new Javet center. It's it's quite nice. It was built during the pandemic. I believe on top of a former bus terminal. I'm told by our next guest Tony bear, who's the principal at DB insight of data and database expert, longtime analyst, Tony. Good to see you. Thanks for coming >>On. Thanks >>For having us. You face to face >>And welcome to New York. >>Yeah. Right. >>New York is open for business. >>So, yeah. And actually, you know, it's interesting. We've been doing a lot of these events lately and, and especially the ones in Vegas, it's the first time everybody's been out, you know, face to face, not so much here, you know, people have been out and about a lot of masks >>In, >>In New York city, but, but it's good. And, and this new venue is fantastic >>Much nicer than the old Javits. >>Yeah. And I would say maybe 3000 people here. >>Yeah. Probably, but I think like most conferences right now are kind of, they're going through like a slow ramp up. And like for instance, you know, sapphires had maybe about one third, their normal turnout. So I think that you're saying like one third to one half seems to be the norm right now are still figuring out how we're, how and where we're gonna get back together. Yeah. >>I think that's about right. And, and I, but I do think that that in most of the cases that we've seen, it's exceeded people's expectations at tenants, but anyway sure. Let's talk about Mongo, very interesting company. You know, we've been kind of been watching their progression from just sort of document database and all the features and functions they're adding, you just published a piece this morning in venture beat is time for Mongo to get into analytics. Yes. You know? Yes. One of your favorite topics. Well, can they expand analytics? They seem to be doing that. Let's dig into it. Well, >>They're taking, they've been taking slow. They've been taking baby steps and there's good reason for that because first thing is an operational database. The last thing you wanna do is slow it down with very complex analytics. On the other hand, there's huge value to be had if you would, if you could, you know, turn, let's say a smart, if you can turn, let's say an operational database or a transaction database into a smart transaction database. In other words, for instance, you know, let's say if you're, you're, you're doing, you know, an eCommerce site and a customer has made an order, that's basically been out of the norm. Whether it be like, you know, good or bad, it would be nice. Basically, if at that point you could then have a next best action, which is where analytics comes in. But it's a very lightweight form of analytics. It's not gonna, it's actually, I think probably the best metaphor for this is real time credit scoring. It's not that they're doing your scoring you in real time. It's that the model has been computed offline so that when you come on in real time, it can make a smart decision. >>Got it. Okay. So, and I think it was your article where I, I wrote down some examples. Sure. Operational, you know, use cases, patient data. There's certainly retail. We had Forbes on earlier, right? Obviously, so very wide range of, of use cases for operational will, will Mongo, essentially, in your view, is it positioned to replace traditional R D BMS? >>Well, okay. That's a long that's, that's much, it's >>Sort of a loaded question, but >>That's, that's a very loaded question. I think that for certain cases, I think it will replace R D BMS, but I still, I mean, where I, where I depart from Mongo is I do not believe that they're going to replace all R D D BMSs. I think, for instance, like when you're doing financial transactions, you know, the world has been used to table, you know, you know, columns and rows and tables. That's, it's a natural form for something that's very structured like that. On the other hand, when you take a look, let's say OT data, or you're taking a look at home listings that tends to more naturally represent itself as documents. And so there's a, so it's kind of like documents are the way that let's say you normally see the world. Relational is the way that you would structure the world. >>Okay. Well, I like that. So, but I mean, in the early days, obviously, and even to this day, it's like the target for Mongo has been Oracle. Yeah. Right, right. And so, and then, you know, you talk to a lot of Oracle customers as do I sure. And they are running the most mission, critical applications in the world, and it's like banking and financial and so many. And, and, and, you know, they've kind of carved out that space, but are we, should we be rethinking the definition of, of mission critical? Is that changing? >>Well, number one, I think what we've traditionally associated mission critical systems with is our financial transaction systems and to a less, and also let's say systems that schedule operations. But the fact is there are many forms of operations where for instance, let's say you're in a social network, do you need to have that very latest update? Or, you know, basically, can you go off, let's say like, you know, a server that's eventually consistent. In other words, the, do you absolutely have, you know, it's just like when you go on Twitter, do you naturally see all the latest tweets? It's not the system's not gonna crash for that reason. Whereas let's say if you're doing it, you know, let's say an ATM banking ATM system, that system better be current. So I think there's a delineation. The fact is, is that in a social network, arguably that operational system is mission critical, but it's mission critical in a different way from a, you know, from, let's say a banking system. >>So coming back to this idea of, of this hybrid, I think, you know, I think Gartner calls it H tab hybrid, transactional analytics >>Is changed by >>The minute, right. I mean, you mentioned that in, in your article, but basically it's bringing analytics to transactions bringing those, those roles together. Right. Right. And you're saying with Mongo, it's, it's lightweight now take, you use two other examples in your article, my SQL heat wave. Right. I think you had a Google example as well, DB, those are, you're saying much, much heavier analytics, is that correct? Or >>I we'll put it this way. I think they're because they're coming from a relational background. And because they also are coming from companies that already have, you know, analytic database or data warehouses, if you will, that their analytic, you know, capabilities are gonna be much more fully rounded than what Mongo has at this point. It's not a criticism of a Mongo MongoDB per >>Per, is that by design though? Or ne not necessarily. Is that a function of maturity? >>I think it's function of maturity. Oh, okay. I mean, look, to a certain extent, it's also a function of design in terms of that the document model is a little, it's not impossible to basically model it for analytics, but it takes more, you know, transformation to, to decide which, you know, let's say field in that document is gonna be a column. >>Now, the big thing about some of these other, these hybrid systems is, is eliminating the need for two databases, right? Eliminating the need for, you know, complex ETL. Is, is that a value proposition that will emerge with, with Mongo in your view? >>You know, I, I mean, put it this way. I think that if you take a look at how they've, how Mongo is basically has added more function to its operations, someone talking about analytics here, for instance, adding streaming, you know, adding, adding, search, adding time series, that's a matter of like where they've eliminated the need to do, you know, transformation ETL, but that's not for analytics per se for analytics. I think through, you know, I mean through replication, there's still gonna be some transformation in terms of turning, let's say data, that's, that's formed in a document into something that's represented by columns. There is a form of transformation, you know, so that said, and Mongo is already, you know, it has some NA you know, nascent capability there, but it's all, but this is still like at a rev 1.0 level, you know, I expect a lot more >>Of so refin you, how Amazon says in the fullness of time, all workloads will be in the cloud. And we could certainly debate that. What do we mean by cloud? So, but there's a sort of analog for Mongo that I'll ask you in the fullness of time, will Mongo be in a position to replace data warehouses or data lakes? No. Or, or, or, and we know the answer is no. So that's of course, yeah. But are these two worlds on a quasi collision course? I think they >>More on a convergence course or the collision course, because number one is I said, the first principle and operational database is the last thing you wanna do is slow it down. And to do all this complex modeling that let's say that you would do in a data bricks, or very complex analytics that you would do in a snowflake that is going to get, you know, you know, no matter how much you partition the load, you know, in Atlas, and yes, you can have separate nodes. The fact is you really do not wanna burden the operational database with that. And that's not what it's meant for, but what it is meant for is, you know, can I make a smart decision on the spot? In other words, kinda like close the loop on that. And so therefore there's a, a form of lightweight analytic that you can perform in there. And actually that's also the same principle, you know, on which let's say for instance, you know, my SQL heat wave and Allo DBR based on, they're not, they're predicated on, they're not meant to replace, you know, whether it be exit data or big query, the idea there is to do more of the lightweight stuff, you know, and keep the database, you know, keep the operations, you know, >>Operating. And, but from a practitioner's standpoint, I, I, I can and should isolate you're saying that node, right. That's what they'll do. Sure. How does that affect cuz my understanding is that that the Mon Mongo specifically, but I think document databases generally will have a primary node. Right? And then you can set up secondary nodes, which then you have to think about availability, but, but would that analytic node be sort of fenced off? Is that part of the >>Well, that's actually what they're, they've already, I mean, they already laid the groundwork for it last year, by saying that you can set up separate nodes and dedicate them to analytics and what they've >>As, as a primary, >>Right? Yes, yes. For analytics and what they've added, what they're a, what they are adding this year is the fact to say like that separate node does not have to be the same instance class, you know, as, as, as, as the, >>What, what does that mean? Explain >>That in other words, it's a, you know, you could have BA you know, for instance, you could have a node for operations, that's basically very eye ops intensive, whereas you could have a node let's say for analytics that might be more compute intensive or, or more he, or, or more heavily, you know, configured with, with memory per se. And so the idea here is you can tailor in a node to the workload. So that's, you know what they're saying with, you know, and I forget what they're calling it, but the idea that you can have a different type, you can specify a different type of node, a different type of instance for the analytic node, I think is, you know, is a major step forward >>And that, and that that's enabled by the cloud and architecture. >>Of course. Yes. I mean, we're separating, compute from data is, is, is the starter. And so yeah. Then at that point you can then start to, you know, you know, to go less vanilla. I think, you know, the re you know, the, you know, the, I guess the fruition of this is going to be when they say, okay, you can run your, let's say your operational nodes, you know, dedicated, but we'll let you run your analytic nodes serverless. Can't do it yet, but I've gotta believe that's on the roadmap. >>Yeah. So seq brings a lot of overhead. So you get MQL, but now square this circle for me, cuz now you got Mago talking sequel. >>They had to start doing that some time. I mean, and I it's been a court take I've had from them from the, from the get go, which I said, I understand that you're looking at this as an alternative to SQL and that's perfectly valid, but don't deny the validity of SQL or the reason why we, you know, we need it. The fact is that you have, okay, the number, you know, according to Ty index, JavaScript is the seventh, most popular language. Most SQL follows closely behind at the ninth, most popular language you don't want to cl. And the fact is those people exist in the enterprise and they're, and they're disproportionately concentrated in analytics. I mean, you know, it's getting a little less, so now we're seeing like, you know, basically, you know, Python, the programmatic, but still, you know, a lot of sequel expertise there. It does not make, it makes no sense for Mongo to, to, to ignore or to overlook that audience. I think now they're, you know, you know, they're taking baby steps to start, you know, reaching out to them. >>It's interesting. You see it going both ways. See Oracle announces a Mongo, DB, Mongo. I mean, it's just convergence. You called it not, I love collisions, you know, >>I know it's like, because you thrive on drama and I thrive on can't. We all love each other, but you know, act. But the thing is actually, I've been, I wrote about this. I forget when I think it was like 2014 or 2016. It's when we, I was noticed I was noting basically the, you know, the rise of all these specialized databases and probably Amazon, you know, AWS is probably the best exemplar of that. I've got 15 or 16 or however, number of databases and they're all dedicated purpose. Right. But I also was, you know, basically saw that inevitably there was gonna be some overlap. It's not that all databases were gonna become one and the same we're gonna be, we're gonna become back into like the, you know, into a pan G continent or something like that. But that you're gonna have a relational database that can do JSON and, and a, and a document database that can do relational. I mean, you know, it's, to me, that's a no brainer. >>So I asked Andy Ja one time, I'd love to get your take on this, about those, you know, multiple data stores at the time. They probably had a thousand. I think they're probably up to 15 now, right? Different APIs, different S et cetera. And his response. I said, why don't you make it easier for, for customers and maybe build an abstraction or converge these? And he said, well, it's by design. What if you buy this? And, and what your thoughts are, cuz I, you know, he's a pretty straight shooter. Yeah. It's by design because it allows us as the market moves, we can move with it. And if we, if we give developers access to those low level primitives and APIs, then they can move with, with at market speed. Right. And so that again, by design, now we heard certainly Mongo poo pooing that today they didn't mention, they didn't call out Amazon. Yeah. Oracle has no compunction about specifically calling out Amazon. They do it all the time. What do you make of that? Can't Amazon have its cake and eat it too. In other words, extend some of the functionality of those specific databases without going to the Swiss army. >>I I'll put it this way. You, you kind of tapped in you're, you're sort of like, you know, killing me softly with your song there, which is that, you know, I was actually kind of went on a rant about this, actually know in, you know, come, you know, you know, my year ahead sort of out predictions. And I said, look, cloud folks, it's great that you're making individual SAS, you know, products easy to use. But now that I have to mix and match SAS products, you know, the burden of integration is on my shoulders. Start making my life easier. I think a good, you know, a good example of this would be, you know, for instance, you could take something like, you know, let's say like a Google big query. There's no reason why I can't have a piece of that that might, you know, might be paired, say, you know, say with span or something like that. >>The idea being is that if we're all working off a common, you know, common storage, we, you know, it's in cloud native, we can separate the computer engines. It means that we can use the right engine for the right part of the task. And the thing is that maybe, you know, myself as a consumer, I should not have to be choosing between big query and span. But the thing is, I should be able to say, look, I want to, you know, globally distribute database, but I also wanna do some analytics and therefore behind the scenes, you know, new microservices, it could connect the two wouldn't >>Microsoft synapse be an example of doing that. >>It should be an example. I wish I, I would love to hear more from Microsoft about this. They've been radio silent for about the past two or three years in data. You hardly hear about it, but synapse is actually those actually one of the ideas I had in mind now keep in mind that with synapse, you're not talking about, let's say, you know, I mean, it's, it's obviously a sequel data warehouse. It's not pure spark. It's basically their, it was their curated version of spark, but that's fine. But again, I would love to hear Microsoft talk more about that. They've been very quiet. >>Yeah. You, you, the intent is there to >>Simplify >>It exactly. And create an abstraction. Exactly. Yeah. They have been quiet about it. Yeah. Yeah. You would expect that, that maybe they're still trying to figure it out. So what's your prognosis from Mongo? I mean, since this company IP, you know, usually I, I tell and I tell everybody this, especially my kids, like don't buy a stock at IPO. You'll always get a better chance at a cheaper price to buy it. Yeah. And even though that was true with Mongo, you didn't have a big window. No. Like you did, for instance, with, with Facebook, certainly that's been the case with snowflake and sure. Alibaba, I mean, I name a zillion style was almost universal. Yeah. But, but since that, that, that first, you know, few months, period, this, this company has been on a roll. Right. And it, it obviously has been some volatility, but the execution has been outstanding. >>No question about that. I mean, the thing is, look what I, what I, and I'm just gonna talk on the product side on the sales side. Yeah. But on the product side, from the get go, they made a product that was easy for developers. Whereas let's say someone's giving an example, for instance, Cosmo CB, where to do certain operations. They had to go through multiple services in, you know, including Azure portal with Atlas, it's all within Atlas. So they've really, it's been kinda like design thinking from the start initially with, with the core Mongo DB, you know, you, the on premise, both this predates Atlas, I mean, part of it was that they were coming with a language that developers knew was just Javas script. The construct that they knew, which was JS on. So they started with that home core advantage, but they weren't the only ones doing that. But they did it with tooling that was very intuitive to developers that met developers, where they lived and what I give them, you know, then additional credit for is that when they went to the cloud and it wasn't an immediate thing, Atlas was not an overnight success, but they employed that same design thinking to Atlas, they made Atlas a good cloud experience. They didn't just do a lift and shift the cloud. And so that's why today basically like five or six years later, Atlas's most of their business. >>Yeah. It's what, 60% of the business now. Yeah. And then Dave, on the, on the earning scholar, maybe it wasn't Dave and somebody else in response to question said, yeah, ultimately this is the future will be be 90% of the business. I'm not gonna predict when. So my, my question is, okay, so let's call that the midterm midterm ATLA is gonna be 90% of the business with some exceptions that people just won't move to the cloud. What's next is the edge. A new opportunity is Mongo architecturally suited for the, I mean, it's certainly suited for the right, the home Depot store. Sure. You know, at the edge. Yeah. If you, if you consider that edge, which I guess it is form of edge, but how about the far edge EVs cell towers, you know, far side, real time, AI inferencing, what's the requirement there, can Mongo fit there? Any thoughts >>On that? I think the AI and the inferencing stuff is interesting. It's something which really Mongo has not tackled yet. I think we take the same principle, which is the lightweight stuff. In other words, you'll say, do let's say a classification or a prediction or some sort of prescriptive action in other words, where you're not doing some convolution, neural networking and trying to do like, you know, text, text to voice or, or, or vice versa. Well, you're not trying to do all that really fancy stuff. I think that's, you know, if you're keeping it SIM you know, kinda like the kiss principle, I think that's very much within Mongo's future. I think with the realm they have, they basically have the infrastructure to go out to the edge. I think with the fact that they've embraced GraphQL has also made them a lot more extensible. So I think they certainly do have, you know, I, I do see the edge as being, you know, you know, in, in, you know, in their, in their pathway. I do see basically lightweight analytics and lightweight, let's say machine learning definitely in their >>Future. And, but, and they would, would you agree that they're in a better position to tap that opportunity than say a snowflake or an Oracle now maybe M and a can change that. R D can maybe change that, but fundamentally from an architectural standpoint yeah. Are they in a better position? >>Good question. I think that that Mongo snowflake by virtual fact, I mean that they've been all, you know, all cloud start off with, I think makes it more difficult, not impossible to move out to the edge, but it means that, and I, and know, and I, and I said, they're really starting to making some tentative moves in that direction. I'm looking forward to next week to, you know, seeing what, you know, hearing what we're gonna, what they're gonna be saying about that. But I do think, right. You know, you know, to answer your question directly, I'd say like right now, I'd say Mongo probably has a, you know, has a head start there. >>I'm losing track of time. I could go forever with you. Tony bear DB insight with tons of insights. Thanks so much for coming back with. >>It's only one insight insight, Dave. Good to see you again. All >>Right. Good to see you. Thank you. Okay. Keep it right there. Right back at the Java center, Mongo DB world 2022, you're watching the cube.
SUMMARY :
We're at the new Javet center. You face to face and especially the ones in Vegas, it's the first time everybody's been out, you know, And, and this new venue is fantastic And like for instance, you know, sapphires had maybe about one third, their normal turnout. you just published a piece this morning in venture beat is time for Mongo It's that the model has been computed offline so that when you come on in Operational, you know, use cases, patient data. That's a long that's, that's much, it's transactions, you know, the world has been used to table, you know, you know, columns and rows and and then, you know, you talk to a lot of Oracle customers as do I sure. you know, it's just like when you go on Twitter, do you naturally see all the latest tweets? I mean, you mentioned that in, in your article, but basically it's bringing analytics to transactions bringing are coming from companies that already have, you know, analytic database or data warehouses, Per, is that by design though? but it takes more, you know, transformation to, to decide which, you know, Eliminating the need for, you know, complex ETL. I think through, you know, I mean through replication, there's still gonna be some transformation in terms of turning, but there's a sort of analog for Mongo that I'll ask you in the fullness of time, And actually that's also the same principle, you know, on which let's say for instance, And then you can set up secondary nodes, which then you have to think about availability, the fact to say like that separate node does not have to be the same instance class, you know, for the analytic node, I think is, you know, is a major step forward you know, the re you know, the, you know, the, I guess the fruition of this is going to be when they but now square this circle for me, cuz now you got Mago talking sequel. I think now they're, you know, you know, they're taking baby steps to start, you know, reaching out to them. You called it not, I love collisions, you know, I mean, you know, it's, to me, that's a no brainer. I said, why don't you make it easier for, for customers and maybe build an abstraction or converge these? I think a good, you know, a good example of this would be, you know, for instance, you could take something But the thing is, I should be able to say, look, I want to, you know, globally distribute database, let's say, you know, I mean, it's, it's obviously a sequel data warehouse. I mean, since this company IP, you know, usually I, I tell and I tell everybody this, to developers that met developers, where they lived and what I give them, you know, but how about the far edge EVs cell towers, you know, you know, you know, in, in, you know, in their, in their pathway. And, but, and they would, would you agree that they're in a better position to tap that opportunity I mean that they've been all, you know, all cloud start off with, I could go forever with you. Good to see you again. Right back at the Java center, Mongo DB
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Teresa | PERSON | 0.99+ |
Comcast | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Khalid Al Rumaihi | PERSON | 0.99+ |
Phil Soren | PERSON | 0.99+ |
Bahrain | LOCATION | 0.99+ |
Mike | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
TIBCO | ORGANIZATION | 0.99+ |
General Electric | ORGANIZATION | 0.99+ |
Teresa Carlson | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Pega | ORGANIZATION | 0.99+ |
Khalid | PERSON | 0.99+ |
Tony Baer | PERSON | 0.99+ |
Asia | LOCATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
$100 million | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Sunnyvale | LOCATION | 0.99+ |
March 2015 | DATE | 0.99+ |
Dave | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
46% | QUANTITY | 0.99+ |
90% | QUANTITY | 0.99+ |
Todd Nielsen | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
September | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
July | DATE | 0.99+ |
US | LOCATION | 0.99+ |
Atlas | ORGANIZATION | 0.99+ |
Bahrain Economic Development Board | ORGANIZATION | 0.99+ |
Kuwait | LOCATION | 0.99+ |
Malta | LOCATION | 0.99+ |
Hong Kong | LOCATION | 0.99+ |
Singapore | LOCATION | 0.99+ |
2012 | DATE | 0.99+ |
Gulf Cooperation Council | ORGANIZATION | 0.99+ |
So Cal | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
United States | LOCATION | 0.99+ |
Vegas | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
New York | LOCATION | 0.99+ |
Tony Coleman, Temenos and Boris Bialek, MongoDB | MongoDB World 2022
>>Yeah, yeah, yeah. We're back at the center of the coverage of the world 20 twenty-two, the first live event in three years. Pretty amazing. And I'm really excited to have Tony Coleman. Here is the c e o of those who changing the finance and banking industry. And this is the global head of industry solutions. That would be welcome. Back to the cube. Welcome. First time. Um, so thanks for coming on. Thank you. >>Thanks for having us, >>Tony. Tell us about what are you guys up to? Disrupting the finance world. >>So tomorrow is everyone's banking platform. So we are a software company. We have over 3000 financial institutions around the world. Marketing tell me that that works out is over 1.2 billion people rely on terminal software for their banking and financial needs. 41 of the top 50 banks in the world run software and we are very proud to be powering all of those entities on their innovation journeys and bringing you know, that digital transformation that we've seen so much all over the past few years and enabling a lot of the world's unbanked through digital banking become, you know, members of the >>community. So basically you're bringing the software platform to enable that to somebody you don't have to build it themselves because they never get there. Absolutely. And and so that's why I don't know if you consider that disruptive. I guess I do to the industry to a certain extent. But when you think of disruption in the business, you think of Blockchain and crypto, and 50 is that is completely separate world and you guys participate in that as well. Well, I >>would say it's related right? I mean, I was doing a podcast recently and they had this idea of, um, buzzword jail where you could choose words to go into jail and I said 50 not because I think they're intrinsically bad, but I think just at the moment they are a rife for scam area. I think it's one of those one of these technologies and investment area that people don't understand it, and there's a lot of a lot of mistakes that can be made in that, >>Yeah, >>I mean, it's a fascinating piece that it could be truly transformative if we get it right, but it's very emerging, so we'll see so don't play a huge part in the Blockchain industry directly. We work with partners in that space, but in terms of digital assets and that sort of thing. Yeah, absolutely. >>So, Boris, you have industry solutions in your title. What does that entail? So >>basically, I'm responsible for all the verticals, and that includes great partners like Tony. And we're doing a lot of verticals by now. When you listen. Today in all these various talks, we have so much stuff ranging from banking, go retail, healthcare, insurance, you name it, we have it by now. And that's obviously the clients moving from the edge solution. Like touching a little toe in the water, but longer to going all in building biggest solutions you saw on stage the lady from this morning. These are not second Great. Yeah, we do something small now. We're part of the transformation journey. And this is where Tony and I can regularly together how we transform things and how we built a new way of banking is done with Michael services and technology surrounding it. Yeah, >>but what about performance in this world? Can you tell me about that? >>Yeah. This is an interesting thing because people always challenging what is performance and document databases. And Tony challenged us actually, six weeks before his own show several weeks ago in London and says, Boris, let's do a benchmark And maybe you bring your story because if I get too excited, I follow. >>Yeah, sure, that performance and efficiency topics close close to my heart. I have been for for years. And so, yeah, we every two or three years, we run a high water. We've got a high water benchmark, and this year we sort of double down literally double down on everything we did previously. So this was 200 million accounts, 100 million customers, and we were thrashing through 102,800 seventy-five transactions a second, which is a phenomenal number. And, uh, >>can I do that on the Blockchain? >>Wow. Yeah, exactly. Right. So this is you know, I get asked why we do such high numbers and the reason is very straightforward. If somebody wants 10,000 transactions a second, we're seeing banks now that need that sort of thing. If I can give them a benchmark report, this is 100,000. I don't need to keep doing benchmarks. 10. >>Yeah. Tell me more about the Anytime you get into benchmarks, you want to understand the configuration. The workload. Tell me more about that. So we have >>a pretty well path of a standard transaction mix. We call it a retail transaction mix. And so it's the tries to the workload. Is that because it's a simulation right around what you would do in your daily basis? So you're going to make payments you're going to check? Your balance is you're going to see what he's moved on your account. So we do all of that and we run it through a proper production, good environment. And this is really important. This is something we do in the lab you couldn't go live on. This is all all of the horrible, non functional requirements around high availability, >>security, security passes, private wings, all these things. And one thing is, they're doing this for a long time. So this is not like let's define something new for the world. Now, this is something Tony's doing for literally 10, 15 years now, right? >>It was only 15 years, but this >>is your benchmark >>top >>developed Okay, >>so we run it through and, um yeah, some fantastic numbers. And not just on the share sort of top-level numbers 100,000 transactions. A second response time out of it was fantastic. One-millisecond, which is just brilliant. So it means you get these really efficient numbers what that helped us do with, you know, some of the other partners that are involved in the benchmark as well. It meant that our throughput court, which is a really good measure of efficiency, is up to four times better than we ran it three years ago. So in terms of a sustainability piece, which is so important that that's really a huge improvement, that's down to application changes, architect changes as well as using appropriate technology in the right place. >>How important? With things like the number, of course, the memory size is the block sizes. All that stuff. >>We are very tiny. So this is the part. When I talk to people, we have what we call a system in the back of people. Look at me. Um, how many transactions on that one? So, to be fair, three-quarters, we're going to be one quarter or something else because we're still putting some components of and start procedures for disclosure. But when I think Seventy-five 1000 transactions on a single single 80 system, which is thirty-two cause you're saying correctly, something like that. This is a tiny machine in the world of banking. So before this was the main friends and now it's wonderful instance on a W s. And this is really amazing. Costed and environmental footprint is so, so important >>and there's a heavy right heavy environment. >>So the the way we the way we architect the solution is it follows something called a command query responsibility, segregated segregation. So what we do, we do all the commands inappropriate database for that piece, and that was running at about Twenty-five 1000 transactions a second and then we're streaming the data out of that directly into So actually I was doing more than the Seventy-five 1000 queries. A second, which is the part of it was also investing Twenty-five 1000 transactions the second at the same time >>and okay, and the workload had a high locality medium locality. It was just give us a picture of what that's like. Sorry. So, >>yeah, >>we don't have that. Yeah, >>so explain that That's not That's not the mindset for a document. Exactly. >>Exactly. In the document database, you don't have the hot spotting the one single field off the table, which is suddenly hot spotting. And now you have literally and recovery comes up and we say, What goes, goes together, get together belongs together, comes out together. So the number of, for example, it's much, much smaller and the document system, then historically, relationship. >>So it is not a good good indicator, necessarily >>anymore. That's what this is so much reduced. The number of access patterns are smaller, and I mean it is highly optimized, for example, internally as well. The internal structures, so that was very close to a >>traditional benchmark, would have a cash in front of a high cash rate. So 100 and 99% right, That's a high locality reference. But that's that's irrelevant. >>It's gone. There's no cashing in the middle anymore. It goes straight against the database. All these things are out, and that's what makes it so exciting and all the things in a real environment. I think we really need to stress it. It's not a test that at home. It's a real life environment out into the wild with the benchmark driving and driving. >>How did your customers respond? You did this for your recent event? >>Yeah, we did it for our use. A conference, our community for, um, which was a few weeks ago in London. Um, and the You know, the reaction was Certainly it was a great reception, of course, but the main thing that people are fascinated about, how much more efficient the whole platform it's explaining. So you know when we can run and it's a great number that we've got the team pulled out, which is so having doubled throughput on the platform from what we did three years ago, we're actually using 20% less infrastructure to give double the performance. Uh, macro-level, that's a phenomenal achievement. And that means that these changes that we make everything that we're doing benefits all of our customers. So all of the banks, when they take the latest release, is they get these benefits. Everything is that much more efficient. So everybody benefits from every investment, >>and this was running in the cloud. Is that correct? You're running out of this. >>So this was list, Um, 80 on a W s with a W s cases and processes. And so it was a really reality driven environment, >>pure pure cloud-native or using mana services on a W s. And then at least for the peace. It's >>awesome. I mean, uh, So now how convenient for the timing from, uh, the world. How are you socializing with your community? >>We're having this afternoon session as well, where we talk a little bit more detail about that, and he has a session as well tomorrow. So we see a lot of good feedback as well when we bring it up with clients. Obviously some clients get very specific because this reduction footprint is so huge when you think a client has 89 environments from early development systems to production to emergency standby, maybe a different cloud. All these things what day talks about the different Atlas features multi cloud environmentally. All this stuff comes to play. And this is why I'm so excited to work with them. We should bring up as well the other things which are available to ready already with your front and solutions with Infinity services because that's the other part of the modernization, the Michael Services, which Tony so politely not mentioning. So there's a lot of cool technology into that one, which fits to how it works in micros services. Happy I first all these what they called factors. Micro service a p. I cloud-native headless. I think that was the right order now. So all these things are reflected as well. But with their leadership chief now, I think a lot of companies have to play Catch-up now to what Tony and his team are delivering on the bank. This >>gets the modernization. We really haven't explicitly talks about that. Everything you've just said talks to modernization. So you typically in financial services find a lot of relation. Database twenty-year-old, hardened, etcetera, high availability. Give them credit for that. But a lot of times you'll see them just shift that into the cloud. You guys chose not to do that. What was the modernization journey look like? >>So it's a bit of, um yeah, a firm believer in pragmatism and using. I think you touched on earlier the appropriate technology. So >>horses for courses >>exactly right out of my mouth. And I was talking to one of the uh, the investor analysts earlier. And you know, the exact same question comes up, right? So if you've got a relation database or you've got a big legacy system and you're not gonna mainframe or whatever it is and you wanna pull that over when you it's not just a case of moving the data model from one paradigm to another. You need to look at it holistically, and you need to be ambitious. I think the industry has got, you know, quite nervous about some of these transformation projects, but in some ways it might be counter intuitive. I think being ambitious and being in bold is a better way. Better way through, you know, take take of you, look at it holistically. Layout of plan. It is hard. It is hard to do these sorts of transformations, but that's what makes it the challenge. That's what makes it fun. Take take those bold steps. Look at it holistically. Look at the end state and then work out a practical way. You can deliver value to the business and your customers as you deliver on the road. So >>did you migrate from a traditional R D B. M s to go. >>So So, Yeah, this is a conversation. So, uh, in the late nineties, the kind of the phrase document model hasn't really been coined yet. And for some of our work at the time, we refer to as a hierarchical model. Um, And at that point in time, really, if you wanted to sell to a bank, you needed to be running Oracle. So we took this data model and we got it running an article and then other relational databases as well, but actually under the colors there it is, sort of as well. So there is a project that we're looking at to say Well, okay, taking that model, which is in a relational database. And of course, you build over time, you do rely on some of the features of relations databases moving that over to something like, isn't it? You know, it's not quite as simple as just changing the data model. Um, so there's a few bits and pieces that we need to work through, but there is a concept that we are running, which is looking really promising and spurred on by the amazing results from the benchmark. That could be something That's really >>yeah, I think you know, 20 years ago you probably wouldn't even thought about it. It's just too risky. But today, with the modern tools and the cloud and you're talking about micro services and containers, it becomes potentially more feasible. >>But the other side of it is, you know, it's only relatively recently the Mongo who's had transaction support across multiple document multi collection transactions and in banking. As we all know, you know, it's highly regulated. That is, all of your worst possible non functional requirement. Security transaction reality. Thomas City You know, the whole the whole shebang. Your worst possible nightmare is Monday morning for >>us. So and I think one part which is exciting about this Tony is a very good practical example about this large scale modernization and cutting out by cutting off that layer and going back to the hierarchical internal structures. We're simply find a lot of the backing components of our because obviously translation which was done before, it's not need it anymore. And that is as well for me, an exciting example to see how long it takes what it is. So Tony space in my life experiments so to speak >>well, you're right because it used to be those migrations. Where how many line of code? How long do I have to freeze it? And that a lot of times lead people to say, Well, forget it, because the business is going to shut down. >>But now we do that. We do that. So I'm working, obviously, besides the work with a lot of financial clients, and but now it's my job is normally shift and left a pain in the game because the result of the work is when they move everything to the cloud and it was bad before. It will not be better in the cloud only because it's in somebody else's data center. So these modernization and innovation factor is absolutely critical. And it's only said that people get it by now. This shift and left over it is how can I innovate? How can accelerate innovation, and that leads very quickly to the document model discussion. >>Yeah, I think the world practitioners will tell you, if you really want to affect the operational model, have a meaningful impact on your business. You have to really modernized. You can't just lift shift that they're absolutely. You know, what's the difference between hundreds of millions or billions in some cases, versus, you know, some nice little hits here or there. >>So we see as well a lot of clients asking for solutions like the terminal solutions. And like others where there is not anymore discussion about how to move to the The question is how fast how can accelerate. We see the services request the first one. It's amazing. After the event, what we had in London, 100 clients calling us. So it's not our sales people calling upon the clients, the clients coming in. I saw it. How do we get started? And that is for me, from the vendor perspective, so to speak. Amazing moment >>yourself. You go, guys, we're gonna go. Thanks so much for that. You have to have you back and see how that goes. That. Yeah, that's a big story of if you're a great All right, keep it right there. Everybody will be right back. This is David for the Cube. You're watching our live coverage of mongo D B World 20 twenty-two from New York City. >>Yeah, >>Yeah, yeah, yeah, yeah
SUMMARY :
Here is the c e o of those Disrupting the finance world. So we are a software And and so that's why I don't know if you consider that disruptive. of, um, buzzword jail where you could choose words to go into I mean, it's a fascinating piece that it could be truly transformative if we get it right, So, Boris, you have industry solutions in your title. And that's obviously the clients moving show several weeks ago in London and says, Boris, let's do a benchmark And maybe you bring your story So this was 200 million accounts, 100 million customers, So this is you know, So we have This is something we do in the lab you couldn't go live on. So this is not like let's define something new for the world. So it means you get these really efficient numbers what that helped us do with, All that stuff. When I talk to people, we have what we call a system So the the way we the way we architect the solution is it follows something and okay, and the workload had a high locality medium locality. we don't have that. so explain that That's not That's not the mindset for a document. In the document database, you don't have the hot spotting the one single field so that was very close to a So 100 and It's a real life environment out into the wild with the benchmark driving and driving. So all of the banks, when they take the latest release, is they get these benefits. and this was running in the cloud. So this was list, Um, 80 on a W s with a W s cases And then at least for the peace. the timing from, uh, the world. So we see a lot of good feedback as well when we bring it So you typically in financial I think you touched on earlier the appropriate technology. And you know, the exact same question comes up, So So, Yeah, this is a conversation. yeah, I think you know, 20 years ago you probably wouldn't even thought about it. But the other side of it is, you know, it's only relatively recently the the backing components of our because obviously translation which was done before, it's not need it anymore. And that a lot of times lead people to say, of financial clients, and but now it's my job is normally shift and left a pain in the what's the difference between hundreds of millions or billions in some cases, versus, you know, So we see as well a lot of clients asking for solutions like You have to have you back and see how that goes.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Boris | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
100,000 | QUANTITY | 0.99+ |
London | LOCATION | 0.99+ |
Tony Coleman | PERSON | 0.99+ |
100 | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
Temenos | PERSON | 0.99+ |
41 | QUANTITY | 0.99+ |
100 clients | QUANTITY | 0.99+ |
one quarter | QUANTITY | 0.99+ |
New York City | LOCATION | 0.99+ |
Boris Bialek | PERSON | 0.99+ |
99% | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
three years | QUANTITY | 0.99+ |
Monday morning | DATE | 0.99+ |
One-millisecond | QUANTITY | 0.99+ |
100 million customers | QUANTITY | 0.99+ |
89 environments | QUANTITY | 0.99+ |
thirty-two | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
100,000 transactions | QUANTITY | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
hundreds of millions | QUANTITY | 0.99+ |
102,800 seventy-five transactions | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
Michael Services | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
First time | QUANTITY | 0.98+ |
billions | QUANTITY | 0.98+ |
three-quarters | QUANTITY | 0.98+ |
20 years ago | DATE | 0.98+ |
first one | QUANTITY | 0.98+ |
several weeks ago | DATE | 0.98+ |
Twenty-five 1000 transactions | QUANTITY | 0.98+ |
late nineties | DATE | 0.98+ |
80 | QUANTITY | 0.98+ |
David | PERSON | 0.98+ |
over 3000 financial institutions | QUANTITY | 0.98+ |
three years ago | DATE | 0.98+ |
MongoDB | ORGANIZATION | 0.98+ |
over 1.2 billion people | QUANTITY | 0.97+ |
Today | DATE | 0.97+ |
today | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
200 million accounts | QUANTITY | 0.96+ |
Seventy-five 1000 queries | QUANTITY | 0.96+ |
Seventy-five 1000 transactions | QUANTITY | 0.96+ |
one thing | QUANTITY | 0.95+ |
15 years | QUANTITY | 0.95+ |
about Twenty-five 1000 transactions | QUANTITY | 0.95+ |
this morning | DATE | 0.94+ |
few weeks ago | DATE | 0.94+ |
one paradigm | QUANTITY | 0.94+ |
twenty-year-old | QUANTITY | 0.93+ |
one part | QUANTITY | 0.93+ |
second response | QUANTITY | 0.93+ |
Thomas City | PERSON | 0.93+ |
more | QUANTITY | 0.92+ |
one single field | QUANTITY | 0.92+ |
10, 15 years | QUANTITY | 0.92+ |
10,000 transactions a second | QUANTITY | 0.92+ |
50 banks | QUANTITY | 0.92+ |
Michael | PERSON | 0.92+ |
first | QUANTITY | 0.91+ |
first live event | QUANTITY | 0.9+ |
mongo D B World 20 twenty-two | TITLE | 0.9+ |
six weeks | DATE | 0.9+ |
Infinity services | ORGANIZATION | 0.83+ |
20 twenty-two | QUANTITY | 0.83+ |
single single 80 system | QUANTITY | 0.8+ |
Atlas | ORGANIZATION | 0.8+ |
50 | QUANTITY | 0.75+ |
four times | QUANTITY | 0.72+ |
for years | QUANTITY | 0.68+ |
a second | QUANTITY | 0.63+ |
every two | QUANTITY | 0.61+ |
double | QUANTITY | 0.59+ |
up | QUANTITY | 0.57+ |
Data Power Panel V3
(upbeat music) >> The stampede to cloud and massive VC investments has led to the emergence of a new generation of object store based data lakes. And with them two important trends, actually three important trends. First, a new category that combines data lakes and data warehouses aka the lakehouse is emerged as a leading contender to be the data platform of the future. And this novelty touts the ability to address data engineering, data science, and data warehouse workloads on a single shared data platform. The other major trend we've seen is query engines and broader data fabric virtualization platforms have embraced NextGen data lakes as platforms for SQL centric business intelligence workloads, reducing, or somebody even claim eliminating the need for separate data warehouses. Pretty bold. However, cloud data warehouses have added complimentary technologies to bridge the gaps with lakehouses. And the third is many, if not most customers that are embracing the so-called data fabric or data mesh architectures. They're looking at data lakes as a fundamental component of their strategies, and they're trying to evolve them to be more capable, hence the interest in lakehouse, but at the same time, they don't want to, or can't abandon their data warehouse estate. As such we see a battle royale is brewing between cloud data warehouses and cloud lakehouses. Is it possible to do it all with one cloud center analytical data platform? Well, we're going to find out. My name is Dave Vellante and welcome to the data platform's power panel on theCUBE. Our next episode in a series where we gather some of the industry's top analysts to talk about one of our favorite topics, data. In today's session, we'll discuss trends, emerging options, and the trade offs of various approaches and we'll name names. Joining us today are Sanjeev Mohan, who's the principal at SanjMo, Tony Baers, principal at dbInsight. And Doug Henschen is the vice president and principal analyst at Constellation Research. Guys, welcome back to theCUBE. Great to see you again. >> Thank guys. Thank you. >> Thank you. >> So it's early June and we're gearing up with two major conferences, there's several database conferences, but two in particular that were very interested in, Snowflake Summit and Databricks Data and AI Summit. Doug let's start off with you and then Tony and Sanjeev, if you could kindly weigh in. Where did this all start, Doug? The notion of lakehouse. And let's talk about what exactly we mean by lakehouse. Go ahead. >> Yeah, well you nailed it in your intro. One platform to address BI data science, data engineering, fewer platforms, less cost, less complexity, very compelling. You can credit Databricks for coining the term lakehouse back in 2020, but it's really a much older idea. You can go back to Cloudera introducing their Impala database in 2012. That was a database on top of Hadoop. And indeed in that last decade, by the middle of that last decade, there were several SQL on Hadoop products, open standards like Apache Drill. And at the same time, the database vendors were trying to respond to this interest in machine learning and the data science. So they were adding SQL extensions, the likes Hudi and Vertical we're adding SQL extensions to support the data science. But then later in that decade with the shift to cloud and object storage, you saw the vendor shift to this whole cloud, and object storage idea. So you have in the database camp Snowflake introduce Snowpark to try to address the data science needs. They introduced that in 2020 and last year they announced support for Python. You also had Oracle, SAP jumped on this lakehouse idea last year, supporting both the lake and warehouse single vendor, not necessarily quite single platform. Google very recently also jumped on the bandwagon. And then you also mentioned, the SQL engine camp, the Dremios, the Ahanas, the Starbursts, really doing two things, a fabric for distributed access to many data sources, but also very firmly planning that idea that you can just have the lake and we'll help you do the BI workloads on that. And then of course, the data lake camp with the Databricks and Clouderas providing a warehouse style deployments on top of their lake platforms. >> Okay, thanks, Doug. I'd be remiss those of you who me know that I typically write my own intros. This time my colleagues fed me a lot of that material. So thank you. You guys make it easy. But Tony, give us your thoughts on this intro. >> Right. Well, I very much agree with both of you, which may not make for the most exciting television in terms of that it has been an evolution just like Doug said. I mean, for instance, just to give an example when Teradata bought AfterData was initially seen as a hardware platform play. In the end, it was basically, it was all those after functions that made a lot of sort of big data analytics accessible to SQL. (clears throat) And so what I really see just in a more simpler definition or functional definition, the data lakehouse is really an attempt by the data lake folks to make the data lake friendlier territory to the SQL folks, and also to get into friendly territory, to all the data stewards, who are basically concerned about the sprawl and the lack of control in governance in the data lake. So it's really kind of a continuing of an ongoing trend that being said, there's no action without counter action. And of course, at the other end of the spectrum, we also see a lot of the data warehouses starting to edit things like in database machine learning. So they're certainly not surrendering without a fight. Again, as Doug was mentioning, this has been part of a continual blending of platforms that we've seen over the years that we first saw in the Hadoop years with SQL on Hadoop and data warehouses starting to reach out to cloud storage or should say the HDFS and then with the cloud then going cloud native and therefore trying to break the silos down even further. >> Now, thank you. And Sanjeev, data lakes, when we first heard about them, there were such a compelling name, and then we realized all the problems associated with them. So pick it up from there. What would you add to Doug and Tony? >> I would say, these are excellent points that Doug and Tony have brought to light. The concept of lakehouse was going on to your point, Dave, a long time ago, long before the tone was invented. For example, in Uber, Uber was trying to do a mix of Hadoop and Vertical because what they really needed were transactional capabilities that Hadoop did not have. So they weren't calling it the lakehouse, they were using multiple technologies, but now they're able to collapse it into a single data store that we call lakehouse. Data lakes, excellent at batch processing large volumes of data, but they don't have the real time capabilities such as change data capture, doing inserts and updates. So this is why lakehouse has become so important because they give us these transactional capabilities. >> Great. So I'm interested, the name is great, lakehouse. The concept is powerful, but I get concerned that it's a lot of marketing hype behind it. So I want to examine that a bit deeper. How mature is the concept of lakehouse? Are there practical examples that really exist in the real world that are driving business results for practitioners? Tony, maybe you could kick that off. >> Well, put it this way. I think what's interesting is that both data lakes and data warehouse that each had to extend themselves. To believe the Databricks hype it's that this was just a natural extension of the data lake. In point of fact, Databricks had to go outside its core technology of Spark to make the lakehouse possible. And it's a very similar type of thing on the part with data warehouse folks, in terms of that they've had to go beyond SQL, In the case of Databricks. There have been a number of incremental improvements to Delta lake, to basically make the table format more performative, for instance. But the other thing, I think the most dramatic change in all that is in their SQL engine and they had to essentially pretty much abandon Spark SQL because it really, in off itself Spark SQL is essentially stop gap solution. And if they wanted to really address that crowd, they had to totally reinvent SQL or at least their SQL engine. And so Databricks SQL is not Spark SQL, it is not Spark, it's basically SQL that it's adapted to run in a Spark environment, but the underlying engine is C++, it's not scale or anything like that. So Databricks had to take a major detour outside of its core platform to do this. So to answer your question, this is not mature because these are all basically kind of, even though the idea of blending platforms has been going on for well over a decade, I would say that the current iteration is still fairly immature. And in the cloud, I could see a further evolution of this because if you think through cloud native architecture where you're essentially abstracting compute from data, there is no reason why, if let's say you are dealing with say, the same basically data targets say cloud storage, cloud object storage that you might not apportion the task to different compute engines. And so therefore you could have, for instance, let's say you're Google, you could have BigQuery, perform basically the types of the analytics, the SQL analytics that would be associated with the data warehouse and you could have BigQuery ML that does some in database machine learning, but at the same time for another part of the query, which might involve, let's say some deep learning, just for example, you might go out to let's say the serverless spark service or the data proc. And there's no reason why Google could not blend all those into a coherent offering that's basically all triggered through microservices. And I just gave Google as an example, if you could generalize that with all the other cloud or all the other third party vendors. So I think we're still very early in the game in terms of maturity of data lakehouses. >> Thanks, Tony. So Sanjeev, is this all hype? What are your thoughts? >> It's not hype, but completely agree. It's not mature yet. Lakehouses have still a lot of work to do, so what I'm now starting to see is that the world is dividing into two camps. On one hand, there are people who don't want to deal with the operational aspects of vast amounts of data. They are the ones who are going for BigQuery, Redshift, Snowflake, Synapse, and so on because they want the platform to handle all the data modeling, access control, performance enhancements, but these are trade off. If you go with these platforms, then you are giving up on vendor neutrality. On the other side are those who have engineering skills. They want the independence. In other words, they don't want vendor lock in. They want to transform their data into any number of use cases, especially data science, machine learning use case. What they want is agility via open file formats using any compute engine. So why do I say lakehouses are not mature? Well, cloud data warehouses they provide you an excellent user experience. That is the main reason why Snowflake took off. If you have thousands of cables, it takes minutes to get them started, uploaded into your warehouse and start experimentation. Table formats are far more resonating with the community than file formats. But once the cost goes up of cloud data warehouse, then the organization start exploring lakehouses. But the problem is lakehouses still need to do a lot of work on metadata. Apache Hive was a fantastic first attempt at it. Even today Apache Hive is still very strong, but it's all technical metadata and it has so many different restrictions. That's why we see Databricks is investing into something called Unity Catalog. Hopefully we'll hear more about Unity Catalog at the end of the month. But there's a second problem. I just want to mention, and that is lack of standards. All these open source vendors, they're running, what I call ego projects. You see on LinkedIn, they're constantly battling with each other, but end user doesn't care. End user wants a problem to be solved. They want to use Trino, Dremio, Spark from EMR, Databricks, Ahana, DaaS, Frink, Athena. But the problem is that we don't have common standards. >> Right. Thanks. So Doug, I worry sometimes. I mean, I look at the space, we've debated for years, best of breed versus the full suite. You see AWS with whatever, 12 different plus data stores and different APIs and primitives. You got Oracle putting everything into its database. It's actually done some interesting things with MySQL HeatWave, so maybe there's proof points there, but Snowflake really good at data warehouse, simplifying data warehouse. Databricks, really good at making lakehouses actually more functional. Can one platform do it all? >> Well in a word, I can't be best at breed at all things. I think the upshot of and cogen analysis from Sanjeev there, the database, the vendors coming out of the database tradition, they excel at the SQL. They're extending it into data science, but when it comes to unstructured data, data science, ML AI often a compromise, the data lake crowd, the Databricks and such. They've struggled to completely displace the data warehouse when it really gets to the tough SLAs, they acknowledge that there's still a role for the warehouse. Maybe you can size down the warehouse and offload some of the BI workloads and maybe and some of these SQL engines, good for ad hoc, minimize data movement. But really when you get to the deep service level, a requirement, the high concurrency, the high query workloads, you end up creating something that's warehouse like. >> Where do you guys think this market is headed? What's going to take hold? Which projects are going to fade away? You got some things in Apache projects like Hudi and Iceberg, where do they fit Sanjeev? Do you have any thoughts on that? >> So thank you, Dave. So I feel that table formats are starting to mature. There is a lot of work that's being done. We will not have a single product or single platform. We'll have a mixture. So I see a lot of Apache Iceberg in the news. Apache Iceberg is really innovating. Their focus is on a table format, but then Delta and Apache Hudi are doing a lot of deep engineering work. For example, how do you handle high concurrency when there are multiple rights going on? Do you version your Parquet files or how do you do your upcerts basically? So different focus, at the end of the day, the end user will decide what is the right platform, but we are going to have multiple formats living with us for a long time. >> Doug is Iceberg in your view, something that's going to address some of those gaps in standards that Sanjeev was talking about earlier? >> Yeah, Delta lake, Hudi, Iceberg, they all address this need for consistency and scalability, Delta lake open technically, but open for access. I don't hear about Delta lakes in any worlds, but Databricks, hearing a lot of buzz about Apache Iceberg. End users want an open performance standard. And most recently Google embraced Iceberg for its recent a big lake, their stab at having supporting both lakes and warehouses on one conjoined platform. >> And Tony, of course, you remember the early days of the sort of big data movement you had MapR was the most closed. You had Horton works the most open. You had Cloudera in between. There was always this kind of contest as to who's the most open. Does that matter? Are we going to see a repeat of that here? >> I think it's spheres of influence, I think, and Doug very much was kind of referring to this. I would call it kind of like the MongoDB syndrome, which is that you have... and I'm talking about MongoDB before they changed their license, open source project, but very much associated with MongoDB, which basically, pretty much controlled most of the contributions made decisions. And I think Databricks has the same iron cloud hold on Delta lake, but still the market is pretty much associated Delta lake as the Databricks, open source project. I mean, Iceberg is probably further advanced than Hudi in terms of mind share. And so what I see that's breaking down to is essentially, basically the Databricks open source versus the everything else open source, the community open source. So I see it's a very similar type of breakdown that I see repeating itself here. >> So by the way, Mongo has a conference next week, another data platform is kind of not really relevant to this discussion totally. But in the sense it is because there's a lot of discussion on earnings calls these last couple of weeks about consumption and who's exposed, obviously people are concerned about Snowflake's consumption model. Mongo is maybe less exposed because Atlas is prominent in the portfolio, blah, blah, blah. But I wanted to bring up the little bit of controversy that we saw come out of the Snowflake earnings call, where the ever core analyst asked Frank Klutman about discretionary spend. And Frank basically said, look, we're not discretionary. We are deeply operationalized. Whereas he kind of poo-pooed the lakehouse or the data lake, et cetera, saying, oh yeah, data scientists will pull files out and play with them. That's really not our business. Do any of you have comments on that? Help us swing through that controversy. Who wants to take that one? >> Let's put it this way. The SQL folks are from Venus and the data scientists are from Mars. So it means it really comes down to it, sort that type of perception. The fact is, is that, traditionally with analytics, it was very SQL oriented and that basically the quants were kind of off in their corner, where they're using SaaS or where they're using Teradata. It's really a great leveler today, which is that, I mean basic Python it's become arguably one of the most popular programming languages, depending on what month you're looking at, at the title index. And of course, obviously SQL is, as I tell the MongoDB folks, SQL is not going away. You have a large skills base out there. And so basically I see this breaking down to essentially, you're going to have each group that's going to have its own natural preferences for its home turf. And the fact that basically, let's say the Python and scale of folks are using Databricks does not make them any less operational or machine critical than the SQL folks. >> Anybody else want to chime in on that one? >> Yeah, I totally agree with that. Python support in Snowflake is very nascent with all of Snowpark, all of the things outside of SQL, they're very much relying on partners too and make things possible and make data science possible. And it's very early days. I think the bottom line, what we're going to see is each of these camps is going to keep working on doing better at the thing that they don't do today, or they're new to, but they're not going to nail it. They're not going to be best of breed on both sides. So the SQL centric companies and shops are going to do more data science on their database centric platform. That data science driven companies might be doing more BI on their leagues with those vendors and the companies that have highly distributed data, they're going to add fabrics, and maybe offload more of their BI onto those engines, like Dremio and Starburst. >> So I've asked you this before, but I'll ask you Sanjeev. 'Cause Snowflake and Databricks are such great examples 'cause you have the data engineering crowd trying to go into data warehousing and you have the data warehousing guys trying to go into the lake territory. Snowflake has $5 billion in the balance sheet and I've asked you before, I ask you again, doesn't there has to be a semantic layer between these two worlds? Does Snowflake go out and do M&A and maybe buy ad scale or a data mirror? Or is that just sort of a bandaid? What are your thoughts on that Sanjeev? >> I think semantic layer is the metadata. The business metadata is extremely important. At the end of the day, the business folks, they'd rather go to the business metadata than have to figure out, for example, like let's say, I want to update somebody's email address and we have a lot of overhead with data residency laws and all that. I want my platform to give me the business metadata so I can write my business logic without having to worry about which database, which location. So having that semantic layer is extremely important. In fact, now we are taking it to the next level. Now we are saying that it's not just a semantic layer, it's all my KPIs, all my calculations. So how can I make those calculations independent of the compute engine, independent of the BI tool and make them fungible. So more disaggregation of the stack, but it gives us more best of breed products that the customers have to worry about. >> So I want to ask you about the stack, the modern data stack, if you will. And we always talk about injecting machine intelligence, AI into applications, making them more data driven. But when you look at the application development stack, it's separate, the database is tends to be separate from the data and analytics stack. Do those two worlds have to come together in the modern data world? And what does that look like organizationally? >> So organizationally even technically I think it is starting to happen. Microservices architecture was a first attempt to bring the application and the data world together, but they are fundamentally different things. For example, if an application crashes, that's horrible, but Kubernetes will self heal and it'll bring the application back up. But if a database crashes and corrupts your data, we have a huge problem. So that's why they have traditionally been two different stacks. They are starting to come together, especially with data ops, for instance, versioning of the way we write business logic. It used to be, a business logic was highly embedded into our database of choice, but now we are disaggregating that using GitHub, CICD the whole DevOps tool chain. So data is catching up to the way applications are. >> We also have databases, that trans analytical databases that's a little bit of what the story is with MongoDB next week with adding more analytical capabilities. But I think companies that talk about that are always careful to couch it as operational analytics, not the warehouse level workloads. So we're making progress, but I think there's always going to be, or there will long be a separate analytical data platform. >> Until data mesh takes over. (all laughing) Not opening a can of worms. >> Well, but wait, I know it's out of scope here, but wouldn't data mesh say, hey, do take your best of breed to Doug's earlier point. You can't be best of breed at everything, wouldn't data mesh advocate, data lakes do your data lake thing, data warehouse, do your data lake, then you're just a node on the mesh. (Tony laughs) Now you need separate data stores and you need separate teams. >> To my point. >> I think, I mean, put it this way. (laughs) Data mesh itself is a logical view of the world. The data mesh is not necessarily on the lake or on the warehouse. I think for me, the fear there is more in terms of, the silos of governance that could happen and the silo views of the world, how we redefine. And that's why and I want to go back to something what Sanjeev said, which is that it's going to be raising the importance of the semantic layer. Now does Snowflake that opens a couple of Pandora's boxes here, which is one, does Snowflake dare go into that space or do they risk basically alienating basically their partner ecosystem, which is a key part of their whole appeal, which is best of breed. They're kind of the same situation that Informatica was where in the early 2000s, when Informatica briefly flirted with analytic applications and realized that was not a good idea, need to redouble down on their core, which was data integration. The other thing though, that raises the importance of and this is where the best of breed comes in, is the data fabric. My contention is that and whether you use employee data mesh practice or not, if you do employee data mesh, you need data fabric. If you deploy data fabric, you don't necessarily need to practice data mesh. But data fabric at its core and admittedly it's a category that's still very poorly defined and evolving, but at its core, we're talking about a common meta data back plane, something that we used to talk about with master data management, this would be something that would be more what I would say basically, mutable, that would be more evolving, basically using, let's say, machine learning to kind of, so that we don't have to predefine rules or predefine what the world looks like. But so I think in the long run, what this really means is that whichever way we implement on whichever physical platform we implement, we need to all be speaking the same metadata language. And I think at the end of the day, regardless of whether it's a lake, warehouse or a lakehouse, we need common metadata. >> Doug, can I come back to something you pointed out? That those talking about bringing analytic and transaction databases together, you had talked about operationalizing those and the caution there. Educate me on MySQL HeatWave. I was surprised when Oracle put so much effort in that, and you may or may not be familiar with it, but a lot of folks have talked about that. Now it's got nowhere in the market, that no market share, but a lot of we've seen these benchmarks from Oracle. How real is that bringing together those two worlds and eliminating ETL? >> Yeah, I have to defer on that one. That's my colleague, Holger Mueller. He wrote the report on that. He's way deep on it and I'm not going to mock him. >> I wonder if that is something, how real that is or if it's just Oracle marketing, anybody have any thoughts on that? >> I'm pretty familiar with HeatWave. It's essentially Oracle doing what, I mean, there's kind of a parallel with what Google's doing with AlloyDB. It's an operational database that will have some embedded analytics. And it's also something which I expect to start seeing with MongoDB. And I think basically, Doug and Sanjeev were kind of referring to this before about basically kind of like the operational analytics, that are basically embedded within an operational database. The idea here is that the last thing you want to do with an operational database is slow it down. So you're not going to be doing very complex deep learning or anything like that, but you might be doing things like classification, you might be doing some predictives. In other words, we've just concluded a transaction with this customer, but was it less than what we were expecting? What does that mean in terms of, is this customer likely to turn? I think we're going to be seeing a lot of that. And I think that's what a lot of what MySQL HeatWave is all about. Whether Oracle has any presence in the market now it's still a pretty new announcement, but the other thing that kind of goes against Oracle, (laughs) that they had to battle against is that even though they own MySQL and run the open source project, everybody else, in terms of the actual commercial implementation it's associated with everybody else. And the popular perception has been that MySQL has been basically kind of like a sidelight for Oracle. And so it's on Oracles shoulders to prove that they're damn serious about it. >> There's no coincidence that MariaDB was launched the day that Oracle acquired Sun. Sanjeev, I wonder if we could come back to a topic that we discussed earlier, which is this notion of consumption, obviously Wall Street's very concerned about it. Snowflake dropped prices last week. I've always felt like, hey, the consumption model is the right model. I can dial it down in when I need to, of course, the street freaks out. What are your thoughts on just pricing, the consumption model? What's the right model for companies, for customers? >> Consumption model is here to stay. What I would like to see, and I think is an ideal situation and actually plays into the lakehouse concept is that, I have my data in some open format, maybe it's Parquet or CSV or JSON, Avro, and I can bring whatever engine is the best engine for my workloads, bring it on, pay for consumption, and then shut it down. And by the way, that could be Cloudera. We don't talk about Cloudera very much, but it could be one business unit wants to use Athena. Another business unit wants to use some other Trino let's say or Dremio. So every business unit is working on the same data set, see that's critical, but that data set is maybe in their VPC and they bring any compute engine, you pay for the use, shut it down. That then you're getting value and you're only paying for consumption. It's not like, I left a cluster running by mistake, so there have to be guardrails. The reason FinOps is so big is because it's very easy for me to run a Cartesian joint in the cloud and get a $10,000 bill. >> This looks like it's been a sort of a victim of its own success in some ways, they made it so easy to spin up single note instances, multi note instances. And back in the day when compute was scarce and costly, those database engines optimized every last bit so they could get as much workload as possible out of every instance. Today, it's really easy to spin up a new node, a new multi node cluster. So that freedom has meant many more nodes that aren't necessarily getting that utilization. So Snowflake has been doing a lot to add reporting, monitoring, dashboards around the utilization of all the nodes and multi node instances that have spun up. And meanwhile, we're seeing some of the traditional on-prem databases that are moving into the cloud, trying to offer that freedom. And I think they're going to have that same discovery that the cost surprises are going to follow as they make it easy to spin up new instances. >> Yeah, a lot of money went into this market over the last decade, separating compute from storage, moving to the cloud. I'm glad you mentioned Cloudera Sanjeev, 'cause they got it all started, the kind of big data movement. We don't talk about them that much. Sometimes I wonder if it's because when they merged Hortonworks and Cloudera, they dead ended both platforms, but then they did invest in a more modern platform. But what's the future of Cloudera? What are you seeing out there? >> Cloudera has a good product. I have to say the problem in our space is that there're way too many companies, there's way too much noise. We are expecting the end users to parse it out or we expecting analyst firms to boil it down. So I think marketing becomes a big problem. As far as technology is concerned, I think Cloudera did turn their selves around and Tony, I know you, you talked to them quite frequently. I think they have quite a comprehensive offering for a long time actually. They've created Kudu, so they got operational, they have Hadoop, they have an operational data warehouse, they're migrated to the cloud. They are in hybrid multi-cloud environment. Lot of cloud data warehouses are not hybrid. They're only in the cloud. >> Right. I think what Cloudera has done the most successful has been in the transition to the cloud and the fact that they're giving their customers more OnRamps to it, more hybrid OnRamps. So I give them a lot of credit there. They're also have been trying to position themselves as being the most price friendly in terms of that we will put more guardrails and governors on it. I mean, part of that could be spin. But on the other hand, they don't have the same vested interest in compute cycles as say, AWS would have with EMR. That being said, yes, Cloudera does it, I think its most powerful appeal so of that, it almost sounds in a way, I don't want to cast them as a legacy system. But the fact is they do have a huge landed legacy on-prem and still significant potential to land and expand that to the cloud. That being said, even though Cloudera is multifunction, I think it certainly has its strengths and weaknesses. And the fact this is that yes, Cloudera has an operational database or an operational data store with a kind of like the outgrowth of age base, but Cloudera is still based, primarily known for the deep analytics, the operational database nobody's going to buy Cloudera or Cloudera data platform strictly for the operational database. They may use it as an add-on, just in the same way that a lot of customers have used let's say Teradata basically to do some machine learning or let's say, Snowflake to parse through JSON. Again, it's not an indictment or anything like that, but the fact is obviously they do have their strengths and their weaknesses. I think their greatest opportunity is with their existing base because that base has a lot invested and vested. And the fact is they do have a hybrid path that a lot of the others lack. >> And of course being on the quarterly shock clock was not a good place to be under the microscope for Cloudera and now they at least can refactor the business accordingly. I'm glad you mentioned hybrid too. We saw Snowflake last month, did a deal with Dell whereby non-native Snowflake data could access on-prem object store from Dell. They announced a similar thing with pure storage. What do you guys make of that? Is that just... How significant will that be? Will customers actually do that? I think they're using either materialized views or extended tables. >> There are data rated and residency requirements. There are desires to have these platforms in your own data center. And finally they capitulated, I mean, Frank Klutman is famous for saying to be very focused and earlier, not many months ago, they called the going on-prem as a distraction, but clearly there's enough demand and certainly government contracts any company that has data residency requirements, it's a real need. So they finally addressed it. >> Yeah, I'll bet dollars to donuts, there was an EBC session and some big customer said, if you don't do this, we ain't doing business with you. And that was like, okay, we'll do it. >> So Dave, I have to say, earlier on you had brought this point, how Frank Klutman was poo-pooing data science workloads. On your show, about a year or so ago, he said, we are never going to on-prem. He burnt that bridge. (Tony laughs) That was on your show. >> I remember exactly the statement because it was interesting. He said, we're never going to do the halfway house. And I think what he meant is we're not going to bring the Snowflake architecture to run on-prem because it defeats the elasticity of the cloud. So this was kind of a capitulation in a way. But I think it still preserves his original intent sort of, I don't know. >> The point here is that every vendor will poo-poo whatever they don't have until they do have it. >> Yes. >> And then it'd be like, oh, we are all in, we've always been doing this. We have always supported this and now we are doing it better than others. >> Look, it was the same type of shock wave that we felt basically when AWS at the last moment at one of their reinvents, oh, by the way, we're going to introduce outposts. And the analyst group is typically pre briefed about a week or two ahead under NDA and that was not part of it. And when they dropped, they just casually dropped that in the analyst session. It's like, you could have heard the sound of lots of analysts changing their diapers at that point. >> (laughs) I remember that. And a props to Andy Jassy who once, many times actually told us, never say never when it comes to AWS. So guys, I know we got to run. We got some hard stops. Maybe you could each give us your final thoughts, Doug start us off and then-- >> Sure. Well, we've got the Snowflake Summit coming up. I'll be looking for customers that are really doing data science, that are really employing Python through Snowflake, through Snowpark. And then a couple weeks later, we've got Databricks with their Data and AI Summit in San Francisco. I'll be looking for customers that are really doing considerable BI workloads. Last year I did a market overview of this analytical data platform space, 14 vendors, eight of them claim to support lakehouse, both sides of the camp, Databricks customer had 32, their top customer that they could site was unnamed. It had 32 concurrent users doing 15,000 queries per hour. That's good but it's not up to the most demanding BI SQL workloads. And they acknowledged that and said, they need to keep working that. Snowflake asked for their biggest data science customer, they cited Kabura, 400 terabytes, 8,500 users, 400,000 data engineering jobs per day. I took the data engineering job to be probably SQL centric, ETL style transformation work. So I want to see the real use of the Python, how much Snowpark has grown as a way to support data science. >> Great. Tony. >> Actually of all things. And certainly, I'll also be looking for similar things in what Doug is saying, but I think sort of like, kind of out of left field, I'm interested to see what MongoDB is going to start to say about operational analytics, 'cause I mean, they're into this conquer the world strategy. We can be all things to all people. Okay, if that's the case, what's going to be a case with basically, putting in some inline analytics, what are you going to be doing with your query engine? So that's actually kind of an interesting thing we're looking for next week. >> Great. Sanjeev. >> So I'll be at MongoDB world, Snowflake and Databricks and very interested in seeing, but since Tony brought up MongoDB, I see that even the databases are shifting tremendously. They are addressing both the hashtag use case online, transactional and analytical. I'm also seeing that these databases started in, let's say in case of MySQL HeatWave, as relational or in MongoDB as document, but now they've added graph, they've added time series, they've added geospatial and they just keep adding more and more data structures and really making these databases multifunctional. So very interesting. >> It gets back to our discussion of best of breed, versus all in one. And it's likely Mongo's path or part of their strategy of course, is through developers. They're very developer focused. So we'll be looking for that. And guys, I'll be there as well. I'm hoping that we maybe have some extra time on theCUBE, so please stop by and we can maybe chat a little bit. Guys as always, fantastic. Thank you so much, Doug, Tony, Sanjeev, and let's do this again. >> It's been a pleasure. >> All right and thank you for watching. This is Dave Vellante for theCUBE and the excellent analyst. We'll see you next time. (upbeat music)
SUMMARY :
And Doug Henschen is the vice president Thank you. Doug let's start off with you And at the same time, me a lot of that material. And of course, at the and then we realized all the and Tony have brought to light. So I'm interested, the And in the cloud, So Sanjeev, is this all hype? But the problem is that we I mean, I look at the space, and offload some of the So different focus, at the end of the day, and warehouses on one conjoined platform. of the sort of big data movement most of the contributions made decisions. Whereas he kind of poo-pooed the lakehouse and the data scientists are from Mars. and the companies that have in the balance sheet that the customers have to worry about. the modern data stack, if you will. and the data world together, the story is with MongoDB Until data mesh takes over. and you need separate teams. that raises the importance of and the caution there. Yeah, I have to defer on that one. The idea here is that the of course, the street freaks out. and actually plays into the And back in the day when the kind of big data movement. We are expecting the end And the fact is they do have a hybrid path refactor the business accordingly. saying to be very focused And that was like, okay, we'll do it. So Dave, I have to say, the Snowflake architecture to run on-prem The point here is that and now we are doing that in the analyst session. And a props to Andy Jassy and said, they need to keep working that. Great. Okay, if that's the case, Great. I see that even the databases I'm hoping that we maybe have and the excellent analyst.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Doug | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Frank | PERSON | 0.99+ |
Frank Klutman | PERSON | 0.99+ |
Tony Baers | PERSON | 0.99+ |
Mars | LOCATION | 0.99+ |
Doug Henschen | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Venus | LOCATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
2012 | DATE | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Hortonworks | ORGANIZATION | 0.99+ |
Holger Mueller | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
last year | DATE | 0.99+ |
$5 billion | QUANTITY | 0.99+ |
$10,000 | QUANTITY | 0.99+ |
14 vendors | QUANTITY | 0.99+ |
Last year | DATE | 0.99+ |
last week | DATE | 0.99+ |
San Francisco | LOCATION | 0.99+ |
SanjMo | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
8,500 users | QUANTITY | 0.99+ |
Sanjeev | PERSON | 0.99+ |
Informatica | ORGANIZATION | 0.99+ |
32 concurrent users | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Constellation Research | ORGANIZATION | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
Sanjeev Mohan | PERSON | 0.99+ |
Ahana | ORGANIZATION | 0.99+ |
DaaS | ORGANIZATION | 0.99+ |
EMR | ORGANIZATION | 0.99+ |
32 | QUANTITY | 0.99+ |
Atlas | ORGANIZATION | 0.99+ |
Delta | ORGANIZATION | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
each | QUANTITY | 0.99+ |
Athena | ORGANIZATION | 0.99+ |
next week | DATE | 0.99+ |
Matt Burr, Pure Storage
(Intro Music) >> Hello everyone and welcome to this special cube conversation with Matt Burr who is the general manager of FlashBlade at Pure Storage. Matt, how you doing? Good to see you. >> I'm doing great. Nice to see you again, Dave. >> Yeah. You know, welcome back. We're going to be broadcasting this is at accelerate. You guys get big news. Of course, FlashBlade S we're going to dig into it. The famous FlashBlade now has new letter attached to it. Tell us what it is, what it's all about. >> (laughing) >> You know, it's easy to say. It's just the latest and greatest version of the FlashBlade, but obviously it's a lot more than that. We've had a lot of success with FlashBlade kind of across the board in particular with Meta and their research super cluster, which is one of the largest AI super clusters in the world. But, it's not enough to just build on the thing that you had, right? So, with the FlashBlade S, we've increased modularity, we've done things like, building co-design software and hardware and leveraging that into something that increases, or it actually doubles density, performance, power efficiency. On top of that, you can scale independently, storage, networking, and compute, which is pretty big deal because it gives you more flexibility, gives you a little more granularity around performance or capacity, depending on which direction you want to go. And we believe that, kind of the end of this is fundamentally the, I guess, the way to put it is sort of the highest performance and capacity optimization, unstructured data platform on the market today without the need for, kind of, an expensive data tier of cash or expected data cash and tier. So we're pretty excited about, what we've ended up with here. >> Yeah. So I think sometimes people forget, about how much core engineering Meta does. Facebook, you go on Facebook and play around and post things, but yeah, their backend cloud is just amazing. So talk a little bit more about the problem targets for FlashBlade. I mean, it's pretty wide scope and we're going to get into that, but what's the core of that. >> Yeah. We've talked about that extensively in the past, the use cases kind of generally remain the same. I know, we'll probably explore this a little bit more deeply, but you know, really what we're talking about here is performance and scalability. We have written essentially an unlimited Metadata software level, which gives us the ability to expand, we're already starting to think about computing an exabyte scale. Okay. So, the problem that the customer has of, Hey, I've got a Greenfield, object environment, or I've got a file environment and my 10 K and 7,500 RPM disc is just spiraling out of control in my environment. It's an environmental problem. It's a management problem, we have effectively, simplified the process of bringing together highly performant, very large multi petabyte to eventually exabyte scale unstructured data systems. >> So people are obviously trying to inject machine intelligence, AI, ML into applications, bring data into applications, bringing those worlds closer together. Analytics is obviously exploding. You see some other things happening in the news, read somewhere, protection and the like, where does FlashBlade fit in terms of FlashBlade S in some terms of some of these new use cases. >> All those things, we're only going wider and broader. So, we've talked in the past about having a having a horizontal approach to this market. The unstructured data market has often had vertical specificity. You could see successful infrastructure companies in oil and gas that may not play median entertainment, where you see, successful companies that play in media entertainment, but don't play well in financial services, for example. We're sort of playing the long game here with this and we're focused on, bringing an all Q L C architecture that combines our traditional kind of pure DFM with the software that is, now I guess seven years hardened from the original FlashBlade system. And so, when we look at customers and we look at kind of customers in three categories, right, we have customers that sort of fit into a very traditional, more than three, but kind of make bucketized this way, customers that fit into kind of this EDA HPC space, then you have that sort of data protection, which I believe kind of ransomware falls under that as well. The world has changed, right? So customers want their data back faster. Rapid restore is a real thing, right? We have customers that come to us and say, anybody can back up my data, but if I want to get something back fast and I mean in less than a week or a couple days, what do I do? So we can solve that problem. And then as you sort of accurately pointed out where you started, there is the AI ML side of things where the Invidia relationship that we have, right. DGX is are a pretty powerful weapon in that market and solving those problems. But they're not cheap. And keeping those DGX's running all the time requires an extremely efficient underpinning of a flash system. And we believe we have that market as well. >> It's interesting when pure was first coming out as a startup, you obviously had some cool new tech, but you know, your stack wasn't as hard. And now you've got seven years under your belt. The last time you were on the cube, we talked about some of the things that you guys were doing differently. We talked about UFFO, unified fast file and object. How does this new product, FlashBlade S, compare to some previous generations of FlashBlade in terms of solving unstructured data and some of these other trends that we've been talking about? >> Yeah. I touched on this a little bit earlier, but I want to go a little bit deeper on this concept of modularity. So for those that are familiar with Pure Storage, we have what's called the evergreen storage program. It's not as much a program as it is an engineering philosophy. The belief that everything we build should be modular in nature so that we can have essentially a chassi that has an a 100% modular components inside of it. Such that we can upgrade all of those features, non disruptively from one version to the next, you should think about that as you know, if you have an iPhone, when you go get a new iPhone, what do you do with your old iPhone? You either throw it away or you sell it. Well, imagine if your iPhone just got newer and better each time you renewed your, whatever it is, two year or three year subscription with apple. That's effectively what we have as a core philosophy, core operating engineering philosophy within pure. That is now a completely full and robust program with this instantiation of the FlashBlade S. And so kind of what that means is, for a customer I'm future proofed for X number of years, knowing that we have a run rate of being able to keep customers on the flash array side from the FA 400 all the way through the flash array X and Excel, which is about a 10 year time span. So, that then, and of itself sort of starts to play into customers that have concerns around ESG. Right? Last time I checked power space and cooling, still mattered in data center. So although I have people that tell me all the time, power space clearly doesn't matter anymore, but I know at the end of the day, most customers seem to say that it does, you're not throwing away refrigerator size pieces of equipment that once held spinning disc, something that's a size of a microwave that's populated with DFMs with all LC flash that you can actually upgrade over time. So if you want to scale more performance, we can do that through adding CPU. If you want to scale more capacity, we can do that through adding more And we're in control of those parameters because we're building our own DFM, our direct fabric modules on our own storage notes, if you will. So instead of relying on the consumer packaging of an SSD, we're upgrading our own stuff and growing it as we can. So again, on the ESG side, I think for many customers going into the next decade, it's going to be a huge deal. >> Yeah. Interesting comments, Matt. I mean, I don't know if you guys invented it, but you certainly popularize the idea of, no Fort lift upgrades and sort of set the industry on its head when you guys really drove that evergreen strategy and kind of on that note, you guys talk about simplicity. I remember last accelerate went deep with cause on your philosophy of keeping things simple, keeping things uncomplicated, you guys talk about using better science to do that. And you a lot of talk these days about outcomes. How does FlashBlade S support those claims and what do you guys mean by better science? >> Yeah. You know, better science is kind of a funny term. It was an internal term that I was on a sales call actually. And the customer said, well, I understand the difference between these two, but could you tell me how we got there and I was a little stumped on the answer. And I just said, well, I think we have better scientists and that kind of morphed into better science, a good example of that is our Metadata architecture, right? So our scalable Metadata allows us to avoid having that cashing tier, that other architectures have to rely on in order to anticipate, which files are going to need to be in read cash and read misses become very expensive. Now, a good follow up question there, not to do your job, but it's the question that I always get is, well, when you're designing your own hardware and your own software, what's the real material advantage of that? Well, the real material advantage of that is that you are in control of the combination and the interaction of those two things you don't give up the sort of the general purpose nature, if you will, of the performance characteristics that come along with things like commodity, you get a very specific performance profile. That's tailored to the software that's being married to it. Now in some instances you could say, well, okay, does that really matter? Well, when you start to talking about 20, 40, 50, 100, 500, petabyte data sets, every percentage matters. And so those individual percentages equate to space savings. They equate to power and cooling savings. We believe that we're going to have industry best dollars per lot. We're going to have industry best, kind of dollar PRU. So really the whole kind of game here is a round scale. >> Yeah. I mean, look, there's clearly places for the pure software defined. And then when cloud first came out, everybody said, oh, build the cloud and commodity, they don't build custom art. Now you see all the hyper scalers building custom software, custom hardware and software integration, custom Silicon. So co-innovation between hardware and software. It seems pretty as important, if not more important than ever, especially for some of these new workloads who knows what the edge is going to bring. What's the downside of not having that philosophy in your view? Is it just, you can't scale to the degree that you want, you can't support the new workloads or performance? What should customers be thinking about there? >> I think the downside plays in two ways. First is kind of the future and at scale, as I alluded to earlier around cost and just savings over time. Right? So if you're using a you know a commodity SSD, there's packaging around that SSD that is wasteful both in terms of- It's wasteful in the environmental sense and wasteful in the sort of computing performance sense. So that's kind of one thing. On the second side, it's easier for us to control the controllables around reliability when you can eliminate the number of things that actually sit in that workflow and by workflow, I mean when a right is acknowledged from a host and it gets down to the media, the more control you have over that, the more reliability you have over that piece. >> Yeah. I know. And we talked about ESG earlier. I know you guys, I'm going to talk a little bit about more news from accelerate within Invidia. You've certainly heard Jensen talk about the wasted CPU cycles in the data center. I think he's forecasted, 25 to 30% of the cycles are wasted on doing things like storage offload, or certainly networking and security. So now it sort of confirms your ESG thought, we can do things more efficiently, but as it relates to Invidia and some of the news around AIRI's, what is the AI RI? What's that stand for? What's the high level overview of AIRI. >> So the AIRI has been really successful for both us and Invidia. It's a really great partnership we're appreciative of the partnership. In fact, Tony pack day will be speaking here at accelerate. So, really looking forward to that, Look, there's a couple ways to look at this and I take the macro view on this. I know that there's a equally as good of a micro example, but I think the macro is really kind of where it's at. We don't have data center space anymore, right? There's only so many data centers we can build. There's only so much power we can create. We are going to reach a point in time where municipalities are going to struggle against the businesses that are in their municipalities for power. And now you're essentially bidding big corporations against people who have an electric bill. And that's only going to last so long, you know who doesn't win in that? The big corporation doesn't win in that. Because elected officials will have to find a way to serve the people so that they can get power. No matter how skewed we think that may be. That is the reality. And so, as we look at this transition, that first decade of disc to flash transition was really in the block world. The second decade, which it's really fortunate to have a multi decade company, of course. But the second decade of riding that wave from disk to flash is about improving space, power, efficiency, and density. And we sort of reach that, it's a long way of getting to the point about iMedia where these AI clusters are extremely powerful things. And they're only going to get bigger, right? They're not going to get smaller. It's not like anybody out there saying, oh, it's a Thad, or, this isn't going to be something that's going to yield any results or outcomes. They yield tremendous outcomes in healthcare. They yield tremendous outcomes in financial services. They use tremendous outcome in cancer research, right? These are not things that we as a society are going to give up. And in fact, we're going to want to invest more on them, but they come at a cost and one of the resources that is required is power. And so when you look at what we've done in particular with Invidia. You found something that is extremely power efficient that meets the needs of kind of going back to that macro view of both the community and the business. It's a win-win. >> You know and you're right. It's not going to get smaller. It's just going to continue to in momentum, but it could get increasingly distributed. And you think about, I talked about the edge earlier. You think about AI inferencing at the edge. I think about Bitcoin mining, it's very distributed, but it consumes a lot of power and so we're not exactly sure what the next level architecture is, but we do know that science is going to be behind it. Talk a little bit more about your Invidia relationship, because I think you guys were the first, I might be wrong about this, but I think you were the first storage company to announce a partnership with Invidia several years ago, probably four years ago. How is this new solution with a AIRI slash S building on that partnership? What can we expect with Invidia going forward? >> Yeah. I think what you can expect to see is putting the foot on the gas on kind of where we've been with Invidia. So, as I mentioned earlier Meta is by some measurements, the world's largest research super cluster, they're a huge Invidia customer and built on pure infrastructure. So we see kind of those types of well reference architectures, not that everyone's going to have a Meta scale reference architecture, but the base principles of what they're solving for are the base principles of what we're going to begin to see in the enterprise. I know that begin sounds like a strange word because there's already a big business in DGX. There's already a sizable business in performance, unstructured data. But those are only going to get exponentially bigger from here. So kind of what we see is a deepening and a strengthening of the of the relationship and opportunity for us to talk, jointly to customers that are going to be building these big facilities and big data centers for these types of compute related problems and talking about efficiency, right? DGX are much more efficient and Flash Blades are much more efficient. It's a great pairing. >> Yeah. I mean you're definitely, a lot of AI today is modeling in the cloud, seeing HPC and data just slam together all kinds of new use cases. And these types of partnerships are the only way that we're going to solve the future problems and go after these future opportunities. I'll give you a last word you got to be excited with accelerate, what should people be looking for, add accelerate and beyond. >> You know, look, I am really excited. This is going on my 12th year at Pure Storage, which has to be seven or eight accelerates whenever we started this thing. So it's a great time of the year, maybe take a couple off because of because of COVID, but I love reconnecting in particular with partners and customers and just hearing kind of what they have to say. And this is kind of a nice one. This is four years or five years worth of work for my team who candidly I'm extremely proud of for choosing to take on some of the solutions that they, or excuse me, some of the problems that they chose to take on and find solutions for. So as accelerate roles around, I think we have some pretty interesting evolutions of the evergreen program coming to be announced. We have some exciting announcements in the other product arenas as well, but the big one for this event is FlashBlade. And I think that we will see. Look, no one's going to completely control this transition from disc to flash, right? That's a that's a macro trend. But there are these points in time where individual companies can sort of accelerate the pace at which it's happening. And that happens through cost, it happens through performance. My personal belief is this will be one of the largest points of those types of acceleration in this transformation from disc to flash and unstructured data. This is such a leap. This is essentially the equivalent of us going from the 400 series on the block side to the X, for those that you're familiar with the flash array lines. So it's a huge, huge leap for us. I think it's a huge leap for the market. And look, I think you should be proud of the company you work for. And I am immensely proud of what we've created here. And I think one of the things that is a good joy in life is to be able to talk to customers about things you care about. I've always told people my whole life, inefficiency is the bane of my existence. And I think we've rooted out ton of inefficiency with this product and looking forward to going and reclaiming a bunch of data center space and power without sacrificing any performance. >> Well congratulations on making it into the second decade. And I'm looking forward to the orange and the third decade, Matt Burr, thanks so much for coming back in the cubes. It's good to see you. >> Thanks, Dave. Nice to see you as well. We appreciate it. >> All right. And thank you for watching. This is Dave Vellante for the Cube. And we'll see you next time. (outro music)
SUMMARY :
Good to see you. to see you again, Dave. We're going to be broadcasting kind of the end of this the problem targets for FlashBlade. in the past, the use cases kind of happening in the news, We have customers that come to us and say, that you guys were doing differently. that tell me all the time, and kind of on that note, the general purpose nature, if you will, to the degree that you want, First is kind of the future and at scale, and some of the news around AIRI's, that meets the needs of I talked about the edge earlier. of the of the relationship are the only way that we're going to solve of the company you work for. and the third decade, Nice to see you as well. This is Dave Vellante for the Cube.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matt Burr | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Invidia | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
25 | QUANTITY | 0.99+ |
AIRI | ORGANIZATION | 0.99+ |
seven years | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
10 K | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
seven | QUANTITY | 0.99+ |
Excel | TITLE | 0.99+ |
three year | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
12th year | QUANTITY | 0.99+ |
7,500 RPM | QUANTITY | 0.99+ |
Matt | PERSON | 0.99+ |
two year | QUANTITY | 0.99+ |
apple | ORGANIZATION | 0.99+ |
less than a week | QUANTITY | 0.99+ |
first decade | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
seven years | QUANTITY | 0.99+ |
second side | QUANTITY | 0.99+ |
eight | QUANTITY | 0.99+ |
second decade | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
40 | QUANTITY | 0.99+ |
four years ago | DATE | 0.99+ |
more than three | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
100 | QUANTITY | 0.98+ |
next decade | DATE | 0.98+ |
two ways | QUANTITY | 0.98+ |
50 | QUANTITY | 0.98+ |
one version | QUANTITY | 0.98+ |
several years ago | DATE | 0.98+ |
30% | QUANTITY | 0.98+ |
two | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
Tony | PERSON | 0.97+ |
two things | QUANTITY | 0.97+ |
500 | QUANTITY | 0.97+ |
Pure Storage | ORGANIZATION | 0.97+ |
FlashBlade | TITLE | 0.97+ |
today | DATE | 0.94+ |
third decade | QUANTITY | 0.94+ |
FlashBlade | EVENT | 0.94+ |
a couple days | QUANTITY | 0.9+ |
first storage company | QUANTITY | 0.88+ |
each time | QUANTITY | 0.88+ |
ESG | ORGANIZATION | 0.87+ |
Jensen | PERSON | 0.85+ |
DGX | ORGANIZATION | 0.85+ |
FlashBlade S | TITLE | 0.85+ |
three categories | QUANTITY | 0.85+ |
FlashBlade S | COMMERCIAL_ITEM | 0.82+ |
about a 10 year | QUANTITY | 0.82+ |
400 series | QUANTITY | 0.78+ |
Tony Baer, Doug Henschen and Sanjeev Mohan, Couchbase | Couchbase Application Modernization
(upbeat music) >> Welcome to this CUBE Power Panel where we're going to talk about application modernization, also success templates, and take a look at some new survey data to see how CIOs are thinking about digital transformation, as we get deeper into the post isolation economy. And with me are three familiar VIP guests to CUBE audiences. Tony Bear, the principal at DB InSight, Doug Henschen, VP and principal analyst at Constellation Research and Sanjeev Mohan principal at SanjMo. Guys, good to see you again, welcome back. >> Thank you. >> Glad to be here. >> Thanks for having us. >> Glad to be here. >> All right, Doug. Let's get started with you. You know, this recent survey, which was commissioned by Couchbase, 650 CIOs and CTOs, and IT practitioners. So obviously very IT heavy. They responded to the following question, "In response to the pandemic, my organization accelerated our application modernization strategy and of course, an overwhelming majority, 94% agreed or strongly agreed." So I'm sure, Doug, that you're not shocked by that, but in the same survey, modernizing existing technologies was second only behind cyber security is the top investment priority this year. Doug, bring us into your world and tell us the trends that you're seeing with the clients and customers you work with in their modernization initiatives. >> Well, the survey, of course, is spot on. You know, any Constellation Research analyst, any systems integrator will tell you that we saw more transformation work in the last two years than in the prior six to eight years. A lot of it was forced, you know, a lot of movement to the cloud, a lot of process improvement, a lot of automation work, but transformational is aspirational and not every company can be a leader. You know, at Constellation, we focus our research on those market leaders and that's only, you know, the top 5% of companies that are really innovating, that are really disrupting their markets and we try to share that with companies that want to be fast followers, that these are the next 20 to 25% of companies that don't want to get left behind, but don't want to hit some of the same roadblocks and you know, pioneering pitfalls that the real leaders are encountering when they're harnessing new technologies. So the rest of the companies, you know, the cautious adopters, the laggards, many of them fall by the wayside, that's certainly what we saw during the pandemic. Who are these leaders? You know, the old saw examples that people saw at the Amazons, the Teslas, the Airbnbs, the Ubers and Lyfts, but new examples are emerging every year. And as a consumer, you immediately recognize these transformed experiences. One of my favorite examples from the pandemic is Rocket Mortgage. No disclaimer required, I don't own stock and you're not client, but when I wanted to take advantage of those record low mortgage interest rates, I called my current bank and some, you know, stall word, very established conventional banks, I'm talking to you Bank of America, City Bank, and they were taking days and weeks to get back to me. Rocket Mortgage had the locked in commitment that day, a very proactive, consistent communications across web, mobile, email, all customer touchpoints. I closed in a matter of weeks an entirely digital seamless process. This is back in the gloves and masks days and the loan officer came parked in our driveway, wiped down an iPad, handed us that iPad, we signed all those documents digitally, completely electronic workflow. The only wet signatures required were those demanded by the state. So it's easy to spot these transformed experiences. You know, Rocket had most of that in place before the pandemic, and that's why they captured 8% of the national mortgage market by 2020 and they're on track to hit 10% here in 2022. >> Yeah, those are great examples. I mean, I'm not a shareholder either, but I am a customer. I even went through the same thing in the pandemic. It was all done in digital it was a piece of cake and I happened to have to do another one with a different firm and stuck with that firm for a variety of reasons and it was night and day. So to your point, it was a forced merge to digital. If you were there beforehand, you had real advantage, it could accelerate your lead during the pandemic. Okay, now Tony bear. Mr. Bear, I understand you're skeptical about all this buzz around digital transformation. So in that same survey, the data shows that the majority of respondents said that their digital initiatives were largely reactive to outside forces, the pandemic compliance changes, et cetera. But at the same time, they indicated that the results while somewhat mixed were generally positive. So why are you skeptical? >> The reason being, and by the way, I have nothing against application modernization. The problem... I think the problem I ever said, it often gets conflated with digital transformation and digital transformation itself has become such a buzzword and so overused that it's really hard, if not impossible to pin down (coughs) what digital transformation actually means. And very often what you'll hear from, let's say a C level, you know, (mumbles) we want to run like Google regardless of whether or not that goal is realistic you know, for that organization (coughs). The thing is that we've been using, you know, businesses have been using digital data since the days of the mainframe, since the... Sorry that data has been digital. What really has changed though, is just the degree of how businesses interact with their customers, their partners, with the whole rest of the ecosystem and how their business... And how in many cases you take look at the auto industry that the nature of the business, you know, is changing. So there is real change of foot, the question is I think we need to get more specific in our goals. And when you look at it, if we can boil it down to a couple, maybe, you know, boil it down like really over simplistically, it's really all about connectedness. No, I'm not saying connectivity 'cause that's more of a physical thing, but connectedness. Being connected to your customer, being connected to your supplier, being connected to the, you know, to the whole landscape, that you operate in. And of course today we have many more channels with which we operate, you know, with customers. And in fact also if you take a look at what's happening in the automotive industry, for instance, I was just reading an interview with Bill Ford, you know, their... Ford is now rapidly ramping up their electric, you know, their electric vehicle strategy. And what they realize is it's not just a change of technology, you know, it is a change in their business, it's a change in terms of the relationship they have with their customer. Their customers have traditionally been automotive dealers who... And the automotive dealers have, you know, traditionally and in many cases by state law now have been the ones who own the relationship with the end customer. But when you go to an electric vehicle, the product becomes a lot more of a software product. And in turn, that means that Ford would have much more direct interaction with its end customers. So that's really what it's all about. It's about, you know, connectedness, it's also about the ability to act, you know, we can say agility, it's about ability not just to react, but to anticipate and act. And so... And of course with all the proliferation, you know, the explosion of data sources and connectivity out there and the cloud, which allows much more, you know, access to compute, it changes the whole nature of the ball game. The fact is that we have to avoid being overwhelmed by this and make our goals more, I guess, tangible, more strictly defined. >> Yeah, now... You know, great points there. And I want to just bring in some survey data, again, two thirds of the respondents said their digital strategies were set by IT and only 26% by the C-suite, 8% by the line of business. Now, this was largely a survey of CIOs and CTOs, but, wow, doesn't seem like the right mix. It's a Doug's point about, you know, leaders in lagers. My guess is that Rocket Mortgage, their digital strategy was led by the chief digital officer potentially. But at the same time, you would think, Tony, that application modernization is a prerequisite for digital transformation. But I want to go to Sanjeev in this war in the survey. And respondents said that on average, they want 58% of their IT spend to be in the public cloud three years down the road. Now, again, this is CIOs and CTOs, but (mumbles), but that's a big number. And there was no ambiguity because the question wasn't worded as cloud, it was worded as public cloud. So Sanjeev, what do you make of that? What's your feeling on cloud as flexible architecture? What does this all mean to you? >> Dave, 58% of IT spend in the cloud is a huge change from today. Today, most estimates, peg cloud IT spend to be somewhere around five to 15%. So what this number tells us is that the cloud journey is still in its early days, so we should buckle up. We ain't seen nothing yet, but let me add some color to this. CIOs and CTOs maybe ramping up their cloud deployment, but they still have a lot of problems to solve. I can tell you from my previous experience, for example, when I was in Gartner, I used to talk to a lot of customers who were in a rush to move into the cloud. So if we were to plot, let's say a maturity model, typically a maturity model in any discipline in IT would have something like crawl, walk, run. So what I was noticing was that these organizations were jumping straight to run because in the pandemic, they were under the gun to quickly deploy into the cloud. So now they're kind of coming back down to, you know, to crawl, walk, run. So basically they did what they had to do under the circumstances, but now they're starting to resolve some of the very, very important issues. For example, security, data privacy, governance, observability, these are all very big ticket items. Another huge problem that nav we are noticing more than we've ever seen, other rising costs. Cloud makes it so easy to onboard new use cases, but it leads to all kinds of unexpected increase in spikes in your operating expenses. So what we are seeing is that organizations are now getting smarter about where the workloads should be deployed. And sometimes it may be in more than one cloud. Multi-cloud is no longer an aspirational thing. So that is a huge trend that we are seeing and that's why you see there's so much increased planning to spend money in public cloud. We do have some issues that we still need to resolve. For example, multi-cloud sounds great, but we still need some sort of single pane of glass, control plane so we can have some fungibility and move workloads around. And some of this may also not be in public cloud, some workloads may actually be done in a more hybrid environment. >> Yeah, definitely. I call it Supercloud. People win sometimes-- >> Supercloud. >> At that term, but it's above multi-cloud, it floats, you know, on topic. But so you clearly identified some potholes. So I want to talk about the evolution of the application experience 'cause there's some potholes there too. 81% of their respondents in that survey said, "Our development teams are embracing the cloud and other technologies faster than the rest of the organization can adopt and manage them." And that was an interesting finding to me because you'd think that infrastructure is code and designing insecurity and containers and Kubernetes would be a great thing for organizations, and it is I'm sure in terms of developer productivity, but what do you make of this? Does the modernization path also have some potholes, Sanjeev? What are those? >> So, first of all, Dave, you mentioned in your previous question, there's no ambiguity, it's a public cloud. This one, I feel it has quite a bit of ambiguity because it talks about cloud and other technologies, that sort of opens up the kimono, it's like that's everything. Also, it says that the rest of the organization is not able to adopt and manage. Adoption is a business function, management is an IT function. So I feed this question is a bit loaded. We know that app modernization is here to stay, developing in the cloud removes a lot of traditional barriers or procuring instantiating infrastructure. In addition, developers today have so many more advanced tools. So they're able to develop the application faster because they have like low-code/no-code options, they have notebooks to write the machine learning code, they have the entire DevOps CI/CD tool chain that makes it easy to version control and push changes. But there are potholes. For example, are developers really interested in fixing data quality problems, all data, privacy, data, access, data governance? How about monitoring? I doubt developers want to get encumbered with all of these operationalization management pieces. Developers are very keen to deliver new functionality. So what we are now seeing is that it is left to the data team to figure out all of these operationalization productionization things that the developers have... You know, are not truly interested in that. So which actually takes me to this topic that, Dave, you've been quite actively covering and we've been talking about, see, the whole data mesh. >> Yeah, I was going to say, it's going to solve all those data quality problems, Sanjeev. You know, I'm a sucker for data mesh. (laughing) >> Yeah, I know, but see, what's going to happen with data mesh is that developers are now going to have more domain resident power to develop these applications. What happens to all of the data curation governance quality that, you know, a central team used to do. So there's a lot of open ended questions that still need to be answered. >> Yeah, That gets automated, Tony, right? With computational governance. So-- >> Of course. >> It's not trivial, it's not trivial, but I'm still an optimist by the end of the decade we'll start to get there. Doug, I want to go to you again and talk about the business case. We all remember, you know, the business case for modernization that is... We remember the Y2K, there was a big it spending binge and this was before the (mumbles) of the enterprise, right? CIOs, they'd be asked to develop new applications and the business maybe helps pay for it or offset the cost with the initial work and deployment then IT got stuck managing the sprawling portfolio for years. And a lot of the apps had limited adoption or only served a few users, so there were big pushes toward rationalizing the portfolio at that time, you know? So do I modernize, they had to make a decision, consolidate, do I sunset? You know, it was all based on value. So what's happening today and how are businesses making the case to modernize, are they going through a similar rationalization exercise, Doug? >> Well, the Y2K era experience that you talked about was back in the days of, you know, throw the requirements over the wall and then we had waterfall development that lasted months in some cases years. We see today's most successful companies building cross functional teams. You know, the C-suite the line of business, the operations, the data and analytics teams, the IT, everybody has a seat at the table to lead innovation and modernization initiatives and they don't start, the most successful companies don't start by talking about technology, they start by envisioning a business outcome by envisioning a transformed customer experience. You hear the example of Amazon writing the press release for the product or service it wants to deliver and then it works backwards to create it. You got to work backwards to determine the tech that will get you there. What's very clear though, is that you can't transform or modernize by lifting and shifting the legacy mess into the cloud. That doesn't give you the seamless processes, that doesn't give you data driven personalization, it doesn't give you a connected and consistent customer experience, whether it's online or mobile, you know, bots, chat, phone, everything that we have today that requires a modern, scalable cloud negative approach and agile deliver iterative experience where you're collaborating with this cross-functional team and course correct, again, making sure you're on track to what's needed. >> Yeah. Now, Tony, both Doug and Sanjeev have been, you know, talking about what I'm going to call this IT and business schism, and we've all done surveys. One of the things I'd love to see Couchbase do in future surveys is not only survey the it heavy, but also survey the business heavy and see what they say about who's leading the digital transformation and who's in charge of the customer experience. Do you have any thoughts on that, Tony? >> Well, there's no question... I mean, it's kind like, you know, the more things change. I mean, we've been talking about that IT and the business has to get together, we talked about this back during, and Doug, you probably remember this, back during the Y2K ERP days, is that you need these cross functional teams, we've been seeing this. I think what's happening today though, is that, you know, back in the Y2K era, we were basically going into like our bedrock systems and having to totally re-engineer them. And today what we're looking at is that, okay, those bedrock systems, the ones that basically are keeping the lights on, okay, those are there, we're not going to mess with that, but on top of that, that's where we're going to innovate. And that gives us a chance to be more, you know, more directed and therefore we can bring these related domains together. I mean, that's why just kind of, you know, talk... Where Sanjeev brought up the term of data mesh, I've been a bit of a cynic about data mesh, but I do think that work and work is where we bring a bunch of these connected teams together, teams that have some sort of shared context, though it's everybody that's... Every team that's working, let's say around the customer, for instance, which could be, you know, in marketing, it could be in sales, order processing in some cases, you know, in logistics and delivery. So I think that's where I think we... You know, there's some hope and the fact is that with all the advanced, you know, basically the low-code/no-code tools, they are ways to bring some of these other players, you know, into the process who previously had to... Were sort of, you know, more at the end of like a, you know, kind of a... Sort of like they throw it over the wall type process. So I do believe, but despite all my cynicism, I do believe there's some hope. >> Thank you. Okay, last question. And maybe all of you could answer this. Maybe, Sanjeev, you can start it off and then Doug and Tony can chime in. In the survey, about a half, nearly half of the 650 respondents said they could tangibly show their organizations improve customer experiences that were realized from digital projects in the last 12 months. Now, again, not surprising, but we've been talking about digital experiences, but there's a long way to go judging from our pandemic customer experiences. And we, again, you know, some were great, some were terrible. And so, you know, and some actually got worse, right? Will that improve? When and how will it improve? Where's 5G and things like that fit in in terms of improving customer outcomes? Maybe, Sanjeev, you could start us off here. And by the way, plug any research that you're working on in this sort of area, please do. >> Thank you, Dave. As a resident optimist on this call, I'll get us started and then I'm sure Doug and Tony will have interesting counterpoints. So I'm a technology fan boy, I have to admit, I am in all of all these new companies and how they have been able to rise up and handle extreme scale. In this time that we are speaking on this show, these food delivery companies would have probably handled tens of thousands of orders in minutes. So these concurrent orders, delivery, customer support, geospatial location intelligence, all of this has really become commonplace now. It used to be that, you know, large companies like Apple would be able to handle all of these supply chain issues, disruptions that we've been facing. But now in my opinion, I think we are seeing this in, Doug mentioned Rocket Mortgage. So we've seen it in FinTech and shopping apps. So we've seen the same scale and it's more than 5G. It includes things like... Even in the public cloud, we have much more efficient, better hardware, which can do like deep learning networks much more efficiently. So machine learning, a lot of natural language programming, being able to handle unstructured data. So in my opinion, it's quite phenomenal to see how technology has actually come to rescue and as, you know, billions of us have gone online over the last two years. >> Yeah, so, Doug, so Sanjeev's point, he's saying, basically, you ain't seen nothing yet. What are your thoughts here, your final thoughts. >> Well, yeah, I mean, there's some incredible technologies coming including 5G, but you know, it's only going to pave the cow path if the underlying app, if the underlying process is clunky. You have to modernize, take advantage of, you know, serverless scalability, autonomous optimization, advanced data science. There's lots of cutting edge capabilities out there today, but you know, lifting and shifting you got to get your hands dirty and actually modernize on that data front. I mentioned my research this year, I'm doing a lot of in depth looks at some of the analytical data platforms. You know, these lake houses we've had some conversations about that and helping companies to harness their data, to have a more personalized and predictive and proactive experience. So, you know, we're talking about the Snowflakes and Databricks and Googles and Teradata and Vertica and Yellowbrick and that's the research I'm focusing on this year. >> Yeah, your point about paving the cow path is right on, especially over the pandemic, a lot of the processes were unknown. But you saw this with RPA, paving the cow path only got you so far. And so, you know, great points there. Tony, you get the last word, bring us home. >> Well, I'll put it this way. I think there's a lot of hope in terms of that the new generation of developers that are coming in are a lot more savvy about things like data. And I think also the new generation of people in the business are realizing that we need to have data as a core competence. So I do have optimism there that the fact is, I think there is a much greater consciousness within both the business side and the technical. In the technology side, the organization of the importance of data and how to approach that. And so I'd like to just end on that note. >> Yeah, excellent. And I think you're right. Putting data at the core is critical data mesh I think very well describes the problem and (mumbles) credit lays out a solution, just the technology's not there yet, nor are the standards. Anyway, I want to thank the panelists here. Amazing. You guys are always so much fun to work with and love to have you back in the future. And thank you for joining today's broadcast brought to you by Couchbase. By the way, check out Couchbase on the road this summer at their application modernization summits, they're making up for two years of shut in and coming to you. So you got to go to couchbase.com/roadshow to find a city near you where you can meet face to face. In a moment. Ravi Mayuram, the chief technology officer of Couchbase will join me. You're watching theCUBE, the leader in high tech enterprise coverage. (bright music)
SUMMARY :
Guys, good to see you again, welcome back. but in the same survey, So the rest of the companies, you know, and I happened to have to do another one it's also about the ability to act, So Sanjeev, what do you make of that? Dave, 58% of IT spend in the cloud I call it Supercloud. it floats, you know, on topic. Also, it says that the say, it's going to solve that still need to be answered. Yeah, That gets automated, Tony, right? And a lot of the apps had limited adoption is that you can't transform or modernize One of the things I'd love to see and the business has to get together, nearly half of the 650 respondents and how they have been able to rise up you ain't seen nothing yet. and that's the research paving the cow path only got you so far. in terms of that the new and love to have you back in the future.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Doug | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
Ravi Mayuram | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Tony Bear | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Doug Henschen | PERSON | 0.99+ |
Bank of America | ORGANIZATION | 0.99+ |
Tony Baer | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
iPad | COMMERCIAL_ITEM | 0.99+ |
Sanjeev Mohan | PERSON | 0.99+ |
Sanjeev | PERSON | 0.99+ |
Teradata | ORGANIZATION | 0.99+ |
94% | QUANTITY | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
58% | QUANTITY | 0.99+ |
Constellation Research | ORGANIZATION | 0.99+ |
Yellowbrick | ORGANIZATION | 0.99+ |
8% | QUANTITY | 0.99+ |
2022 | DATE | 0.99+ |
today | DATE | 0.99+ |
City Bank | ORGANIZATION | 0.99+ |
Bill Ford | PERSON | 0.99+ |
two years | QUANTITY | 0.99+ |
Googles | ORGANIZATION | 0.99+ |
81% | QUANTITY | 0.99+ |
10% | QUANTITY | 0.99+ |
DB InSight | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Today | DATE | 0.99+ |
2020 | DATE | 0.99+ |
Couchbase | ORGANIZATION | 0.99+ |
Snowflakes | ORGANIZATION | 0.99+ |
5% | QUANTITY | 0.98+ |
650 CIOs | QUANTITY | 0.98+ |
Amazons | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Lyfts | ORGANIZATION | 0.98+ |
second | QUANTITY | 0.98+ |
SanjMo | ORGANIZATION | 0.98+ |
26% | QUANTITY | 0.98+ |
Ubers | ORGANIZATION | 0.98+ |
three years | QUANTITY | 0.98+ |
650 respondents | QUANTITY | 0.98+ |
pandemic | EVENT | 0.97+ |
this year | DATE | 0.97+ |
15% | QUANTITY | 0.97+ |
Rocket | ORGANIZATION | 0.97+ |
more than one cloud | QUANTITY | 0.97+ |
25% | QUANTITY | 0.97+ |
Tony bear | PERSON | 0.97+ |
around five | QUANTITY | 0.96+ |
two thirds | QUANTITY | 0.96+ |
about a half | QUANTITY | 0.96+ |
Steve Kenniston, The Storage Alchemist & Tony Bryston, Town of Gilbert | Dell Technologies World 202
>>The cube presents, Dell technologies world brought to you by Dell. >>Welcome back to Dell technologies, world 2022. We're live in Vegas. Very happy to be here. Uh, this is the cubes multi-year coverage. This is year 13 for covering either, you know, EMC world or, uh, Dell world. And now of course, Dell tech world. My name is Dave Volante and I'm here with longtime Cub alum cube guest, Steve Kenon, the storage Alchemist, who's, uh, Beckett, Dell, uh, and his data protection role. And Tony Bryson is the chief information security officer of the town of Gilbert town in Arizona. Most, most towns don't have a CISO, but Tony, we're a thrilled, you're here to tell us that story. How did you become a CISO and how does the town of Gilbert have a CISO? >>Well, thank you for having me here. Uh, believe it or not. The town of Gilbert is actually the fourth largest municipality in Arizona. We serve as 281,000 citizens. So it's a fairly large enterprise. We're a billion dollar enterprise. And it got to the point where the, uh, cybersecurity concerns were at such a point that they elected to bring in their first chief information security officer. And I managed to, uh, be the lucky gentleman that got that particular position. >>That's awesome. And there's a, is there a CIO as well? Are you guys peers? Do you, how what's the reporting structure look like? >>We have a chief technology officer. Okay. I report through his office mm-hmm <affirmative> and then he reports, uh, directly to the town executive. >>So you guys talk a lot, you I'm sure you present a lot to the, to the board or wherever the governance structure is. Yeah, >>We do. I, I do quarterly report outs to the, I report through to the town council. Uh, let them know exactly what our cyber security posture is like, the type of threats that we're facing. As a matter of fact, I have to do one when I return to, uh, Gilbert from this particular conference. So really looking forward to that one, cuz this is an interesting time to be in cyber security. >>So obviously a sea. So Steve is gonna say, cyber's the number one priority, but I would say the CTO is gonna say the, say the same thing I would say the board is gonna say the same thing. I would also say Steve, that, uh, cyber and cyber resilience is probably the number one topic here at the show. When you walk around and you see the cyber demonstrations, the security demonstrations, they're packed, it's kind of your focus. Um, it's a good call. >>Yeah. <laugh> I'm the luckiest guy in storage, right? <laugh> um, yeah, there hasn't I in the last 24 months, I don't think that there's been a, a meeting that I've been to with a customer, no matter who's in the room where, uh, cyber resiliency, cybersecurity hasn't come up. I mean, it is, it is one of the hot topics in last night. I mean, Michael was just here. Uh, Michael Dell was just here last night. He came into the showroom floor, he came back, he took a look at what we were offering for cyber capabilities and was impressed. And, and so, so that's really good. >>Yeah. So I noticed, you know, when I talked to a lot of CIOs in particular, they would tell me that the pre pandemic, their cyber resiliency was very Dr. Focused, right. They really, it really wasn't an organizational resilience. It was a, if there's an oh crap moment, they could get it back in theory. And they sort of rethought that. Do you see you that amongst your peers, Tony? >>I think so. I think that people are quickly starting to understand that you just can't focus on, in, on protecting yourself from something that you think may never happen. The reality is that you're likely to see some type of cyber event, so you better be prepared for it. And you protect yourself against that. So plan for resiliency plan with making sure that you have the right people in place that can take that challenge on, because it's not a matter of if it's a matter of when >>I would imagine. Well, Steve, you and I have talked about this, that, you know, the data protection business used to be, we used to call it backup in recovery and security, which is a whole different animal, but they're really starting to come together. It's kind of an Adjay. I, I know you've got this, uh, Maverick report that, that you want to talk about. What, what is that as a new Gartner research? I, I'm not familiar with it. >>Yeah. So it's some very interesting Gartner research and what I think, and I'd be curious to, Tony's take on, especially after that last question is, you know, a lot of people are, are spending a lot of money to keep the bad actors out. Right. And Gardner's philosophy on this whole, um, it's, it's, you're going to get hacked. So embrace the breach, that's their report. Right. So what they're suggesting is you're spending a lot of money, but, but we're witnessing a lot of attacks still coming in. Are you prepared to recover that when it happens? Right. And so their philosophy is it's time to start thinking about the recovery aspects of, you know, if, if they're gonna get through, how do you handle that? Right. >>Well, so you got announcements this week, big one of the big four, I guess, or big five cyber recovery vault. It's been, you're enhancing that you guys are talking things like, you know, air gaps and so forth. Give us the overview of the news there. >>Yeah. So there's, uh, cyber recovery vault for AWS for the cloud. There is, uh, a lot of stuff we're doing with, uh, cyber recovery vault for, uh, Aw, uh, Azure also, right along with the cyber sense technology, which is the technology that scans the data. Once it comes in from the backup to ensure that it clean and can be recovered and you can feel confident that your recoveries look good, right? So now, now you can do that OnPrem, or you can do it through a colo. You can do it with in the cloud, or you can, uh, ask Dell technologies with our apex business services to help provide cyber recovery services wherever for you at your co at yet OnPrem or for you from the cloud. So it's kind of giving the customer, allowing them to keep that freedom of choice of how they want to operate, but provide them those same recovery capabilities. >>So Tony, give us paint us a picture without giving away too much for the bad guys. How, how you approach this, maybe are you using some of these products? What's your sort of infrastructure look like? >>Yeah. Without giving away the state secrets, um, we are heavily invested in the cyber recovery vault and cyber sense. Uh, it plays heavily in our strategy. We wanna make sure we have a safe Harbor for our data. And that's something that, that the Dell power protect cyber recovery vault provides to us. Uh, we're exceptionally excited about the, the development that's going on, especially with apex. We're looking at that, and that has really captured our imagination. It could be a game changer for us as a town because we're, we're a small organization transitioning to a midsize organization and what apex provides and what the Dell cyber recovery vault provides to us. Putting those two together gives us the elasticity we need as a small organization to expand quickly and deal with our internal data concerns. >>So cyber recovery as a service is what you're interested in. Let me ask you a question. Are you interested in a managed service or are you interested in managing it yourself? >>That's a great question, personally. I would prefer that we went with managed services. I think that from a manager's perspective, you get a bigger bang for the buck going with managed services. You have people that work with that technology all the time. You don't have to ramp people up and develop that expertise in house. You also then have that peace of mind that you have more people that are doing the services and it acts as a force multiplier for you. So from a dollar and cents perspective, it's the way that you want to go. When I start talking to my internal people, of course, there's that, that sense of fear that comes with the unknown and especially outsourcing that type of critical infrastructure, the there's some concern there, but I think that with education, with exposure, to some of the things that we get from the managed service, it makes sense for everybody to go that >>Route and, and you can, I presume sort of POC it and then expand it and then get more comfortable with it and then say, okay, when it's hardened and ready now, this is the, the Def facto standard across the organization. >>I suspect we'll end up in a hybrid environment to begin with where we'll some assets on site, and then we'll have some assets in the cloud. And that's again, where apex will be that, that big linchpin for us and really make it all work. How >>Important are air gaps? >>Oh, they're incredibly, incredibly, uh, needed right now. You cannot have true data of security without having an air gap. A lot of the ransomware that we see moves laterally through your organization. So if you have, uh, all your data backed up in the same data center that your, your backups and your primary data sources are in odds are they're all gonna get owned at the same time. So having that air gap solution in there is critical to having the peace of mind that allows the CISO to sleep at night. >>I always tell my crypto and NFT readers, this doesn't apply to data centers. You gotta air back air, air gap, your crypto, you know, when you're NFT. So how do you guys Steve deal with, with air gap? Can you explain the solutions? >>So in the, in the cyber recovery vault itself, it is driven through, uh, you've got one, uh, power protect, uh, appliance on one, one side in your data center, and then wherever your, your, your vaulted area is, whether it be a colo, whether it be on pre wherever it might be. Uh, we create a connection between between the two that is one directional, right? So we send the data to that vault. We call it the vault and, you know, we replicate a copy of your backup data. Once it lives over there, we make a copy of that data. And then what we do is with the cyber sense technology that Tony was talking about, we scan that data and we validate it against, with a whole cyber sense is built on IML machine learning. We look at a couple hundred different kind of profiles that come through and compare it to the, to the day before as backup and the day before that and understand kind of what's changing. >>And is it changing the right way? Right? Like there might be some reasons it it's supposed to change that way. Right. But things that look anomalous, we send up a warning when we let the people know that, you know, whoever's monitoring, something's going on. You might want to take a look. And then based on that, if there's whatever's happening in the environment, we have the ability to then recover that data back to the, to the original system. You can use the vault as a, as a clean room area, if you want to send people to it, depending on kind of what's going on in, in, in your main data center. So there's a lot of things we do to protect that. Do >>You recommend, like changing the timing of when you take, you know, snapshots or you do the same time every day, it's gotta create different patterns or >>I'll tell you that's, that's one thing to keep the, keep the hackers on their tow, right? It it's tough to do operationally, right? Because you kind that's processes. But, but the reality is if you really are that, uh, concerned about attacks, that makes a lot of sense, >>Tony, what's the CISOs number one challenge today? >>Uh, I, it has to be resilience. It has to be making sure your organization that if or when they get hit, that you're able to pick the pieces back up and get the operation back up as quickly and efficiently as possible. Making sure that the, the mission critical data is immediately, uh, recoverable and be able to be put back into play. >>And, and what's the biggest challenge or best practice in terms of doing that? Obviously the technology, the people, the process >>Right now, I would probably say it's it's people, uh, we're going through the, the, um, a period of, of uncertainty in the marketplace when it comes to trying to find people. So it is difficult to find the right people to do certain things, which is why managed services is so important to an organization of our size and, and what we're trying to do, where we are, are incorporating such big ideas. We need those manager services because we just can't find the bodies that can do some of this work. >>You got an interesting background, you a PhD in psychology, you're an educator, you're a golf pro and you're a CISO. I I've never met anybody like you, Tony <laugh>. So, thanks for coming on, Steve, give you the last word. >>Well, I think I, I think one of the things that Tony said, and I wanted to parlay this a little bit, uh, from that Gartner report, I even talked about people is so critical when it comes to cyber resiliency and that sort of thing. And one of the things I talked about in that embraced the breach report is as you're looking to hire staff for your environment, right, you wanna, you know, a lot of people might shy away from hiring that CSO that got fired because they had a cyber event. Right, right. Oh, maybe they didn't do their job. But the reality is, is those folks, because this is very new. I mean, of course we've been talking about cyber for a couple of years, but, but getting that experience under your belt and understanding what happens in the event. I mean, there are a lot of companies that run things like cyber ranges, resiliency, ranges to put people through the paces of, Hey, this is what have happens when an event happens and are you prepared to respond? I think there's a big set of learning lessons that happens when you go through one of those events and it helps kind of educate the people about what's needed. >>It's a great point. Failure used to mean fire right in this industry. And, and today it's different. The adversary is very well armed and quite capable and motivated that learning even during, even when you fail, can be applied to succeed in the future or not fail, I guess there's no such thing as success in your business. Guys. Thanks so much for coming on the cube. Really appreciate your time. Thank you. Thanks very >>Much. >>All right. And thank you for watching the cubes coverage of Dell tech world 2022. This is Dave Valenti. We'll be back with John furrier, Lisa Martin and David Nicholson. Two days of wall to wall coverage left. Keep it with us.
SUMMARY :
This is year 13 for covering either, you know, EMC world or, uh, Dell world. Well, thank you for having me here. Are you guys peers? I report through his office mm-hmm <affirmative> and then he reports, So you guys talk a lot, you I'm sure you present a lot to the, to the board or wherever the governance structure is. As a matter of fact, I have to do one when I return to, uh, So Steve is gonna say, cyber's the number one priority, I mean, it is, it is one of the hot topics in last night. Do you see you that amongst your peers, Tony? I think that people are quickly starting to understand that you just can't focus Well, Steve, you and I have talked about this, that, you know, the data protection business used to be, especially after that last question is, you know, a lot of people are, are spending a lot of things like, you know, air gaps and so forth. So it's kind of giving the customer, allowing them to keep that freedom of How, how you approach this, that the Dell power protect cyber recovery vault provides to us. Are you interested in a managed service or are you interested in it's the way that you want to go. Route and, and you can, I presume sort of POC it and then expand it and then get more comfortable I suspect we'll end up in a hybrid environment to begin with where we'll some assets on So if you have, uh, all your data backed up in the same data center that your, So how do you guys Steve deal with, with air gap? you know, we replicate a copy of your backup data. if you want to send people to it, depending on kind of what's going on in, in, in your main data center. But, but the reality is if you really are that, uh, concerned about attacks, Uh, I, it has to be resilience. the right people to do certain things, which is why managed services is so important to an organization You got an interesting background, you a PhD in psychology, you're an educator, I think there's a big set of learning lessons that happens when you go through one of those events that learning even during, even when you fail, can be applied to succeed in the And thank you for watching the cubes coverage of Dell tech world 2022.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Steve | PERSON | 0.99+ |
David Nicholson | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
Steve Kenon | PERSON | 0.99+ |
Tony Bryson | PERSON | 0.99+ |
Dave Valenti | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Steve Kenniston | PERSON | 0.99+ |
Vegas | LOCATION | 0.99+ |
Gardner | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Gilbert | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John furrier | PERSON | 0.99+ |
Gilbert | LOCATION | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Arizona | LOCATION | 0.99+ |
Michael Dell | PERSON | 0.99+ |
Two days | QUANTITY | 0.99+ |
The Storage Alchemist | ORGANIZATION | 0.99+ |
last night | DATE | 0.99+ |
Tony Bryston | PERSON | 0.99+ |
281,000 citizens | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
this week | DATE | 0.98+ |
apex | ORGANIZATION | 0.97+ |
Alchemist | ORGANIZATION | 0.96+ |
today | DATE | 0.96+ |
fourth largest municipality | QUANTITY | 0.96+ |
Maverick | PERSON | 0.96+ |
Dell Technologies | ORGANIZATION | 0.95+ |
OnPrem | ORGANIZATION | 0.95+ |
one side | QUANTITY | 0.94+ |
billion dollar | QUANTITY | 0.93+ |
Beckett | PERSON | 0.9+ |
last 24 months | DATE | 0.89+ |
one thing | QUANTITY | 0.88+ |
EMC | ORGANIZATION | 0.85+ |
first chief information | QUANTITY | 0.84+ |
pandemic | EVENT | 0.83+ |
lot of money | QUANTITY | 0.79+ |
2022 | DATE | 0.79+ |
NFT | ORGANIZATION | 0.78+ |
multi-year | QUANTITY | 0.75+ |
Azure | TITLE | 0.69+ |
CISO | ORGANIZATION | 0.63+ |
Town | LOCATION | 0.63+ |
officer | QUANTITY | 0.62+ |
big | QUANTITY | 0.59+ |
hundred | QUANTITY | 0.58+ |
couple of years | QUANTITY | 0.58+ |
money | QUANTITY | 0.51+ |
couple | QUANTITY | 0.5+ |
Tony Bishop, Digital Realty | Dell Technologies World 2022
(upbeat music) >> I'm Dave Nicholson and welcome to Dell Technologies World 2022. I'm delighted to be joined by Tony Bishop. Tony is senior vice president, enterprise strategy at Digital Realty. Tony, welcome to theCUBE. >> Thank you, Dave. Happy to be here. >> So Tony, tell me about your role at Digital Realty and give us a little background on Digital Realty and what you do. >> Absolutely, so my job is to figure out how to make our product and experience relevant for enterprises and partners alike. Digital Realty is probably one of the best kept secrets in the industry. It's the largest provider of multi-tenant data center capacity in the world, over 300 data centers, 50 submetros, 26 countries, six continents. So it's a substantial provider of data center infrastructure capacity to hyperscale clouds to the largest enterprise in the world and everywhere in between. >> So what's the connection with Dell? What are you guys doing with Dell? >> I think it's going to be a marriage made in heaven in terms of the partnership. You think of Dell as the largest leading provider of critical IT infrastructure for companies around the world. They bring expertise in building the most relevant performant efficient infrastructure, combine that with the largest most relevant full spectrum capability provider of data center capacity. And together you create this integrated pre-engineered kind of experience where infrastructure can be delivered on demand, secure and compliant, performant and efficient and really unlock the opportunity that's trapped in the world around data. >> So speaking of data, you have a unique view at Digital Realty because you're seeing things in aggregate, in a way that maybe a single client wouldn't be seeing them. What are some of the trends and important things we need to be aware of as we move forward from a data center, from an IT perspective, frankly. >> Yeah, it's an excellent question. The good part of the vantage point is we see emerging trends as they start to unfold 'cause you have the most unique diverse set of customers coming together and coming together, almost organized like in a community effect because you have them connecting and attaching to each other's infrastructure sharing data. And what we've seen is in explosion in data being created, data being processed, aggregated, stored, and then being enriched. And it's really around that, what we call the data creation life cycle, where what we're seeing is that data then needs to be shared across many different devices, applications, systems, companies, users, and that ends up creating this new type of workflow driven world that's very intelligent and is going to cause a radical explosion in all our eyes of needing more infrastructure and more infrastructure faster and more infrastructure as a service. >> Yeah, when you talk about data and you talk about all of these connectivity points and communication points, talk about how some of those are explained to us. Some of these are outside of your facilities and some of them are within your facilities. In this virtualized abstracted world we live in it's easy to think that everything lives in our endpoint mobile device but talk about how that gravity associated with data affects things moving forward. >> Absolutely, glad you brought up about the mobile device because I think it's probably the easiest thing to attach to, to think about how the mobile device has radically liberated and transformed end users and in versions of mobile devices, even being sensors, not just people on a mobile phone proliferating everywhere. So that proliferation of these endpoints that are accessing and coming over different networks mobile networks, wifi networks, corporate networks, all end up generating data that then needs to be brought together and processed. And what we found is that we've found a study that we've been spending multiple years and multiple millions of dollars building into an index in a tool called the Data Gravity Index where we've been able to quantify not only this data creation life cycle, but how big and how fast and how it creates a gravitational effect because as more data gets shared with more applications, it becomes very localized. And so we've now measured and predicted for 700 mentors around the world where that data gravity effect is occurring and it's affecting every industry, every enterprise, and it's going to fundamentally change how infrastructure needs to be architected because it needs to become data centric. It used to be connectivity centric but with these mobile phones and endpoints going everywhere you have to create a meeting place. And it has to be a meeting place where the data comes together and then systems and services are brought and user traffic comes in and out of. >> So in other words, despite your prowess in this space you guys have yet to solve the speed of light issue and the cost of bandwidth moving between sites. So is it fair to say that in an ideal world you could have dozens of actually different customers, separate entities that are physically living in data center locations that are built and posted and run by Digital Realty, communicating with one another. So when these services are communicating instead of communicating over a hundred miles or a thousand miles, it's like one side of the chicken wire fence to the other, not that you use chicken wire in your data center but you get the point, is that fair. >> It is, it's like the mall analogy, right? You're building these data malls and everybody's bringing their relevant infrastructure and then using private secure connections between each other and then enabling the ability for data to be exchanged, enriched and new business be conducted. So no, physics hasn't been solved, Dave, just to add to that. And what we're finding is it's not just physics. One of the other things that we're continuing to see and hear from customers and that we continue to study as a trend is regulations, compliance and security are becoming as big a factors as physics is. So it's not just physics and cost which I agree with what you're saying but there's also these other dimensions that's in effect in placement, connectivity in the management of data and infrastructure, basically, in all major metros around the world where companies do business and providers support them, or customers come to meet them both physically and digitally. It's an interesting trend, right? I think a number of the industrians call it a digital twin where there's a virtual version and of a digital version and a physical version and that's probably the best way to think of us, is that secure meeting place where each can have their own secure infrastructure of what's being digitized but actually being placed physically. >> Yeah, that's interesting. When you look at this from the Dell, Digital Realty partnership perspective we know here at theCUBE that Dell is trying to make consumption of what they build, very, very simple for end user customers. Removing the complexity of the underlying hardware. There's a saying that the hardware doesn't matter anymore. You hear things referred to as serverless or no code, low code, those sort of abstract away from the reality of what's going on under the covers. But APEX, as an example from Dell allows things to be consumed as operational expense, dramatically simplifying the process of consuming that hardware. Now, if you go down to almost the concrete layer where Digital Realty starts up, you're looking at things like density and square footage and power consumption, right? >> Yep. >> So tell me, you mentioned infrastructure. Tell me about the kind of optimization from a hardware standpoint that you expect to see from Dell. >> Yeah, in the data center, the subset of an industry, they call it digital or mission critical infrastructure, the space, the power, the secure housing, how do you create physical isolation? How do you deal with cooling and containment? How do you deal with different physical loads? 'Cause some of the more dense computers likely working with Dell and some of the various semiconductors that Dell takes and wraps into intelligent compute and storage blocks, the specialized processing for our use cases like artificial intelligence and machine learning, they run very fast, they generate a lot of heat and they consume a lot of power. So that means you have to be very smart about the critical infrastructure and the type of server infrastructure storage coming together where the heat can be quickly removed. The power is obviously distributed to it, so it can run as constant and as fast as possible to unlock insights and processing. And then you also need to be able to deal with things like, hey, the cabling between the server and the storage has to be that when you're running parallel calculations that there's an equal distance between the cabling. Well, if I don't think about how I'm physically bringing the server storage and all of that together and then having space that can accommodate and ensure the equal cabling in the layout, oh and then handle these very heavy physical computers. So that physical load into the floor, it becomes very problematic. So it's hidden, most people don't understand that engineering but that's the partnership that why we're excited about with Dell is you're bringing all that critical expertise of supporting all those various types of use cases of infrastructure combinations and then combining the engineering understanding of how do I build for the right performance, the right density, the right TCO and also do it where physical layout of having things in proximity and in a contiguous space can then be the way to unlock processing of data and connecting to others. >> Yeah, so from an end user perspective, I don't need to care about any of what you just said. All I heard was wawawawawa (chuckles). I will consume my APEX delivered Dell by the drink, as a service, as OPEX, however I want to consume it. But I can rest assured that Digital Realty and Dell are actually taking care of those meaningful things that are happening under the hood. Maybe I'm revealing my long term knuckle dragging hardware guy credentials when I just get that little mentioning. >> (indistinct) you got it, performance secure compliant and I don't need to worry about it. The two of you're taking care of it and you're taking care of it for me. And every major mentor around the world delivered in the experience it needs to be delivered in. >> So from the Digital Realty point of view, what are the things that not necessarily keep you up at night worrying, but sort of wake you up in the morning early with a sense of renewed opportunity when it comes to the data center space, a lot of people would think, well we're in the era of cloud, no one's building any data centers except for monster cloud players. But that's definitely not the case, is it? There's a demand for what you folks are building and delivering. So first, what's the opportunity look like and then what are the constraints that are out there? Is it dirt, is it power? What are the constraints you face? >> We have probably all the above, is the shortest answer, right? So we're not wawawa, right Dave? But what we are is the opportunity is huge because it's not one platform, there's many platforms there isn't one business that exists today that doesn't use many applications, doesn't consume many different services both internally and externally, and doesn't generate a ton of data that they may not even know where it is. So that's the exciting part. And that continues to force a requirement that says I need to be able to connect to all those clouds which you can do at our platform but I also need to be able to put infrastructure or the storage of data next to it and in between it. So it's like an integration approach that says if I think physical first think physical that's within logical proximity to where I have employees, customers, partners, I have business presence. That's what drives us, and in our industry continues to grow both. And we see it in our own business. It's a double digit growth rate for both commercial oriented enterprises and service providers in the telco cloud, or content kind of space. So it's kind of like a best of both worlds. I think that's what gets us excited. If I should take a second part of the question, what ends up boring is like all of us, it is a physical world, physical world start with, do we have enough power? Is it durable, sustainable and secure? Is it available? Do we have the right connectivity options. Keeping things available is a full-time job, making it so that you can accommodate local nuances when you start going in different regions and countries and metros there's a lot of regional policy compliance or market specific needs that have to be factored in. But you're still trying to deliver that consistent physical availability and experience. So it's a good problem to have but it's a critical infrastructure problem that I would put in the same kind of bucket as power companies, energy companies, telecommunication companies, because it's a meeting place for all of that. >> So you've been in this business, not just at Digital Realty but you you've been in this part of the IT world for a while. >> Yeah. >> How has the persona of a customer for a Digital Realty changed over time? Have we seen the kind of consolidation that people would expect in this space in terms of fewer but larger customers coming in and seeking floor space? >> Well, I think it's been the opposite of what probably people predict. And I pause there intentionally being very candid and open. And it's probably why that using data as the proxy to understand, is that it's a many to many world that's only getting bigger, not smaller. As much as companies consolidate, there's more that appear. Innovation is driving new businesses and new industries or the digitization of old industries which is then creating a whole multiplier effect. So what we're seeing is we're actually seeing a rapid uptake in the enterprise side of our business which is why I'm here in driving that. That really was much more nominal five years ago for being the provider of the space and capabilities for telcos and large hyperscalers continues to go because it's not like a once and done, it's I need to do this in many places. I need to continue to bring as there's a push towards the edge, I need to be able to create meeting places for all of it. And so to us, we're seeing a constant growth in more companies becoming customers on the enterprise side more enterprises deploying in more places solving more use cases. And more service providers figuring out new ways to monetize by bringing their infrastructure and making an accessibility to be connected to on our platform. >> So if I'm here hearing you right, you're saying that people who believe that we are maybe a few years away from everything being in a single cloud are completely off base. >> Mmh hmm. >> That is not the direction that we're heading, from your view, right? >> We love our cloud customers, they're going to continue to grow. But it's not all going to one cloud. I think what you would see is, that you would see where a great way to assess that and break it down is enterprise IT, Gartner's Forecast 4.2, four and a half trillion a year in spend, less than a third of that's hitting public cloud. So there's a long tail first of all, it's not going to one cloud of people. There's like seven or eight major players and then you go, okay, well, what do I do if it's not in seven or eight major players? Well, then I need to put it next to it. Oh, that's why we'll go to a Digital Realty. >> Makes a lot of sense. Tony Bishop, Digital Realty. Thanks for joining us on theCUBE. Have a great Dell Technologies World. For me, Dave Nicholson, stay tuned more live coverage from Dell Technologies World 2022 as we resume in just a moment. (soft music)
SUMMARY :
I'm delighted to be joined by Tony Bishop. Happy to be here. and what you do. capacity in the world, I think it's going to be What are some of the and is going to cause a radical and you talk about all of and it's going to fundamentally change and the cost of bandwidth and that's probably the There's a saying that the Tell me about the kind of optimization the storage has to be any of what you just said. and I don't need to worry about it. What are the constraints you face? and service providers in the telco cloud, but you you've been in as the proxy to understand, So if I'm here hearing you right, and then you go, okay, well, what do I do Makes a lot of sense.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tony | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Tony Bishop | PERSON | 0.99+ |
seven | QUANTITY | 0.99+ |
Digital Realty | ORGANIZATION | 0.99+ |
six continents | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
700 mentors | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
26 countries | QUANTITY | 0.99+ |
one cloud | QUANTITY | 0.99+ |
50 submetros | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
over 300 data centers | QUANTITY | 0.99+ |
over a hundred miles | QUANTITY | 0.99+ |
dozens | QUANTITY | 0.98+ |
five years ago | DATE | 0.98+ |
one platform | QUANTITY | 0.98+ |
second part | QUANTITY | 0.98+ |
less than a third | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
four and a half trillion a year | QUANTITY | 0.97+ |
millions of dollars | QUANTITY | 0.97+ |
One | QUANTITY | 0.95+ |
each | QUANTITY | 0.95+ |
both worlds | QUANTITY | 0.95+ |
one | QUANTITY | 0.95+ |
eight major players | QUANTITY | 0.95+ |
a thousand miles | QUANTITY | 0.95+ |
single cloud | QUANTITY | 0.93+ |
one business | QUANTITY | 0.93+ |
one side | QUANTITY | 0.91+ |
Dell Technologies World 2022 | EVENT | 0.89+ |
today | DATE | 0.89+ |
twin | QUANTITY | 0.86+ |
telco | ORGANIZATION | 0.85+ |
double | QUANTITY | 0.83+ |
single client | QUANTITY | 0.82+ |
Digital | ORGANIZATION | 0.78+ |
OPEX | ORGANIZATION | 0.76+ |
Technologies World 2022 | EVENT | 0.73+ |
Forecast 4.2 | TITLE | 0.72+ |
APEX | ORGANIZATION | 0.72+ |
Data Gravity Index | OTHER | 0.7+ |
ton of data | QUANTITY | 0.69+ |
Dell Technologies World | ORGANIZATION | 0.66+ |
theCUBE | ORGANIZATION | 0.6+ |
Breaking Analysis: Enterprise Technology Predictions 2022
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> The pandemic has changed the way we think about and predict the future. As we enter the third year of a global pandemic, we see the significant impact that it's had on technology strategy, spending patterns, and company fortunes Much has changed. And while many of these changes were forced reactions to a new abnormal, the trends that we've seen over the past 24 months have become more entrenched, and point to the way that's coming ahead in the technology business. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we welcome our partner and colleague and business friend, Erik Porter Bradley, as we deliver what's becoming an annual tradition for Erik and me, our predictions for Enterprise Technology in 2022 and beyond Erik, welcome. Thanks for taking some time out. >> Thank you, Dave. Luckily we did pretty well last year, so we were able to do this again. So hopefully we can keep that momentum going. >> Yeah, you know, I want to mention that, you know, we get a lot of inbound predictions from companies and PR firms that help shape our thinking. But one of the main objectives that we have is we try to make predictions that can be measured. That's why we use a lot of data. Now not all will necessarily fit that parameter, but if you've seen the grading of our 2021 predictions that Erik and I did, you'll see we do a pretty good job of trying to put forth prognostications that can be declared correct or not, you know, as black and white as possible. Now let's get right into it. Our first prediction, we're going to go run into spending, something that ETR surveys for quarterly. And we've reported extensively on this. We're calling for tech spending to increase somewhere around 8% in 2022, we can see there on the slide, Erik, we predicted spending last year would increase by 4% IDC. Last check was came in at five and a half percent. Gardner was somewhat higher, but in general, you know, not too bad, but looking ahead, we're seeing an acceleration from the ETR September surveys, as you can see in the yellow versus the blue bar in this chart, many of the SMBs that were hard hit by the pandemic are picking up spending again. And the ETR data is showing acceleration above the mean for industries like energy, utilities, retail, and services, and also, notably, in the Forbes largest 225 private companies. These are companies like Mars or Koch industries. They're predicting well above average spending for 2022. So Erik, please weigh in here. >> Yeah, a lot to bring up on this one, I'm going to be quick. So 1200 respondents on this, over a third of which were at the C-suite level. So really good data that we brought in, the usual bucket of, you know, fortune 500, global 2000 make up the meat of that median, but it's 8.3% and rising with momentum as we see. What's really interesting right now is that energy and utilities. This is usually like, you know, an orphan stock dividend type of play. You don't see them at the highest point of tech spending. And the reason why right now is really because this state of tech infrastructure in our energy infrastructure needs help. And it's obvious, remember the Florida municipality break reach last year? When they took over the water systems or they had the ability to? And this is a real issue, you know, there's bad nation state actors out there, and I'm no alarmist, but the energy and utility has to spend this money to keep up. It's really important. And then you also hit on the retail consumer. Obviously what's happened, the work from home shift created a shop from home shift, and the trends that are happening right now in retail. If you don't spend and keep up, you're not going to be around much longer. So I think the really two interesting things here to call out are energy utilities, usually a laggard in IT spend and it's leading, and also retail consumer, a lot of changes happening. >> Yeah. Great stuff. I mean, I recall when we entered the pandemic, really ETR was the first to emphasize the impact that work from home was going to have, so I really put a lot of weight on this data. Okay. Our next prediction is we're going to get into security, it's one of our favorite topics. And that is that the number one priority that needs to be addressed by organizations in 2022 is security and you can see, in this slide, the degree to which security is top of mind, relative to some other pretty important areas like cloud, productivity, data, and automation, and some others. Now people may say, "Oh, this is obvious." But I'm going to add some context here, Erik, and then bring you in. First, organizations, they don't have unlimited budgets. And there are a lot of competing priorities for dollars, especially with the digital transformation mandate. And depending on the size of the company, this data will vary. For example, while security is still number one at the largest public companies, and those are of course of the biggest spenders, it's not nearly as pronounced as it is on average, or in, for example, mid-sized companies and government agencies. And this is because midsized companies or smaller companies, they don't have the resources that larger companies do. Larger companies have done a better job of securing their infrastructure. So these mid-size firms are playing catch up and the data suggests cyber is even a bigger priority there, gaps that they have to fill, you know, going forward. And that's why we think there's going to be more demand for MSSPs, managed security service providers. And we may even see some IPO action there. And then of course, Erik, you and I have talked about events like the SolarWinds Hack, there's more ransomware attacks, other vulnerabilities. Just recently, like Log4j in December. All of this has heightened concerns. Now I want to talk a little bit more about how we measure this, you know, relatively, okay, it's an obvious prediction, but let's stick our necks out a little bit. And so in addition to the rise of managed security services, we're calling for M&A and/or IPOs, we've specified some names here on this chart, and we're also pointing to the digital supply chain as an area of emphasis. Again, Log4j really shone that under a light. And this is going to help the likes of Auth0, which is now Okta, SailPoint, which is called out on this chart, and some others. We're calling some winners in end point security. Erik, you're going to talk about sort of that lifecycle, that transformation that we're seeing, that migration to new endpoint technologies that are going to benefit from this reset refresh cycle. So Erik, weigh in here, let's talk about some of the elements of this prediction and some of the names on that chart. >> Yeah, certainly. I'm going to start right with Log4j top of mind. And the reason why is because we're seeing a real paradigm shift here where things are no longer being attacked at the network layer, they're being attacked at the application layer, and in the application stack itself. And that is a huge shift left. And that's taking in DevSecOps now as a real priority in 2022. That's a real paradigm shift over the last 20 years. That's not where attacks used to come from. And this is going to have a lot of changes. You called out a bunch of names in there that are, they're either going to work. I would add to that list Wiz. I would add Orca Security. Two names in our emerging technology study, in addition to the ones you added that are involved in cloud security and container security. These names are either going to get gobbled up. So the traditional legacy names are going to have to start writing checks and, you know, legacy is not fair, but they're in the data center, right? They're, on-prem, they're not cloud native. So these are the names that money is going to be flowing to. So they're either going to get gobbled up, or we're going to see some IPO's. And on the other thing I want to talk about too, is what you mentioned. We have CrowdStrike on that list, We have SentinalOne on the list. Everyone knows them. Our data was so strong on Tanium that we actually went positive for the first time just today, just this morning, where that was released. The trifecta of these are so important because of what you mentioned, under resourcing. We can't have security just tell us when something happens, it has to automate, and it has to respond. So in this next generation of EDR and XDR, an automated response has to happen because people are under-resourced, salaries are really high, there's a skill shortage out there. Security has to become responsive. It can't just monitor anymore. >> Yeah. Great. And we should call out too. So we named some names, Snyk, Aqua, Arctic Wolf, Lacework, Netskope, Illumio. These are all sort of IPO, or possibly even M&A candidates. All right. Our next prediction goes right to the way we work. Again, something that ETR has been on for awhile. We're calling for a major rethink in remote work for 2022. We had predicted last year that by the end of 2021, there'd be a larger return to the office with the norm being around a third of workers permanently remote. And of course the variants changed that equation and, you know, gave more time for people to think about this idea of hybrid work and that's really come in to focus. So we're predicting that is going to overtake fully remote as the dominant work model with only about a third of the workers back in the office full-time. And Erik, we expect a somewhat lower percentage to be fully remote. It's now sort of dipped under 30%, at around 29%, but it's still significantly higher than the historical average of around 15 to 16%. So still a major change, but this idea of hybrid and getting hybrid right, has really come into focus. Hasn't it? >> Yeah. It's here to stay. There's no doubt about it. We started this in March of 2020, as soon as the virus hit. This is the 10th iteration of the survey. No one, no one ever thought we'd see a number where only 34% of people were going to be in office permanently. That's a permanent number. They're expecting only a third of the workers to ever come back fully in office. And against that, there's 63% that are saying their permanent workforce is going to be either fully remote or hybrid. And this, I can't really explain how big of a paradigm shift this is. Since the start of the industrial revolution, people leave their house and go to work. Now they're saying that's not going to happen. The economic impact here is so broad, on so many different areas And, you know, the reason is like, why not? Right? The productivity increase is real. We're seeing the productivity increase. Enterprises are spending on collaboration tools, productivity tools, We're seeing an increased perception in productivity of their workforce. And the CFOs can cut down an expense item. I just don't see a reason why this would end, you know, I think it's going to continue. And I also want to point out these results, as high as they are, were before the Omicron wave hit us. I can only imagine what these results would have been if we had sent the survey out just two or three weeks later. >> Yeah. That's a great point. Okay. Next prediction, we're going to look at the supply chain, specifically in how it's affecting some of the hardware spending and cloud strategies in the future. So in this chart, ETRS buyers, have you experienced problems procuring hardware as a result of supply chain issues? And, you know, despite the fact that some companies are, you know, I would call out Dell, for example, doing really well in terms of delivering, you can see that in the numbers, it's pretty clear, there's been an impact. And that's not not an across the board, you know, thing where vendors are able to deliver, especially acute in PCs, but also pronounced in networking, also in firewall servers and storage. And what's interesting is how companies are responding and reacting. So first, you know, I'm going to call the laptop and PC demand staying well above pre-COVID norms. It had peaked in 2012. Pre-pandemic it kept dropping and dropping and dropping, in terms of, you know, unit volume, where the market was contracting. And we think can continue to grow this year in double digits in 2022. But what's interesting, Erik, is when you survey customers, is despite the difficulty they're having in procuring network hardware, there's as much of a migration away from existing networks to the cloud. You could probably comment on that. Their networks are more fossilized, but when it comes to firewalls and servers and storage, there's a much higher propensity to move to the cloud. 30% of customers that ETR surveyed will replace security appliances with cloud services and 41% and 34% respectively will move to cloud compute and storage in 2022. So cloud's relentless march on traditional on-prem models continues. Erik, what do you make of this data? Please weigh in on this prediction. >> As if we needed another reason to go to the cloud. Right here, here it is yet again. So this was added to the survey by client demand. They were asking about the procurement difficulties, the supply chain issues, and how it was impacting our community. So this is the first time we ran it. And it really was interesting to see, you know, the move there. And storage particularly I found interesting because it correlated with a huge jump that we saw on one of our vendor names, which was Rubrik, had the highest net score that it's ever had. So clearly we're seeing some correlation with some of these names that are there, you know, really well positioned to take storage, to take data into the cloud. So again, you didn't need another reason to, you know, hasten this digital transformation, but here we are, we have it yet again, and I don't see it slowing down anytime soon. >> You know, that's a really good point. I mean, it's not necessarily bad news for the... I mean, obviously you wish that it had no change, would be great, but things, you know, always going to change. So we'll talk about this a little bit later when we get into the Supercloud conversation, but this is an opportunity for people who embrace the cloud. So we'll come back to that. And I want to hang on cloud a bit and share some recent projections that we've made. The next prediction is the big four cloud players are going to surpass 167 billion, an IaaS and PaaS revenue in 2022. We track this. Observers of this program know that we try to create an apples to apples comparison between AWS, Azure, GCP and Alibaba in IaaS and PaaS. So we're calling for 38% revenue growth in 2022, which is astounding for such a massive market. You know, AWS is probably not going to hit a hundred billion dollar run rate, but they're going to be close this year. And we're going to get there by 2023, you know they're going to surpass that. Azure continues to close the gap. Now they're about two thirds of the size of AWS and Google, we think is going to surpass Alibaba and take the number three spot. Erik, anything you'd like to add here? >> Yeah, first of all, just on a sector level, we saw our sector, new survey net score on cloud jumped another 10%. It was already really high at 48. Went up to 53. This train is not slowing down anytime soon. And we even added an edge compute type of player, like CloudFlare into our cloud bucket this year. And it debuted with a net score of almost 60. So this is really an area that's expanding, not just the big three, but everywhere. We even saw Oracle and IBM jump up. So even they're having success, taking some of their on-prem customers and then selling them to their cloud services. This is a massive opportunity and it's not changing anytime soon, it's going to continue. >> And I think the operative word there is opportunity. So, you know, the next prediction is something that we've been having fun with and that's this Supercloud becomes a thing. Now, the reason I say we've been having fun is we put this concept of Supercloud out and it's become a bit of a controversy. First, you know, what the heck's the Supercloud right? It's sort of a buzz-wordy term, but there really is, we believe, a thing here. We think there needs to be a rethinking or at least an evolution of the term multi-cloud. And what we mean is that in our view, you know, multicloud from a vendor perspective was really cloud compatibility. It wasn't marketed that way, but that's what it was. Either a vendor would containerize its legacy stack, shove it into the cloud, or a company, you know, they'd do the work, they'd build a cloud native service on one of the big clouds and they did do it for AWS, and then Azure, and then Google. But there really wasn't much, if any, leverage across clouds. Now from a buyer perspective, we've always said multicloud was a symptom of multi-vendor, meaning I got different workloads, running in different clouds, or I bought a company and they run on Azure, and I do a lot of work on AWS, but generally it wasn't necessarily a prescribed strategy to build value on top of hyperscale infrastructure. There certainly was somewhat of a, you know, reducing lock-in and hedging the risk. But we're talking about something more here. We're talking about building value on top of the hyperscale gift of hundreds of billions of dollars in CapEx. So in addition, we're not just talking about transforming IT, which is what the last 10 years of cloud have been like. And, you know, doing work in the cloud because it's cheaper or simpler or more agile, all of those things. So that's beginning to change. And this chart shows some of the technology vendors that are leaning toward this Supercloud vision, in our view, building on top of the hyperscalers that are highlighted in red. Now, Jerry Chan at Greylock, they wrote a piece called Castles in the Cloud. It got our thinking going, and he and the team at Greylock, they're building out a database of all the cloud services and all the sub-markets in cloud. And that got us thinking that there's a higher level of abstraction coalescing in the market, where there's tight integration of services across clouds, but the underlying complexity is hidden, and there's an identical experience across clouds, and even, in my dreams, on-prem for some platforms, so what's new or new-ish and evolving are things like location independence, you've got to include the edge on that, metadata services to optimize locality of reference and data source awareness, governance, privacy, you know, application independent and dependent, actually, recovery across clouds. So we're seeing this evolve. And in our view, the two biggest things that are new are the technology is evolving, where you're seeing services truly integrate cross-cloud. And the other big change is digital transformation, where there's this new innovation curve developing, and it's not just about making your IT better. It's about SaaS-ifying and automating your entire company workflows. So Supercloud, it's not just a vendor thing to us. It's the evolution of, you know, the, the Marc Andreessen quote, "Every company will be a SaaS company." Every company will deliver capabilities that can be consumed as cloud services. So Erik, the chart shows spending momentum on the y-axis and net score, or presence in the ETR data center, or market share on the x-axis. We've talked about snowflake as the poster child for this concept where the vision is you're in their cloud and sharing data in that safe place. Maybe you could make some comments, you know, what do you think of this Supercloud concept and this change that we're sensing in the market? >> Well, I think you did a great job describing the concept. So maybe I'll support it a little bit on the vendor level and then kind of give examples of the ones that are doing it. You stole the lead there with Snowflake, right? There is no better example than what we've seen with what Snowflake can do. Cross-portability in the cloud, the ability to be able to be, you know, completely agnostic, but then build those services on top. They're better than anything they could offer. And it's not just there. I mean, you mentioned edge compute, that's a whole nother layer where this is coming in. And CloudFlare, the momentum there is out of control. I mean, this is a company that started off just doing CDN and trying to compete with Okta Mite. And now they're giving you a full soup to nuts with security and actual edge compute layer, but it's a fantastic company. What they're doing, it's another great example of what you're seeing here. I'm going to call out HashiCorp as well. They're more of an infrastructure services, a little bit more of an open-source freemium model, but what they're doing as well is completely cloud agnostic. It's dynamic. It doesn't care if you're in a container, it doesn't matter where you are. They recently IPO'd and they're down 25%, but their data looks so good across both of our emerging technology and TISA survey. It's certainly another name that's playing on this. And another one that we mentioned as well is Rubrik. If you need storage, compute, and in the cloud layer and you need to be agnostic to it, they're another one that's really playing in this space. So I think it's a great concept you're bringing up. I think it's one that's here to stay and there's certainly a lot of vendors that fit into what you're describing. >> Excellent. Thank you. All right, let's shift to data. The next prediction, it might be a little tough to measure. Before I said we're trying to be a little black and white here, but it relates to Data Mesh, which is, the ideas behind that term were created by Zhamak Dehghani of ThoughtWorks. And we see Data Mesh is really gaining momentum in 2022, but it's largely going to be, we think, confined to a more narrow scope. Now, the impetus for change in data architecture in many companies really stems from the fact that their Hadoop infrastructure really didn't solve their data problems and they struggle to get more value out of their data investments. Data Mesh prescribes a shift to a decentralized architecture in domain ownership of data and a shift to data product thinking, beyond data for analytics, but data products and services that can be monetized. Now this a very powerful in our view, but they're difficult for organizations to get their heads around and further decentralization creates the need for a self-service platform and federated data governance that can be automated. And not a lot of standards around this. So it's going to take some time. At our power panel a couple of weeks ago on data management, Tony Baer predicted a backlash on Data Mesh. And I don't think it's going to be so much of a backlash, but rather the adoption will be more limited. Most implementations we think are going to use a starting point of AWS and they'll enable domains to access and control their own data lakes. And while that is a very small slice of the Data Mesh vision, I think it's going to be a starting point. And the last thing I'll say is, this is going to take a decade to evolve, but I think it's the right direction. And whether it's a data lake or a data warehouse or a data hub or an S3 bucket, these are really, the concept is, they'll eventually just become nodes on the data mesh that are discoverable and access is governed. And so the idea is that the stranglehold that the data pipeline and process and hyper-specialized roles that they have on data agility is going to evolve. And decentralized architectures and the democratization of data will eventually become a norm for a lot of different use cases. And Erik, I wonder if you'd add anything to this. >> Yeah. There's a lot to add there. The first thing that jumped out to me was that that mention of the word backlash you said, and you said it's not really a backlash, but what it could be is these are new words trying to solve an old problem. And I do think sometimes the industry will notice that right away and maybe that'll be a little pushback. And the problems are what you already mentioned, right? We're trying to get to an area where we can have more assets in our data site, more deliverable, and more usable and relevant to the business. And you mentioned that as self-service with governance laid on top. And that's really what we're trying to get to. Now, there's a lot of ways you can get there. Data fabric is really the technical aspect and data mesh is really more about the people, the process, and the governance, but the two of those need to meet, in order to make that happen. And as far as tools, you know, there's even cataloging names like Informatica that play in this, right? Istio plays in this, Snowflake plays in this. So there's a lot of different tools that will support it. But I think you're right in calling out AWS, right? They have AWS Lake, they have AWS Glue. They have so much that's trying to drive this. But I think the really important thing to keep here is what you said. It's going to be a decade long journey. And by the way, we're on the shoulders of giants a decade ago that have even gotten us to this point to talk about these new words because this has been an ongoing type of issue, but ultimately, no matter which vendors you use, this is going to come down to your data governance plan and the data literacy in your business. This is really about workflows and people as much as it is tools. So, you know, the new term of data mesh is wonderful, but you still have to have the people and the governance and the processes in place to get there. >> Great, thank you for that, Erik. Some great points. All right, for the next prediction, we're going to shine the spotlight on two of our favorite topics, Snowflake and Databricks, and the prediction here is that, of course, Databricks is going to IPO this year, as expected. Everybody sort of expects that. And while, but the prediction really is, well, while these two companies are facing off already in the market, they're also going to compete with each other for M&A, especially as Databricks, you know, after the IPO, you're going to have, you know, more prominence and a war chest. So first, these companies, they're both looking pretty good, the same XY graph with spending velocity and presence and market share on the horizontal axis. And both Snowflake and Databricks are well above that magic 40% red dotted line, the elevated line, to us. And for context, we've included a few other firms. So you can see kind of what a good position these two companies are really in, especially, I mean, Snowflake, wow, it just keeps moving to the right on this horizontal picture, but maintaining the next net score in the Y axis. Amazing. So, but here's the thing, Databricks is using the term Lakehouse implying that it has the best of data lakes and data warehouses. And Snowflake has the vision of the data cloud and data sharing. And Snowflake, they've nailed analytics, and now they're moving into data science in the domain of Databricks. Databricks, on the other hand, has nailed data science and is moving into the domain of Snowflake, in the data warehouse and analytics space. But to really make this seamless, there has to be a semantic layer between these two worlds and they're either going to build it or buy it or both. And there are other areas like data clean rooms and privacy and data prep and governance and machine learning tooling and AI, all that stuff. So the prediction is they'll not only compete in the market, but they'll step up and in their competition for M&A, especially after the Databricks IPO. We've listed some target names here, like Atscale, you know, Iguazio, Infosum, Habu, Immuta, and I'm sure there are many, many others. Erik, you care to comment? >> Yeah. I remember a year ago when we were talking Snowflake when they first came out and you, and I said, "I'm shocked if they don't use this war chest of money" "and start going after more" "because we know Slootman, we have so much respect for him." "We've seen his playbook." And I'm actually a little bit surprised that here we are, at 12 months later, and he hasn't spent that money yet. So I think this prediction's just spot on. To talk a little bit about the data side, Snowflake is in rarefied air. It's all by itself. It is the number one net score in our entire TISA universe. It is absolutely incredible. There's almost no negative intentions. Global 2000 organizations are increasing their spend on it. We maintain our positive outlook. It's really just, you know, stands alone. Databricks, however, also has one of the highest overall net sentiments in the entire universe, not just its area. And this is the first time we're coming up positive on this name as well. It looks like it's not slowing down. Really interesting comment you made though that we normally hear from our end-user commentary in our panels and our interviews. Databricks is really more used for the data science side. The MLAI is where it's best positioned in our survey. So it might still have some catching up to do to really have that caliber of usability that you know Snowflake is seeing right now. That's snowflake having its own marketplace. There's just a lot more to Snowflake right now than there is Databricks. But I do think you're right. These two massive vendors are sort of heading towards a collision course, and it'll be very interesting to see how they deploy their cash. I think Snowflake, with their incredible management and leadership, probably will make the first move. >> Well, I think you're right on that. And by the way, I'll just add, you know, Databricks has basically said, hey, it's going to be easier for us to come from data lakes into data warehouse. I'm not sure I buy that. I think, again, that semantic layer is a missing ingredient. So it's going to be really interesting to see how this plays out. And to your point, you know, Snowflake's got the war chest, they got the momentum, they've got the public presence now since November, 2020. And so, you know, they're probably going to start making some aggressive moves. Anyway, next prediction is something, Erik, that you and I have talked about many, many times, and that is observability. I know it's one of your favorite topics. And we see this world screaming for more consolidation it's going all in on cloud native. These legacy stacks, they're fighting to stay relevant, but the direction is pretty clear. And the same XY graph lays out the players in the field, with some of the new entrants that we've also highlighted, like Observe and Honeycomb and ChaosSearch that we've talked about. Erik, we put a big red target around Splunk because everyone wants their gold. So please give us your thoughts. >> Oh man, I feel like I've been saying negative things about Splunk for too long. I've got a bad rap on this name. The Splunk shareholders come after me all the time. Listen, it really comes down to this. They're a fantastic company that was designed to do logging and monitoring and had some great tool sets around what you could do with it. But they were designed for the data center. They were designed for prem. The world we're in now is so dynamic. Everything I hear from our end user community is that all net new workloads will be going to cloud native players. It's that simple. So Splunk has entrenched. It's going to continue doing what it's doing and it does it really, really well. But if you're doing something new, the new workloads are going to be in a dynamic environment and that's going to go to the cloud native players. And in our data, it is extremely clear that that means Datadog and Elastic. They are by far number one and two in net score, increase rates, adoption rates. It's not even close. Even New Relic actually is starting to, you know, entrench itself really well. We saw New Relic's adoption's going up, which is super important because they went to that freemium model, you know, to try to get their little bit of an entrenched customer base and that's working as well. And then you made a great list here, of all the new entrants, but it goes beyond this. There's so many more. In our emerging technology survey, we're seeing Century, Catchpoint, Securonix, Lucid Works. There are so many options in this space. And let's not forget, the biggest data that we're seeing is with Grafana. And Grafana labs as yet to turn on their enterprise. Elastic did it, why can't Grafana labs do it? They have an enterprise stack. So when you look at how crowded this space is, there has to be consolidation. I recently hosted a panel and every single guy on that panel said, "Please give me a consolidation." Because they're the end users trying to actually deploy these and it's getting a little bit confusing. >> Great. Thank you for that. Okay. Last prediction. Erik, might be a little out of your wheelhouse, but you know, you might have some thoughts on it. And that's a hybrid events become the new digital model and a new category in 2022. You got these pure play digital or virtual events. They're going to take a back seat to in-person hybrids. The virtual experience will eventually give way to metaverse experiences and that's going to take some time, but the physical hybrid is going to drive it. And metaverse is ultimately going to define the virtual experience because the virtual experience today is not great. Nobody likes virtual. And hybrid is going to become the business model. Today's pure virtual experience has to evolve, you know, theCUBE first delivered hybrid mid last decade, but nobody really wanted it. We did Mobile World Congress last summer in Barcelona in an amazing hybrid model, which we're showing in some of the pictures here. Alex, if you don't mind bringing that back up. And every physical event that we're we're doing now has a hybrid and virtual component, including the pre-records. You can see in our studios, you see that the green screen. I don't know. Erik, what do you think about, you know, the Zoom fatigue and all this. I know you host regular events with your round tables, but what are your thoughts? >> Well, first of all, I think you and your company here have just done an amazing job on this. So that's really your expertise. I spent 20 years of my career hosting intimate wall street idea dinners. So I'm better at navigating a wine list than I am navigating a conference floor. But I will say that, you know, the trend just goes along with what we saw. If 35% are going to be fully remote. If 70% are going to be hybrid, then our events are going to be as well. I used to host round table dinners on, you know, one or two nights a week. Now those have gone virtual. They're now panels. They're now one-on-one interviews. You know, we do chats. We do submitted questions. We do what we can, but there's no reason that this is going to change anytime soon. I think you're spot on here. >> Yeah. Great. All right. So there you have it, Erik and I, Listen, we always love the feedback. Love to know what you think. Thank you, Erik, for your partnership, your collaboration, and love doing these predictions with you. >> Yeah. I always enjoy them too. And I'm actually happy. Last year you made us do a baker's dozen, so thanks for keeping it to 10 this year. >> (laughs) We've got a lot to say. I know, you know, we cut out. We didn't do much on crypto. We didn't really talk about SaaS. I mean, I got some thoughts there. We didn't really do much on containers and AI. >> You want to keep going? I've got another 10 for you. >> RPA...All right, we'll have you back and then let's do that. All right. All right. Don't forget, these episodes are all available as podcasts, wherever you listen, all you can do is search Breaking Analysis podcast. Check out ETR's website at etr.plus, they've got a new website out. It's the best data in the industry, and we publish a full report every week on wikibon.com and siliconangle.com. You can always reach out on email, David.Vellante@siliconangle.com I'm @DVellante on Twitter. Comment on our LinkedIn posts. This is Dave Vellante for the Cube Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (mellow music)
SUMMARY :
bringing you data-driven and predict the future. So hopefully we can keep to mention that, you know, And this is a real issue, you know, And that is that the number one priority and in the application stack itself. And of course the variants And the CFOs can cut down an expense item. the board, you know, thing interesting to see, you know, and take the number three spot. not just the big three, but everywhere. It's the evolution of, you know, the, the ability to be able to be, and the democratization of data and the processes in place to get there. and is moving into the It is the number one net score And by the way, I'll just add, you know, and that's going to go to has to evolve, you know, that this is going to change anytime soon. Love to know what you think. so thanks for keeping it to 10 this year. I know, you know, we cut out. You want to keep going? This is Dave Vellante for the
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Erik | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Jerry Chan | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
March of 2020 | DATE | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Zhamak Dehghani | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Marc Andreessen | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
2022 | DATE | 0.99+ |
Tony Baer | PERSON | 0.99+ |
Alex | PERSON | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
8.3% | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
December | DATE | 0.99+ |
38% | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
November, 2020 | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
Last year | DATE | 0.99+ |
Erik Porter Bradley | PERSON | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
41% | QUANTITY | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
Mars | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
40% | QUANTITY | 0.99+ |
30% | QUANTITY | 0.99+ |
Netskope | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
Grafana | ORGANIZATION | 0.99+ |
63% | QUANTITY | 0.99+ |
Arctic Wolf | ORGANIZATION | 0.99+ |
167 billion | QUANTITY | 0.99+ |
Slootman | PERSON | 0.99+ |
two companies | QUANTITY | 0.99+ |
35% | QUANTITY | 0.99+ |
34% | QUANTITY | 0.99+ |
Snyk | ORGANIZATION | 0.99+ |
70% | QUANTITY | 0.99+ |
Florida | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
4% | QUANTITY | 0.99+ |
Greylock | ORGANIZATION | 0.99+ |
Dell APEX Data Storage Services + Equinix Colo | CUBE Conversation
(upbeat music) >> Welcome to this CUBE conversation. I'm Lisa Martin, pleased to welcome back Caitlin Gordon, vice president of product management at Dell Technologies. Caitlin is great to see you again though virtually. >> This is good to see you as well, Lisa. >> Tony Frank is here as well, global client executive at Equinix. Tony, welcome to the program. >> Thank you, Lisa. Good to be here. >> We're going to be talking about some news. Caitlin, let's go back. You and I, before we started filming, we're trying to remember when did we last see each other of course it was virtual, but just refresh the audience's memories with respect to the catalyst for Dell to go into this as a service offering. >> Yeah, I think we're all losing track of the virtual months here. (all laughs) Go back in time a little bit. Yeah, exactly right. The first actual APEX offers really came to market in the spring in May with our APEX Data Storage Services. And at that time we actually had preannounced, but we're going to talk more about here today with our partnership with Equinix. But if we take a step back, why did Dell talk about this as a project and is now really investing for the future, it really connects to a lot of the conversations you guys have here in theCUBE, right? What's happening in IT. What's happening with our customers is that they're looking for outcomes. Yes, they're predominantly, still buying products today, but they're really starting to look for outcomes. They want to be buying those outcomes. They want to have something that is an operating expense for them. Something that we can take, we as the technology, the infrastructure experts can take on, the management can take on the ownership of that equipment and really enable them to focus on their business. So really consumption-based, usage-based infrastructure, all being elastic resources that Dell owns and manages, but customers can still operate. And of course, one of the first offers was APEX Data Storage Services. >> Talk to me a little bit Caitlin about outcomes. I just want to understand what Dell actually is focusing on for its customers, where outcomes are concerned. >> Yeah, and it's interesting as a company, it's a pretty big transformation for us. We have always been a product led company, but it's not really about the product. So when I talk about APEX Data Storage Services, you're not going to hear me mention a product name or anything 'cause what it's about, it's about offering our customers what they're actually looking for, which is in the case of storage, they're looking for, I want either block or file storage. I want a certain tier. So it was at a higher performance. I want a certain capacity of it. And I want to commit for some period of time. That's it. Those are the questions we ask. There's no product names and sizing, and it's really, really simple. And that's what we're talking about. It's really the beginning of really trying to deliver customers an outcome versus a product. >> Got it. APEX Data Storage Services, this is Dell's efforts to supply managed file and block storage of services. Talk to me about that. Talk to me about some of the things, how does it enable the fast time to value as little as 14 days for your customers? >> Yeah, so there's a lot of really important things we're doing here. We're not just taking the products we had and kind of packaging it up in a new financial model. There's a lot of parts to this. It all centers around the APEX console. So the APEX console is where you start, begin really ongoing manage and experience these outcomes from Dell Technologies. And it starts with selecting the service you want. So if you select that you want APEX Data Store and Services, you've pick your type, you pick your tier, pick your time period, and you pick your size, right? And then you're off to the races. And we will be able to, what we're committing to do is delivering that in this view and as little as 14 days, time to value. And for us, one of the benefits of being able to do this as Dell, we have always really thrived in our supply chain and the ability to have that predictability and being able to deliver things as a service, including storage is really something it's just an extension of what we've been able to do there. And our partnerships with Equinix actually is going to enable us to even look at that further and see what we can do to really bring value to our customers as quickly as possible. >> That speed, that time to value is even more important as we've lived through the last tumultuous 18 months. Let's break into the news now. You guys pre-announced some the partnership with Equinix, but talk to me about with respect to APEX Data Storage Services, what's being announced? Caitlin, let's start with you and then Tony will bring you into the conversation. >> Yeah, absolutely. So again, we first released the APEX Data Storage Services in the spring, and we're already enhancing that today. Couple of exciting things. So geographic expansion, so expanding out into additional regions across Europe and Asia who are expanding our support. So we talked about the fact that it's block and it's file. Well, actually on our file capability here in our file outcome, we now will have the ability to support an S3 protocol. So you can do that app development and run your operations all off the same platform. So that's an exciting new expansion there. We're also enabling partners sell-through. Our partners are really, really important, whether they're resell partners or technology partners like Equinix. Partner sell-through is another important piece. And of course, most important for our conversation today is the exciting new announcement of the fact that we are going to offer APEX Data Storage Services available on Equinix facilities all integrated into the APEX console. The fifth question is now, where do you want your APEX Data Storage Services? You can select a Dell provided facility and you get the choice to select the different cities of Equinix locations. And we're going to provide that single bill and experience through Dell, but on the backend, we've worked with Tony and team for months to get this, to be a very streamlined experience for our customers. >> Tony talk to us about this from Equinix's perspective. >> Yeah, we're very excited. Caitlin, thank you very much and Lisa, thank you. Very excited to be part of what Dell is doing with APEX and enable enterprise customers to deliver, to get delivered to them at Equinix facilities storage as a service, in addition to additional Equinix capabilities, really enabling agile enterprises to distribute their infrastructure across the world, leveraging Dell product, Dell management, and to get access to partners, to their other footprints, to cloud service providers, et cetera, all within the footprint of Equinix. >> So Caitlin, APEX Data Storage Service in secure colo facilities in conjunction with Equinix, talk to me about what the reception has been from from Dell customers. >> Yeah, it's been really fun. I mean, first of all, when we thought about data center providers are a critical part of us being able to deliver that outcome to customers. And when we looked at the ecosystem of partners it was very clear who we were going to be partnering with. Equinix was really the best partner for us. We all already had been working together in many different ways and matters taking this partnership to the next level and what we've already seen actually all the way since earlier this year, we've had many, many customers coming to us either first step first, it was separately, but now it's actually jointly to say, I'm having a challenge and here's my challenge. And most of these conversations start in one way. I'm getting out of the data center business. And the nice thing for us is that between our two companies, we can solve that, right? We have the combination of the right infrastructure. And with our partnership with Equinix, you partner that with the data center services, you can actually give that full outcome to a customer and we were solving those separately and now we're solving those together. >> The spokes wanting to get out of the data center. If we think about in the last year and a half, how inaccessible the data centers were, Tony, I want to get your perspective on the colo market. And as we look at IT today, the acceleration of it and digital and cloud adoption and getting out of the data centers that we've seen in the last 18 months, help me understand why the colo market is really key today for the future of IT. >> Absolutely Lisa. So focusing on outcomes as Caitlin outlined earlier is a really important part of really how IT has managed this pandemic and thinking about how do we solve for this vast distributed set of employees that we used to have aggregated in a single building or multiple buildings, but really spearheaded in a couple locations. And all of a sudden everything became out in rural America, out rural Europe, out everywhere, employees were spread out and they needed a way for as an IT team to bring together the network, the security and the ability to be very agile and focus on an outcome as opposed to, how am I going to get this next piece of equipment, this next storage device, this next compute system in my data center and add the cooling and the power and all the things that they have to think about. And really it was an outcome. How do I give my employees the best experience possible, my partners that access they need to my systems and the various ways that we interact together? So the colo market as a whole has been really changed dramatically through the whole pandemic. And if you didn't know Zoom two years ago, it's your best friend now, or it's your least favorite way to do business. But the only way we have to do business in the world that we're living in today. >> A lifeline, and here we are zooming with each other right now. Let's talk about, Tony I want to stick with you. Let's talk about this partnership between Dell and Equinix. Why is this such a compelling partnership? Talk to me about that from Equinix's perspective. >> Yeah, we're so excited to be able to be partnered with the number one pro leader and provider of infrastructure and infrastructure services. We have really been a niche provider for the last 15 years. We're a 21, 22 year old company, and we focused on developing ecosystems. And those were at first the internet. We brought the telecom providers together to make the internet work. And then on top of that started enabling things like digital trading, also enabling all sorts of ad exchanges so that you see the banner ads that apply to you when you go to a website. And so we were well known within those ecosystems that we worked within, but getting out to the enterprise has been a big challenge. And Dell brings us those relationships. They bring that expertise, that trusted advisor kind of role. And so being able to extend our sales team and really leverage what Dell has done across small, medium, large, and very large enterprise is a real win for us. And it allows us to achieve a scale that we wouldn't have been able to achieve by ourselves without breaking the bank, trying to hire people and trying to get them familiar with those customers. And so Dell brings us into that. We're able to complete what I call the three legged stool, the compute, the storage, and now the networking aspects can be dealt with in a single conversation around an outcome. And APEX gives us a chance to really be agilely available as Dell's customers define that for themselves and to deploy the infrastructure where they need it and to achieve those outcomes that they're trying to get to. >> So some ostensible value that Equinix is getting by the Dell partnership, He said, pulling us into the enterprise, facilitating that scale. Caitlin, talk to me about this from Dell's lens. What makes this partnership so compelling for Dell and the future of it as a service? >> I'm laughing as Tony's talking through that 'cause it tees it up perfectly. From Dell's perspective, when we looked at data center providers, one of the challenges for us is we're a global IT provider. So we had to partner with someone who understood what it meant to operate and manage data centers at a global scale and locations all over the world. There were very short list to choose from. Once you look at it from that lens, but more importantly, and what Tony, you already hit on, the networking, the interconnects that we have in our partnership with Equinix are incredibly valuable. 'Cause ultimately, although customers start going to a colo facility because they want out of the data center business, they don't want to be managing racks and power and cooling and all of that, oftentimes what actually the value they find once they get there and why they stay and grow is those interconnects, the ability to connect to other tenants in these facilities and the ability to connect into the hyperscalers and the richness of those interconnects with Equinix was truly unmatched. And that's why it's been such an important partnership for us. >> Tony, what's been some feedback from the Equinix customer base. >> Well, it's really funny. I spent half of my time trying to figure out with my team, how we're going to solve for storage as a service, the next geography, the next product. But the other half of the time is spent who on the team is the right person to go pair up with the Dell team and get the Dell team brought into a discussion. And it's going by directionally right now, the volume is picking up. The velocity is picking up and it really seems to be like a snowball, just going down the hill, it's just picking up speed. And with every interaction we're gaining trust with each other, we're gaining competence in what the message is and how to solve for it. And we're working out the various ways, in a predictive way, what are most people asking for? But the wonderful thing is there's custom availability to figure out a solution for just about any problem that the IT or infrastructure focus teams in the enterprise are looking to solve for. >> Tony, sticking with you for a final question or two, in terms of the last few months, or have you seen any industries in particular that are really readily adopting this? We've seen so much change across industries in the last 18 months. I'm just curious if you're seeing any industries that are particularly taking advantage of this capability in this partnership. >> Yeah, I would point to highly regulated industries thinking about financial, thinking about governments, and it's not just a U.S. situation. This is a global situation and data sovereignty where that matters to a particular customer is really important that they keep that data in the geography that it needs to stay in is defined by the different governments around the world. You see the financial industry has been a first mover towards electronic trading and really disrupted thankfully prior to the pandemic, the way trading was done because in-person trading wasn't going to happen anymore. And so in the highly regulated world, the healthcare, the financials. Those folks are definitely looking for a solution that has certifications across the board to help them say to their auditors we've got this covered. That's something we're able to bring to the table for Dell. And then it also helps that the first movers sort of towards a digital infrastructure were insurance companies and others that saw the value of leveraging partnerships and bringing together things as quickly and fast as they could without deploying huge global networks to try and make it all happen. They can instead virtually meet in the same room, leveraging our software defined network called Equinix Fabric. It's been a real win for the regulated industries certainly. >> Got it, thanks for that Tony. Caitlin, last question for you. This is Dell managed so single bill from Dell, where can the viewers go to learn more information about this new partnership? >> Delltechnologies.com/apex, you'll learn more about all things, APEX, really the APEX consoles, the experience. So you can learn more about it there. And then of course, your friendly neighborhood, Dell EMC rep, and or channel partner. Now that we've got that partner enablement as well. >> Delltechnologies.com/apex. Caitlin and Tony, thank you so much for joining us today sharing the exciting news about what's new with Dell and Equinix, and what's in it for your customers and your partners. We appreciate your time. >> Thanks Lisa. >> Thank you Lisa. >> For Caitlin Gordon and Tony Frank, I'm Lisa Martin, you've been watching theCUBE conversation. (upbeat music)
SUMMARY :
Caitlin is great to see Tony Frank is here as well, Good to be here. but just refresh the audience's and really enable them to Talk to me a little bit but it's not really about the product. how does it enable the fast time to value and the ability to have but talk to me about with respect in the spring, and we're Tony talk to us about this and to get access to partners, talk to me about what And the nice thing for us is and getting out of the data centers and all the things that Talk to me about that from and now the networking and the future of it as a service? and the ability to connect from the Equinix customer base. and it really seems to be in terms of the last few months, in the geography that it needs to stay in to learn more information really the APEX consoles, the experience. sharing the exciting news about what's new For Caitlin Gordon and
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Caitlin | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Tony | PERSON | 0.99+ |
Equinix | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Caitlin Gordon | PERSON | 0.99+ |
APEX Data Storage Services | ORGANIZATION | 0.99+ |
Tony Frank | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
two companies | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
14 days | QUANTITY | 0.99+ |
Asia | LOCATION | 0.99+ |
fifth question | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
APEX Data Storage Services | ORGANIZATION | 0.99+ |
today | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
two years ago | DATE | 0.98+ |
single | QUANTITY | 0.98+ |