Accelerating Your Data-Driven Journey: The HPE Ezmeral Strategic Road Ahead | HPE Ezmeral Day 2021


 

>>Yeah. Okay. Now we're going to dig deeper into HPE Ezmeral and try to better understand how it's going to impact customers. And with me to do that are Robert Christiansen, vice president of strategy in the office of the CTO, and Kumar Sreekanti, chief technology officer and head of software, both, of course, with Hewlett Packard Enterprise. Gentlemen, welcome to the program. Thanks for coming on. >>Good seeing you. Thanks for having us. >>Always great to see you guys. So, Ezmeral, kind of an interesting name. Catchy name. But Kumar, what exactly is HPE Ezmeral?

>>Yeah, it's indeed a catchy name. Our branding team has done a fantastic job. I believe it's actually a derivation from esmeralda, the Spanish for emerald, which is supposed to have some very mystical powers. Um, and they derived Ezmeral from there. We all thought it was interesting when we initially heard it. Um, so Ezmeral is our effort to take all the software, the platform tools that HPE has, provide this modern operating platform to the customers, and put it under one brand. It has a modern container platform. It has a persistent store and a distributed data fabric. It has ML Ops, as many of our customers are familiar with. So think of it as a container platform offering for the modernization and digitalization of our customers.

>>Yeah, it's interesting that you talk about a platform. A lot of times people think product, but you're positioning it as a platform, so it has broader implications.

>>That's very true. As customers are thinking of digitalization and modernization, containers and microservices, as you know, have become the staple. So it's actually a container orchestration platform. It offers open-source, proven Kubernetes, as well as a persistent store bolted to it.

>>So, by the way, the emerald, in Spain, I think in the culture it also has immunity powers as well. So immunity from lock-in and all those other terrible diseases. Maybe it helps us with COVID too. Robert, when you talk to customers, what problems do you probe for that Ezmeral can do a good job solving?

>>Yeah, that's a really great question, because a lot of times they don't even know what it is that they're trying to solve for, other than a very narrow use case. But the idea here is to give them a platform by which they can bridge both the public and private environments for what they do in application development, specifically on the data side. They're looking to bring in containerization, which originally got started on the public cloud, or I should say became popular in the public cloud, and has moved its way on premises. Now, Ezmeral really opens the door to three fundamental things. One, how do I maintain an open architecture, like you were referring to, with little or no lock-in of my applications? Two, how do I gain a data fabric, or data consistency in accessing the data, so I don't have to rewrite those applications when I do move them around? And then, lastly, where everybody is heading: the real value is in the AI and ML initiatives, where companies are really bringing out the value of their data and unlocking the data where it is being generated and stored. And so the Ezmeral platform is those multiple pieces that I was talking about, stacked together to deliver those solutions for the client.

>>So Kumar, how does it work? What's the sort of IP or the secret sauce behind it all?
What makes HPE different?

>>Continuing on that theme, uh, the Ezmeral platform is optimized for data-intensive workloads. I would say there are three unique characteristics of this platform. Number one, it provides you the ability to run both stateful and stateless workloads under the same platform. Number two, as we were thinking about it: Kubernetes is open source, and Ezmeral gives you both open-source Kubernetes and the orchestration behind it, so you can provide this hybrid approach that Robert was talking about. And then we actually built the workflows into it. For example, we announced Ezmeral ML Ops, with which our customers can do workflow management, specifically for ML workflows. So the magic, if you want to see the secret sauce, is all the effort that has gone into some of the IP acquisitions that HPE has made over the years: BlueData, MapR, Nimble, InfoSight. All these pieces are coming together and providing a modern digitalization platform for the customers.

>>So these pieces all have a little bit of machine intelligence in them. People used to think of AI as a sort of separate thing; the same was true with containers, right? But now it's getting embedded into the stack. What is the role of machine intelligence or machine learning in Ezmeral?

>>I would take a step back and say, and you know this very well, it's the customers' data, the amount of data that is being generated: 95% or 98% of data is machine generated, it has a serious amount of gravity, and it is sitting at the edge. And we are the only one with an edge-to-cloud data fabric built for this. So number one, we are bringing compute, or the cloud, to the data rather than taking the data to the cloud; it's a cloud-like experience that we provide the customer. Data is not of much value to us if we don't harness it. I said this in one of my blogs: we have gone from the era of collecting data to the era of finding insights in the data. People have used all sorts of analogies for this, like data is the new oil. And now your applications have to be modernized; nobody wants to write an application in a non-microservices fashion, because you want that modernization. So you bring these three things together: you have data gravity, with lots of data at the edge; you have to build AI applications; and you want agility. Those three things, I think, we bring together for the customers.

>>So, Robert, let's stay on customers for a minute. I mean, you know, I want to understand the business impact, the business case. I mean, why should all the, you know, cloud developers have all the fun? You mentioned that you're bridging the cloud and on-prem. When you talk to customers, what do they see as the business impact? What are the real drivers for them?

>>That's a great question, because at the end of the day, and I think a recent survey showed this, cost and performance is still the number one requirement for their workloads. Second is agility, the speed at which they want to move. And so those two are top of mind every time.
But the thing we find with Ezmeral, which is so impactful, is that nobody else brings together the silicon, the hardware, the platform, all of that stacked together and combined, like Ezmeral does with the platforms that we have. Specifically, you know, when we start getting 90, 92, 93% utilization out of AI and ML workloads on very expensive hardware, it really, really is a competitive advantage over a public cloud offering, which does not offer those kinds of services, and the cost models are so significantly different. We do that by collapsing the stack. We take our intellectual property, um, as much of the software pieces as are necessary, so we are closest to the silicon and closest to the applications, bringing it into the hardware itself, meaning that we can interleave the applications, meaning that you can get to true multi-tenancy on a particular platform. That allows you to deliver a cost-optimized solution. So when you talk about the money side: absolutely, there's just nothing out there. And then on the second side, which is agility: um, one of the things that we know today is that applications need to be built in pipelines, right? This is something that has been established now for quite some time, and it's really making its way on premises. And what Kumar was talking about was, how do we modernize? How do we do that? Well, there's going to be something that you want to break into microservices and containers, and there's something you don't. Now, the ones that do that are going to get that speed and motion, et cetera, out of the gate, and they can put that on premises, which is relatively new these days to the on-premises world. So we think both will be the advantage.

>>Okay, I want to unpack that a little bit. So the cost is clearly real? 90-plus percent utilization? I mean, come on. You know, even pre-virtualization, we know what it was like; even with virtualization, you never really got that high. I mean, people would talk about it, but are you really able to sustain that in real-world workloads?

>>Yeah, I think when you make your exchangeable currency into small pieces, you can insert them into many areas. We have one customer who was running 18 containers on a single server, each of those containers running microservices. So if you actually build these microservices, and you have all the anti-affinity rules and the rationing formulas all correct, you can bin-pack these things extremely well. We have seen this. Again, it's not a guarantee; it all depends on your application, and, I mean, as engineers we always want to understand how this can be supported. But it is a very modern utilization of the platform together with the data, and once you know where the data is, it becomes very easy to match those workloads.
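As a concrete illustration of the scheduling mechanics Kumar is alluding to, here is a minimal sketch of the kind of anti-affinity rule and resource "rationing" that make dense bin-packing safe on Kubernetes. This is generic Kubernetes configuration, not Ezmeral-specific, and every name in it is hypothetical.

```yaml
# A hypothetical Deployment showing the two levers Kumar mentions:
# resource requests/limits (the "rationing formulas") and pod anti-affinity.
# Generic Kubernetes, not Ezmeral-specific; all names are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scoring-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: scoring-service
  template:
    metadata:
      labels:
        app: scoring-service
    spec:
      affinity:
        podAntiAffinity:
          # Spread replicas across nodes so one node failure
          # doesn't take out the whole service.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: scoring-service
              topologyKey: kubernetes.io/hostname
      containers:
        - name: scorer
          image: example.com/scorer:1.0   # hypothetical image
          resources:
            requests:             # what the scheduler reserves
              cpu: "500m"
              memory: 512Mi
            limits:               # hard ceiling per container
              cpu: "1"
              memory: 1Gi
```

With requests sized honestly, the scheduler can pack many such containers onto each node, which is how densities like the 18 containers per server mentioned above become sustainable, while the anti-affinity rule keeps replicas of the same service spread apart.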
>>Now, the other piece of the value proposition that I heard, Robert, is that it's basically an integrated stack, so I don't have to cobble together a bunch of open-source components. There are legal implications, and there are obviously performance implications. I would imagine that resonates particularly with the enterprise buyer, because they don't have the time to do all this integration.

>>That's a very good point. So there is an interesting tension: enterprises want open source, so there is no lock-in, but they also need help to implement and deploy and manage it, because they don't have the expertise. And we all know that Kubernetes has actually brought that API, that PaaS-layer, standardization. So what we have done is we've given you the open source, and you write to the Kubernetes APIs, but at the same time the orchestration, the persistent store, the data fabric, and the AI algorithms are all bolted into it. And on top of that, it's available both as licensed software to run on-prem and as the same software running on GreenLake, so you can pay as you go, and we run it for them in a colo or in their own data center.

>>Oh, good, that was one of my later questions. So I can get this as a service, paid by the drink, essentially. I don't have to install a bunch of stuff on-prem and pay a perpetual license.

>>We announced the container platform as a service at the last Discover, and now it's gone to production. So Ezmeral ML Ops is also available: you can run it on-prem on top of the Ezmeral Container Platform, or you can run it inside GreenLake.

>>Robert, are there any specific use-case patterns that you see emerging amongst customers?

>>Yeah, absolutely. There are a couple of them. We have a really nice relationship that we see with any of the Splunk operators that are out there today, right? Splunk containerized their operator, and that operator is the number one operator for Splunk, um, on the IT operations side, for notifications, as well as on the security operations side. So we found that it runs highly effectively on top of Ezmeral, on top of the platforms that we just talked about, that Kumar just talked about. But I want to give a little bit of background on that same operator platform. What the Ezmeral platform has done is that we've been able to make it highly available, active-active, at five nines of availability for that same Splunk operator, on premises, on open-source Kubernetes, which is, as far as I'm concerned, very, very high-end computer science work. You understand how difficult that is. That's number one. Number two: Spark, Spark workloads as a whole. Nobody handles Spark workloads like we do. So we put a container around them, and we put them inside the pipeline, moving people through that basic ML and AI pipeline of getting a model through its system, through its training, and then actually deploying it through our ML Ops pipeline. This is a key fundamental for delivering value in the data space as well. And then, lastly, and this is really important: when you think about the data fabric that we offer, um, the data fabric itself doesn't necessarily have to be bolted to the container platform. The actual data fabric can be deployed underneath a number of our competitors' platforms that don't handle data well. We know that; we know that they don't handle it very well at all. And we get lots and lots of calls from people who say, "Hey, can you take your Ezmeral Data Fabric and solve my large-scale, highly challenging data problems?" We say yes. And then when you're ready for a real, enterprise-ready container platform, we'd be happy to oblige.

>>So you're saying, if I'm inferring correctly, one of the values is that you're simplifying that whole data pipeline and the whole data science, uh, science project, pun intended, I guess.

>>Absolutely.
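To make the Spark-on-Kubernetes pattern Robert describes above more tangible, here is a minimal sketch using the open-source Kubernetes Operator for Apache Spark. It is a generic stand-in for the idea, not HPE's specific packaging, and the image, file, and application names are hypothetical.

```yaml
# A SparkApplication custom resource for the open-source Kubernetes Operator
# for Apache Spark: a generic illustration of running containerized Spark
# jobs on Kubernetes, not HPE's packaging. Names are hypothetical.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: feature-extraction
spec:
  type: Python
  mode: cluster
  image: example.com/spark-py:3.1.1          # hypothetical image
  mainApplicationFile: local:///opt/jobs/extract_features.py
  sparkVersion: "3.1.1"
  driver:
    cores: 1
    memory: "2g"
    serviceAccount: spark
  executor:
    instances: 4
    cores: 2
    memory: "4g"
```

Once applied with kubectl, the operator launches driver and executor pods like any other workload, which is what lets a platform schedule Spark jobs alongside the rest of its containers and slot them into an ML pipeline.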
>>So where does the customer start? I mean, what are the engagements like? Um, what's the starting point?

>>HPE is probably one of the most trusted enterprise suppliers, and has been for many, many years, and we have a phenomenal workforce behind both the products and the services. HPE Pointnext is one of the world's leading support organizations. There are many places to start. The obvious one is that all these services are available on GreenLake, as we just talked about, and customers can start on a pay-as-you-go basis. We have many customers, some of them, actually, grandfathered in from the early days of BlueData and MapR, that are already running, and they improvise on that as they move into their next-generation modernization. Um, you can start with something as simple as the Ezmeral Container Platform with a persistent store, stand up the operation, and implement for as little as $10 to start working. Um, and finally, a big company like HPE, as an enterprise company, has Pointnext services, so it's very easy for the customers to get that support for day-two operations.

>>Thank you for watching, everybody. This is Dave Vellante for theCUBE. Keep it right there for more great content from Ezmeral Day.

Published Date : Mar 17 2021



A Day in the Life of Data with the HPE Ezmeral Data Fabric


 

>>Welcome, everyone, to A Day in the Life of Data with the HPE Ezmeral Data Fabric. The session is being recorded and will be available for replay at a later time, when you want to come back and view it again. Feel free to add any questions that you have into the chat, and Chad and I will be more than willing to answer your questions. And now let me turn it over to Jimmy Bates.

>>Thanks. Uh, let me go ahead and share my screen here and we'll get started. Hey, everyone. Uh, once again, my name is Jimmy Bates. I'm a director of solutions architecture here for HPE Ezmeral in the Americas. Uh, today I'd like to walk you through a journey: how our everyday life is evolving, how everything about our world continues to grow more connected, and how here at HPE we support the data that represents that digital evolution for our customers with the HPE Ezmeral Data Fabric. To start with, let's define that term, data. The concept of data can be simplified to a record of life's events. No matter if it's personal, professional, or mechanical in nature, data is just records that represent and describe what has happened, what is happening, or what we think will happen. And it turns out the more complete a record we have of these events, the easier it is to figure out what comes next. Um, I like to refer to that as the omnipotence protocol.

Um, let's look at this from the personal perspective of two very different people. Let me introduce you to James. He's a native citizen of the digital world. He's been a career professional in the IT world for years. He's always on, always connected. He loves to get all the information he needs on a smartphone. He works constantly with analytics. He predicts what his customers need, what they want, where they are, uh, and how best to reach them. Um, he's fully embraced the use of data in his life. And this is Siska. She's a bit of an opposite to James. She has not yet immigrated to our digital world. She's been dealing with the changes that are prevalent in our times, and she started a new business that gives her customers the option of expressing their personalities in the masks that they wear. She wants to make sure her customers can upload images, logos, and designs in order to deliver that customized mask, uh, to brighten their interactions with others while being safe as they go about their day. But she needs a crash course in digital and the digital journey. She's recently, as most of us have, transitioned from an office culture to a work-from-home culture, and she wants to continue to grow that revenue venture on the side.

At the core of these personalities is a journey that is representative of a common challenge that we're all facing today. Our world has been steadily shrinking as our ability to reach out to one another has steadily increased. We're all on that journey together: to know more about what is happening, to be connected to what our business is doing, to be instantly responsive to our customer needs, and to deliver that personalized service to every individual.
And at Ezmeral, we see this across every industry: the challenge of providing tailored experiences to potential customers in a connected world; to provide constant information on the deliveries that we requested, or provide an easier commute to our destination; to adjust inventories, um, to just-in-time arrival for our fabrications; to identify quality issues in real time and alter the production of each product so it's tailored to the request of the end user; to deliver energy in smarter, more efficient ways, uh, without injury, while protecting the environment; and to identify those, uh, emerging medical threats and deliver personalized treatments safely.

And at the core of all of these changes, all of these different industries, is data. Um, if you look at the major technology trends, um, they've been evolving down this path for some time now. We're well into our cloud journey. The mobile platform world is now just part of our core strategies. IoT is feeding constant streams of data, often over those mobile platforms, and the edge is increasingly just part of our core. All of this, combined with the massive amounts of data that is becoming available through it, is driving autonomous solutions with machine learning and AI. Uh, this is just one aspect of this data journey that we're on, but for success it has got to be paired with action.

Um, well, when we take a look at James and Siska, right, we can start to see, um, with the investments in those actions, um, how, along their journeys, they're realizing their goals. Services efforts are focused on delivering new data-driven applications in new ways: smaller in nature and rapidly iterated, um, to respond to the digital needs of our new world; containerization to deploy and manage those apps anywhere in our connected world, where they need to be secure; a real-time streaming architecture, um, from the beginning, to allow for continual interaction with our changing customer demands; and all of this, especially in our current environment, while running cost-reduction initiatives. This is just the current world that our solutions must live in. Um, with that framework in mind, um, I'd like to take the remainder of our time and walk through some of the use cases where we at HPE help organizations through this journey with the Ezmeral Data Fabric.

Let's start with what's happening in the mobile world. In fact, the HPE Ezmeral Data Fabric is being used by a number of companies to provide infinitely personalized experiences. In this case, it could be James, it could be Siska, it could be anyone that opens up their smartphone in the morning, uh, quickly checking what's transpiring in the world with a selection of curated, relevant articles, images, and videos provided by data-driven algorithmic workloads. All that data, the logs, the recommendations, and the delivery of those recommendations, is handled by a variety of companies using HPE Ezmeral software, um, to provide a very personalized experience for their users. In addition, other companies monitor the service quality of those mobile devices to ensure optimized connectivity as users move throughout their day.
The same is true for digital communication, for video communication, what we're doing right now, especially in these days when it's our primary method of connecting as we deal with limited physical engagements. Um, there's been a clear spike in the usage of these types of services. HPE Ezmeral is helping a number of these companies deliver on real-time telemetry analysis: predicting demand, monitoring latency and user experience, analyzing in real time, and responding with autonomous adjustments to maintain pleasant experiences for all participants involved.

Um, another area where the HPE Ezmeral Data Fabric is playing a crucial role is in the daily experience inside our automobiles. We invest a lot of ourselves in our cars. We expect tailored experiences that help us stay safe and connected as we move from one destination to another. In the areas of autonomous driving and the connected car, a number of major car companies in the world are using our data fabric to take autonomous driving to the next level, effectively collecting all the data from sensors and cameras and then feeding it back into a global data fabric, so that the engineers who develop cars can train the next generation of driving algorithms that make our driving experience safer and more autonomous going forward.

Now let's take a look at a different mode of travel. Uh, the airline industry is being impacted very differently today from the car companies. With our software, uh, we help airlines, travel agencies, and even us as consumers deal with pricing calculations and challenges, uh, with, um, air traffic services. We deal with, um, uh, delivering services around route predictions, on-time arrivals, weather patterns, and tagging and tracking luggage. We help people with flight connections and with figuring out what the best options are for their travel. Uh, we collect mountains of data and secure it in a global data fabric, so it can be provided back in analyzed form. With it, this stressed industry can obtain some very interesting insights and provide competitive offerings and better services to us as travelers.

This is also true for powering biometrics at scale. We work with the biggest biometrics databases in the world, providing the back end for their enormous biometric authentication pursuit. Just to give you a rough idea: biometric authentication is done with a number of different data points, from fingerprints to iris scans to numerous facial features. All of these data points are captured for every individual and uploaded into the database, such that when a user is requesting services, their biometrics can be pulled and validated in seconds. From a scale perspective, they're onboarding 1 million people a day, more than 200 million a year, with a hundred percent business continuity and the option to multi-master in a global data fabric as needed, ensuring that users will have no issues in securely accessing their pension payouts, medical services, or whatever other types of services they may be guaranteed.

Pivoting to a very different industry: even agriculture is being impacted in digital ways. Using the HPE Ezmeral Data Fabric, we help farmers become more digital. We help them predict weather patterns and optimize seed production. We even help seed producers create custom seed for very specific weather and ground conditions.
We combine all of these things to help optimize production and ensure we can feed future generations. In some cases, all of these data sources collected at the edge can be provided back to insurance companies to help farmers issue claims when micro weather patterns affect farms in negative ways. We all benefit from optimized farming, and the HPE Ezmeral Data Fabric is there to assist in that journey. We provide the framework and the workload guidance to collect relevant data, analyze it, and optimize food production. Our customers demonstrate that the agricultural industry is most definitely migrating to our digital world.

Now that we've got the food, we need to ship it, along with everything else, all over the world. Ezmeral software can be found in action in many of the largest logistics companies in the world. I mean, just tracking things with greater efficiency can lead to astounding insights. What flights and ships did the package take? What hands held it along its journey? What weather conditions did it encounter? What customs office did it go through? And how much of it is requested and being delivered? This, along with hundreds of other telemetry points, can be used to provide very accurate trade and economic predictions around what's going on with trade in the world. These data sets are being used very intensively to understand economic conditions and plan for the consequences of future events. We also help answer, uh, more basic questions for shipping containers, like: where is my container located? Is my container still on the correct ship? Uh, surprisingly, uh, this helps cut down on those pesky little events like lost containers.

Um, it's astounding the amount of data that's in DNA, and it's not just the pairs; it's the never-ending patterns found within other patterns. None of it can be fully understood unless the micro is maintained in context with the macro: you can't really understand these small patterns unless you maintain an overall understanding of the entire DNA structure. To help, the HPE Ezmeral Data Fabric can be found across every aspect of the medical field. Most recently it was there providing the software framework to collect genomic sequencing, landing it in the data fabric and empowering connected availability for analysis, to predict and find patterns of significance and to shorten the effort it takes to identify those potential triggers and make things like vaccines become available in record time.
Um, I'd like to thank everyone, uh, for the time that you've given us today. And I'd like to turn it back over and open up the floor for questions at this time, >>Jimmy, here's a question. What are the ways consumers can get started with HPS >>The fabric? Well, um, uh, there's several ways to get started, right? We, we, uh, first off we have software available that you can download that there's extensive documentation and use cases posted on our website. Um, uh, we have services that we offer, like, um, assessment services that can come in and help you assess the, the data challenges that you're having, whether you're, you're just dealing with a scale issue, a security issue, or trying to migrate to a more containerized approach. We have a services to help you come in, assess that aspect. Um, we have a getting started bundles, um, and we have, um, so there's all kinds of services that, that help you get started on your journey. So what >>Does a typical first deployment look like? >>Well, that's, that's a very, very interesting question. Um, a typical first deployment, it really kind of varies depending on where you're at in the material. Are you James? Are you, um, um, Cisco, right? It really depends on, on where you're at in your journey. Um, but a typical deployment, um, is, is, is involved. Uh, we, we like to come in, we we'd like to do workshops, really understand your specific challenges and problems so that we can determine what solutions are best for you. Um, that to take a look at when we kind of settle on that we, we, um, the first deployment, uh, is, um, there's typically, um, a deployment of, uh, a, uh, a service offering, um, w with a software to kind of get you started along the way we kind of bundle that aspect. Um, as you move forward, if you're more mature and you already have existing container solutions, you already have existing, large scale data aspects of it. Um, it's really about the specific use case of your current problem that you're dealing with. Um, every solution, um, is tailored towards the individual challenges and problems that, that each one of us are facing. >>I break, they mentioned as part of the asthma family. So how does data fabric pair with the other solutions within Israel? >>Well, so I like to say there's, um, there, there's, there's three main areas, um, from a software standpoint, um, for when you count some of our, um, offerings with the GreenLake solution, but there are, so there are really four main areas with ESMO. There's the data fabric offering, which is really focused on, on, on, on delivering that data at scale for AI ML workloads for big data workloads for containerized workloads. There is the ESMO container platform, which really solves a lot of, um, some of the same problems, but really focus more on a compute delivery, uh, and a hundred percent Kubernetes environment. We also have security offerings, um, which, which help you take in this containerized world, uh, that help you take the different aspects of, um, securing those applications. Um, so that when the application, the containerized applications move from one framework or one infrastructure from one to the other, it really helps those, the security go with those applications so that they can operate in a zero trust environment. And of course, all of this, uh, options of being available to you, where everything has a service, including the hardware through some of our GreenLake offerings. 
So those are kind of the areas that, uh, um, that pair with the HPE, um, data fabric, uh, when you look at the entire ESMO pro portfolio. >>Well, thanks, Jimmy really appreciate it. That's all the questions we have right now. So is there anything that you'd like to close with? >>Uh, you know, the, um, I I'm, I find it I'm very, uh, I'm honored to be here at HPE. Um, I, I really find it, it's amazing. Uh, as we work with our customers solving some really challenging problems that are core to their business, um, it's, it's always an interesting, um, interesting, um, day in the office because, uh, every problem is different because every problem is tailored to the specific challenges that our customers face. Um, while they're all will well, we will, what we went over today is a lot of the general areas and the general concepts that we're all on together in a journey, but the devil's always in the details. It's about understanding the specific challenges in the organization and, and as moral software is designed to help adapt, um, and, and empower your growth in your, in your company. So that you're focused on your business, in the complexity of delivering services across this connected world. That's what as will takes off your plate so that you don't have to worry about that. It just works, and you can focus on the things that impact your business more directly. >>Okay. Well, we really thank everyone for coming today and hope you learned, uh, an idea about how data fabric can begin to help your business with it. All of a sudden analytics, thank you for coming. Thanks.

Published Date : Mar 17 2021



The Data Drop: Industry Insights | HPE Ezmeral Day 2021


 

(upbeat music) >> Welcome, friends, to HPE Ezmeral's Analytics Unleashed. I couldn't be more excited to have you here today. We have a packed and informative agenda. It's going to give you not just a perspective on what HPE Ezmeral is and what it can do for your organization; you should also leave here with some insights and perspectives that will help you on your edge-to-cloud data journey in general. The lineup we have today is awesome. We have industry experts like Kirk Borne, who's going to talk about the shape this space will take, and key customers and partners who are using Ezmeral technology as a fundamental part of their stack to solve really big, hairy, complex real-world data problems. We will hear from the execs who are leading this effort, to understand the strategy and the roadmap forward, as well as get a sneak peek into the new ISV ecosystem that is hosted in the Ezmeral marketplace. And finally, we have some live music being played in the form of three different demos. It's going to be a fun time, so do jump in and chat with us at any time, or engage with us on Twitter in real time. So grab some coffee, buckle up, and let's get going. (upbeat music)

Getting data right is one of the top priorities for organizations looking to effect digital strategy. So right now we're going to dig into the challenges customers face when trying to deploy enterprise-wide data strategies, and with me to unpack this topic is Kirk Borne, principal data scientist and executive advisor at Booz Allen Hamilton. Kirk, great to see you. Thank you, sir, for coming on the program.

>> Great to be here, Dave.

>> So hey, enterprise-scale data science and engineering initiatives are non-trivial. What do you see as some of the challenges in scaling data science and data engineering ops?

>> The first challenge is just getting it out of the sandbox, because so many organizations say, "Let's do cool things with data," but how do you take it out of that sort of play phase into an operational phase? Being able to do that is one of the biggest challenges, and then being able to enable it for many different use cases creates an enormous challenge, because do you replicate the technology and the team for each individual use case, or can you unify teams and technologies to satisfy all possible use cases? Those are really big challenges for companies and organizations everywhere to think about.

>> What about the idea of, you know, industrializing those data operations? I mean, what does that mean to you? Is there a security connotation, a compliance one? How do you think about it?

>> It's actually all of those. Industrialized, to me, means: how do you not make it a one-off, but make it a reproducible, solid, risk-compliant system that can be reproduced many different times, again using the same infrastructure and the same analytic tools and techniques, but for many different use cases? So we don't have to reinvent the wheel, or reinvent the car, so to speak, every time we need a different type of vehicle. Whether you need to build a car or a truck or a race car, there are some fundamental principles that are common to all of those, and that's what that industrialization is. It includes security and compliance with regulations and all those things, but it also means being able to scale it out to new opportunities beyond the ones you dreamed of when you first invented the thing.

>> Yeah.
Data by its very nature, as you well know, is distributed, but, and you've been at this a while, for years we've been trying to sort of shove everything into a monolithic architecture and harden infrastructures around that. And in many organizations it's become a block to actually getting stuff done. So how are you seeing things like the edge emerge? How do you think about the edge? How do you see it evolving, and how do you think customers should be dealing with edge data?

>> Well, that's really kind of interesting. I spent many years at NASA working on data systems, and back in those days, the idea was you would just put all the data in a big data center, and then individual scientists would retrieve that data and do their analysis on their local computer. You might say that's sort of like edge analytics, so to speak, because they're doing analytics at their home computer, but that's not what edge means. It means actually doing the analytics, the insights discovery, at the point of data collection. And so that's really real-time business decision-making: you don't bring the data back and then try to figure out, some time in the future, what to do. I think autonomous vehicles are a good example of why you don't want to do that. If you collect data from all the cameras and radars and lidars that are on a self-driving car, and you move that data back to a data cloud while the car is driving down the street, and let's say a child walks in front of the car: you send all the data back, it computes and does some object recognition and pattern detection, and 10 minutes later it sends a message to the car, "Hey, you need to put your brakes on." Well, it's a little kind of late at that point. And so you need to make those discoveries, those insight discoveries, those pattern discoveries, and hence the proper decisions from the patterns in the data, at the point of data collection. That's data analytics at the edge. And so, yes, you can bring the data back to a central cloud or a distributed cloud; it almost doesn't even matter. If your data is distributed and any use case, any data scientist, any analytic team in the business can access it, then what you really have is a data mesh or a data fabric that makes it accessible at the point that you need it, whether that's at the edge or in some static, post-event processing. For example, typical business quarterly reporting takes a long look at your last three months of business. Well, that's fine in that use case, but you can't do that for a lot of other real-time analytic decision-making.

>> Well, that's interesting. I mean, it sounds like you think of the edge not as a place, but as, you know, where it makes sense to actually process the data: the first opportunity, if you will, to process the data at low latency, where it needs to be low latency. Is that a good way to think about it?

>> Yeah, absolutely. It's the low latency that really matters. Sometimes we think we're going to solve that with things like 5G networks: we're going to be able to send data really fast across the wire. But again, that self-driving car is yet another example, because what if, all of a sudden, the network drops out? You still need to make the right decision, with the network not even being available.

>> That darn speed-of-light problem. And so you use this term data mesh, or data fabric; double-click on that. What do you mean by that?
>> Well, for me, it's sort of a unified way of thinking about all your data. When I think of mesh, I think of weaving on a loom: you're creating a blanket or a cloth, and you do all that cross-layering of the different threads. So different use cases and different applications and different techniques can make use of this one fabric, no matter where it is in the business, or, again, whether it's at the edge or back at the office: one unified fabric, which has a global namespace. So anyone can access the data they need, uniformly, no matter where they're using it. It's a way of unifying all of the data and use cases in sort of a virtual environment, where you don't have to worry about the plumbing: what's the actual file name, or what's the actual server this thing is on. You can just work with it for whatever use case you have. I think it helps enterprises reach a stage which I like to call the self-driving enterprise, okay? It's modeled after the self-driving car. The self-driving enterprise, the business leaders and the business itself, you would say, needs to make decisions, oftentimes in real time. So you need to do predictive modeling and maintain cognitive awareness of the context of what's going on, and all of these different data sources enable you to do those things. For example, any kind of decision in a business, any kind of decision in life, I would say, is a prediction. You say to yourself: if I do this, such-and-such will happen; if I do that, this other thing will happen. So a decision is always based upon a prediction about outcomes, and you want to optimize that outcome. So both predictive and prescriptive analytics need to happen in this same stream of data, and not statically afterwards. That's the self-driving enterprise, enabled by having access to data wherever and whenever you need it. And that's what that fabric, that data fabric and data mesh, provides for you, at least in my opinion.

>> Well, so, carrying that analogy: like the self-driving vehicle, you're abstracting that complexity away, in this metadata layer that understands, whether the data is on-prem or in the public cloud or across clouds or at the edge, where the best place is to process it, what makes sense, whether it makes sense to move it or not. Ideally, I don't have to. Is that how you're thinking about it? Is that why we need this notion of a data fabric?

>> Right. It really abstracts away all the complexity, the IT aspects of the job, that would otherwise be required, because not every person in the business is going to have that familiarity with the servers and the access protocols and all kinds of IT-related things. So you abstract that away. And that's, in some sense, what containers do: containers abstract away all the information about servers and connectivity and protocols and all those kinds of things. You just want to deliver some data to an analytic module that delivers an insight or a prediction; you don't need to think about all those other things. That abstraction really makes it empowering for the entire organization. We like to talk a lot about data democratization and analytics democratization. This really gives power to every person in the organization to do things without becoming an IT expert.
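As a rough sketch of what that global namespace buys you in practice: a data fabric such as the HPE Ezmeral Data Fabric (built on MapR technology) typically exposes one POSIX-style namespace no matter which cluster or site actually holds the bytes. The mount point, paths, and column names below are hypothetical; this illustrates the idea, not a specific deployment.

```python
# Sketch of global-namespace access: the same logical path works whether
# the data physically lives at the edge, on-prem, or in a cloud cluster.
# The /mapr mount point and all paths/columns here are hypothetical.
import pandas as pd

# One logical namespace; the fabric resolves placement and replication.
SENSOR_PATH = "/mapr/prod.cluster/telemetry/vehicles/2021-03/readings.parquet"

def latest_anomalies(threshold: float) -> pd.DataFrame:
    """Read from the fabric's POSIX view and filter: no server names,
    no connection strings, just a path."""
    df = pd.read_parquet(SENSOR_PATH)
    return df[df["anomaly_score"] > threshold]

if __name__ == "__main__":
    print(latest_anomalies(0.9).head())
```

The point is the one Kirk makes above: the analyst works against a stable logical path, and the fabric, not the application, worries about where the file actually lives.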
>> So, last question we have time for. It sounds like, Kirk, the next 10 years of data are not going to be like the last 10 years; they'll be quite different.

>> I think so. First of all, we're going to be focused way more on the "why" question: why are we doing this stuff? The more data we collect, the more we need to know why we're collecting it. One of the phrases I've seen a lot in the past year, and which I think is going to grow in importance over the next 10 years, is observability. Observability, to me, is not the same as monitoring. Some people say monitoring is what we do, and what I like to say is: yeah, that's what you do, but why you do it is observability. You have to have a strategy. Why am I collecting this data? Why am I collecting it here? Why am I collecting it at this time resolution? Getting focused on those "why" questions lets you create targeted analytics solutions for all kinds of different business problems, and it really focuses things on small data. So, per the latest Gartner data and analytics trend reports, I think we're going to see a lot more focus on small data in the near future.

>> Kirk Borne, you're a dot connector. Thanks so much for coming on theCUBE and being a part of the program.

>> My pleasure. (upbeat music) (relaxing upbeat music)

Published Date : Mar 17 2021



HPE Compute Engineered for Your Hybrid World: Containers to Deploy Higher-Performance AI Applications


 

>> Hello, everyone. Welcome to theCUBE's coverage of "Compute Engineered for your Hybrid World," sponsored by HPE and Intel. Today we're going to discuss the new 4th Gen Intel Xeon Scalable processors' impact on containers and AI. I'm John Furrier, your host of theCUBE, and I'm joined by three experts to guide us along. We have Jordan Plawner, Senior Director of AI and Products for Intel; Bradley Sweeney, Big Data and AI Product Manager, Mainstream Compute Workloads at HPE; and Gary Wang, Containers Product Manager, Mainstream Compute Workloads at HPE. Welcome to the program, gentlemen. Thanks for coming on.

>> Thanks, John.

>> Thank you for having us.

>> This segment is going to be talking about containers to deploy high-performance AI applications. This is a really important area right now: we're seeing a lot more AI deployed, and kind of next-gen AI coming. How is HPE supporting, testing, and delivering containers for AI?

>> Yeah, so what we're doing from HPE's perspective is taking these container platforms and combining them with the next-generation Intel servers to fully validate the deployment of the containers. We're publishing the reference architectures, creating automation scripts, and also creating a monitoring and security strategy for these container platforms, so customers can easily deploy these Kubernetes clusters and easily secure their Kubernetes environments.

>> Gary, give us a quick overview of the new ProLiant DL360 and DL380 Gen11 servers.

>> Yeah, for container platforms, what we're seeing is that the DL360 and DL380 match really well with container use cases, especially for AI. The DL360, with the expanded DDR5 memory and the new PCIe Gen5 slots, really helps with the speed of deploying these container environments and with growing the data that needs to be stored within them. And with the DL380, for example, if you want to deploy a data fabric, whether it's the Ezmeral Data Fabric or a different vendor's data fabric software, you can do so on the DL360 and DL380 with the new Intel Xeon processors.

>> How does HPE help customers with Kubernetes deployments?

>> Yeah, like I mentioned earlier, we do a full validation to ensure the container deployment is easy and fast. We create these automation scripts and publish them on GitHub for customers to use and reference; they can take them and adjust as they need to, but following the deployment guide that we provide will make the Kubernetes deployment much easier and much faster. We also have demo videos published, and a reference architecture document that guides the customer step by step through the process.
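Gary's actual automation lives in HPE's published GitHub repositories and isn't reproduced here. As a rough, generic sketch of the kind of steps such scripts wrap, a minimal kubeadm-style bootstrap looks something like this; the CIDR, CNI choice, and placeholder values are illustrative assumptions:

```bash
#!/usr/bin/env bash
# Generic sketch of a scripted Kubernetes bootstrap. NOT HPE's published
# automation; just the kind of steps such deployment scripts typically wrap.
set -euo pipefail

# 1. Initialize the control plane on the first node.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# 2. Set up kubectl for the admin user.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# 3. Install a pod network (Flannel shown; any supported CNI works).
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# 4. Join each worker node using the command printed by 'kubeadm init'.
#    (Run on the workers; token and hash values are placeholders.)
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```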
I got to ask you right out of the gate, what is the view right now in terms of Intel's approach to containers for AI? It's hot right now. AI is booming. You're seeing kind of next-gen use cases. What's your approach to containers relative to AI? >> Thanks John, and thanks for the question. With the 4th Gen Xeon Scalable processor launch, we have tested and validated this platform with over 400 deep learning and machine learning models and workloads. These models and workloads are publicly available in the framework repositories and they can be downloaded by anybody. Yet customers are not only looking for model validation, they're looking for model performance, and performance is usually a combination of a given throughput at a target latency. And to do that from the data center all the way to the factory floor, this is not always delivered by the generic proxy models that are publicly available in the industry. >> You know, performance is critical. We're seeing more and more developers saying, "Hey, I want to go faster on a better platform, faster all the time." No one wants to run slower stuff, that's for sure. Can you talk more about the different container approaches Intel is pursuing? >> Sure. First, our approach is to meet the customers where they are and help them build and deploy AI everywhere. Some customers just want to focus on deployment, they have more mature use cases, and they just want to download a model that works, that's high performing, and run. Others are really focused more on development and innovation. They want to build and train models from scratch, or at least highly customize them. Therefore we have several container approaches to accelerate the customer's time to solution and help them meet their business SLA along their AI journey. >> So developers can just download these containers and just go? >> Yeah, so let me talk about the different kinds of containers we have. We start off with pre-trained containers. We have about 55 or more of these containers where the model is actually pre-trained and highly performant; some are optimized for low latency, others are optimized for throughput, and the customers can just download these from Intel's website or from HPE and go into production right away. >> That's great. A lot of choice. People can just jump right in. That's awesome. Good choice for developers. They want more, faster velocity. We know that. What else does Intel provide? Can you share some thoughts there? What else do you guys provide developers? >> Yeah, so we talked about how some are just focused on deployment and maybe have more mature use cases. Other customers really want to do some more customization or optimization. So we have another class of containers called development containers, and this includes not just the model itself, but it's integrated with the framework and some other capabilities and techniques, like model serving. So now customers can download not only the model but an entire AI stack, and they can do some optimizations, but they can also be sure that Intel has optimized that specific stack on top of the HPE servers. >> So it sounds simple to just get started using the DL model and containers. Is that it? What else are customers looking for? Can you take it a little bit deeper? >> Yeah, not quite. While the customer's ability to reproduce, on their site, the performance that HPE and Intel have measured in our own labs is fantastic.
That's not actually all the customer is trying to do. They're actually building very complex end-to-end AI pipelines, okay? And a lot of data scientists are really good at building models, really good at building algorithms, but they're less experienced in building end-to-end pipelines, especially 'cause the number of end-to-end use cases is kind of infinite. So we are building end-to-end pipeline containers for use cases like media analytics, sentiment analysis and anomaly detection. Therefore a customer can download these end-to-end containers, right? They can either use them as a reference, just to see how we built them, and maybe make some changes in their own data center where they'd like to use different tools, but they can just see, "Okay, this is what's possible with an end-to-end container on top of an HPE server." In other cases, if the overlap in the use case is pretty close, they can just take our containers and go directly into production. So all three types of containers that I discussed provide developers an easy starting point to get them up and running quickly and make them productive. And that's a really important point. You talked a lot about performance, John. But really, when we talk to data scientists, what they really want to be is productive, right? They're under pressure to change the business, to transform the business, and containers is a great way to get started fast. >> People take productivity, you know, seriously now. Developer productivity is the hottest trend. Obviously they want performance. Totally nailed it. Where can customers get these containers? >> Right. Great, thank you John. Our pre-trained model containers, our development containers, and our end-to-end containers are available at intel.com in the developer catalog. But we also post these on many third-party marketplaces that other people like to pull containers from. And they're frequently updated. >> Love the developer productivity angle. Great stuff. We've still got more to discuss with Jordan, Bradley, and Gary. We're going to take a short break here. You're watching theCUBE, the leader in high tech coverage. We'll be right back. (intense music) Welcome back to theCUBE's coverage of "Compute Engineered for your Hybrid World." I'm John Furrier with theCUBE, and we'll be discussing and wrapping up our discussion on containers to deploy high performance AI. This is a great segment on really a lot of demand for AI and the applications involved. And we got the 4th Gen Intel Xeon Scalable processors with HPE Gen11 servers. Bradley, what is the top AI use case that Gen11 HPE ProLiant servers are optimized for? >> Yeah, thanks John. I would have to say intelligent video analytics. It's a use case that's applied across industries and verticals. For example, in a smart hospital solution that we conducted with Nvidia and Artisight, a previous customer success, we've seen 5% more hospital procedures and a 16-times return on investment using operating room coordination. With that IVA, with the Gen11 DL380 that we provide using the Intel 4th Gen Xeon processors, it can really support workloads at scale. Whether that is a smart hospital solution, whether that's manufacturing at the edge or security camera integration, we can do it all with Intel.
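Picking back up on Jordan's point about pre-trained containers, the developer workflow is essentially pull and run. Here's a minimal sketch using the Docker SDK for Python; the image name is a placeholder rather than a real registry path, so check Intel's developer catalog for actual container names.

    # Hypothetical "download and go" flow for a pre-trained model container.
    # Assumes `pip install docker` and a running Docker daemon.
    import docker

    IMAGE = "registry.example.com/pretrained-resnet50:latest"  # placeholder name

    def run_pretrained(image: str = IMAGE) -> str:
        client = docker.from_env()
        client.images.pull(image)          # download the pre-built container
        # Run the container's default entrypoint and capture its output.
        output = client.containers.run(image, remove=True)
        return output.decode("utf-8", errors="replace")

    if __name__ == "__main__":
        print(run_pretrained())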
>> You know what's really great about AI right now: you're starting to see people figure out kind of where the value is. It does a lot of the heavy lifting on setting things up to make humans more productive. This has clearly now kind of gone next level. You're seeing it all in the media now and all these new tools coming out. How does HPE make it easier for customers to manage their AI workloads? I imagine there's going to be a surge in demand. How are you guys making it easier to manage their AI workloads? >> Well, I would say the biggest way we do this is through GreenLake, which is our IT-as-a-service model. So customers deploying AI workloads can get fully-managed services to optimize not only their operations but also their spending and the cost that they're putting towards it. In addition to that, we have our Gen11 ProLiant servers equipped with iLO 6 technology. What this does is allow customers to securely manage their complete server environment from anywhere in the world, remotely. >> Any last thoughts or message on the overall 4th Gen Intel Xeon based ProLiant Gen11 servers? How will they improve workload performance? >> You know, with this generation, obviously the performance is only getting ramped up as the needs and requirements for customers grow. We partner with Intel to support that. >> Jordan, gimme the last word on containers' effect on AI applications. Your thoughts as we close out. >> Yeah, great. I think it's important to remember that containers themselves don't deliver performance, right? The AI stack is a very complex set of software that's compiled together, and what we're doing together is to make it easier for customers to get access to that software, to make sure it all works well together, and that it can be easily installed and run on sort of a cloud native infrastructure that's hosted by HPE ProLiant servers. Hence the title of this talk: How to use Containers to Deploy High Performance AI Applications. Thank you. >> Gentlemen, thank you for your time on the "Compute Engineered for your Hybrid World" series, sponsored by HPE and Intel. Again, I love this segment: containers to deploy higher performance AI applications. This is a great topic. Thanks for your time. >> Thank you. >> Thanks John. >> Okay, I'm John. We'll be back with more coverage. See you soon. (soft music)

Published Date : Dec 27 2022


Kam Amir, Cribl | HPE Discover 2022


 

>> TheCUBE presents HPE Discover 2022, brought to you by HPE. >> Welcome back to theCUBE's coverage of HPE Discover 2022. We're here at the Venetian convention center in Las Vegas, Dave Vellante for John Furrier. Kam Amir is here, the director of technical alliances at Cribl. Kam, good to see you. >> Good to see you too. >> Cribl. Cool name. Tell us about it. >> So let's see. Cribl has been around now for about five years, selling products for the last two years. Fantastic company, lots of growth. I started there in 2020 and we're roughly 400 employees now. >> And what do you do? Tell us more. >> Yeah, sure. So I run the technical alliances team, and what we do is we basically look to build integrations into platforms such as HPE GreenLake and Ezmeral. And we also work with a lot of other companies to help get data from various sources into their destinations, or, you know, other enrichments of data in that data pipeline. >> You know, you guys have been on theCUBE. Clint's been on many times, Ed Bailey was on our startup showcase. You guys are successful in this overfunded observability space. You guys have a unique approach. Tell us why you guys are successful in the product and some of the things you've been doing there. >> Yeah, absolutely. So our product is very complementary to a lot of the technologies that already exist. And I used to joke around that everyone has these pretty dashboards and reports, but they completely glaze over the fact that it's not easy to get the data from those sources to their destinations. So for us, it's this capability with Cribl Stream to get that data easily and repeatably into these destinations. >> Yeah. You know, Kam, you and I were both at the Snowflake Summit, to John's point. There were like a dozen observability companies there. >> Oh yeah. >> It's really beginning to be a crowded space. So explain what value you bring to that ecosystem. >> Yeah, sure. So in the ecosystem we see there, there are a lot of people that are kind of sticking to effectively getting data and showing you dashboards and reports about monitoring and things of that sort. For us, the value is how we can help customers kind of accelerate their adoption of these platforms, how to go from your legacy SIEM or your legacy monitoring solution to the next-gen observability platform or next-gen security platform. >> And what you do really well is the integration, and bringing those other toolings in to do that? >> Correct, correct. And we make it repeatable. >> How'd you end up here, at HPE? >> So we actually had a customer that deployed our software on the HPE Ezmeral platform. And it was kind of a light bulb moment that, okay, this is actually a different approach than going to your traditional, you know, AWS, Google, et cetera. So we decided to kind of hunt this down and figure out how we could be a bigger player in this space. >> You saw the data fabric announcement? I'm not crazy about the term, data fabric is an old NetApp term, and then Gartner kind of twisted it. I like data mesh, but anyway, it doesn't matter. We kind of know what it is, but when you see an announcement like that, how do you look at it? You know, what does it mean to Cribl and your customers? >> Yeah.
So, we work with the data fabric team, and we're able to kind of route our data to their data fabric as a data lake. So we can actually route the data from, again, all these various sources into this data lake and then have it available for whatever customers want to do with it. So one of the big things that I know Clint talks about is that we give customers choice, we sell choice. So we give them the ability to choose where they want to send their data, whether that's, you know, HPE's data lake and data fabric, or some other object store, or some other destination. They have that choice to do so. >> So you're saying that you can stream to any destination the customer wants? What are some examples? What are the popular destinations? >> Yeah, so a lot of the popular destinations are your typical object stores. So any of your cloud object stores, whether it be AWS S3, Google Cloud Storage or Azure Blob Storage. >> Okay. And so, and you can pull data from any source? >> Laughter: I'd be very careful, but absolutely. What we've seen is that a lot of people like to kind of look at traditional data sources like Syslog, and they want to get it to, say, a next-gen SIEM, but to do so it needs to be converted to a webhook or some sort of API call. Or vice versa, they have this brand new Zscaler, for example, and they want to get that data into their SIEM, but there's no way to do it 'cause the SIEM only accepts it as a Syslog event. So what we can do is we actually transform the data and make it so that it lands into that SIEM in the format that it needs to be, and easily make that a repeatable process. >> So, okay. So wait, so not as a Syslog event, but in whatever format the destination requires? >> Correct, correct. >> Okay. What are the limits on that? I mean, is this- >> Yeah. So what we've seen is that customers will be able to take, for example, this Syslog event, it's unstructured data, but they need to put it into, say, Common Information Model for Splunk, or Elastic Common Schema for Elasticsearch, or just JSON format for Elastic. And so what we can do is we can actually convert those events so that they land in that transformed state, but we can also route a copy of that event, in unharmed fashion, to an S3 bucket or object store for that long-term compliance use case. >> You can route it to any, basically any object store. Is that right? Is that always the sort of target? >> Correct, correct. >> So on the messaging here at HPE, first of all I'll get to the marketplace point in a second, but cloud to edge is kind of their theme. So data streaming sounds expensive. I mean, you know, so how do you guys deal with the streaming egress issue? What does that mean to customers? You guys claim that you can save money on that piece. It's a hotly contested discussion point. >> Laughter: So one of the things that we actually just announced in our 3.5 release yesterday is the capability of getting data from Windows hosts. So a product that we also have is called Cribl Edge. So we have the capability of collecting data from the edge and then transiting it out to, whether it be an on-prem or self-hosted deployment of Cribl, or maybe some sort of other destination object store. What we do is we actually take the data in transit and reduce the volume of events. So we can do things like remove white space, or remove events that are not really needed, and compress or optimize that data so that the egress costs, to your point, are actually lowered.
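To picture the Syslog-to-SIEM transformation Kam described a moment ago, here's an illustrative sketch in Python. Cribl Stream does this through configured pipelines rather than hand-written code, so this only shows the concept; the regex assumes classic RFC 3164-style events, and unparsed lines are passed through rather than dropped.

    # Conceptual illustration only; not how Cribl Stream is actually configured.
    import json
    import re

    SYSLOG_RE = re.compile(
        r"^<(?P<pri>\d+)>(?P<ts>\w{3}\s+\d+\s[\d:]+)\s(?P<host>\S+)\s(?P<msg>.*)$"
    )

    def syslog_to_json(line: str) -> str:
        m = SYSLOG_RE.match(line)
        if not m:
            return json.dumps({"raw": line})   # pass through unparsed events
        pri = int(m.group("pri"))              # priority encodes facility/severity
        return json.dumps({
            "facility": pri // 8,
            "severity": pri % 8,
            "timestamp": m.group("ts"),
            "host": m.group("host"),
            "message": m.group("msg"),
        })

    print(syslog_to_json("<34>Oct  2 14:01:07 web01 sshd[42]: Failed password"))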
>> And your data reduction approach is, is compression? It's a compression algorithm? >> So it's a combination, yeah, it's a combination. So some people, what they'll do is they'll aggregate the events. Sometimes, for example, VPC flow logs are very chatty and you don't need to have all those events. So instead you convert those to metrics. Suddenly you've reduced those events from, you know, high-volume events to metrics that are so small, and you still get the same value 'cause you still see the trends and everything. And if later on down the road you need to reinvestigate those events, you can rehydrate that data with Cribl Replay. >> And you'll do the streaming in real time, is that right? >> Yeah. >> So Kafka, is that what you would use? Or other tooling? >> Laughter: So we are complementary to a Kafka deployment. If a customer's already deployed and they've invested in Kafka, we can read off of Kafka and feed back into Kafka. >> If not, you can use your tooling? >> If not, we can be replacing that. >> Okay, talk about your observations in the multi-cloud hybrid world, because hybrid, obviously, everyone knows, is a steady state now. Public cloud, on-premise, edge, all one thing, cloud operations, DevOps, data as code, all the things we talk about. What's the customer view? You guys have a unique position. What's going on in the customer base? How are they looking at hybrid and specifically multi-cloud? Is it stitching together multiple hybrids? Or how do you guys work across those landscapes? >> So what we've seen is a lot of customers are in multiple clouds. That's, you know, that's going to happen. But what we've seen is that if they want to egress data from, say, one cloud to another, the way that we've architected our solution is that we have these worker nodes that reside within these other clouds, so that when transmitting data, first, egress costs are lowered, but also having this kind of easy way to collect the data and stitch it back together, join it back together, at a single place or single location, is one option that we offer customers. Another solution that we've announced recently is Search. So not having to move the data from all these disparate data sources and data lakes, and actually just searching the data in place. That's another capability that we think is kind of popular in this hybrid approach. >> And talk about now your relationship with HPE. You guys obviously had customers that drove you to GreenLake. What's your experience with them? And also talk about the marketplace presence. Is that new? How long has that been going on? Have you seen any results? >> Yeah, so we've actually just started our journey into this HPE world. So the first thing was obviously the customer bringing us into this ecosystem, and now our capabilities of, I guess, getting ready to be on the marketplace. So having a presence on the marketplace has been huge, giving us kind of access to people that don't even know who we are, being that we're, you know, a five-year-old company. So it's really good to have that exposure. >> So you're going to get customers out of this? >> That's the idea. [Laughter] >> Bring in new market, that's the idea of their GreenLake, is that partners fill in. What's your impression so far of GreenLake? Because there seems to be great momentum around HPE and opening up their channel, their sales force, their customer base. >> Yeah.
So it's been very beneficial for us, again, being a smaller company, and we are a channel-first company, so that obviously helps, you know, getting out the word with other channel partners. But HPE has been very, you know, open-armed, kind of getting us into the ecosystem and obviously giving the good word about Cribl to their customers. >> So you'll be monetizing on GreenLake, right? That's the goal. >> That's the goal. >> What do you have to do to get into a position? Obviously, you've got a relationship, you're in the marketplace. Do you have to, you know, write to their APIs? Or is that a checkbox? Describe what you have to do to monetize. >> Sure. So we have to first get validated on the platform. So the validation process validates that we can work on the Ezmeral GreenLake platform. Once that's been completed, then the idea is to have our logo show up on the marketplace. So customers say, hey, look, I need to have a way to transit data or do stuff with data, specifically around logs, metrics, and traces, into my logging solution or my SIEM. And then what we do with them on the back end is we'll see this transaction occur through their API, to basically say who this customer is. 'Cause again, the idea is to have almost a zero-touch kind of involvement, but we will actually have that information given to us. And then we can actually monetize on top of it. >> And the visualization component will come from the observability vendor. Is that right? Or do you guys do some of that? >> So for the visualization, right now we're basically just the glue that gets the data to the visualization engine. As we kind of grow and progress our Search product, that's what will probably have more of a visualization component. >> Do you think your customers are going to predominantly use an observability platform for that visualization? I mean, obviously you're going to get there. Are they going to use Grafana? Or some other tool? >> Yeah, I think a lot of customers, obviously depending on what data they have and what they're trying to accomplish, will have that choice now to choose, you know, Grafana for their metrics, logs, et cetera, or some sort of security product for their security events. But same data, two different kinds of use cases. And we can help enable that. >> Kam, I want to ask you a question. You mentioned you were at Splunk, and Clint, the CEO and co-founder, was at Splunk too. That brings up the question I want to get your perspective on. We're seeing a modern network here with HPE, with Aruba, obviously clouds kind of going next level, you've got on-premises, edge, all one thing, distributed computing basically, cyber security, a data problem that's solved a lot by you guys and people in this business, making sure data is available, machine learning growing and powering AI like you read about. What's changed in this business? Because, you know, Splunking logs is kind of old hat, you know, and now you've got observability. Unification is a big topic. What's changed now? What's different about the market today around data and these platforms and tools? What's your perspective on that? >> I think one of the biggest things is people have seen the amount of volume of data that's coming in. When I was at Splunk, when we hit a one-terabyte deal, that was a big deal. Now it's kind of standard. You're going to do a terabyte of data per day.
So one of the big things I've seen is just the explosion of data growth, but getting value out of that data is very difficult. And that's kind of why we exist, because getting all that volume of data is one thing, but being able to actually extract value from it, that's- >> And that's the streaming core product? That's the whole? >> Correct. >> Get data to where it needs to be, for whatever application needs it, whether it's cyber or something else. >> Correct, correct. >> What's the customer uptake? What's the customer base like for you guys now? How many customers do you guys have? What are they doing with the data? What are some of the common things you're seeing? >> Yeah, I mean, it's the basic blocking and tackling. We've significantly grown our customer base and they all have the same problem. They come to us and say, look, I just need to get data from here to there. And literally the routing use case is our biggest use case, because it's simple. You take someone that's an expensive engineer, an operations engineer, and instead of having them go and do the plumbing of data, of just getting logs from one source to another, we come in and actually make that a repeatable process and make that easy. And so that's kind of just our very basic value add right from the get go. >> You can automate that, make it repeatable. Say, what's in the name? Where'd the name come from? >> So Cribl, if you look it up, is actually kind of an old sieve used to sift gold from dirt, right? So basically, that's kind of what we do. We filter out all the dirt and leave you the gold bits so you can get value. >> It's kind of what we do on theCUBE. >> It's kind of the gold nuggets. Get all these highlights hitting Twitter, the gold nuggets. Great to have you on. >> Kam, thanks for coming on, and for explaining that. You guys are filling that gap between, hey, all the observability claims, which are all wonderful, but then you've got to get there. They've got to have a route to get there. That's what you guys do. Cribl rhymes with tribble. Dave Vellante for John Furrier covering HPE Discover 2022. You're watching theCUBE. We'll be right back.

Published Date : Jun 29 2022


Breaking Analysis: What You May Not Know About the Dell Snowflake Deal


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> In the pre-cloud era, hardware companies would run benchmarks showing how database and/or application performance ran better on their systems relative to competitors or previous generation boxes. And they would make a big deal out of it. And the independent software vendors, you know, they'd do a little golf clap, if you will, in the form of a joint press release. It became a game of leapfrog amongst hardware competitors. That was pretty commonplace over the years. The Dell Snowflake deal underscores that the value proposition between hardware companies and ISVs is changing, and has much more to do with distribution channels, volumes and the amount of data that lives On-Prem in various storage platforms. Cloud native ISVs like Snowflake are realizing that despite their Cloud-only dogma, they have to grit their teeth and deal with On-premises data or risk getting shut out of evolving architectures. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we unpack what little is known about the Snowflake announcement from Dell Technologies World and discuss the implications of a changing Cloud landscape. We'll also share some new data for Cloud and Database platforms from ETR that shows Snowflake has actually entered the Earth's orbit when it comes to spending momentum on its platform. Now, before we get into the news, I want you to listen to Frank Slootman's answer to my question as to whether or not Snowflake would ever architect the platform to run On-Prem, because it's doable technically. Here's what he said, play the clip. >> Forget it, this will only work in the Public Cloud. Because this is how the utility model works, right. I think everybody is coming through this realization, right? I mean, excuses are running out at this point. You know, we think that people will come to the Public Cloud a lot sooner than we will ever come to the Private Cloud. It's not that we can't run a private Cloud. It just diminishes the potential and the value that we bring. >> So you may be asking yourselves, how do you square that circle? Because basically the Dell Snowflake announcement is about bringing Snowflake to the private cloud, right? Or is it? Let's get into the news and we'll find out. Here's what we know from Dell Technologies World. By the way, this was a very well attended event, about 8,000 people by my estimates. But anyway, one of the more buzzy announcements was that Snowflake can now run analytics on Non-native Snowflake data that lives On-prem in a Dell object store, Dell's ECS to start with, and eventually its software-defined object store. Here's Snowflake's Clark Patterson describing how it works this past week on theCUBE. Play the clip. The way it works is, I can now access Non-native Snowflake data using what, materialized views, external tables? How does that work? >> Some combination of all the above. So we've had in Snowflake a capability called External Tables, which you referred to; it goes hand in hand with this notion of external stages. Basically, through the combination of those two capabilities, it's a metadata layer on data, wherever it resides. So customers have actually used this in Snowflake for data lake data outside of Snowflake in the Cloud, up until this point.
So it's effectively an extension of that functionality into the Dell On-Premises world, so that we can tap into those things. So we use the external stages to expose all the metadata about what's in the Dell environment. And then we build external tables in Snowflake, so that data looks like it is in Snowflake. And then the experience for the analyst, or whomever it is, is exactly as though that data lives in the Snowflake world. >> So as Clark explained, this capability of External Tables has been around in the Cloud for a while, mainly to suck data out of Cloud data lakes. Snowflake External Tables use file-level metadata, for instance the name of the file and the versioning, so that it can be queried in a stage. A stage is just an external location outside of Snowflake. It could be an S3 bucket or an Azure Blob, and soon it will be a Dell object store. In using this feature, the Dell store looks like it lives inside of Snowflake, and Clark is essentially correct to say that to an analyst it looks exactly like the data is in Snowflake. But not exactly: the data's read-only, which means you can't do what are called DML operations. DML stands for Data Manipulation Language and allows for things like inserting data into tables or deleting and modifying existing data. But the data can be queried. However, the performance of those queries to External Tables will almost certainly be slower. Now, users can build things like materialized views, which are going to speed things up a bit, but at the end of the day it's going to run faster in the Cloud. And you can be almost certain that's where Snowflake wants it to run, but some organizations can't or won't move data into the Cloud for a variety of reasons: data sovereignty, compliance, security policies, culture, you know, whatever. So data can remain in place On-prem, or it can be moved into the Public Cloud with this new announcement. Now, the compute today presumably is going to be done in the Public Cloud. I don't know where else it's going to be done; they really didn't talk about the compute side of things. Remember, one of Snowflake's early innovations was to separate compute from storage. And what that gave them is you could more efficiently scale with unlimited resources when you needed them, and you could shut off the compute when you don't need it. If you needed more storage, you didn't have to buy more compute, and vice versa. So everybody in the industry has copied that, including AWS with Redshift, although, as we've reported, not as elegantly as Snowflake did. Redshift's more of a storage tiering solution, which minimizes the compute required, but you can't really shut it off. And there are companies like Vertica with Eon Mode that have enabled this capability to be done On-prem. You know, of course in that instance you don't have unlimited elastic compute scale On-Prem, but with solutions like Dell Apex and HPE GreenLake, you can certainly start to simulate that Cloud elasticity On-prem. I mean, it's not unlimited, but it sort of gets you there. According to a joint statement, the companies will, quote, pursue product integrations and joint go-to-market efforts in the second half of 2022. So that's a little vague and kind of benign.
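Still, the external stage and external table mechanics Clark describes are well established in Snowflake today, so it's worth seeing the pattern concretely. Below is a minimal sketch using the Snowflake Python connector; the account, credentials, bucket URL and object names are hypothetical, and pointing a stage at a Dell object store, rather than S3, Azure or GCS, is the forthcoming piece, so treat this as an illustration of the pattern, not the integration itself.

    # Hypothetical names and credentials throughout.
    # Assumes `pip install snowflake-connector-python`.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="...",   # placeholders
        warehouse="ANALYTICS_WH", database="DEMO", schema="PUBLIC",
    )
    cur = conn.cursor()

    # 1) The stage is just metadata pointing at data that stays in place.
    cur.execute("""
        CREATE OR REPLACE STAGE onprem_stage
          URL = 's3://example-bucket/events/'    -- hypothetical location
          CREDENTIALS = (AWS_KEY_ID='...' AWS_SECRET_KEY='...')
    """)

    # 2) The external table exposes that data, read-only, inside Snowflake.
    cur.execute("""
        CREATE OR REPLACE EXTERNAL TABLE ext_events
          LOCATION = @onprem_stage
          FILE_FORMAT = (TYPE = PARQUET)
    """)

    # 3) A materialized view over it can speed up repeated queries, as noted above.
    cur.execute("""
        CREATE OR REPLACE MATERIALIZED VIEW mv_events AS
        SELECT value:device::string AS device, value:reading::float AS reading
        FROM ext_events
    """)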
It's not really clear when this is going to be available based on that statement from the two firms, but we're left wondering: will Dell develop an On-Prem compute capability and enable queries to run locally, maybe as part of an extended Apex offering? We don't know, really; not sure there's even a market for that. But it's probably a good bet that, again, Snowflake wants that data to land in the Snowflake Data Cloud. Kind of makes you wonder how this deal came about. You heard Slootman earlier; Snowflake has always been pretty dogmatic about getting data into its native Snowflake format to enable the best performance, as we talked about, but also data sharing and governance. But you could imagine that data architects, they're building out their data mesh, we've reported on this quite extensively, and their data fabric and those visions around that, and they're probably telling Snowflake, hey, if you want to be a strategic partner of ours, you're going to have to be more inclusive of our data that, for whatever reason, we're not putting in your Cloud. So Snowflake had to kind of hold its nose and capitulate. Now, the good news is it further opens up Snowflake's TAM, the total available market. It's obviously good marketing posture. And ultimately it provides an on-ramp to the Cloud. We're going to come back to that shortly, but let's look a little deeper into what's happening with data platforms, and to do that we'll bring in some ETR data. Now, let me just say, as companies like Dell, IBM, Cisco, HPE, Lenovo, Pure and others build out their hybrid Clouds, the cold hard fact is not only do they have to replicate the Cloud Operating Model, you will hear them talk about that a lot, and that's critical from a user experience standpoint, but in order to gain that flywheel momentum, they need to build a robust ecosystem that goes beyond their proprietary portfolios. And, you know, honestly, most companies are really not even in the first inning, and for the likes of Snowflake to sort of flip this, they've had to recognize that not everything is moving into the Cloud. Now, let's bring up the next slide. One of the big areas of discussion at Dell Tech World was Apex. That's essentially Dell's nascent as-a-service offering. Apex is Infrastructure as a Service Cloud On-prem, and obviously has the vision of connecting to the Cloud, across Clouds and out to the Edge. And it's no secret that database is one of the most important ingredients of infrastructure as a service generally, and in Cloud Infrastructure specifically. So this chart here shows the ETR data for data platforms inside of Dell accounts. The beauty of the ETR platform is you can cut data a million different ways. So we cut it. We said, okay, give us the Cloud platforms inside Dell accounts, how are they performing? Now, this is a two-dimensional graphic. You've got net score, or spending momentum, on the vertical axis, and what ETR now calls Overlap, formerly called Market Share, which is a measure of pervasiveness in the survey, on the horizontal axis. That red dotted line at 40% represents highly elevated spending on the Y. The table insert shows the raw data for how the dots are positioned. Now, the first call out here is Snowflake. According to ETR, quote, after 13 straight surveys of astounding net scores, Snowflake has finally broken the trend, with its net score dropping below the 70% mark among all respondents. Now, as you know, net score is measured by asking customers, are you adding the platform new?
That's the lime green in the bar that's pointing from Snowflake in the graph. Or, are you increasing spend by 6% or more? That's the forest green. Is spending flat? That's the gray. Is spend decreasing by 6% or worse? That's the pinkish. Or, are you decommissioning the platform? That's the bright red, which is essentially zero for Snowflake. Subtract the reds from the greens and you get a net score. Now, what's somewhat interesting is that Snowflake's net score overall in the survey is 68, which is still huge, just under 70%, but its net score inside the Dell account base drops to the low sixties. Nonetheless, this chart tells you why: Snowflake's highly elevated spending momentum, combined with an increasing presence in the market over the past two years, makes it a perfect initial data platform partner for Dell. Now, in the Ford versus Ferrari dynamic that's going on between the likes of Dell's Apex and HPE GreenLake, database deals are going to become increasingly important beyond what we're seeing with this recent Snowflake deal. Notice, by the way, how HPE is positioned on this graph with its acquisition of MapR, which is now part of HPE Ezmeral. But if these companies want to be taken seriously as Cloud players, they need to further expand their database affinity to compete, ideally spinning up databases as part of their Super Clouds, we'll come back to that, that span multiple Clouds and include Edge data platforms. We're a long ways off from that. But look, there's Mongo, there's Couchbase, MariaDB, Cloudera, Redis. All of those should be on the short list in my view. And why not Microsoft? And what about Oracle? Look, that's to be continued, maybe as a future topic in a Breaking Analysis, but I'll leave you with this. There are a lot of people, like John Furrier, who believe that Dell is playing with fire in the Snowflake deal because he sees it as a one-way ticket to the Cloud. He calls it a one-way door sometimes. Listen to what he said this past week.

>> I would say that that's a dangerous game, because we've seen that movie before, VMware and AWS. >> Yeah, but we've talked about this, don't you think that was the right move for VMware? >> At the time. But if you don't nurture the relationship, AWS will take all those customers ultimately from VMware. >> Okay, so what does the data say about what John just said? How is VMware actually doing in Cloud after its early missteps and then its subsequent embracing of AWS and other Clouds? Here's that same XY graphic, spending momentum on the Y and pervasiveness on the X, and the same table insert that plots the dots, and the breakdown of Dell's net score granularity. You see that at the bottom of the chart in those colors. So as usual, you see Azure and AWS up and to the right, with Google well behind in a distant third, but still in the mix. So very impressive for Microsoft and AWS to have both that market presence and such elevated spending momentum. But the story here, in context, is that VMware Cloud on AWS and VMware's On-Prem Cloud, like VMware Cloud Foundation, VCF, are doing pretty well in the market. Look at HPE, gaining some traction in Cloud. And remember, you may not think HPE and Dell and VCF are true Cloud, but these are customers answering the survey, so their perspective matters more than the purist view.

And the bad news is, the Dell Cloud is not setting the world on fire from a momentum standpoint on the vertical axis, but it's above the zero line, and compared to Dell's overall net score of 20, you can see it's got some work to do. Okay, so overall Dell's got a pretty solid net score, you know, positive 20; as I say, their Cloud perception needs to improve. Look, Apex has to be the Dell Cloud brand, not Dell reselling VMware. And that requires more maturity of Apex: its feature sets, its selling partners, its compensation models and its ecosystem. And I think Dell clearly understands that. I think they're pretty open about that. Now, this includes partners that go beyond being just sellers; it has to include more tech offerings in the marketplace. And actually, they've got to build out a marketplace-like Cloud platform. So they've got a lot of work to do there. And look, you've got Oracle coming up. I mean, they're actually kind of just below the magic 40% line, which is pretty impressive. And we've been telling you for years, you can hate Oracle all you want. You can hate its price, its closed system, all of that Red Stack, sure. You can say it's legacy. You can say it's old and outdated, blah, blah, blah. You can say Oracle is irrelevant, in trouble. You are dead wrong when it comes to mission critical workloads. Oracle is the king of the hill. They're a founder-led company that knows exactly what it's doing, and they're showing Cloud momentum. Okay, the last point is that while Microsoft, AWS and Google have major presence, as shown on the X axis, VMware and Oracle now have more than a hundred citations in the survey. You can see that on the insert, in the right-hand, right-most column. And IBM had better keep the momentum from last quarter going, or it won't be long before they get passed by Dell and HPE in Cloud. So look, John might be right. And I would think Snowflake quietly agrees that this Dell deal is all about access to Dell's customers and their data, so they can Hoover it into the Snowflake Data Cloud, but the data right now, anyway, doesn't suggest that's happening with VMware. Oh, by the way, we're keeping a close eye on NetApp, who last September inked a similar deal to VMware Cloud on AWS, to see how that fares. Okay, let's wrap with some closing thoughts on what this deal means. We learned a lot from the Cloud generally, and AWS specifically: two-pizza teams, working backwards, customer obsession. We talk about flywheel all the time, and we've been talking today about marketplaces. These have all become common parlance and often fundamental narratives within strategic plans, investor decks and customer presentations. Cloud ecosystems are different. They take both competition and partnerships to new heights. You know, when I look at as-a-service offerings like Apex, GreenLake and similar services, and I hear the vendor noise that's being made around them, I kind of shake my head and ask, you know, which movie were these companies watching last decade? I really wish we would have seen these initiatives start to roll out in 2015, three years before AWS announced Outposts, not three years after. But hey, the good news is that not only was Outposts a wake-up call for the On-Prem crowd, but it's showing how difficult it is to build a platform like Outposts and bring it On-Premises. I mean, Outposts isn't currently even a rounding error in the marketplace. It really doesn't do much in terms of database support and support of other services.

And, you know, it's unclear where that is going, and I don't think it has much momentum. And so the Hybrid Cloud vendors, they've had time to figure it out, but now it's game on. Companies like Dell are promising a consistent experience between On-Prem and the Cloud, across Clouds and out to the Edge. They call it MultiCloud, which, by the way, in my view has really been multi-vendor. Chuck Whitten, who's the new co-COO of Dell, called it multi-cloud by default. (laughing) That's really, I think, an accurate description of it. I call this new world Super Cloud. To me, it's different than MultiCloud. It's a layer that runs on top of hyperscale infrastructure and kind of hides the underlying complexity of the Cloud, its APIs, its primitives. And it stretches not only across Clouds, but out to the Edge. That's a big vision, and that's going to require some seriously intense engineering to build out. It's also going to require partnerships that go beyond the portfolios of companies like Dell, beyond their own proprietary stacks, if you will. It's going to have to replicate the Cloud Operating Model, and to do that you're going to need more and more deals like Snowflake, and even deeper than Snowflake, not just in database. Sure, you'll need to have a catalog of databases that run in your On-Prem and Hybrid and Super Cloud, but also other services that customers can tap. I mean, can you imagine a day when Dell offers and embraces a directly competitive service inside of Apex? I have trouble envisioning that, you know, not with their historical posture. You think about companies like, you know, Nutanix, or Cisco, where those relationships cooled quite quickly. But you know, look, think about it. That's what AWS does. It offers, for instance, Redshift and Snowflake side by side, happily. And the Redshift guys probably hate Snowflake, I wouldn't blame them, but the EC2 folks, they love them. And Adam Selipsky understands that ISVs like Snowflake are a key part of the Cloud ecosystem. Again, I have a hard time envisioning that occurring with Dell, or even HPE, you know, maybe less so with HPE. But what does this imply, that the Edge will allow companies like Dell to do a reach-around on the Cloud and somehow create a new type of model that begrudgingly accommodates the Public Cloud but drafts off the new momentum of the Edge, which right now to these companies is kind of mostly telco and retail? It's hard to see that happening. I think it's got to evolve in a more comprehensive and inclusive fashion. What's much more likely is companies like Dell are going to substantially replicate that Cloud Operating Model for the pieces that they own, pieces that they control, which admittedly are big pieces of the market. But unless they're able to really tap that ecosystem magic, they're not going to be able to grow much beyond their existing install bases. Take that lime green we showed you earlier, that new adoption metric from ETR, as an example: by my estimates, AWS and Azure are capturing new accounts at a rate between three to five times faster than Dell and HPE, and in the more mature US and EMEA markets it's probably more like 10X. A major reason is the Cloud's robust ecosystem and the optionality and simplicity of transaction that it brings to customers. Now, Dell, for its part, is a hundred-billion-dollar revenue company, and it has the capability to drive that kind of dynamic, if it can pivot its partner ecosystem mindset from kind of resellers to Cloud services and technology optionality. Okay, that's it for now. Thanks to my colleagues: Stephanie Chan, who helped research topics for Breaking Analysis; Alex Myerson on the production team; Kristen Martin, Cheryl Knight and Rob Hof on editorial, they helped get the word out; and thanks to Jordan Anderson for the new Breaking Analysis branding and graphics package. Remember, these episodes are all available as podcasts wherever you listen. All you do is search Breaking Analysis podcast. You can check out ETR's website at etr.ai. We publish a full report every week on wikibon.com and siliconangle.com. Want to get in touch? dave.vellante@siliconangle.com. You can DM me @dvellante. You can make a comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights, powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : May 7 2022


Breaking Analysis: The Hybrid Cloud Tug of War Gets Real


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> Well, it looks like hybrid cloud is finally here. We've seen a decade of posturing, marchitecture, slideware and narrow examples of hybrid cloud, but there's little question that the definition of cloud is expanding to include on-premises workloads in hybrid models. Now, depending on which numbers you choose to represent IT spending, public cloud actually accounts for less than 5% of the total pie. So the big question is, how will this now evolve? Customers want control, they want governance, they want security, flexibility and a feature-rich set of services to build their digital businesses. It's unlikely that they can buy all that, so they're going to have to build it with partners: specifically vendors, SIs, consultancies and their own developers. The tug of war to win the new cloud day has finally started in earnest between the hyperscalers and the largest enterprise tech companies in the world. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we'll walk you through how we see the battle for hybrid cloud: how we got here, where we are and where it's headed. First, I want to go back to 2009 and a blog post by a man named Chuck Hollis. Chuck Hollis, at the time, was a CTO and marketing guru inside of EMC, which, remember, owned VMware. Chuck was kind of this hybrid, multi-tool player, pun intended. EMC at the time had a lot at stake, as the ascendancy of AWS was threatening the historical models which had defined enterprise IT. Now, around that time, NIST published its first draft of a cloud computing definition which, as I recall, included language something to the effect of accessing remote services over the public network, i.e., public IP networks. Now, NIST has since evolved that definition, but the original draft was very favorable to the public cloud. And the traditional vendor community said, hang on, we're in this game too. So that was 2009, when Chuck Hollis published this slide. He termed it Private Cloud, a term which he saw buried inside of a Gartner research note that was not really fleshed out and defined. The idea was pretty compelling. The definition of cloud centered on control, where you, as the customer, had on-prem workloads that could span public and on-prem clouds, if you will, with federated security and a data plane that spanned those estates. Essentially, you had an internal and an external cloud with a single point of control. This is basically what the hybrid cloud vision has become: an abstraction layer that spans on-prem and public clouds, and we can extend that across clouds and out to the edge, where a customer has a single point of control and federated governance and security. Now, we know this is still aspirational, but we're now seeing vendor offerings that put forth this promise and a roadmap to get there from different points of view, which we're going to talk about today. The NIST definition now reads: cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources, e.g., network, server, storage, applications and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction.
So there you have it, that is inclusive of on-prem, but it took the industry a decade plus to actually get where we are today. And they did so by essentially going to school on the public cloud offerings. Now in 2018, AWS announced Outposts and that was another wake up call to the on-prem community. Externally, they pointed to the validation that hybrid cloud was real. Hey, AWS is doing it so clearly they've capitulated. But most on-prem vendors at the time didn't have a coherent offering for hybrid, and the point is the on-prem vendors responded as they saw AWS moving past the demilitarized zone into enemy lines. And here's what the competitive landscape of hybrid offerings looks like today. All three US-based hyperscalers have an offering or multiple offerings in various forms: Outposts from Amazon and other services that they offer, while Google Anthos and Azure Arc are also prominent, but the real action today is coming from the on-prem vendors. Every major company has an offering. Now most of these stemmed from services-led and finance-led initiatives, but they're evolving to true as-a-service models. HPE GreenLake is prominent and the company's CEO, Antonio Neri, is putting the whole company behind as-a-service. HPE claims to be the first, it uses that in its marketing, with such an as-a-service offering, but actually Oracle was there first with Cloud@Customer. You know, possibly Microsoft could make a claim to being early as well, but it really doesn't matter. Let's see, Dell has responded with Apex and is going hard after this opportunity. Cisco has Cisco Plus and Lenovo has TruScale. IBM also has a long services and finance-led history and has announced pockets of as-a-service in areas like storage. And Pure Storage is an example that we chose of a segment player, of course within storage, that has a strong as-a-service offering, and there are others like that. So the landscape is getting very busy. And so, let's break this down a bit. AWS is bringing its programmable infrastructure model and its own hardware to what it calls the edge. And it looks at on-prem data centers as just another edge node. So that's how they're de-positioning the on-prem crowd, but the fact is, when you really look at what Outposts can do today, it's limited, but AWS will move quickly, so expect a continued rapid evolution of their model and the services that are supported on Outposts. Azure gets its hardware from partners and has relationships with virtually everyone that matters. Anthos is, as well, a software layer, and Google created Kubernetes as the great equalizer in cloud. It was a nice open source gift to the industry and has obviously taken off. So the cloud guys have the advantage of owning a cloud. The pure on-prem players don't, but the on-prem crowd has rich stacks, much richer and more mature in a lot of areas as it relates to supporting on-premises workloads, much more so than the cloud players. But they don't have mature cloud stacks. They're kind of just getting started with things like subscription billing and API-based microservices offerings. They've got to figure out sales force compensation and just the overall as-a-service mentality versus the historical product box mentality, and that takes time. And they're each coming at this from their respective points of view and points of strength. HPE is doing a very good job of marketing and go-to-market.
It probably has the cleanest model, enabled by the company's split from HP, but it has some gaps that it's needed to fill and it's doing so through acquisitions. Ezmeral, for example, is its new data play. It just bought Zerto to facilitate backup as a service. And it's expanded partnerships to fill gaps in the portfolio, some partnerships which they couldn't do before because it created conflicts inside of HPE or HP. Dell is all about the portfolio, the breadth of the portfolio, the go-to-market prowess and its supply chain advantage. It's very serious about as-a-service with Apex and it's driving hard to win that day. Cisco comes at this from a huge portfolio and, of course, a point of strength in networking, which maybe is a bit tougher to offer as a service, but Cisco has a large and fast growing subscription business in collaboration, security and other areas, so it's cloud-like in that regard. And Oracle, of course, has the huge advantage of an extremely rich functional stack and it owns a cloud, which has dramatically improved in the past few years, but Oracle is narrowly focused on the Red Stack, at least today. Oracle, if it wanted to, we think, could dominate the database cloud, it could be the database cloud, especially if it decided to open its cloud to competitive database offerings and run them in the Oracle cloud. Hmm. Wonder if Oracle will ever move in that direction. Now a big part of this shift is the appeal of OPEX versus CAPEX. Let's take a look at some ETR data that digs a bit deeper into this topic. This data is from an August ETR drill down, asking CIOs and IT buyers how their budgets are split between OPEX and CAPEX. The midpoint of the yellow line shows where we are today, 57% OPEX, expecting to grow to 63% one year from now. That's not a huge difference, and there's not a huge difference when you drill into the Global 2000, which kind of surprised me. I thought the Global 2000 would be heavier CAPEX, but they seem to be accelerating the shift to OPEX slightly faster than the overall base, but not really in a meaningful way. So I didn't really discern big differences there. Now, when you dig further into industries and look at subscription versus consumption models for OPEX, you see about 60/40 favoring subscription models, with most industries slowly moving toward consumption or usage-based models over time. There are a couple of outliers, but generally speaking, that's the trend. What's perhaps more interesting is when you drill into subscription versus usage-based models by product area, and that's what this chart shows. It shows, by tech segment, the percent subscription, that's the blue, versus consumption or usage-based, that's the gray bars, with yellow being indifferent, or maybe it's "I don't know." What stands out are two areas that are more usage heavy, consumption heavy. That's database, data warehousing, and IaaS. So database is surely weighted by companies like Snowflake and offerings like Redshift and other cloud databases from Azure and Google and other managed services, but the IaaS piece, while not surprising, is, we think, relevant, because most of the legacy vendor as-a-service offerings are borrowing from a SaaS-oriented subscription model with a hardware twist. In other words, as a customer, you're committing to a term and a minimum spend over the life of that term. You're locked in for a year or three years, whatever it is, to account for the hardware and headroom the vendor has to install, because they want to allow you to increase your usage.
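To put toy numbers on that commit-plus-overage structure, here's a rough sketch in Python. The commit, unit rate and metered usage below are invented for illustration and don't reflect any vendor's actual pricing.

```python
# Hedged sketch of the commit-plus-overage billing described above.
# All figures are invented; real offerings meter different units and
# price them differently.
monthly_commit = 10_000.00   # minimum spend the customer is locked into
unit_rate = 0.05             # price per metered unit of consumption
metered_units = 260_000      # what the meter recorded this month

consumption = metered_units * unit_rate           # 13,000.00
bill = max(monthly_commit, consumption)           # pay the floor, or by the drink above it
overage = max(0.0, consumption - monthly_commit)  # the usage-based portion

print(f"bill=${bill:,.2f}, of which overage=${overage:,.2f}")
# bill=$13,000.00, of which overage=$3,000.00
```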
So that's the usage-based model. See, you're then paying by the drink for the consumption above that minimum threshold. So it's a hybrid subscription-consumption model, which is actually quite interesting. And we've been saying, what would really be cool is if one of the on-prem penguins on the iceberg would actually jump in and offer a true consumption model right out of the box, as a disruptive move to the industry and to the cloud players, and take that risk. And I think that might happen once they feel comfortable with the financial model and they have nailed the product market fit, but right now, the model is what it is. And even AWS Outposts requires a threshold and a minimum commitment. So we'd love to see someone take that chance and offer true cloud consumption pricing to facilitate more experimentation and lower-risk entry points for the customer. Now let's take a look at some of these players and see what kind of spending momentum they have. This is our popular XY chart view that plots net score, or spending momentum, on the y-axis and pervasiveness in the data set, or market share, on the x-axis. Now this is cut by cloud computing vendors, as defined by the customers responding. There were nearly 1500 respondents in the ETR survey, so a couple of points here. Note the red line is the elevated line. In other words, anything above that is considered really robust momentum. And no surprise, Azure, AWS and Google are above that line. Azure and AWS always battle it out for top share of voice on the x-axis in this survey. Now this, remember, is the July survey, but ETR gave me a sneak peek at the October results that they're going to be releasing in the coming week, and Dell cloud and VMware cloud, which is VCF and maybe some other components, not VMware Cloud on AWS, that's a separate beast, those two are moving up on the y-axis. So they're demonstrating spending momentum. IBM is moving down and Oracle is at a respectable 20% on the y-axis. Now, interestingly, HPE and Lenovo don't show up in the cloud taxonomy, in that cloud cut, and neither does Cisco. I believe I'm correct that this is an open-ended question, i.e., who are your cloud suppliers? So the customers are not resonating with that messaging yet, but I'm going to double check on that. Now to widen the aperture a bit, we said let's do a cut of the on-prem and cloud players within cloud accounts, so we can include HPE and Cisco and see how they're doing inside of cloud accounts. So that's what this chart does. It's a filter on 975 customers who identify themselves as cloud accounts. So here we were able to add in Cisco and HPE. Now, Lenovo still doesn't show up in the data. It shows up in laptops and desktops, but not as prominent in the enterprise, not prominent at all, but HPE Ezmeral did show up and it's moving forward in the October survey, again, part of the sneak peek. Ezmeral is HPE's data platform that they've introduced, combining the assets of MapR, BlueData and some other organic development. Now, as you can see, HPE and Cisco show up on the chart, as I said, and you can see the rope in the tug of war is starting to get a little bit more taut. The cloud guys have momentum and big account presence, but the on-prem folks also have big footprints, rich stacks, many have strong services arms, and a lot of customer affinity.
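A quick aside on the metric itself: in simplified form, net score is usually described in these episodes as the share of accounts spending more on a platform minus the share spending less. A toy calculation, with invented response counts (ETR's actual methodology is richer than this):

```python
# Toy net score calculation with invented survey counts, following the
# simplified description used in these episodes: the percentage of
# accounts spending more minus the percentage spending less.
responses = {
    "adding_or_increasing": 220,
    "flat": 180,
    "decreasing_or_replacing": 60,
}

total = sum(responses.values())
net_score = (responses["adding_or_increasing"]
             - responses["decreasing_or_replacing"]) / total
print(f"net score = {net_score:.1%}")  # net score = 34.8%
```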
So let's wrap with some comments about how this will shake out and what are some of the markers we can watch. Now, the first thing I'll say is we're starting to hear the right language come out of the vendor community. The idea that they're investing in a layer to abstract the underlying complexity of the clouds and on-prem infrastructure and turning the world into, essentially, a programmable interface to resources. The question is, what about giving access through that layer to underlying primitives in the public cloud? VMware has been very clear on this. They will facilitate that access. I believe Red Hat as well. So watch the degree to which the large on-prem players are enabling that access for developers. We believe this is the right direction overall, but it's also very hard and it's going to require lots of resources and R&D. I would say at this point that each company has its respective strengths and weaknesses. I see HPE mostly focused today on making its on-prem offerings work like a cloud, whereas some of the others, VMware, Dell and Cisco, are stressing to a greater degree, in my view, enabling multi-cloud and edge connections, cross connections. Not that HPE isn't open to that when you ask them about it, but its marketing is more on-prem leaning, in my opinion. Now all of the traditional vendors, in my view, are still defensive about the cloud, although I would say much less so each day. Increasingly, they look at the public cloud as an opportunity to build value on top of that abstraction layer, if you will. As I said earlier, these on-prem guys all have a ways to go. They're in the early stages of figuring out what a cloud operating model looks like, how it works, what services to offer, how to pay sellers and partners, and the public cloud vendors are miles ahead in that regard. But at the same time, the cloud players are navigating into on-prem territory, and they're very immature there, in most cases. So how do they service all this stuff? How do they establish partnerships and so forth? And how do they build stacks on-prem that are as rich as they are in the cloud? And what's their motivation to do that? Are they getting pulled in, digging their heels in? Or are they really serious about it? Now, in some respects, Oracle is in the best position here in terms of hybrid maturity, but again, it's narrowly focused on the Red Stack. I would say the same for Pure Storage, more mature as a service, but narrowly focused, of course, on storage. Let's talk marketplace and ecosystems. One of the hallmarks of public clouds is optionality of tooling. Just go to the AWS Marketplace and you'll see what I mean. It's got this endless bevy of choices. It's got one of everything in there and you can buy directly from your AWS Console. So watch how the hybrid cloud plays out in terms of partner inclusion and ease of doing business, that's another sign of maturity. Let's talk developers and edge. This is by far the most important and biggest hole in the hybrid portfolios outside the public cloud players. If you're going to build infrastructure as code, who do you expect to code it? How are the on-prem players cultivating developer communities? IBM paid $34 billion to buy its way in. Actually, in today's valuation terms, you might say that's looking like a good play, but still, that cash outlay is equal to one third of IBM's revenue.
So big, big bet on OpenShift, but IBM's infrastructure strategy is fragmented, and its cloud business, as IBM reports it in its financial statements, is a services-heavy, kitchen sink set of offerings. It's very confusing. So they've still got some cleanup to do there, but they're serious about the architectural battle for hybrid cloud, as Arvind Krishna calls it. Now VMware, by cobbling together the misfit developer toys of the remnants from the EMC Federation, including Pivotal, is trying to get there. You know, but when you talk to customers, they're still not all in on VMware's developer affinity. Now Cisco has DevNet, but that's basically CCIEs and other trained networking engineers learning to code in languages like Python. It's not necessarily true devs, although they're upskilling. It's a start and they're investing, Cisco, that is, investing in the community, leveraging their champions, and I would say Dell could do the same with, for example, the numerous EMC storage admins that are out there. Now Oracle bought Sun to get Java, and that's a large community of developers, but even so, when you compare the AWS and Microsoft ecosystems to the others, it's not even close in terms of developer affinity. So lots of work to be done there. One other point is Pure's acquisition of Portworx, again, while narrowly focused, is a good move and instructive of the changes going on in infrastructure. Now how does this all relate to the edge? Well, I'm not going to talk much about that today, but suffice to say, developers, in our view, will win the edge. And right now, they're coding in the cloud. They're often coding in the cloud and moving work on-prem, wrapping it in containers, but watch how sticky that model is for the respective players. The other thing to watch is cadence of offerings. Another hallmark of cloud is a rapid expansion of features. The public cloud players don't appear to be slowing down and the on-prem folks seem to be accelerating. I've been watching HPE and GreenLake and their cadence of offerings, and watch how quickly the newbies of as-a-service can add functionality. I have no doubt Dell is going to be right there as well, as are Cisco and others. Also pay attention to financial metrics. Watch how as-a-service impacts the income statements and how the companies deal with that, because as you shift to deferred revenue models, it's going to hurt profitability. And I'm not worried about that at all, because it won't hurt cash flow, or at least it shouldn't, as long as the companies communicate to Wall Street and they're transparent, i.e., they don't shift reporting definitions every year and a half or two years. But watch for metrics around retention and churn, RPO or Remaining Performance Obligations, billings versus bookings, increased average contract values, cohort selling, the impact on both gross margin and operating margin. These are the things you watch with SaaS companies, and essentially, these big hardware players are becoming as-a-service slash SaaS companies. These are going to be the key indicators of success and the proof in the pudding of the transition to as-a-service. It should be positive for these companies, assuming they get the product market fit right and can create a flywheel effect with their respective ecosystems and partner channels. Now I'm sure you can think of other important factors to watch, but I'm going to leave it here for now. Remember, these episodes are all available as podcasts, wherever you listen.
All you got to do is search Breaking Analysis podcast and please subscribe, check out ETR's website at etr.plus. We also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me, email david.vellante@siliconangle.com or you can DM me @dvellante. You can comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, everybody, stay safe, be well. And we'll see you next time. (soft music)

Published Date : Oct 15 2021


Did HPE GreenLake Just Set a New Bar in the On-Prem Cloud Services Market?


 

>> Welcome back to theCUBE's coverage of HPE's GreenLake announcements. My name is Dave Vellante and you're watching theCUBE. I'm here with Holger Mueller, who is an analyst at Constellation Research. And Matt Maccaux is the global field CTO of Ezmeral software at HPE. We're going to talk data. Gents, great to see you. >> Holger: Great to be here. >> So, Holger, what do you see happening in the data market? Obviously data's hot, you know, digital, I call it the forced march to digital. Everybody realizes wow, digital business, that's a data business. We've got to get our data act together. What do you see in the market as the big trends, the big waves? >> We are all young enough or old enough to remember when people were saying data is the new oil, right? Nothing has changed, right? Data is the key ingredient which matters to enterprises, which they have to store, which they have to enrich, which they have to use for their decision-making. It's the foundation of everything, if you want to go into machine learning or (indistinct). It's growing very fast, right? We have the capability now to look at all the data in the enterprise, which we weren't able to do 10 years ago. So data is central to everything. >> Yeah, it's even more valuable than oil, I think, right? 'Cause with oil, you can only use it once. Data, you can, it's kind of polyglot. I can go in different directions and it's amazing, right? >> It's the beauty of digital products, right? They don't get consumed, right? They don't get fired up, right? And no carbon footprint, right? "Oh wait, wait, we have to think about carbon footprint." Different story, right? So to get to the data, you have to spend some energy. >> So it's that simple, right? I mean, it really is. Data is fundamental. It's got to be at the core. And so Matt, what are you guys announcing today, and how does that play into what Holger just said? >> What we're announcing today is that organizations no longer need to make a difficult choice. Prior to today, organizations were thinking, if I'm going to do advanced machine learning and really exploit my data, I have to go to the cloud. But all my data's still on premises because of privacy rules, industry rules. And so what we're announcing today, through GreenLake Services, is a cloud services way to deliver that same cloud-based analytical capability. Machine learning, data engineering, through hybrid analytics. It's a unified platform to tie together everything from data engineering to advanced data science. And we're also announcing the world's first Kubernetes-native object store that is hybrid cloud enabled. Which means you can keep your data connected across clouds in a data fabric, or Dave, as you say, mesh. >> Okay, can we dig into that a little bit? So, you're essentially saying that, so you're going to have data in both places, right? Public cloud, edge, on-prem, and you're saying, HPE is announcing a capability to connect them, I think you used the term fabric. I'm cool, by the way, with the term fabric, we can, we'll parse that out another time. >> I'd love for you to discuss textiles. Fabrics vs. mesh. For me, every fabric breaks down to mesh if you put it under a microscope. It's the same thing. >> Oh wow, now that's really, that's too detailed for my brain right this moment. But, you're saying you can connect all those different estates, because data by its very nature is everywhere. You're going to unify that, and what, you can manage that through sort of a single view? >> That's right.
So, the management is centralized. We need to be able to know where our data is being provisioned. But again, we don't want organizations to feel like they have to make the trade-off. If they want to use cloud service A in Azure, and cloud service B in GCP, why not connect them together? Why not allow the data to remain in sync, or not, through a distributed fabric? Because we use that term fabric over and over again. But the idea is, let the data be where it most naturally makes sense, and exploit it. Monetization is an old term, but exploit it in a way that works best for your users and applications. >> In sync or not, that's interesting. So it's my choice? >> That's right. Because the back of an automobile could be a teeny tiny, small edge location. It's not always going to be in sync until it connects back up with a training facility. But we still need to be able to manage that. And maybe that data gets persisted to a core data center. Maybe it gets pushed to the cloud, but we still need to know where that data is, where it came from, its lineage, what quality it has, what security we're going to wrap around it. That all should be part of this fabric. >> Okay. So, you've got essentially a governance model, at least maybe you're working toward that, and maybe it's not all baked today, but that's the north star. It's this fabric connect, single management view, governed in a federated fashion? >> Right. And it's available through the most common APIs that these applications are already written in. So, everybody today's talking S3. I've got to get all of my data, I need to put it into an object store, it needs to be S3 compatible. So, we are extending this capability to be S3 native, but optimized for performance. Today, when you put data in an object store, it's kind of one size fits all. Well, we know that for those streaming analytical capabilities, those high performance workloads, it needs to be tuned for that. So, how about I give you a very small object on the very fastest disk in your data center, and maybe that cheaper location somewhere else. And so we're giving you that balance as part of the overall management estate. >> Holger, what's your take on this? I mean, Frank Slootman says, we're not doing the halfway house. We're never going to do on-prem, we're only in the cloud. So that basically says, okay, he's ignoring a pretty large market by choice. You're not, Matt, you must love those words. But what do you see as the public cloud players' kind of moves on-prem, particularly in this realm? >> Well, we've seen lots of cloud players who were only cloud coming back towards on-premise, right? We call it the next generation compute platform, where I can move data and workloads between on-premise and, ideally, multiple clouds, right? Because I don't want to be locked into public cloud vendors. And we see two trends, right? One trend is the traditional hardware suppliers of on-premise have not scaled to cloud technology in terms of big data analytics. They just missed the boat for that in the past; this is changing. You guys are a traditional player and changing this, so congratulations. The other thing is, there's been no innovation for the on-premise tech stack, right? For a long time, investment in the technology stack to run modern applications happened only in the cloud. So what have we seen since two, three years, right? The first one being Google with Kubernetes, Anthos and GKE on-premise, then Outposts, right?
Bringing their tech stack, with compromises, to on-premises, right? Acknowledging exactly what we're talking about: the data is everywhere, data is important. Data gravity is there, right? It's just the networks' fault, where the networks are too slow, right? If you could just move everything anywhere we want, like juggling two balls, then we'd be in a different place. But there just hasn't been enough investment from the traditional IT players in that stack, in the modern stack being there. And now every public cloud player has an on-premise offering with different flavors, different capabilities. >> I want to give you guys Dave's story of kind of history and you can kind of course correct, and tell me how this, Matt, maybe fits into what's happened with customers. So, you know, before Hadoop, obviously you had to buy a big Oracle database and, you know, you're running Unix, and you buy some big storage subsystem, and if you had any money left over, you know, you maybe, you know, do some actual analytics. But then Hadoop comes in, lowers the cost, and then S3 kneecaps the entire Hadoop market, right? >> I wouldn't say that, I wouldn't agree. Sorry to jump on your history. Because the fascinating thing, what Hadoop brought to the enterprise for the first time, you're absolutely right, was affordability, right, to do that. But it's not only about affordability, because S3 has the affordability. The big thing is you can store information without knowing how to analyze it, right? So, you mentioned Snowflake, right? Before, it was like an Oracle database. It was star schema for the data warehouse, and so on. You had to make decisions about how to store that data because compute capabilities, storage capabilities, were too limited, right? That's what Hadoop blew away. >> I agree, no schema on write, right. But then that created data lakes, which created data swamps, and that whole mess, and then Spark comes in and helps clean it out, okay, fine. So, we're cool with that. But in the early days of Hadoop, companies would have a Hadoop monolith, and they probably had their data catalog in Excel or Google Sheets, right? And so now, my question to you, Matt, is there's a lot of customers that are still in that world. What do they do? They've got an option to go to the cloud. I'm hearing that you're giving them another option? >> That's right. So we know that data is going to move to the cloud, as I mentioned. So let's keep that data in sync, and governed, and secured, like you expect. But for the data that can't move, let's bring those cloud native services to your data center. And so a big part of this announcement is this unified analytics. So that you can continue to run the tools that you want to today, while bringing those next generation tools based on Apache Spark, using libraries like Delta Lake, so you can go from anything like Tableau through Presto SQL, to advanced machine learning in your Jupyter notebooks, on-premises where you know your data is secured. And if it happens to sit in an existing Hadoop data lake, that's fine too. We don't want our customers to have to make that trade-off as they go from one to the other. Let's give you the best of both worlds, or as they say, you can eat your cake and have it too. >> Okay, so. Now let's talk about sort of developers on-prem, right? They've been kind of... If they really wanted to go cloud native, they had to go to the cloud. Do you feel like this changes the game? Do on-prem developers, do they want that capability? Will they lean into that capability?
Or will they say no, no, the cloud is cool. What's your take? >> I love developers, right? But it's about who makes the decision, who pays the developers, right? So the CXOs in the enterprises, they need exactly that. This is why we call it the next-gen computing platform: you can move your code assets. It's very hard to build software, so it's very valuable to an enterprise. I don't want to have it limited to one single location or certain computing infrastructure, right? Luckily, we have Kubernetes to be able to move that, but I want to be able to deploy it on-premise if I have to. I want to be able to deploy it in the multiple clouds which are available. And that's the key part. And that makes developers happy too, because the code you write has got to run multiple places. So you can build more code, better code, instead of building the same thing multiple places, because a little compiler change here, a little compiler change there. Nobody wants to do portability testing and rewriting, recertifying for certain platforms. >> The head of application development or application architecture and the business are ultimately going to dictate that, number one. Number two, you're saying that developers shouldn't care because they can write once, run anywhere. >> That is the promise, and that's the interesting thing which is available now, 'cause people know, thanks to Kubernetes as a container platform and the abstraction which containers provide, and that makes everybody's life easier. But it goes much higher than the Head of Apps, right? This is the digital transformation strategy, the next generation application the company has to build as a response to a pandemic, as a pivot, as digital transformation, as digital disruption capability. >> I mean, I see a lot of organizations basically modernizing by building some kind of abstraction to their backend systems, modernizing it through cloud native, and then saying, hey, as you were saying, Holger, run it anywhere you want, or connect to those cloud apps, or connect across clouds, connect to other on-prem apps, and eventually out to the edge. Is that what you see? >> It's so much easier said than done though. Organizations have struggled so much with this, especially as we start talking about those data intensive apps and workloads. Kubernetes and Hadoop? Up until now, organizations haven't been able to deploy those services. So, what we're offering as part of these GreenLake unified analytics services is a Kubernetes runtime. It's not ours. It's top-of-branch open source. And open source operators like Apache Spark, bringing in Delta Lake libraries, so that if your developer does want to use cloud native tools to build those next generation advanced analytics applications, but prod is still on-premises, they should just be able to pick that code up. And because we are deploying 100% open source frameworks, the code should run as is. >> So, it seems like the strategy is to basically build, now that's what GreenLake is, right? It's a cloud. It's like, hey, here's your options, use whatever you want. >> Well, and it's your cloud. That's what's so important about GreenLake: it's your cloud, in your data center or co-lo, with your data, your tools, and your code.
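To make that "pick the code up and run it as is" point concrete, here's a minimal sketch of the kind of Spark-plus-Delta Lake job being described, assuming a cluster with the open source Delta Lake package on its classpath. The storage paths are invented placeholders; nothing below is specific to any one vendor's runtime.

```python
from pyspark.sql import SparkSession

# Minimal sketch, assuming the open source Delta Lake package is on
# the cluster's classpath. The s3a:// paths are invented placeholders;
# only the object store endpoint behind them changes when the job
# moves between on-prem and public cloud.
spark = (SparkSession.builder
         .appName("portable-analytics-sketch")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Land raw events as a Delta table: ACID writes over object storage.
events = spark.read.json("s3a://landing/events/")
events.write.format("delta").mode("append").save("s3a://lake/events")

# The same table then serves SQL analytics and ML feature extraction
# without copying the data into a separate warehouse.
spark.sql(
    "SELECT device, count(*) AS n "
    "FROM delta.`s3a://lake/events` GROUP BY device"
).show()
```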
And again, we know that organizations are going to go to a multi or hybrid cloud location, and through our management capabilities, we can reach out. If you don't want us to control those, not necessarily, that's okay, but we should at least be able to monitor and audit the data that sits in those other locations, the applications that are running. Maybe I register your GKE cluster. I don't manage it, but at least through a central pane of glass, I can tell the Head of Applications what that utilization is across these environments. >> You know, and you said something, Matt, that struck me, resonated with me, which is this is not trivial. I mean, it's not simple to do. I mean, you see a lot of customers or companies, what they're doing, vendors, they'll wrap their stack in Kubernetes, shove it in the cloud, and it's essentially a hosted stack, right? And you're kind of taking a different approach. You're saying, hey, we're essentially building a cloud that's going to connect all these estates. And the key is you're going to have to keep innovating, and you are, I think that's probably part of the reason why we're here, announcing stuff very quickly. A lot of innovation has to come out to satisfy that demand that you're essentially talking about. >> Because we've oversimplified things with containers, right? Because containers don't have what matters for data, and what matters for enterprise, which is persistence, right? I have to be able to turn my systems down, or I don't know when I'm going to use that data, but it has to stay there. And that's not solved in the container world by itself. And that's what's coming now, the heavy lifting done by people like HPE, to provide that persistence of the data across the different deployment platforms. And then, there's just a need to modernize my on-premise platforms, right? I can't run on a server which is two, three years old, right? It's no longer safe, it doesn't have trusted identity, all the good stuff that you need these days, right? It cannot be operated remotely, or whatever happens there. Two, three years is long enough for a server to have run its course, right? >> Well, you're a software guy, you hate hardware anyway, so just abstract that hardware complexity away from you. >> Hardware is the necessary evil, right? It's like TSA. I want to go somewhere, but I have to go through TSA. >> But that's a key point, let me buy a service. If I need compute, give it to me. And if I don't, I don't want to hear about it, right? And that's kind of the direction that you're headed. >> That's right. >> Holger: That's what you're offering. >> That's right, and specifically the services. So GreenLake's been offering infrastructure, virtual machines, IaaS, as a service. And we want to stop talking about that underlying capability, because it's a dial tone now. What organizations and these developers want is the service. Give me a service or a function, like I get in the cloud, but I need to get going today. I need it within my security parameters, with access to my data, my tools, so I can get going as quickly as possible. And then beyond that, we're going to give you those cloud billing practices. Because just because you're deploying a cloud native service, if you're still being deployed via CapEx, you're not solving a lot of problems. So we also need to have that cloud billing model. >> Great. Well Holger, we'll give you the last word, bring us home.
>> It's very interesting to have the cloud qualities of subscription-based pricing maintained by HPE as the cloud vendor from somewhere else. And that gives you that flexibility. And that's very important, because data is essential to enterprise processes. And there are three reasons why data doesn't go to the cloud, right? We know that. It's privacy and residency requirements, when there is no cloud infrastructure in the country. It's performance, because network latency plays a role, right? Especially for critical applications. And then there's not invented here, right? Remember Charles Phillips saying, tell me how old the CIO is and I'll tell you if they're going to go to the cloud or not, right? So, that's the not invented here. These are the things which keep data on-premise. You know that well, and HPE is coming on with a very interesting offering. >> It's physics, it's laws, it's politics, and sometimes it's cost, right? Sometimes it's too expensive to move and migrate. Guys, thanks so much. Great to see you both. >> Matt: Dave, it's always a pleasure. All right, and thank you for watching theCUBE's continuous coverage of HPE's big GreenLake announcements. Keep it right there for more great content. (calm music begins)

Published Date : Sep 28 2021


Next Gen Analytics & Data Services for the Cloud that Comes to You | An HPE GreenLake Announcement


 

(upbeat music) >> Welcome back to theCUBE's coverage of HPE GreenLake announcements. We're seeing the transition of Hewlett Packard Enterprise as a company, yes they're going all in for as a service, but we're also seeing a transition from a hardware company to what I look at increasingly as a data management company. We're going to talk today to Vishal Lall, who leads GreenLake cloud services solutions at HPE, and Matt Maccaux, who's the global field CTO of Ezmeral software at HPE. Gents, welcome back to theCUBE. Good to see you again. >> Thank you for having us here. >> Thanks Dave. >> So Vishal, let's start with you. What are the big mega trends that you're seeing in data? When you talk to customers, when you talk to partners, what are they telling you? What does your optic say? >> Yeah, I mean, I would say the first thing is data is getting even more important. It's not that data hasn't been important for enterprises, but if you look at the last, I would say, 24 to 36 months, it has become really important, right? And it's become important because customers look at data and they're trying to stitch data together across different sources, whether it's marketing data, supply chain data, financial data. And they're looking at that as a source of competitive advantage. So, enterprises that are able to make sense out of that data really do have a competitive advantage, right? And they actually get better business outcomes. So that's really important, right? If you start looking at where we are from an analytics perspective, I would argue we are in maybe the third generation of data analytics. Kind of the first one was in the '80s and '90s with data warehousing, kind of EDW. A lot of companies still have that, think of Teradata, right? The second generation, more in the 2000s, was around data lakes, right? And that was all about Hadoop and others, and really the difference between the first and the second generation was that the first generation was more around structured data, right? The second became more about unstructured data, but you really couldn't run transactions on that data. And I would say, now we are entering this third generation, which is about data lakehouses, right? What customers, what enterprises really want is structured data and unstructured data altogether. They want to run transactions on it, right? They want to use the data to mine it for machine learning purposes, right? Use it for SQL as well as non-SQL, right? And that's kind of where we are today. So, that's really what we are hearing from our customers in terms of at least the top trends. And that's how we are thinking about our strategy in the context of those trends. >> So, lakehouse, you used that term. It's an increasingly popular term. It connotes, "Okay, I've got the best of data warehouse and I've got the best of data lake. I'm going to try to simplify the data warehouse. And I'm going to try to clean up the data swamp, if you will." Matt, so, talk a little bit more about what you guys are doing specifically and what that means for your customers. >> Well, what we think is important is that there has to be a hybrid solution, that organizations are going to build their analytics, they're going to deploy algorithms, where the data either is being produced or where it's going to be stored. And that could be anywhere. That could be in the trunk of a vehicle.
It could be in a public cloud, or in many cases, it's on-premises in the data center. And where organizations struggle is they feel like they have to make a choice and a trade-off going from one to the other. And so what HPE is offering is a way to unify the experiences of these different applications, workloads, and algorithms, while connecting them together through a fabric, so that the experience is tied together with consistent security policies, not having to refactor your applications, and deploying tools like Delta Lake to ensure that the organization that needs to build a data product in one cloud, or deploy another data product in the trunk of an automobile, can do so. >> So, Vishal, I wonder if we could talk about some of the patterns that you're seeing with customers as you go to deploy solutions. Are there industry patterns? Are there any sort of things you can share that you're discerning? >> Yeah, no, absolutely. As we hear back from our customers across industries, I think the problem sets are very similar, right? Whether you look at healthcare customers, telco customers, consumer goods, financial services, they're all quite similar. I mean, what are they looking for? They're looking for making sense of the data, making business value from the data, breaking down the silos that I think Matt spoke about just now, right? How do I stitch intelligence across my data silos to get more business intelligence out of it? They're looking for openness. I think the problem that's happened is, over time, people have realized that they are locked in with certain vendors or certain technologies. So, they're looking for openness and choice. So that's an important one that we've at least heard back from our customers. The other one is just being able to run machine learning algorithms on the data. I think that's another important one for them as well. And I think the last one I would say is, TCO is important, as customers over the last few years have realized going to public cloud is starting to become quite expensive, to run really large workloads on public cloud, especially as they want to egress data. So, cost-performance trade-offs are starting to become really important and starting to enter into the conversation now. So, I would say those are some of the key things and themes that we are hearing from customers cutting across industries. >> And you talked, Matt, about basically being able to essentially leave the data where it belongs, bring the compute to the data. We talk about that all the time. And so that has to include on-prem, it's got to include the cloud. And I'm kind of curious on the edge, where you see that, 'cause that's... Is that an eventual piece? Is that something that's actually moving in parallel? There's a lot of fuzziness as an observer in the edge. >> I think the edge is driving the most interesting use cases. The challenge up until recently has been, well, I think it's always been connectivity, right? Whether we have poor connection, little connection or no connection, being able to asynchronously deploy machine learning jobs into some sort of remote location. Whether it's a very tiny edge or it's a very large edge, like a factory floor, the challenge, as Vishal mentioned, is that if we're going to deploy machine learning, we need some sort of consistency of runtime to be able to execute those machine learning models. Yes, we need consistent access to data, but consistent access in terms of runtime is so important.
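As a sketch of what that runtime consistency can mean in practice, the same PySpark job can target different Kubernetes clusters just by switching the master URL. The endpoint and image name below are invented placeholders, and a real edge deployment would push this out through automation rather than run it interactively.

```python
from pyspark.sql import SparkSession

# Invented placeholder: swap in the API endpoint of whichever
# Kubernetes cluster (data center, cloud, or a large edge site like a
# factory floor) should run the work. The job itself doesn't change.
K8S_MASTER = "k8s://https://k8s-api.factory-floor.example:6443"

spark = (SparkSession.builder
         .master(K8S_MASTER)  # client-mode Spark on Kubernetes
         .appName("edge-model-scoring")
         .config("spark.kubernetes.container.image",
                 "registry.example/spark-py:3.1.2")
         .config("spark.executor.instances", "2")
         .getOrCreate())

# Score against whatever data this site produced; the data never has
# to leave the location where it was generated.
readings = spark.read.parquet("s3a://site-local/telemetry/")
readings.limit(10).show()
```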
And I think Hadoop got us started down this path, the ability to very efficiently and cost-effectively run large data jobs against large data sets. And it attempted to work within the open source ecosystem, but because of the monolithic deployment, the tight coupling of the compute and the data, it never achieved that cloud native vision. And so what Ezmeral, and HPE through GreenLake services, is delivering with open source-based Kubernetes, open source Apache Spark, open source Delta Lake libraries, is those same cloud native services that you can develop on your workstation, deploy in your data center, and, in the same way, deploy through automation out at the edge. And I think that is what's so critical about what we're going to see over the next couple of years. The edge is driving these use cases, but it's the consistency to build and deploy those machine learning models, and connect them consistently with data, that's what's going to drive organizations to success. >> So you're saying you're able to decouple the compute from the storage. >> Absolutely. You wouldn't have a cloud if you didn't decouple compute from storage. And I think this was sort of the demise of Hadoop, forcing that coupling. We have high-speed networks now. Whether I'm in a cloud or in my data center, even at the edge, I have high-performance networks, I can now do distributed computing and separate compute from storage. And so if I want to, I can have high-performance compute for my really data intensive applications, and I can have cost-effective storage where I need to. And by separating that off, I can now innovate at the pace of those individual tools in that open source ecosystem. >> So, can I stay on this for a second, 'cause you certainly saw Snowflake popularize that, they were kind of early on. I don't know if they were the first, but they're certainly one of the most successful. And you saw Amazon Redshift copy it. And Redshift was kind of a bolt-on. What essentially they did is they tiered it off. You could never turn off the compute. You still had to pay for a little bit of compute, that's kind of interesting. Snowflake does the t-shirt sizes, so there's trade-offs there. There's a lot of ways to skin the cat. How did you guys skin the cat? >> What we believe we're doing is we're taking the best of those worlds. Through GreenLake cloud services, the ability to pay for and provision on demand the computational services you need. So, if someone needs to spin up a Delta Lake job to execute a machine learning model, you spin that up. We're of course spinning that up behind the scenes. The job executes, it spins down, and you only pay for what you need. And we've got reserve capacity there, of course, just like you would in the public cloud. But more importantly, being able to then extend that through a fabric across clouds and edge locations, so that if a customer wants to deploy in some public cloud service, like we know we're going to, again, we're giving that consistency across that, and exposing it through an S3 API. >> So, Vishal, at the end of the day, I mean, I love to talk about the plumbing and the tech, but the customer doesn't care, right? They want the lowest cost. They want the fastest outcome. They want the greatest value. My question is, how are you seeing data organizations evolve to sort of accommodate this third era, this next generation? >> Yeah. I mean, the way at least I look at it, from a customer perspective, what they're trying to do is, first of all, I think Matt addressed it somewhat.
They're looking at a consistent experience across the different groups of people within the company that do something with data, right? It could be SQL users, people who are just writing SQL code. It could be people who are writing machine learning models and running them. It could be people who are writing code in Spark. Right now, you know, the experience is completely disjointed across them, across the three types of users or more. And so that's one thing that they're trying to do, is just try to get that consistency. We spoke about performance. I mean, the disjointedness between compute and storage does provide the agility, because customers are looking for elasticity. How can I have an elastic environment? So, that's kind of the other thing they're looking at. And performance and TCO are, I think, a big deal now. So, I think that that's definitely on a customer's mind. So, as enterprises are looking at their data journey, those are at least the attributes that they are trying to hit as they organize themselves to make the most out of the data. >> Matt, you and I have talked about this sort of trend to the decentralized future. We're sort of hitting on that. And whether it's in a first gen data warehouse, second gen data lake, data hub, bucket, whatever, that essentially should ideally stay where it is, wherever it should be from a performance standpoint, from a governance standpoint and a cost perspective, and just be a node on this, I like the term data mesh, but be a node on that, and essentially allow the business owners, those with domain context (you've mentioned data products before) to actually build data products, maybe air quotes, but a data product is something that can be monetized. Maybe it cuts costs. Maybe it adds value in other ways. How do you see HPE fitting into that long-term vision, which we know is going to take some time to play out? >> I think what's important for organizations to realize is that they don't have to go to the public cloud to get that experience they're looking for. Many organizations are still reluctant to push all of their data, their critical data, the data that is going to be the next way to monetize the business, into the public cloud. And so what HPE is doing is bringing the cloud to them. Bringing that cloud from the infrastructure, the virtualization, the containerization, and most importantly, those cloud native services. So, they can do that development rapidly, test it, using those open source tools and frameworks we spoke about. And if that model ends up being deployed on a factory floor, on some common x86 infrastructure, that's okay, because the lingua franca is Kubernetes. And as Vishal mentioned, Apache Spark, these are the common tools and frameworks. And so I want organizations to think about this unified analytics experience, where they don't have to trade off security for cost, efficiency for reliability. HPE, through GreenLake cloud services, is delivering all of that where they need to do it. >> And what about the speed to quality trade-off? Have you seen that pop up in customer conversations, and how are organizations dealing with that? >> Like I said, it depends on what you mean by speed. Do you mean computational speed? >> No, accelerating the time to insights, if you will. We've got to go faster, faster, agile to the data. And it's like, "Whoa, move fast, break things." "Whoa, whoa. What about data quality and governance?", right? They seem to be at odds. >> Yeah, well, because the processes are fundamentally broken.
You've got a developer who maybe is able to spin up an instance in the public cloud to do their development, but then to actually do model training, they bring it back on-premises, but they're waiting for a data engineer to make the data available. And then the tools have to be provisioned, which is some esoteric stack. And then runtime is somewhere else. The entire process is broken. So again, by using consistent frameworks and tools, and bringing that computation to where the data is, and sort of blowing this construct of pipelines out of the water, I think is what is going to drive that success in the future. A lot of organizations are not there yet, but that's, I think, aspirationally where they want to be. >> Yeah, I think you're right. I think that is potentially an answer as to how you, not incrementally, but revolutionize sort of the data business. Last question is talking about GreenLake, how this all fits in. Why GreenLake? Why do you guys feel as though it's differentiable in the marketplace? >> So, I mean, something that you asked earlier as well, time to value, right? I think that's a very important attribute and kind of a design factor as we look at GreenLake. If you look at GreenLake overall, kind of what does it stand for? It stands for experience. How do we make sure that we have the right experience for the users, right? We spoke about it in the context of data. How do we have a similar experience for different users of data, but just broadly across an enterprise? So, it's all about experience. How do you automate it, right? How do you automate the workloads? How do you provision fast? How do you give folks a cloud... an experience that they have been used to in the public cloud, or using an Apple iPhone? So it's all about experience, I think that's number one. Number two is about choice and openness. I mean, as we look at it, GreenLake is not a proprietary platform. We are very, very clear that one of the important design principles is about choice and openness. And that's the reason you hear us talk about Kubernetes, about Apache Spark, about Delta Lake, et cetera, et cetera, right? We're using kind of those open source models where customers have a choice. If they don't want to be on GreenLake, they can go to public cloud tomorrow. Or they can run in our colos, if they want to do it that way, or in their colos, if they want to do it. So they should have the choice. Third is about performance. I mean, what we've done is, it's not just about the software, but we as a company know how to configure infrastructure for that workload. And that's an important part of it. I mean, if you think about the machine learning workloads, we have the right Nvidia chips that accelerate those transactions. So, that's the third one. And the last one, I think, as I spoke about earlier, is cost. We are very focused on TCO, but from a customer perspective, we want to make sure that we are giving a value proposition which is not just about experience and performance and openness, but also about cost. So if you think about GreenLake, that's kind of the value proposition that we bring to our customers across those four dimensions. >> Guys, great conversation. Thanks so much, really appreciate your time and insights. >> Matt: Thanks for having us here, David. >> All right, you're welcome. And thank you for watching everybody. Keep it right there for more great content from HPE GreenLake announcements. You're watching theCUBE. (upbeat music)

Published Date : Sep 28 2021


Keith Townsend


 

(intro music) >> We're back on theCUBE unpacking HPE's GreenLake announcements. I'm here with Keith Townsend, the CTO advisor. Keith, always awesome to see you, man. >> Good to be back on theCUBE. >> So, let's talk about these announcements. Let's break it down. Where do you want to start? >> So-- >> Cloud services? >> Cloud services. One of the things that we've gone back and forth with HPE about over the past few years is that I don't understand GreenLake. Like, is it a financial scheme? Is it a cloud service? And I think the data services announcement around Zerto and the marketplace really elevates GreenLake to a cloud service, kind of on par with some of the hyperscalers in how they think about architectures around data centers and fabrics and services for enterprise customers. >> When you say on par, in what regard? >> So, one of the things I didn't get, separate from the GreenLake announcement: we've heard a lot about HPE's container services, Ezmeral, and they have a data fabric, and it does things that the storage solutions do. Okay, that seems like a marketplace unto itself. And then the data services with the Zerto acquisition, completely different marketplace? No, HPE is bringing all of that together, logically. So a cloud architect, similar to how they could go to AWS's console, select some services, and deploy those services in their AWS VPC, now I can conceptually do that with HPE. I can go to HPE's GreenLake console, choose the services I need to build my app, and deploy it. That is something new among all these traditional OEM providers. >> Because of the cloud nativeness on-prem, bringing that capability. >> So, bringing the Aruba Central concepts, you know, Aruba Central, I think I read a stat, a hundred thousand customers on Aruba Central with a million interactions an hour. So this scale is hyperscale scale. This ability to have a centralized marketplace with those cloud-like services, but on-premises or in a colo, I think puts HPE near the top, if not at the top, for building private cloud services on-premises. >> Let's say you're a CTO at an organization that's an HPE customer, or an architect. You're all in on HPE, been working with the company for a long, long time. Wouldn't you want a view of your estate, your applications and workloads, where you could manage on-prem, cloud, whether it's AWS, Azure, Google, take advantage of the cloud native, go across clouds, abstract all that complexity away, maybe eventually go out to the edge? Is that what you want? >> That's what I want; it's aspirational. No one, from Microsoft to HPE, no one is able to give me that today. So as a CTO, I'm looking at platforms and seeing if the building blocks are there. We talked to the HPE storage team about how they're building the abstractions, so that they can take anything from their ProLiant line, build the necessary storage underlay, and then abstract that away with GreenLake. You can do that with AWS EBS, with Azure storage. It really doesn't matter, because they're building that abstraction. So, aspirationally they're there, they have the right vision. It's about execution. >> Okay, so that is the right direction in your view. I mean, I think that is clearly where customers want to go. A lot of work... >> Keith: A lot of work. >> ...to get there, and it's a race, right? I mean, you know, I feel as though as-a-service is a good starting point, but there's a long way to go.
And so, how do you feel about HPE's chances there, how they're positioned relative to not only their other sort of on-prem competitors, but the public cloud players? >> So they're asking the right questions. They're asking the right questions of the right players. It's about relationships. Dave, you know this more than anyone: if you don't have the right relationships inside of the customers, you're not going to get there. And I think that's HPE's number one struggle. No slight to the VP of operations, but the VP of operations doesn't want to change his operations. He doesn't want disruption. What COO comes to you and says, "I want to be disruptive"? Same thing with VPs of operations, IT operations: they don't want disruption, but this has been HPE's traditional customer. HPE needs to get into the chief data officer's, the chief marketing officer's office, and have those very difficult sales conversations, so that they can eventually show that they can execute. I think that's one of their primary challenges. >> So, okay, that's good. I'm glad you brought that up, because I think Ezmeral starts to go in that direction. It feels as though the first phase is, let's pick off analytics. Let's make analytics on-prem as attractive and simple as it is in the cloud. And then, beyond that, let's support this notion of decentralized data and federated governance. And that is aspirational today. But, to your point, nobody really has that. AWS really, you know, they're not going after that across clouds at this point in time. Microsoft is with Arc, I guess, and Google kind of has Anthos and they're kind of doing it, but yeah, I'm not sure you're going to trust your cloud provider to be that player. So it's kind of a jump ball here, isn't it? >> You know, AWS made a strategic partnership with one of HPE's primary competitors because there was a gap. We know Andy Jassy, former president and CEO of AWS, doesn't typically partner with traditional OEMs unless there's a real gap in his portfolio that he needs to fill, and he did it with VMware, and he did it with HPE's primary competitor in storage. HPE sees the opportunity. The question is, do they have the workforce? Do they have the field teams, the field CTOs, the solution architects that can go and talk the talk to these customers, and to this new audience that they need to convince that HPE is just as respected as a Snowflake in this data area? >> Can partners fill that gap? >> Partners definitely can fill that gap, but HPE still has the same challenge with partners: transforming partners from speaking boxes to speaking solutions. I spent a short stint at VMware. I was surprised at how rigid the channel is in these large organizations when it comes to making that transition. >> The other thing, when you think about as-a-service, that at least I look for, if you could comment, is the pace. You know, we all go to these events, we go to re:Invent, and it's just this fire hose of announcements. We're seeing HPE on a cadence. You know, it's not like a once-a-year dealio with GreenLake. We're seeing, you know, some stuff with HPC. We're seeing the acquisition of Zerto for the DR services, the data protection as a service, Ezmeral. Do you feel like that pace is accelerating? And is it fast enough? >> You know what, I famously said on theCUBE that VMware moves at the pace of the CIO.
HPE needs to move a little bit faster than the CIO, because the CIO isn't their only customer. They have the opportunity to get customers outside of the CIO, and I think they're moving fast enough. This is really hard stuff, especially when you start to deal with data, the most valuable asset of an organization. Can you move too fast? You absolutely can. One of the other analysts said that you don't want to become the forgotten data services company of the two thousands. You don't want to make that mistake in the twenties. So right now, I feel as if HPE has the right cadence, bringing along their old customers and new customers. The challenge for all of the big OEMs is how you keep from eroding your existing customer base while still moving fast enough to satisfy the move-fast-break-stuff crowd. >> Keep close to your customers. Keith, we've got to leave it there. Thanks so much for coming back on theCUBE. I'd love to have you back. >> As always, Dave, great time. >> All right. And thank you for watching. Keep it right there for more great content from HPE GreenLake announcements. You're watching theCUBE.

Published Date : Sep 26 2021


Breaking Analysis: How JPMC is Implementing a Data Mesh Architecture on the AWS Cloud


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante. >> A new era of data is upon us, and we're in a state of transition. You know, even our language reflects that. We rarely use the phrase big data anymore; rather, we talk about digital transformation or digital business, or data-driven companies. Many have come to the realization that data is not the new oil, because unlike oil, the same data can be used over and over for different purposes. We still use terms like data as an asset. However, that same narrative, when it's put forth by the vendor and practitioner communities, includes further discussions about democratizing and sharing data. Let me ask you this: when was the last time you wanted to share your financial assets with your coworkers or your partners or your customers? Hello everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we want to share our assessment of the state of the data business. We'll do so by looking at the data mesh concept and how a leading financial institution, JP Morgan Chase, is practically applying these relatively new ideas to transform its data architecture. Let's start by looking at what the data mesh is. As we've previously reported many times, data mesh is a concept and set of principles that was introduced in 2018 by Zhamak Dehghani, who's director of technology at ThoughtWorks, a global consultancy and software development company. And she created this movement because her clients, who were some of the leading firms in the world, had invested heavily in predominantly monolithic data architectures that had failed to deliver desired outcomes and ROI. So her work went deep into trying to understand that problem, and her main conclusion that came out of this effort was that the world of data is distributed, and shoving all the data into a single monolithic architecture is an approach that fundamentally limits agility and scale. Now, a profound concept of data mesh is the idea that data architectures should be organized around business lines with domain context, and that the highly technical and hyper-specialized roles of a centralized cross-functional team are a key blocker to achieving our data aspirations. This is the first of four high-level principles of data mesh. So first, again: the business domain should own the data end-to-end, rather than have it go through a centralized big data technical team. Second, a self-service platform is fundamental to a successful architectural approach, where data is discoverable and shareable across an organization and an ecosystem. Third, product thinking is central to the idea of data mesh; in other words, data products will power the next era of data success. And fourth, data products must be built with governance and compliance that is automated and federated. Now, there's lots more to this concept, and there are tons of resources on the web to learn more, including an entire community that has formed around data mesh, but this should give you a basic idea. Now, the other point is that, in observing Zhamak Dehghani's work, she has deliberately avoided discussions around specific tooling, which I think has frustrated some folks, because we all like to have references that tie to products and tools and companies. So this has been a two-edged sword in that, on the one hand, it's good, because data mesh is designed to be tool agnostic and technology agnostic.
On the other hand, it's led some folks to take liberties with the term data mesh and claim mission accomplished when their solution is, you know, maybe more marketing than reality. So let's look at JP Morgan Chase and their data mesh journey. This is why I got really excited when I saw, this past week, that a team from JPMC held a meetup to discuss what they called data lake strategy via data mesh architecture. I saw that title and I thought, well, that's a weird title. And I wondered, are they just taking their legacy data lakes and claiming they're now transformed into a data mesh? But in listening to the presentation, which was over an hour long, the answer is a definitive no, not at all, in my opinion. A gentleman named Scott Hollerman organized the session, which comprised these three speakers here: James Reid, who's a divisional CIO at JPMC, Arup Nanda, who is a technologist and architect, and Serita Bakst, who is an information architect, again, all from JPMC. This was the most detailed and practical discussion that I've seen to date about implementing a data mesh. And this is JP Morgan's approach, and we know they're extremely savvy and technically sound, and they've invested, it has to be billions, in the past decade on data architecture across their massive company. And rather than dwell on the downsides of their big data past, I was really pleased to see how they're evolving their approach and embracing new thinking around data mesh. So today we're going to share some of the slides that they used and comment on how it dovetails into the concept of data mesh that Zhamak Dehghani has been promoting, at least as we understand it, and dig a bit into some of the tooling that is being used by JP Morgan, particularly around its AWS cloud. So the first point is, it's all about business value. JPMC, they're in the money business, and in that world, business value is everything. So JR Reid, the CIO, showed this slide and talked about their overall goals, which centered on a cloud-first strategy to modernize the JPMC platform. I think it's simple and sensible, but there are three factors on which he focused. Cutting costs, of course, you've got to do that. Number two was about unlocking new opportunities, or accelerating time to value. But I was really happy to see number three, data reuse. That's a fundamental value ingredient in the slide that he's presenting here, and his commentary was all about aligning with the domains and maximizing data reuse, i.e. data is not like oil, and making sure there's appropriate governance around that. Now, don't get caught up in the term data lake; I think it's just how JP Morgan communicates internally. It's invested in the data lake concept, so they use water analogies. They use things like data puddles, for example, which are single-project data marts, or data ponds, which comprise multiple data puddles, and these can feed into data lakes. And as we'll see, JPMC doesn't strive to have a single version of the truth from a data standpoint that resides in a monolithic data lake; rather, it enables the business lines to create and own their own data lakes that comprise fit-for-purpose data products. And they do have a single version of truth for the metadata; okay, we'll get to that. But generally speaking, each of the domains will own their own data end-to-end and be responsible for those data products; we'll talk about that more.
Now, the genesis of this was sort of a cloud-first platform. JPMC is leaning into public cloud, which is ironic since, in the early days of cloud, all the financial institutions were like, never. Anyway, JPMC is going hard after it; they're adopting agile methods and microservices architectures, and it sees cloud as a fundamental enabler, but it recognizes that on-prem data must be part of the data mesh equation. Here's a slide that starts to get into some of that generic tooling, and then we'll go deeper. And I want to make a couple of points here that tie back to Zhamak Dehghani's original concept. The first is that, unlike many data architectures, this puts data as products right in the fat middle of the chart. The data products live in the business domains and are at the heart of the architecture. The databases, the Hadoop clusters, the files and APIs on the left-hand side, they serve the data product builders. The specialized roles on the right-hand side, the DBAs, the data engineers, the data scientists, the data analysts, we could have put in quality engineers, et cetera, they serve the data products. Because the data products are owned by the business, they inherently have the context that is the middle of this diagram. And you can see at the bottom of the slide, the key principles include domain thinking and end-to-end ownership of the data products. They build it, they own it, they run it, they manage it. At the same time, the goal is to democratize data with self-service as a platform. One of the biggest points of contention of data mesh is governance. And as Serita Bakst said on the meetup, metadata is your friend, and she kind of made a joke, she said, "This sounds kind of geeky, but it's important to have a metadata catalog to understand where data resides and the data lineage and overall change management." So to me, this really passed the data mesh sniff test pretty well. Let's look at data as products. CIO Reid said the most difficult thing for JPMC was getting their heads around data product, and they spent a lot of time getting this concept to work. Here's the slide they used to describe their data products as it related to their specific industry. They said a common language and taxonomy is very important, and you can imagine how difficult that was. He said, for example, it took a lot of discussion and debate to define what a transaction was. But you can see, at a high level, these three product groups around wholesale, credit risk, party, and trade and position data as products, and each of these can have sub-products, like party will have know your customer, KYC, for example. So a key for JPMC was to start at a high level and iterate to get more granular over time. So lots of decisions had to be made around who owns the products and the sub-products. The product owners, interestingly, had to defend why that product should even exist, what boundaries should be in place, and what data sets do and don't belong in the various products. And this was a collaborative discussion; I'm sure there was contention around that between the lines of business, and around which sub-products should be part of these circles. They didn't say this, but tying it back to data mesh, each of these products, whether in a data lake or a data hub or a data pond or data warehouse or data puddle, each of these is a node in the global data mesh that is discoverable and governed.
And supporting this notion, Serita said that, "This should not be infrastructure-bound. Logically, any of these data products, whether on-prem or in the cloud, can connect via the data mesh." So again, I felt like this really stayed true to the data mesh concept. Well, let's look at some of the key technical considerations that JPM discussed in quite some detail. This chart here shows a diagram of how JP Morgan thinks about the problem, and some of the challenges they had to consider were how to write to various data stores, whether and how you can move data from one data store to another, how data can be transformed, where the data is located, whether the data can be trusted, how it can be easily accessed, and who has the right to access that data. These are all problems that technology can help solve. And to address these issues, Arup Nanda explained that the heart of this slide is the data ingestor, instead of ETL. All data producers and contributors send their data to the ingestor, and the ingestor then registers the data so it's in the data catalog. It does a data quality check and it tracks the lineage. Then data is sent to the router, which persists the data in the data store based on the best destination as informed by the registration. This is designed to be a flexible system. In other words, the data store for a data product is not fixed; it's determined at the point of inventory, and that allows changes to be easily made in one place. The router simply reads that optimal location and sends it to the appropriate data store. Now, the schema inferer there is used when there is no clear schema on write. In this case, the data product is not allowed to be consumed until the schema is inferred: the data goes into a raw area, and the inferer determines the schema and then updates the inventory system so that the data can be routed to the proper location and properly tracked. So that's some of the detail of how the sausage factory works in this particular use case; it was very interesting and informative. Now let's take a look at the specific implementation on AWS and dig into some of the tooling. As described in some detail by Arup Nanda, this diagram shows the reference architecture used by this group within JP Morgan, and it shows all the various AWS services and components that support their data mesh approach. So start with the authorization block right there underneath Kinesis. Lake Formation is the single point of entitlement and has a number of buckets, including, you can see there, the raw area that we just talked about, a trusted bucket, a refined bucket, et cetera. Depending on the data characteristics, the data catalog registration block, where you see the Glue catalog, determines in which bucket the router puts the data. And you can see the many AWS services in use here: identity, EMR, the Elastic MapReduce cluster from the legacy Hadoop work done over the years, Redshift Spectrum and Athena. JPMC uses Athena for single-threaded workloads and Redshift Spectrum for nested types, so they can be queried independent of each other. Now remember, very importantly, in this use case there is not a single Lake Formation; rather, multiple lines of business will be authorized to create their own lakes, and that creates a challenge. So how can that be done in a flexible and automated manner? And that's where the data mesh comes into play.
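As an aside, the ingest-register-route flow described a moment ago can be pictured with a toy Python sketch. Everything here is illustrative: the class and function names are stand-ins of mine, not JPMC's code, and the real system adds the quality checks, lineage tracking, and security this omits.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Dataset:
    name: str
    records: list
    schema: Optional[dict] = None

@dataclass
class CatalogEntry:
    name: str
    schema: Optional[dict]
    lineage: list = field(default_factory=list)

def infer_schema(ds: Dataset) -> dict:
    # Naive inference from the first record; the real inferer is far richer.
    return {k: type(v).__name__ for k, v in ds.records[0].items()}

def route(entry: CatalogEntry) -> str:
    # Reads the optimal destination recorded at registration time, so the
    # data store for a product can be changed later in one place.
    return "refined"

def ingest(ds: Dataset, catalog: dict) -> str:
    entry = CatalogEntry(ds.name, ds.schema)
    catalog[ds.name] = entry              # register: entry tracks schema and lineage
    if entry.schema is None:              # no clear schema on write:
        entry.schema = infer_schema(ds)   # hold in the raw zone, infer, update inventory
        return "raw"
    return route(entry)

catalog: dict = {}
print(ingest(Dataset("trades", [{"id": 1, "px": 101.5}]), catalog))  # -> "raw"
```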
So JPMC came up with this federated Lake Formation accounts idea, and each line of business can create as many data producer or consumer accounts as they desire and roll them up into their master line-of-business Lake Formation account, and they cross-connect these data products in a federated model. And these all roll up into a master Glue catalog, so that any authorized user can find out where a specific data element is located. So this is like a superset catalog that comprises multiple sources and syncs up across the data mesh. So again, to me, this was a very well-thought-out and practical application of data mesh. Yes, it includes some notion of centralized management, but much of that responsibility has been passed down to the lines of business. It does roll up to a master catalog, but that's a metadata management effort that seems compulsory to ensure federated and automated governance. As well, at JPMC, the office of the chief data officer is responsible for ensuring governance and compliance throughout the federation. All right, so let's take a look at some of the suspects in this world of data mesh and bring in the ETR data. Now, of course, ETR doesn't have a data mesh category; there's no such thing as a data mesh vendor. You build a data mesh, you don't buy it. So what we did is we used the ETR dataset to select and filter on some of the culprits that we thought might contribute to the data mesh, to see how they're performing. This chart depicts a popular view that we often like to share. It's a two-dimensional graphic with net score, or spending momentum, on the vertical axis and market share, or pervasiveness in the data set, on the horizontal axis. And we filtered the data on sectors such as analytics, data warehouse, and the adjacencies to things that might fit into data mesh. And we think that these pretty well reflect participation, though data mesh is certainly not all-encompassing, and it's a subset, obviously, of all the vendors who could play in the space. Let's make a few observations. Now, as is often the case, Azure and AWS are almost literally off the charts, with very high spending velocity and a large presence in the market. Oracle, you can see, also stands out, because much of the world's data lives inside of Oracle databases. It doesn't have the spending momentum or growth, but the company remains prominent. And you can see Google Cloud doesn't have nearly the presence in the dataset, but its momentum is highly elevated. Remember that red dotted line there, that 40% line; anything over that indicates elevated spending momentum. Let's go to Snowflake. Snowflake is consistently shown to be the gold standard in net score in the ETR dataset. It continues to maintain highly elevated spending velocity in the data. And in many ways, Snowflake, with its data marketplace and its data cloud vision and data sharing approach, fits nicely into the data mesh concept. Now, a caution: Snowflake has used the term data mesh in its marketing, but in our view it lacks clarity, and we feel like they're still trying to figure out how to communicate what that really is. But really, we think there's a lot of potential in that vision. Databricks is also interesting, because the firm has momentum, and we expect further elevated levels on the vertical axis in upcoming surveys, especially as it readies for its IPO. The firm has a strong product and managed service, and is really one to watch.
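Picking the federated Lake Formation accounts idea back up for a moment, here is a rough boto3 sketch of that cross-account pattern as I read it, not JPMC's actual code: a producer line-of-business account grants a consumer account SELECT on one of its Glue tables through Lake Formation, and consumers then discover data products by querying the catalog. The account IDs, role, database, and table names are placeholders.

```python
import boto3

lf = boto3.client("lakeformation", region_name="us-east-1")
glue = boto3.client("glue", region_name="us-east-1")

# Producer account 111122223333 entitles an analyst role in a consumer account.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::444455556666:role/analyst"},
    Resource={"Table": {
        "CatalogId": "111122223333",   # the line-of-business producer account
        "DatabaseName": "credit_risk",
        "Name": "exposures",
    }},
    Permissions=["SELECT"],
)

# Consumers locate data products through the catalog rather than by asking around.
tables = glue.get_tables(CatalogId="111122223333", DatabaseName="credit_risk")
for t in tables["TableList"]:
    print(t["Name"], t["StorageDescriptor"]["Location"])
```

In a real multi-account federation there is more plumbing (resource shares, resource links, and the entitlement policies the talk describes), but the design point stands: the grant lives with the producer, and discovery happens through the shared catalog.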
Now, we included a number of other database companies for obvious reasons, like Redis and Mongo, MariaDB, Couchbase and Teradata. SAP as well is in there; that's not all database, but SAP is prominent, so we included them. As is IBM, more of a traditional database player, also with a big presence. Cloudera includes Hortonworks, and HPE Ezmeral comprises the MapR business that HPE acquired. So these guys got the big data movement started, between Cloudera and Hortonworks, which was born out of Yahoo, the early big data, sorry, early Hadoop innovator, and MapR, which kind of ran its own course, and now that's all come together in various forms. And of course, Talend and Informatica are there; they are two data integration companies that are worth noting. We also included some of the AI and ML specialists and data science players in the mix, like DataRobot, which just did a monster $250 million round, Dataiku, H2O.ai, and ThoughtSpot, which is all about democratizing data and injecting AI, and I think fits well into the data mesh concept. And you know, we put VMware Cloud in there for reference, because it really is the predominant on-prem infrastructure platform. All right, let's wrap with some final thoughts here. First, thanks a lot to the JP Morgan team for sharing this data. I really want to encourage practitioners and technologists to go watch the YouTube video of that meetup; we'll include the link with this session. And thank you to Zhamak Dehghani and the entire data mesh community for the outstanding work that you're doing, challenging the established conventions of monolithic data architectures. The JPM presentation lends real credibility; it takes data mesh well beyond concept and demonstrates how it can be, and is being, done. And you know, this is not a perfect world; you're going to start somewhere, and there are going to be some failures. The key is to recognize that shoving everything into a monolithic data architecture won't support the massive scale and agility that you're after. It's maybe fine for smaller use cases in smaller firms, but if you're building a global platform in a data business, it's time to rethink data architecture. Now, much of this is enabled by the cloud, but cloud first doesn't mean cloud only, and it doesn't mean you'll leave your on-prem data behind; on the contrary, you have to include non-public-cloud data in your data mesh vision, just as JPMC has done. You've got to get some quick wins; that's crucial so you can gain credibility within the organization and grow. And one of the key takeaways from the JP Morgan team is that there is a place for dogma, like organizing around data products and domains and getting that right. On the other hand, you have to remain flexible, because technologies are going to come and technologies are going to go, so you've got to be flexible in that regard. And look, if you're going to embrace the metaphor of water, like puddles and ponds and lakes, we suggest, maybe a little tongue in cheek, but still we believe in this, that you expand your scope to include data oceans, something John Furrier and I have talked about and laughed about extensively on theCUBE. Data oceans, it's huge. It's the new data lake: go transcend the data lake, think oceans. And think about this: just as we're evolving our language, we should be evolving our metrics. Much of the last decade of big data was about just getting the stuff to work, getting it up and running, standing up infrastructure, and managing massive, how much data you got?
Massive amounts of data. And there were many KPIs built around, again, standing up that infrastructure, ingesting data, a lot of technical KPIs. This decade is not just about enabling better insights; it's more than that. Data mesh points us to a new era of data value, and that requires new metrics around monetizing data products, like, how long does it take to go from data product conception to monetization? And how does that compare to what it is today? And what is the time to quality? If the business owns the data, and the business has the context, the quality that comes out of the chute should be, at a basic level, pretty good, and at a higher mark than out of a big data team with no business context. Automation, AI, and, very importantly, organizational restructuring of our data teams will heavily contribute to success in the coming years. So we encourage you: learn, lean in, and create your data future. Okay, that's it for now. Remember, these episodes are all available as podcasts wherever you listen; all you've got to do is search "breaking analysis podcast", and please subscribe. Check out ETR's website at etr.plus for all the data and all the survey information. We publish a full report every week on wikibon.com and siliconangle.com. And you can get in touch with us: email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on my LinkedIn posts. This is Dave Vellante for theCUBE insights powered by ETR. Have a great week, everybody. Stay safe, be well, and we'll see you next time. (upbeat music)

Published Date : Jul 12 2021


Matt Maccaux, HPE | HPE Discover 2021


 

(bright music) >> Data by its very nature is distributed and siloed, but most data architectures today are highly centralized. Organizations are increasingly challenged to organize and manage data, and turn that data into insights. This idea of a single monolithic platform for data is giving way to new thinking, where a decentralized approach, with open cloud native principles and federated governance, will become an underpinning of digital transformations. Hi everybody. This is Dave Vellante. Welcome back to HPE Discover 2021, the virtual version. You're watching theCube's continuous coverage of the event, and we're here with Matt Maccaux, who's a field CTO for Ezmeral Software at HPE. We're going to talk about HPE's software strategy and Ezmeral, and specifically how to take AI analytics to scale and ensure the productivity of data teams. Matt, welcome to theCube. Good to see you. >> Good to see you again, Dave. Thanks for having me today. >> You're welcome. So talk a little bit about your role as a CTO. Where do you spend your time? >> I spend about half of my time talking to customers and partners about where they are on their digital transformation journeys and where they struggle with this sort of last phase, where we start talking about bringing those cloud principles and practices into the data world. How do I take those data warehouses, those data lakes, those distributed data systems into the enterprise and deploy them in a cloud-like manner? Then the other half of my time is working with our product teams to feed that information back, so that we can continually innovate to the next generation of our software platform. >> So, I remember, I've been following HP and HPE for a long, long time; theCube has documented, we go back to sort of when the company was breaking in two parts, and at the time a lot of people were saying, "Oh, HP is getting rid of their software business, they're getting out of software." I said, "No, no, no, hold on. They're really focusing," and with the whole focus around hybrid cloud and now as a service, you've really retooled that business and sharpened your focus. So tell us more about Ezmeral. It's a cool name, but what exactly is Ezmeral software? >> I get this question all the time. So what is Ezmeral? Ezmeral is a software platform for modern data and analytics workloads, using open source software components. We came from some inorganic growth. We acquired a company called Scytale, which brought us a zero trust approach to doing security with containers. We bought BlueData, who came to us with an orchestrator before Kubernetes was even mainstream; they were orchestrating workloads using containers for some of these more difficult workloads: clustered applications, distributed applications like Hadoop. Then finally we acquired MapR, which gave us this scale-out distributed file system and additional analytical capabilities. What we've done is we've taken those components, and we've also gone out into the marketplace to see what open source projects exist, to allow us to bring those cloud principles and practices to these types of workloads, so that we can take things like Hadoop, and Spark, and Presto, and deploy and orchestrate them using open source Kubernetes, leveraging GPUs, while providing that zero trust approach to security. That's what Ezmeral is all about: taking those cloud practices and principles, but without locking you in.
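As a rough illustration of what "deploy and orchestrate them using open source Kubernetes" can look like in practice, here is a minimal PySpark sketch that points Spark at a Kubernetes API server as its scheduler. The master URL, namespace, image, and data path are placeholders, not anything Ezmeral-specific.

```python
from pyspark.sql import SparkSession

# Use a Kubernetes cluster as Spark's resource manager; executors run as pods.
spark = (
    SparkSession.builder
    .master("k8s://https://k8s-apiserver.example.com:6443")  # placeholder endpoint
    .appName("containerized-analytics")
    .config("spark.kubernetes.namespace", "analytics")
    .config("spark.kubernetes.container.image", "example/spark:3.4.0")
    .config("spark.executor.instances", "4")
    .getOrCreate()
)

# The job itself is ordinary Spark code, wherever the scheduler happens to be.
events = spark.read.parquet("s3a://datalake/refined/events/")
events.groupBy("event_type").count().show()
```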
Again, using those open source components where they exist, and then committing and contributing back to the open source community where those projects don't exist. >> You know, it's interesting, thank you for that history. And when I go back, I have been there since the early days of big data and Hadoop and so forth, and MapR always had the best product, but they couldn't get it out. Back then it was like kumbaya, open source, and they had this kind of proprietary system, but it worked, and that's why it was the best product. And at the same time they participated in open source projects, because everybody did; that's where the innovation is going. So you're making that really-hard-to-use stuff easier to use with Kubernetes orchestration, and then obviously, I'm presuming with the open source chops, sort of leaning into the big trends that you're seeing in the marketplace. So my question is, what are those big trends that you're seeing when you speak to technology executives, which is a big part of what you do? >> So the trends, I think, are a couple-fold. And it's funny about Hadoop, but I think the final nails in the coffin have been hammered in with the Hadoop space now. So that leading trend, of where organizations are going: we're seeing organizations wanting to go cloud first, but they really struggle with these data-intensive workloads. Do I have to store my data in every cloud? Am I going to pay egress in every cloud? Well, what if my data scientists are most comfortable in AWS, but my data analysts are more comfortable in Azure? How do I provide that multi-cloud experience for these data workloads? That's the number one question I get asked, and that's probably the biggest struggle for these chief data officers and chief digital officers: how do I allow that innovation while maintaining control over my data compliance, especially when we talk international standards, like GDPR, with the right to restrict access to data and the ability to be forgotten? In these multinational organizations, how do I sort of square all of those components? Then how do I do that in a way that just doesn't lock me into another appliance or software vendor stack? I want to be able to work within the confines of the ecosystem, use the tools that are out there, but allow my organization to innovate in a very structured, compliant way. >> I mean, I love this conversation, and you just, to me, you hit on the key word, which is organization. I want to talk about what some of the barriers are. And again, you heard my rap up front. I really do think that we've created, not only from a technology standpoint, and yes the tooling is important, but so is the organization, and as you said, an analyst might want to work in one environment, a data scientist might want to work in another environment. The data may be very distributed. You might have situations where they're supporting the line of business. The line of business is trying to build new products, and if I have to go through this monolithic centralized organization, that's a barrier for me. And so we're seeing that change, and I kind of alluded to it up front, but what do you see as the big barriers that are blocking this vision from becoming a reality? >> It very much is organization, Dave. The technology's actually no longer the inhibitor here. We have enough technology, enough choices out there, that technology is no longer the issue. It's the organization's willingness to embrace some of those technologies and put just the right level of control around accessing that data.
Because if you don't allow your data scientists and data analysts to innovate, they're going to do one of two things. They're either going to leave, and then you have a huge problem keeping up with your competitors, or they're going to do it anyway, and they're going to do it in a way that probably doesn't comply with the organizational standards. So the more progressive enterprises that I speak with have realized that they need to allow these various analytical users to choose the tools they want, to self-provision those as they need to, and to get access to data in a secure and compliant way. And that means we need to bring the cloud to generally where the data is, because it's a heck of a lot easier than trying to bring the data to where the cloud is, while conforming to those data principles, and that's HPE's strategy. You've heard it from our CEO for years now: everything needs to be delivered as a service. It's Ezmeral software that enables that capability, such as self-service and secure data provisioning, et cetera. >> Again, I love this conversation, because if you go back to the early days of Hadoop, that was what was profound about Hadoop: bring five megabytes of code to a petabyte of data. And it didn't happen. We shoved it all into a data lake and it became a data swamp. And that's okay; that was a one-dot-oh. You know, maybe with data, as with data warehouses, data hubs, data lakes, maybe this is now a four-dot-oh, but we're getting there. But open source, one thing's for sure: it continues to gain momentum; it's where the innovation is. I wonder if you could comment on your thoughts on the role that open source software plays for large enterprises, maybe some of the hurdles that are there, whether they're legal or licensing, or just fears. How important is open source software today? >> I think the cloud native developments, following 12-factor application, microservices-based approaches, paved the way over the last decade to make using open source technology tools and libraries mainstream. We have to tip our hats to Red Hat, right, for allowing organizations to embrace something as core as an operating system within the enterprise. But what everyone realized is that it's support that has to come with that. So we can allow our data scientists to use open source libraries, packages, and notebooks, but are we going to allow those to run in production? If the answer is no, well, then if we can't get support, we're not going to allow that. So where HPE Ezmeral is taking the lead here is, again, embracing those open source capabilities, but if we deploy it, we're going to support it, or we're going to work with the organization that has the committers to support it. You call HPE, the same phone number you've been calling for years, for tier one, 24-by-7 support, and we will support your Kubernetes, your Spark, your Presto, your Hadoop ecosystem of components. We're that throat to choke, and we'll provide everything all the way up to break/fix support for some of these components and packages, giving these large enterprises the confidence to move forward with open source, but knowing that they have a trusted partner with which to do so.
And that's why we've seen such success with, say, for instance, managed services in the cloud, versus throwing out all the animals in the zoo and saying, okay, figure it out yourself. But then, of course, what we saw, which was kind of ironic, was people finally said, "Hey, we can do this in the cloud more easily." So that's where you're seeing a lot of data land. However, the definition of cloud, or the notion of cloud, is changing. No longer is it just this remote set of services somewhere out there in the cloud, some data center somewhere. No, it's moving to on-prem; on-prem is creating hybrid connections. You're seeing co-location facilities very proximate to the cloud. We're talking now about the edge, the near edge, and the far edge, deeply embedded. So that whole notion of cloud is changing. But I want to ask you, there's still a big push to cloud, everybody has a cloud-first mantra; how do you see HPE competing in this new landscape? >> I think collaborating is probably a better word, although you could certainly argue that if we're just leasing or renting hardware, then it would be competition, but I think, again, the workload is going to flow to where the data exists. So if the data's being generated at the edge and being pumped into the cloud, then cloud is prod. That's the production system. If the data is generated via on-premises systems, then that's where it's going to be executed; that's production. And so HPE's approach is very much coexist. It's a coexist model of, if you need to do dev/test in the cloud and bring it back on-premises, fine, or vice versa. The key here is not locking our customers and our prospective clients into any sort of proprietary stack, as we were talking about earlier; it's giving people the flexibility to move those workloads to where the data exists. That is going to allow us to continue to get share of wallet and mind share, and to continue to deploy those workloads. And yes, there's going to be competition that comes along. Do you run this on GCP, or do you run it on GreenLake on-premises? Sure, we'll have those conversations, but again, if we're using open source software as the foundation for that, then actually where you run it is less relevant. >> So there are a lot of choices out there when it comes to containers generally and Kubernetes specifically, and you may have answered this, you get the zero trust component, you've got the orchestrator, you've got the scale-out piece, but I'm interested in hearing, in your words, why an enterprise would or should consider Ezmeral instead of alternative Kubernetes solutions? >> It's a fair question, and it comes up in almost every conversation. "Oh, we already do Kubernetes, we have a Kubernetes standard," and that's largely true in most of the enterprises I speak to. They're using one of the many on-premises distributions or cloud distributions, and they're all fine. They're all fine for what they were built for. Ezmeral was generally built for something a little different. Yes, everybody can run microservices-based applications and DevOps-based workloads, but where Ezmeral is different is for those data-intensive and clustered applications. Those sorts of applications require a certain degree of network awareness, persistent storage, et cetera, which requires a significant amount of intelligence: either you have to write your own operators in Golang, or Ezmeral can be that easy button. We deploy those stateful applications, because we bring a persistent storage layer that came from MapR. We're really good at deploying those stateful clustered applications, and, in fact, we've open sourced that as a project, KubeDirector, which came from BlueData, and we're really good at securing these, using SPIFFE and SPIRE, to ensure that there's that zero trust approach, which came from Scytale, and we've wrapped all of that in Kubernetes.
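To ground the KubeDirector project just mentioned, here is a hedged sketch of asking Kubernetes to stand up a clustered application through the official Python client. The CRD group, version, and spec fields follow my reading of the open source KubeDirector project and should be treated as assumptions to verify against its docs; the app name and namespace are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod
api = client.CustomObjectsApi()

# Assumed KubeDirector CRD coordinates; check the project before relying on them.
cluster = {
    "apiVersion": "kubedirector.hpe.com/v1beta1",
    "kind": "KubeDirectorCluster",
    "metadata": {"name": "demo-spark"},
    "spec": {
        "app": "spark",  # an app type previously registered on the platform
        "roles": [
            {"id": "controller", "members": 1},
            {"id": "worker", "members": 3},
        ],
    },
}

api.create_namespaced_custom_object(
    group="kubedirector.hpe.com",
    version="v1beta1",
    namespace="analytics",
    plural="kubedirectorclusters",
    body=cluster,
)
```

The design point is the one made in the interview: the operator, not the user, carries the intelligence about network awareness and persistent storage, so a stateful, clustered app is declared in a few lines instead of hand-written Golang.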
So now you can take the most difficult, gnarly, complex, data-intensive applications in your enterprise and deploy them using open source. And if that means we have to coexist with an existing Kubernetes distribution, that's fine. That's actually the most common scenario that I walk into: I start asking about, "What about these other applications you haven't done yet?" The answer is usually, "We haven't gotten to them yet," or "We're thinking about it," and that's when we talk about the capabilities of Ezmeral, and I usually get the response, "Oh, A, we didn't know you existed, and B, well, let's talk about how exactly you do that." So again, it's more of a coexist model rather than a compete-with model, Dave. >> Well, that makes sense. I mean, I think again, a lot of people go, "Oh yeah, Kubernetes, no big deal. It's everywhere." But you're talking about a solution, kind of taking a platform approach with capabilities. You've got to protect the data. A lot of times these microservices aren't so micro, and things are happening really fast. You've got to be secure, you've got to be protected. And like you said, you've got a single phone number. You know, people say one throat to choke. Somebody in the media the other day said, "No, no. Single hand to shake." It's more of a partnership. I think that's apropos for HPE, Matt, with your heritage. >> That one's better. >> So, you know, thinking about this whole space: we've gone through the pre-big-data days, and then big data was the hot buzzword. People don't necessarily use that term anymore, although the data is bigger and getting bigger, which is kind of ironic. Where do you see this whole space going? We've talked about that sort of trend toward breaking down the silos, decentralization, maybe these hyper-specialized roles that we've created getting more embedded or aligned with the line of business. How do you see it? It feels like the next 10 years are going to be different than the last 10 years. How do you see it, Matt? >> I completely agree. I think we are entering this next era, and I don't know if it's well-defined. I don't know if I would go out on a limb to say exactly what the trend is going to be. But as you said earlier, data lakes really turned into data swamps. We ended up with lots of them in the enterprise, and enterprises had to allow that to happen. They had to let each business unit or each group of users collect the data that they needed, and IT sort of had to deal with that down the road. I think that the more progressive organizations are leading the way. They are, again, taking those lessons from cloud and application development, microservices, and they're allowing a freedom of choice. They're allowing data to move to where those applications are, and I think this decentralized approach is really going to be king. You're going to see traditional software packages. You're going to see open source. You're going to see a mix of those. But what I think will probably be common throughout all of that is there's going to be this sense of automation, this sense that we can't just build an algorithm once, release it, and then wish it luck. We've got to treat these analytics and these data systems as living things, with life cycles that we have to support. Which means we need to have DevOps for our data science. We need CI/CD for our data analytics. We need to provide engineering at scale, like we do for software engineering.
That's going to require automation and an organizational thinking process to allow that to actually occur. I think it's all of those things: the sort of people, process, products. It's all three of those things that are going to have to come into play, but stealing those best ideas from cloud and application development, I think we're going to end up with probably something new over the next decade or so. >> Again, I'm loving this conversation, so I'm going to stick with it for a sec. It's hard to predict, but some takeaways that I have, Matt, from our conversation, I wonder if you could comment? I think the future is more open source. You mentioned automation; devs are going to be key. I think governance as code, security designed in at the point of code creation, is going to be critical. It's no longer going to be a bolt-on. I don't think we're going to throw away the data warehouses or the data hubs or the data lakes. I think they become a node. I like this idea, I don't know if you know Zhamak Dehghani, but she has this idea of a global data mesh where these tools, lakes, whatever, they're a node on the mesh. They're discoverable, they're shareable, they're governed in a way. I think the mistake a lot of people made early on in the big data movement is, "Oh, we've got data. We have to monetize our data," as opposed to thinking about what products can I build that are based on data, that then can lead to monetization. I think the other thing I would say is the business has gotten way too technical. (Dave chuckles) It's alienated a lot of the business lines. I think we're seeing that change, and I think things like Ezmeral that simplify that are critical. So I'll give you the final thoughts, based on my rant. >> No, your rant is spot on, Dave. I think we are in agreement about a lot of things. Governance is absolutely key. If you don't know where your data is, what it's used for, and can't apply policies to it, it doesn't matter what technology you throw at it; you're going to end up in the same state that you're essentially in today, with lots of swamps. I did like that concept of a node on a data mesh. It kind of goes back to the similar thing with a service mesh, or a set of APIs that you can use. I think we're going to have something similar with data. The trick is always, how heavy is it? How easy is it to move about? I think there's always going to be that latency issue, maybe not within the data center, but across the WAN. Latency is still going to be key, which means we need to have really good processes to be able to move data around. As you said, govern it: determine who has access to what, when, and under what conditions, and then allow it to be free. Allow people to bring their choice of tools and provision them how they need to, while providing that audit, compliance, and control. And then again, as you need to provision data across those nodes for those use cases, do so in a well-measured and governed way. I think that's sort of where things are going. But we keep using that term governance; I think that's so key, and there's nothing better than using open source software, because that provides traceability, auditability, and, frankly, this openness that allows you to say, "I don't like where this project's going. I want to go in a different direction." And it gives those enterprises a control over these platforms that they've never had before. >> Matt, thanks so much for the discussion. I really enjoyed it. Awesome perspectives. >> Well, thank you for having me, Dave.
Excellent conversation as always. Thanks for having me again. >> You're very welcome. And thank you for watching everybody. This is theCUBE's continuous coverage of HPE Discover 2021. Of course, the virtual version. Next year, we're going to be back live. My name is Dave Vellante. Keep it right there. (upbeat music)
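A quick aside on Matt's call for "CI/CD for our data analytics": the idea becomes concrete if you picture the promotion gate a retraining pipeline runs before a new model replaces the one in production. The Python sketch below is purely illustrative, not an Ezmeral or HPE API; the file names, metric fields, and thresholds are all hypothetical, but the shape of the check is the point.

# Illustrative CI gate for a data-analytics pipeline (hypothetical names and thresholds).
# A retraining job writes candidate_metrics.json; this stage decides promote vs. reject,
# treating the model like any other artifact moving through CI/CD.
import json
import sys

MIN_AUC = 0.80              # absolute floor for the candidate model (made-up value)
MAX_AUC_REGRESSION = 0.01   # allowed drop versus the production baseline
MAX_P95_LATENCY_MS = 50.0   # serving-latency budget

def gate(candidate, baseline):
    """Return a list of failure reasons; an empty list means the candidate may ship."""
    failures = []
    if candidate["auc"] < MIN_AUC:
        failures.append(f"AUC {candidate['auc']:.3f} is below the floor of {MIN_AUC}")
    if baseline["auc"] - candidate["auc"] > MAX_AUC_REGRESSION:
        failures.append("candidate regresses too far versus the live baseline")
    if candidate["p95_latency_ms"] > MAX_P95_LATENCY_MS:
        failures.append("p95 serving latency is over budget")
    return failures

if __name__ == "__main__":
    with open("candidate_metrics.json") as f:
        candidate = json.load(f)   # written by the retraining job
    with open("baseline_metrics.json") as f:
        baseline = json.load(f)    # metrics of the currently deployed model
    problems = gate(candidate, baseline)
    if problems:
        print("REJECT:", "; ".join(problems))
        sys.exit(1)  # non-zero exit fails the pipeline stage and blocks promotion
    print("PROMOTE: candidate cleared all gates")

Run as one stage of a pipeline, the non-zero exit blocks the release, which is the "living things with life cycles" treatment Matt describes: the model is versioned, validated, and promoted with the same rigor as application code.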

Published Date : Jun 22 2021


Robert Christiansen, HPE | HPE Discover 2021


 

(upbeat music) >> Welcome to theCUBE's coverage of HPE Discover 2021. I'm Lisa Martin. Robert Christiansen joins me, one of our alumni, the VP of Strategy in the Office of the CTO at HPE. Robert, it's great to see you, welcome back to the program. >> It's nice being here, Lisa. Thank you so much for having me. >> So here we are, still in this virtual world. Things are opening up a little bit, which is nice, but one of the things I'm excited to talk to you about today is Edge to Cloud from the customer's perspective. Obviously, that's why HPE does what it does for its customers. So let's talk about some of the things that you see from your perspective with respect to data. We can't have a Cube conversation without talking about data; there's more and more of it, and more value in it, but getting access to it quickly, in real time, and often to make data-driven decisions, is a challenging thing to do. Talk to me about what you see from the customer's lens. >> Well, the customer, at the very highest level, from the board level on down, is saying, "Hey, what is our data strategy? How are we going to put the value of data in place? Are we going to have it manifest its value in an internal fashion, where it makes us run better as an organization? Can we get cost improvements? Can we move quicker with that? And then can we monetize that data if it's very specific to an industry, like healthcare or pharma or something like that? Can we expose that data to the rest of the world and give them access into what we call data sets?" And there's a lot of that going on right now too. So we're seeing these two different angles about how they're going to manage and control that data. And you mentioned the Edge-related focus around that. You know, the Edge is where business is done, where people actually do the transaction, whether it's in healthcare, like in a hospital, or a manufacturing facility, et cetera. And the data that they're using at that location is really important for making a decision at that location. They can't send it back to a Cloud. They can't send it back to someplace, wait for a decision to happen, and then shoot it back again and say, "Hey, stop the production line because we found a defect." You need to act at that moment, and the clients are saying, "Hey, can you improve my reliability? Can you give me better SLAs? Can you improve the quality of my products? Can you improve healthcare in a hospital by immediate decisions?" And that is a data problem. And that requires the movement of compute and networking and storage, fundamentally the core pieces of HPE's world. But in addition to that, the software necessary to take action on that data when they detect that there's some action that needs to be taken. >> And as I mentioned a minute ago, we've learned in the last 15-plus months that in a lot of cases, access to real-time data is no longer a nice-to-have. It's really going to be an element that separates those that succeed from those that aren't as competitive. But I want to talk about data from a consumption perspective: consumers and producers, and obviously ensuring that the data consumers have what they need. What is your thought when you talk with customers, the consumers versus the producers? >> Yeah, that's a great question, Lisa.
One of the key fundamental areas that HPE and the Office of the CTO have really been focused on over the last six months is something that we call data spaces, and that is putting in place a platform, a set of services that connect data consumers with data producers. And when you think about that, that really isn't anything new. I mean, you could go all the way back; if you've been around for a while, remember the company called TRW: they used to have credit reporting, and they used to sell that data. And then it moved into Experian and those things. But you've got Bloomberg and LexisNexis and all these companies that sell data. And they've been doing it, but it's very siloed. And so the explosion of data, and the value of the data to the consumers of it, has put the producers in a position where they can't readily be discovered. And whether it be a private source of data, like an IoT device in an industrial control, or a set of data that might say, "Hey, here's credit card fraud data on a certain geography," those sets need to be discovered, curated, and made available to those who would want them. You know, for example, the folks that want to know how an IoT device is working inside an industrial control, or a company who's trying to lower their fraud rates on credit card transactions, like in stadiums or something like that. And so this discoverability, this space you just talked about, is such a core piece of what we're working on right now. And our strategy is not only to work on what HPE has, to bring that and manifest that to the marketplace, but more importantly, to work with our partners to really bridge that gap and bring that next generation of services to the clients that can make those connections. >> So connecting and facilitating collaboration, absolutely key, as well as that seamless flow of data sharing without constraints. How are customers working with HPE and some of your partners to be able to create a data strategy, launch it, and start gleaning value from data faster than they could before? (Robert chuckles) >> This is the big question, because it's a maturity curve. Organizations are in various states of what we call data maturity, or data management maturity. They can be in very early stages, where they're more worried about just maintaining the lights on, DR strategies, and making sure that data doesn't go away, versus all the way through a whole cycle where they're actually governing it and putting it into what I call those discoverable buckets that are made available. And there's a whole life cycle around that. And so we see a big opportunity here for our A&PS and other professional services organizations to help people get up that maturity curve. But they also have to have the foundational tools necessary to make that happen. This is really where the Ezmeral product line of software applications really shines, being able to give that undercarriage that's necessary to help that data maturity and the growth of that client to meet those data needs. And we see the data fabric being a key element of that, for that distributed model, allowing people to get access and availability, to have a highly redundant, highly durable data fabric, and then to build applications, specifically data-intensive applications, on top of that with the Ezmeral platform, all the way into our GreenLake solutions. So it's quite a journey here, Lisa.
I just want to point to the fact that HPE has done a really, really good job of positioning itself for the explosion of all of these data-intensive AI/ML workloads that are making their way into every single conversation, every single enterprise, these days that wants to take advantage of the value of the data it has and to augment that data through other sources. >> When you think about data-intensive applications, the first one that pops into my mind is Uber. And it's one of those applications that we just expect. We kind of think of it as a taxi service, when really it's logistics and transportation, with all of the data on the backend that it is organizing to find the ride for me at my location, to take me where I'm going. The explosion of data-intensive applications is great, but there's also so much more demand from consumers, whether we're in business or we're consuming in our personal lives. >> It's so true, and that's a very popular example. And you know, you think about the real-time necessity of what the traffic patterns are at the time I order my ride. Is it going to route me the right way? That's a very real consumer-facing one. But if we click into our clients, HPE very much is like the backbone of the global economy. We provide probably one third of the compute for the global economy, and it's a staggering stat if you really think about it. I was just talking with a client here earlier, a very, very large financial services company, and they have 1200 data sets that they have been selling to their clients globally. And a lot of these clients want to augment that data with their existing real-time data to come up with a solution. And so they merge it, and they can determine some value through a model, an AI model. And so we're working hand-in-hand with them right now to give them that backbone so that they can deliver data sets into these other systems, and then make sure they get controlled and secured, so that the company we're working with, our client, has a deep sense of security that that data set is not going to find itself out in the wild somewhere, uncontrolled, for a number of reasons, from a security and governance mindset. But the number of use cases, Lisa, is as infinite as the number of opportunities for people to see value in business today. >> When you're talking about 1200 data sets that a company is selling, and of course there are many, many data sets that many types of companies consume, how do you work with them to ensure that they don't just proliferate silos, but that they get more of a unified data repository that they can act on? >> Yeah, that's a great question. A key tenet of the strategy at HPE is open source. So we believe in a hybrid, multi-Cloud environment, meaning that as long as we all agree that we are going to standardize on open-source technologies and APIs, we will be able to write and build applications that can natively run on any abstract platform. So for example, it's very important that we containerize, and that we use storage and data tools that adhere to open standards. If you write a Spark application, you want that Spark application potentially to run on any of the hyperscalers, the Amazons or the Microsofts or the GCPs, or you want it to run on-premises, and specifically on HPE equipment. Consider one of our clients right now. One of our clients specifically asked the question that you just raised.
They said, "Hey, we are building out this platform, this next generation platform. And we don't want the lock-in. We want to be, we want to create that environment where that data and the data framework." So they use very specific Open -source data frameworks and they open, they use very specific application frameworks the software from the Open-source community. We were able to meet that through the Ezmeral platform. Give them a very high availability, five nines high availability, redundant, redundant geographically to geographic data centers to give them that security that they're looking for. And because of that, it's opened so many other doors for us to walk in with a Cloud strategy that is an alternative, not just the one bet to public Cloud but you haven't other opportunity to bring a Cloud strategy on-premises that is compatible with Cloud-native activities that are going on in the public Cloud. And this is at the heart of HPE strategy. I think it's just, it's been paying off. It continues to pay off. We just keep investing and keep moving down that path. I think we're going to be doing really well. >> It sounds to me that the strategy that HP is developing is highly collaborative and synergistic with your customers. Talk to me a little bit about that, especially in the last year, as we've seen a massive acceleration in digital transformation about the rapid pivot to work from home, the necessity to collaborate electronically. Talk to me a little bit about that yin and yang with HPE and its customers in terms of your strategy. >> Yeah, well, I think when COVID hit one of the very first things that just took off with VDI. Rohit Dixon and I were talking on a podcast we had earlier around the work from home strategy that was implemented almost immediately. Well, we had it already in the can, we already were doing it for many clients already but it went from like a three priority to a 12, 10 being the max. Super, super charged up on how do we get work from home secured, work from home applications and stuff in the hands of people doing, you know, when data sensitivity is super important, VDI kicks in that's on that side. But then if you start looking at the digital transformation that has to happen in the supply chain that's going on right now. The opening up of our economies it's been various starts and stops if you look around the globe. The supply chains have absolutely gone under a huge amount of pressure, because, unlike in the United States, everybody just wants everything now because things are starting to open up. I was talking to a meat packing company and a restaurant business a little while ago. And they said, "Everybody wants to order the barbecue. Now we can't get the meat for the barbecues 'cause everybody's going to the barbecues." And so the supply, this is a multi-billion dollar industry supplying meat to all of the rest of the countries and stuff like that. And so they don't have optics into that supply chain today. So they're immediately having to go through a digitization process, the transformation in something as what you would call as low tech as delivering meat. So no industry is immune, none anywhere in this whole process. And it will continue to evolve as we exit and change how we live our life going into these next couple of years. I think it's going to be phenomenal just to watch. 
>> Yeah, it's one of the things I call a COVID catalyst, some of the silver linings that have come out of this, 'cause I wouldn't have thought of the meatpacking industry as a technology field as well, but now, thanks to you, I will. Last question for you. When customers in this dynamic world in which we're still living talk about Edge to Cloud, are they working with you to develop Cloud initiatives, Cloud mandates, Cloud everywhere? And if so, how do you help them start? >> Yeah, that's a great question. So again, it's like back to the data model: everybody has a different degree, or a starting point, at which they will engage us with a strategy, but specifically with what you're talking about, almost everybody already has a Cloud strategy. They may be at different maturity levels with that Cloud strategy, and there's almost always a Cloud group. Now, historically, HPE has not had much of a foot in the Cloud group, because they never really historically looked at HPE as a Cloud company. But what's happened over the last couple of years, with the acceleration of the acceptance of Cloud on-premises, and GreenLake specifically, and the introduction of Ezmeral and the Cloud-native infrastructure services and PaaS-layer stuff that's coming up through the Ezmeral product into our clients, it's immediately opened the door for conversations around Cloud that is available for what is staying on-premises, which is in excess of 70% of applications today. Now, if you take that and extend it into the Edge conversation: what if you were able to take a smaller form factor of a GreenLake Cloud and push it closer to an Edge location, while still giving the similar capabilities, the Cloud-native functions, that you had before? When we're provocative with clients in that sense, they suddenly open up and see the art of the possible. And so this is where we are really, really breaking down a set of paradigms of what's possible, by introducing, you know, not just from the Silicon all the way up, but the set of services all the way to the top of the stack, to the actual application that they're going to be running. And we say, "Hey, we can offer it to you in a pay-as-you-go model, we can get you the consumption models that are necessary, that let you buy the same way as the Cloud offers it. But more importantly, we'll be able to run it for you and provide you an abstraction out of that model, so you don't have to send your people out into the field to do these things. We have the software, the tools, and the systems necessary to manage it for you." But the last part, I want to be really, really focused on when clients are writing that application for the Edge that matters. They are putting it into new Cloud-native architectures, containers, microservices; they're using solid development pipelines; they've implemented what they call their DevOps or their DataOps practices in the field, in country, if you would say. That's where we shine. And so we have a really, really good conversation starting there. And how we start is, we arrive with a set of blueprints to help them establish what that roadmap looks like. And then our professional services staff, our A&PS groups around the globe, are really, really set up well to help them take that trip. >> Wow, that's outstanding, Robert. We could have a whole conversation on HPE's transformation. In fact, my first job in tech was at Hewlett Packard back in the day.
But this has been really interesting, really getting your vision of the customer's experience and the customer's perspective from the Office of the CTO. Great to talk to you, Robert. Thank you for sharing all that you did. This could have been a Part 2 conversation. >> Well, I'm hopeful then that we'll do Part 3 and 4 here as the months go by. So I look forward to seeing you again, Lisa. >> Deal, that's a deal. All right. >> All right. >> For Robert Christiansen, I'm Lisa Martin. You're watching theCUBE's coverage of HPE Discover 2021. (upbeat music)
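Robert's portability argument, that a Spark application written against open-source APIs can run on any hyperscaler or on-premises on HPE gear, shows up in how little of a typical Spark job actually touches the platform. Here is a minimal PySpark sketch (it assumes pyspark is installed, and the paths and column names are invented for illustration); in practice, only the storage URI, say s3a://, abfss://, or an on-prem data-fabric mount, changes between environments.

# Minimal, platform-neutral PySpark job; nothing below is specific to one cloud.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

input_path = "data/transactions.parquet"   # hypothetical; swap the URI per environment

spark = SparkSession.builder.appName("portable-aggregation").getOrCreate()

df = spark.read.parquet(input_path)

# Example aggregation: daily totals per region (column names are illustrative).
daily = (
    df.groupBy("region", F.to_date("event_time").alias("day"))
      .agg(F.sum("amount").alias("total_amount"),
           F.count("*").alias("txn_count"))
)

daily.write.mode("overwrite").parquet("output/daily_totals.parquet")
spark.stop()

Because the job speaks only to Spark's open APIs, moving it between a public cloud and an on-premises container platform is a configuration change rather than a rewrite, which is the immunity from lock-in Robert is describing.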

Published Date : Jun 22 2021


Arwa Kaddoura - VP, WW Sales & GTM Lead, HPE GreenLake Cloud Services [ZOOM]


 

(lively music) >> Welcome back to HPE Discover 2021. My name is Dave Vellante and you're watching theCUBE's virtual coverage of Discover '21, and we're excited to welcome back Arwa Kaddoura; she's vice president and worldwide go-to-market leader for HPE's smoking hot GreenLake Cloud Services. Arwa, welcome back to theCUBE, good to see you again. >> Thank you for having me, it's good to be with you. >> So, talk about how your products and services are supporting customer transformations. I'm interested in the experience that everybody's been dreaming about. Describe how you're giving your customers that competitive advantage, and if you've got examples, that would be awesome. >> Yeah, you got it. I think as we heard Antonio say, cloud is an experience, not a destination, right? And what we're doing with GreenLake is bringing those cloud capabilities and the cloud experience to our customers, you know, we like to say, in colocations, data centers, and at the edge, of course. So this is the cloud on-prem. And so rather than forcing customers to have to go up to the cloud to get modern cloud capabilities, or the benefits of things like pay-as-you-go consumption, or cloud-native capabilities like containers leveraging Kubernetes, we now bring all of that with GreenLake to our customers' edge locations, colocations, and data centers. We've been able to dramatically transform many of our customers' businesses, and you'll probably see at Discover some of those examples come to life. For example, Carestream, who is in the electronic medical imaging world: they have all of the X-ray equipment that captures X-rays and different sorts of diagnostics for patients. And we worked with them to not only craft an ML solution to better read and diagnose these images, but also provide all of the underlying infrastructure with the HPE GreenLake ML Ops platform, which allows them to instantly leverage the capabilities of machine learning and the infrastructure to go with it. >> And so tell me, how is it resonating with customers? You're talking to customers all the time. What do they tell you? >> Sure, you know, I think what our customers appreciate about HPE GreenLake is that it's not, look, it's either all on-prem in my data center, where I have to fully manage it, build it, implement it, take care of it, or it's fully public cloud, where I have little control and basically get whatever the public cloud gives me, right? HPE GreenLake gives our customers the flexibility and control that they require. And so you can think of many use cases where customers need the compute, storage, and processing to happen closer to where their data and apps live. And so for that exact reason, our customers love the flexibility. Cloud 1.0 was public cloud; Cloud 2.0, I think, is the cloud that comes to our customers at their convenience. And what I tell CIOs and CTOs and other lines-of-business leaders when I meet with them is, you shouldn't be forced to have to take your data and apps elsewhere to get the transformation that you need. We want to be able to bring that directly to our customers. >> 'Cause a lot of the transformation is around data; we love talking about data on theCUBE. It's funny, I mean, we talked about big data last decade; we don't use that term much anymore. It was kind of overhyped, but as oftentimes is the case, maybe in the early days it's overhyped, but then it's underhyped.
When it actually starts to kick in, and I feel like we're entering a new age of data and insights with the ascendancy of machine learning and AI. What does this mean from HPE's perspective, and what are customers telling you that it means for them? >> Yeah, data, I think, we often hear, is the new currency, right? It's the new gold. We've heard Antonio even say things like, data can even become something that, maybe over time, companies start to put some kind of value behind on their balance sheet, right, the same way that maybe brands represent value on a balance sheet. Effectively, what's happened with data is, a lot of people have a lot of data, but there's not been a lot of ability to extract insights from data, right? And I think the new revolution that we're all undergoing is that we finally have the modern analytics tools to actually turn the data into insights. And what we bring to the table from an HPE perspective is the fact that we have the best infrastructure; we obviously now have the cloud capabilities, mixed in with our data fabric, our container platform, our machine learning operations platform, to then be able to process that data, again, integrated with many of the great ISV partners that we have on the data side, allowing our customers to turn that into real insights for their business. And effectively, data is becoming a huge competitive advantage, right? I think many of us are leveraging some pretty interesting tools or gadgets these days. Like, I wear one of those sleep rings. You can imagine a company like that in the future, able to collect so much data from the folks that purchase their products, being able to give us insights about which ZIP Codes get the most sleep and which ZIP Codes are the healthiest in the United States or other countries, et cetera. Data really is becoming a competitive advantage. And one of the things that we care most about at HPE is also using it as a force for good, and making sure that there is a sort of ethical AI capability. >> That's a great message, and a very important one. It's interesting what you're saying about data and the value, how we value it. It's clearly being valued in terms of companies' market caps; maybe it's not on the balance sheet yet, but it's on the income statement in terms of data products and data services. That's happening. So maybe we'll see if Antonio is right in the next several years. But let's talk more about the specific data challenges that you're solving for your customers. They talk about silos; they talk about not having gotten as much value out of their data initiatives as they wanted. What are they telling you their challenges are, and how are you approaching them? >> Yeah, I think data is everywhere, right? The ability for customers to store the right amount of data is a huge challenge, because obviously there's a huge cost associated with collecting, keeping, cleansing, and processing, all the way to analyzing your data. There tend to be a ton of data silos, right? So customers are looking for a common data fabric that they can process their data sources across, and then be able to tap into that data from an analytics perspective.
So much of the technology, again, that we're focused on is being able to store the data, right, our Data Fabric layer with Ezmeral; being able to process and capture that data; and then allowing the analytics tools to harness the power of that data and turn it into real business insights for our customers. Every customer that I've spoken to, whether they're financial services, you can imagine the big financial services firms, I mean, they've got just bazillions of pockets of data everywhere. And the real challenge for them is, how do I build a common data platform that allows me to tap into that data in effective ways for my business users? >> Can you talk a little bit about how you're changing the way you're providing solutions? Maybe you could contrast it with the way HPE has done it in the past. Because I think that's important when you think about it; you talk a lot about GreenLake and as a service. But if the products are still kind of boxes and LANs and gigahertz and ports, then that's a discontinuity. So, what's changed from the past, and how are you feeding into the way customers are transforming their business and supporting their outcomes? >> That's exactly right. At some point in time, right, if you think maybe 10 or 20 years back, it used to be very much about the infrastructure for HPE. What's exciting about what we're doing differently for our customers is, look, we have the best infrastructure in the business, right? HPE has been doing this longer than anyone, probably almost 60 years now. But being able to vertically integrate, to move up in that value chain so that our customers can get more complete solutions, is the more interesting part for our customers. Our customers love our technology; yes, the gigahertz and the speeds and feeds all do matter, because they make for some very powerful infrastructure. However, what makes it easier is the fact that we are building platform stacks on top of that hardware that help abstract away the complexity of that infrastructure and enable the ability to use it far more seamlessly. And then, if you think about it, we of course also have one of the most advanced services organizations. So being able to leverage our services capabilities and our platform capabilities on top of that hardware, and, again, deliver it back to our customers in a consumption model, which they've come to expect from a cloud model. And then it's surrounded by a very rich ecosystem of partners: we're talking about system integrators that now have capabilities for helping our customers run their GreenLake environments, we're talking about ISVs, so software stacks and platforms that fully integrate with the GreenLake platform for completely seamless solutions, as well as channel partners and global distributors. So I think that's where we can truly deliver the ultimate end-to-end solution. It's not just the hardware, right? It's being complemented with the right services, the right platform capabilities, the software integrations, to deliver the workload that the customer expects. >> So customers and partners, they've got to place bets, they've got to put in resources, time, money, and align their resources with their partners and their suppliers, like HPE. So when they ask you, hey, okay, "HPE, tell me what's your overall strategy? Why is it compelling? And why do you give me competitive advantage relative to some of your peers in the industry?"
>> Yeah, I think what partners are going to be most excited about is the openness of the platform, right? Being able to allow our partners to leverage GreenLake Central with open APIs, so that they can integrate some of their own technologies into our platform; the ability to allow them to also layer in their own managed services on top of the platform is key. And, of course, being able to build sort of these win-win solutions with the system integrators, right? The system integrators have some fantastic capabilities, all the way from application development down to infrastructure management, and the data center delivery centers that they have. And so leveraging HPE GreenLake really helps them have access to the core technologies that they need to deliver these solutions. >> I wonder if I could take a little side road here and ask you, because so many changes are going on: HPE itself is transforming, your customers are transforming, the pandemic has accelerated all these transformations. Can you talk a little bit about how you've transformed go-to-market, specifically in the context of as a service? I mean, that had to be quite a change for you guys. >> Yeah, go-to-market transformations in support of moving from traditional go-to-markets to cloud go-to-markets are significant. They required us to really think through what delivering as-a-service solutions means for our direct sales force. What does it mean for our partners and their transformations, and their being able to support as-a-service solutions? For HPE specifically, it also means thinking about our customer outcomes, not just our ability to ship the requisite hardware and say, look, once it's left our dock, our job is done. It really takes our obligation all the way to the customer using the technology on a day-by-day basis, as well as supporting them in making sure that everything from implementation to setup to the ongoing monitoring and operations of the technology is working for them in the way that they'd expect in an as-a-service model, right? We don't expect them to operate it; we don't expect them to do anything more than pick up the phone and call us if something doesn't go as planned. >> Then how about your sellers and your partners? How did they respond? I mean, you wake up one day and it's, okay guys, here we go: new compensation scheme, new way to sell, new way to market. That took some thought and some time. Where are you in that journey? >> That's right. And I always say, if you expect people to wake up one day and be transformed, you're kidding yourself. So everything from the way that we think about our customers' use cases, and empowering our sellers to understand the outcomes that our customers expect and demand from us, to things like compensation, to the partner rebate program that we leverage through the channel partners, in order to give them the right incentives and allow them to make the right investments to support GreenLake. HPE has a fairly significant field sales and solution team. And so it's not thinking about this only as a single person that represents GreenLake, but looking at our capabilities across the board: we have fantastic advisory consultants on the ground with PhDs in data science, we have folks that understand high-performance computing.
So making sure that we're embedding the expertise in all of the right personas that support our customers, not just from a comp perspective, but also from an understanding of the end-to-end solutions that we're bringing to those markets. >> So what gets you stoked in the morning? You get out of bed, you're like, "Okay, I'm going to go attack the world." What are you most excited about for HPE and its future? >> There's so much happening right now in this cloud world, right? To me, the most exciting part is the fact that, given that we've now introduced on-prem cloud to the world, our ability to ship new services and new capabilities, and to do that via a very rich partner ecosystem, is honestly what has me most excited. This is no longer the age of go-it-alone, right? So not only are our engineering and product teams hard at work in the engine room, producing capabilities at lightning-fast speeds, but it's also our ability to partner, whether it's with platform providers, software providers, or system integrators and services providers. That ecosystem is starting to come together to deliver highly meaningful solutions to our customers, all in a very open way. The number one thing that I personally care about is that our customers never feel like they are being locked in, or that they are being forced to give up certain capabilities; we want to give them the best of what's out there and allow them to have that flexibility in their solution. >> And one of the challenges, of course, with virtual events is you don't have the hallway track, where somebody can say, "Hey, have you seen that IoT zone? It's amazing, they've got all these robots going around." So what would you say people should be focused on at Discover? Maybe things that you want to call out, specific highlights or segments that you think are relevant? >> Yeah, there's going to be a ton of fantastic stuff. I think, really look for that edge-to-cloud strategy that we're going to be spending a lot of time talking about, and look at some of our vertical workload solutions; we're going to be talking about quite a few, from electronic healthcare records to payment solutions and many more. Depending on what folks are interested in, there's going to be something for everyone. Project Aurora, which now starts to announce our new security capabilities, the zero-trust capabilities that we're delivering, is probably interesting to a lot of our customers. So lots of exciting things coming, and I'm excited for our customers to check those out. >> No doubt, that's a hot topic, especially given what's been happening in the news these past several months. Arwa, thanks so much for coming back on theCUBE. It's great to see you; hopefully face-to-face next time. >> Thank you, I sure hope so. Thanks so much for having me. >> It was our pleasure. And thank you for watching and thank you for being with us in our ongoing coverage of HPE Discover 2021. This is Dave Vellante. You're watching theCUBE, the leader in digital tech coverage. >> Thank you. (soft music)
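One way to make the consumption model Arwa keeps returning to concrete: GreenLake-style billing is generally described as a committed reserve plus metered usage above it. The arithmetic below is a simplified sketch of that idea; the rates, units, and two-tier structure are invented for the example and are not actual GreenLake pricing.

# Simplified consumption-billing arithmetic (illustrative; not real GreenLake pricing).
# Model: the customer commits to a reserved capacity at a base rate, and pays a
# metered rate only for measured usage above that reserve.
RESERVED_UNITS = 100     # committed capacity, e.g. storage TB or compute units
BASE_RATE = 40.0         # monthly cost per reserved unit (made-up number)
OVERAGE_RATE = 55.0      # per-unit rate for usage above the reserve (made up)

def monthly_charge(measured_usage):
    """The reserve is billed regardless; only usage above it is metered."""
    overage = max(0.0, measured_usage - RESERVED_UNITS)
    return RESERVED_UNITS * BASE_RATE + overage * OVERAGE_RATE

for usage in (80, 100, 130):
    print(f"usage={usage:>3} units -> ${monthly_charge(usage):,.2f}")
# usage= 80 units -> $4,000.00  (the reserve is still billed)
# usage=100 units -> $4,000.00
# usage=130 units -> $5,650.00  (30 units of overage)

The design point is the one Arwa's sellers had to internalize: revenue follows measured usage rather than a one-time hardware shipment, which is why compensation and partner rebates had to change along with the delivery model.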

Published Date : Jun 6 2021


Breaking Analysis: Chasing Snowflake in Database Boomtown


 

(upbeat music) >> From theCUBE studios in Palo Alto and in Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante.

>> Database is the heart of enterprise computing. The market is both exploding and evolving. The major forces transforming the space include cloud and data, of course, but also new workloads, advanced memory and IO capabilities, new processor types, a massive push towards simplicity, new data sharing and governance models, and a spate of venture investment. Snowflake stands out as the gold standard for operational excellence and go-to-market execution. The company has attracted the attention of customers, investors, and competitors, and everyone from entrenched players to upstarts wants in on the act. Hello everyone and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we'll share our most current thinking on the database marketplace and dig into Snowflake's execution and some of its challenges, and we'll take a look at how others are making moves to solve customer problems and try to get a piece of the growing database pie. Let's look at some of the factors that are driving market momentum. First, customers want lower license costs. They want simplicity. They want to avoid database sprawl. They want to run anywhere and manage new data types. These needs often are divergent, and they pull vendors and technologies in different directions. It's really hard for any one platform to accommodate every customer need. The market is large and it's growing. Gartner has it at around 60 to 65 billion, with a CAGR of somewhere around 20% over the next five years. But the market as we know it is being redefined. Traditionally, databases have served two broad use cases: OLTP, or transactions, and reporting, like data warehouses. But a diversity of workloads and new architectures and innovations have given rise to a number of new types of databases to accommodate all these diverse customer needs. Many billions have been spent over the last several years in venture money, and it continues to pour in. Let me just give you some examples. Snowflake, prior to its IPO, raised around 1.4 billion. Redis Labs has raised more than a half billion dollars so far; Cockroach Labs, more than 350 million; Couchbase, 250 million; SingleStore, formerly MemSQL, 238 million; Yellowbrick Data, 173 million. And if you stretch the definition of database a little bit to include low-code or no-code, Airtable has raised more than 600 million. And that's by no means a complete list. Now, why is all this investment happening? Well, in large part, it's due to the TAM. The TAM is huge, it's growing, and it's being redefined. Just how big is this market? Let's take a look at a chart that we've shown previously. We've used this chart to size Snowflake's TAM, and it focuses mainly on the analytics piece, but we'll use it here to really underscore the market potential. So the actual database TAM is larger than this, we think. Cloud and cloud-native technologies have changed the way we think about databases. Virtually 100% of the database players that are in the market have pivoted to a cloud-first strategy. And many, like Snowflake, are pretty dogmatic and have a cloud-only strategy. Databases have historically been very difficult to manage, and they're really sensitive to latency, so that means they require a lot of tuning.
Cloud allows you to throw virtually infinite resources on demand at performance problems and scale very quickly, minimizing the complexity and tuning nuances. This idea, this layer of data as a service, we think of as a staple of digital transformation. It's this layer that's forming to support things like data sharing across ecosystems and the ability to build data products or data services. It's a fundamental value proposition of Snowflake and one of the most important aspects of its offering. Snowflake tracks a metric called edges, which are external connections in its Data Cloud, and it claims that 15% of its total shared connections are edges, and that's growing at 33% quarter on quarter. This notion of data sharing is changing the way people think about data. We use terms like data as an asset. This is the language of the 2010s. We don't share our assets with others, do we? No, we protect them, we secure them, we even hide them. We absolutely don't want to share those assets, but we do want to share our data. I had a conversation recently with Forrester analyst Michelle Goetz, and we both agreed we're going to scrub "data as an asset" from our phraseology. Increasingly, people are looking at sharing as a way to create, as I said, data products or data services, which can be monetized. This is an underpinning of Zhamak Dehghani's concept of a data mesh: make data discoverable, shareable and securely governed so that we can build data products and data services that can be monetized. This is where the TAM just explodes and the market is being redefined, and we think it's in the hundreds of billions of dollars. Let's talk a little bit about the diversity of offerings in the marketplace. Again, databases used to be either transactional or analytic, the bottom line and the top line. And this chart here describes those two, but the types of databases, as you can see in the middle, just mushroom. Looking at this list: blockchain is of course a specialized type of database, and it's also finding its way into other database platforms; Oracle is notable here. Document databases that support JSON, and graph data stores that assist in visualizing data and inferring relationships from multiple different sources; that's one of the ways in which adtech has taken off and been so effective. Key-value stores; log databases that are purpose-built; machine learning to enhance insights; spatial databases to help build the next generation of products, the next automobile; streaming databases to manage real-time data flows; and time series databases. We might've missed a few, let us know if you think we have, but this is a pretty comprehensive list that is somewhat mind-boggling when you think about it. And these unique requirements have spawned tons of innovation and companies. Here's a small subset on this logo slide, and this is by no means an exhaustive list. You have these companies here which have been around forever, like Oracle and IBM and Teradata and Microsoft; these are the tier-one relational databases that have matured over the years, and they've got properties like atomicity, consistency, isolation, durability, what's known as ACID properties, ACID compliance. Some others that you may or may not be familiar with: Yellowbrick Data, we talked about them earlier, is going after the best price-performance in analytics and optimizing to take advantage of both hybrid installations and the latest hardware innovations.
SingleStore, as I said, formerly known as MemSQL, is a very high-end analytics and transaction database that supports mixed workloads at extremely high speeds; we're talking about trillions of rows per second that can be ingested and queried. Couchbase with hybrid transactions and analytics; Redis Labs, open source NoSQL, doing very well, as is Cockroach with distributed SQL; MariaDB with its managed MySQL; Mongo in document databases has a lot of momentum; EDB, which supports open source Postgres. And if you stretch the definition a bit, Splunk, for log databases, why not? ChaosSearch, a really interesting startup that leaves data in S3 and is going after simplifying the ELK stack. New Relic, they have a purpose-built database for application performance management, and we probably could have even put Workday in the mix, as it developed a specialized database for its apps. Of course, we can't forget about SAP, with HANA trying to pry customers off of Oracle. And then the big three cloud players, AWS, Microsoft and Google, with extremely large portfolios of database offerings. The spectrum of products in this space is very wide. You've got AWS, which I think is up to like 16 database offerings, all the way to Oracle, which has like one database to do everything, notwithstanding MySQL, because it owns MySQL, got that through the Sun acquisition, and recently made some innovations there around the HeatWave announcement. But essentially Oracle is investing to make its database, Oracle Database, run any workload, while AWS takes the approach of the right tool for the right job and really focuses on the primitives for each database. A lot of ways to skin a cat in this enormous and strategic market. So let's take a look at the spending data for the names that make it into the ETR survey. Not everybody we just mentioned will be represented, because they may not have quite the market presence of the Ns in the survey, but ETR does capture a pretty nice mix of players. So this chart here is one of the favorite views that we like to share quite often. It shows the database players across the 1500 respondents in the ETR survey this past quarter, and it measures their net score, that's spending momentum, shown on the vertical axis, and market share, which is pervasiveness in the dataset, on the horizontal axis. Snowflake is notable because it's been hovering around 80% net score since the survey started picking them up. Anything above 40%, that red line there, is considered by us to be elevated. Microsoft and AWS also stand out because they have both market presence and spending velocity with their platforms. Oracle is very large, but it doesn't have the spending momentum in the survey, because nearly 30% of Oracle installations are spending less, whereas only 22% are spending more. Now, as a caution, this survey doesn't measure dollars spent, and Oracle will be skewed toward the big customers with big budgets, so you've got to consider that caveat when evaluating this data. IBM is in a similar position, although its market share is not keeping up with Oracle's. Google, they've got great tech, especially with BigQuery, and it has elevated momentum. So not a bad spot to be in, although I'm sure it would like to be closer to AWS and Microsoft on the horizontal axis, so it's got some work to do there. And some of the others we mentioned earlier, like MemSQL and Couchbase; it's shown as MemSQL here, but they're now SingleStore.
Couchbase, Redis, Mongo, MariaDB, all very solid scores on the vertical axis. Cloudera just announced that it was selling to private equity, and that will hopefully give it some time to invest in its platform and get off the quarterly shot clock. MapR was acquired by HPE and is part of HPE's Ezmeral platform, their data platform, which doesn't yet have the market presence in the survey. Now, something that is interesting in looking at Snowflake's earnings last quarter is this laser focus on large customers. This is a hallmark of Frank Slootman and Mike Scarpelli, who say they don't have a playbook, but they certainly know how to go whale hunting. So this chart isolates the data that we just showed you to the Global 1000. Note that both AWS and Snowflake go up higher on the X-axis, meaning large customers are spending at a faster rate for these two companies. The previous chart had an N of 161 for Snowflake and a 77% net score. This chart shows the Global 1000, where the N for Snowflake is 48 accounts, and the net score jumps to 85%. We're not going to show it here, but when you isolate the ETR data (it's nice, you can just cut it), when you isolate it on the Fortune 1000, the N for Snowflake goes to 59 accounts in the dataset and Snowflake jumps another 100 basis points in net score. When you cut the data by the Fortune 500, the Snowflake N goes to 40 accounts and the net score jumps another 200 basis points, to 88%. And when you isolate on the Fortune 100 accounts, the N is only 18, but it's still 18, and their net score jumps to 89%, almost 90%. So it's very strong confirmation that there's a proportional relationship between larger accounts and spending momentum in the ETR dataset. So Snowflake's large account strategy appears to be working, and because we think Snowflake is sticky, this probably is a good sign for the future. Now, we've been talking about net score. It's a key measure in the ETR dataset, so we'd like to just quickly remind you what that is and use Snowflake as an example. This wheel chart shows the components of net score. That lime green is new adoptions: 29% of the customers in the ETR dataset are new to Snowflake. That's pretty impressive. 50% of the customers are spending more, that's the forest green. 20% are flat, that's the gray. And only 1%, the pink, are spending less. And 0%, zero, are replacing Snowflake, no defections. What you do here to get net score is you subtract the red from the green, and you get a net score of 78%, which is pretty sick, and has been, sick as in good sick, and has been steady for many, many quarters. So that's how the net score methodology works. And remember, it typically takes Snowflake customers many months, like six to nine months, to start consuming its services at the contracted rate. So those 29% new adoptions, they're not going to kick into high gear until next year, so that bodes well for future revenue. Now, it's worth taking a quick snapshot of Snowflake's most recent quarter. There's plenty of stuff out there that you can google to get a summary, but let's just do a quick rundown. The company's product revenue run rate is now at 856 million; they'll surpass $1 billion on a run rate basis this year. The growth is off the charts, with very high net revenue retention. We've explained that before: with Snowflake's consumption pricing model, they have to account for retention differently than a SaaS company would. Snowflake added 27 net new $1 million accounts in the quarter and claims to have more than a hundred now.
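To make the net score arithmetic described above concrete, here is a minimal sketch in Python. The inputs are the Snowflake percentages from the wheel chart discussion (new adoptions, spending more, flat, spending less, replacing); nothing else is assumed:

```python
def net_score(new, increasing, flat, decreasing, replacing):
    """ETR-style net score: the greens (new + increasing) minus the reds (decreasing + replacing)."""
    assert new + increasing + flat + decreasing + replacing == 100  # percentages of the installed base
    return (new + increasing) - (decreasing + replacing)

# Snowflake's wheel-chart breakdown quoted above: 29/50/20/1/0.
print(net_score(29, 50, 20, 1, 0))  # -> 78, the 78% net score cited
```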
It also is just getting its act together overseas. Slootman says he's personally going to spend more time in Europe, given his belief that the market is huge and they can disrupt it, and of course he's from the continent; he was born there and lived there. And gross margins expanded, due in large part to renegotiation of its cloud costs. We'll come back to that in a moment. Snowflake is also moving from a product-led growth company to one that's more focused on core industries. Interestingly, media and entertainment is one of the largest, along with financial services and several others. To me, this is really interesting, because Disney is an example that Snowflake often puts in front of its customers as a reference, and it seems to me to be a perfect example of using data and analytics to both target customers and also build so-called data products through data sharing. Snowflake has to grow its ecosystem to live up to its lofty expectations, and indications are that large SIs are leaning in big time. Deloitte crossed $100 million in deal flow in the quarter. And the balance sheet's looking good, thank you very much, with $5 billion in cash. The snarks are going to focus on the losses, but this is all about growth. This is a growth story. It's about customer acquisition, it's about adoption, it's about loyalty and it's about lifetime value. Now, as I said at the IPO, and I always say this to young people, don't buy a stock at the IPO. There's almost always going to be a better buying opportunity ahead. I'm not always right about that, but I often am. Here's a chart of Snowflake's performance since IPO, and I have to say, it's held up pretty well. It's trading above its first-day close, and as predicted, there were better opportunities than day one. But if you have to make a call from here, I mean, don't take my stock advice, do your research. Snowflake is priced to perfection, so any disappointment is going to be met with selling. You saw that the day after they beat their earnings last quarter, because their guidance in revenue growth wasn't in the triple digits; it moderated down to the 80% range. And they pointed to a new storage compression feature that will lower customer costs and consequently lower their revenue. I swear, I think that before earnings calls, Scarpelli sits back and thinks, okay, what kind of creative way can I find to dampen enthusiasm for the guidance? Now, I'm not saying lower storage costs won't translate into lower revenue for a period of time. But look at dropping storage prices: customers are always going to buy more. That's the way the storage market works. And Slootman did allude to that, in all fairness.
Their conclusion ended up saying that perhaps as much as, and it's back-of-the-napkin math, they admitted that, but perhaps as much as a half a trillion dollars in market cap is being vacuumed away by the hyperscalers that could go to the SaaS providers as cost savings from repatriation, and that cloud repatriation is an inevitable path for large SaaS companies at scale. I was particularly interested in this, as I had recently put out a post on the cloud repatriation myth. I think in this instance there's some merit to their conclusions, but I don't think it necessarily bleeds into traditional enterprise settings. But for SaaS companies, maybe ServiceNow has it right, running their own data centers, or maybe a hybrid approach to hedge bets and save money down the road is prudent. What caught my attention in reading through some of the Snowflake docs, like the S-1 and its most recent 10-K, were comments regarding long-term purchase commitments and non-cancelable contracts with cloud companies. In the company's S-1, for example, there was disclosure of $247 million in purchase commitments over a five-plus-year period. In the company's latest 10-K report, that same line item jumped to 1.8 billion. Now, Snowflake is clearly managing these costs, as it alluded to on its earnings call. But one has to wonder: at some point, will Snowflake follow the example of, say, Dropbox, which Andreessen used in his blog, and start managing its own IT? Or will it stick with the cloud and negotiate hard? Snowflake certainly has the leverage. It has to be one of Amazon's best partners and customers, even though it competes aggressively with Redshift. But on the earnings call, CFO Scarpelli said that Snowflake was working on a new chip technology to dramatically increase performance. What the heck does that mean? Snowflake is not becoming a hardware company, is it? So I'm going to have to dig into that a little bit and find out what that means. I'm guessing it means it's taking advantage of ARM-based processors like Graviton, which many ISVs are allowing their software to run on as a lower-cost platform. Or maybe there's some deep, dark, in-the-weeds secret going on inside Snowflake, but I doubt it. We're going to leave all that there for now and keep following this trend. So it's clear, just in summary, that Snowflake is the pace setter in this new, exciting world of data, but there's plenty of room for others, and they still have a lot to prove. For instance, one customer in an ETR CTO roundtable expressed skepticism that Snowflake will live up to its hype, because its success is going to lead to more competition from well-established players. This is a common theme, you hear it all the time, and it's pretty easy to reach that conclusion. But my guess is this is the exact type of narrative that fuels Slootman and sucked him back into this game of thrones. That's it for now, everybody. Remember, these episodes are all available as podcasts, wherever you listen. All you've got to do is search Breaking Analysis podcast, and please subscribe to the series. Check out ETR's website at etr.plus. We also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me: email is David.vellante@siliconangle.com, you can DM me at @dvellante on Twitter or comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week everybody, be well, and we'll see you next time. (upbeat music)
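As a footnote to the cloud-paradox argument above, the napkin math is easy to reproduce. A minimal sketch, where every input is an illustrative assumption rather than a real company figure, except the 50%-of-cost-of-revenue premise taken from the Andreessen and Wang argument:

```python
# Illustrative only: a hypothetical SaaS P&L, in $M.
revenue = 1000.0
cost_of_revenue = 400.0

# Premise from the argument above: at scale, the cloud bill
# approaches ~50% of cost of revenue.
cloud_bill = 0.50 * cost_of_revenue

# Assumption: repatriation cuts that cloud bill roughly in half.
savings = 0.50 * cloud_bill

margin_before = (revenue - cost_of_revenue) / revenue
margin_after = (revenue - (cost_of_revenue - savings)) / revenue
print(f"gross margin: {margin_before:.0%} -> {margin_after:.0%}")
# ~60% -> ~70%: the operating-leverage gap the authors argue gets
# capitalized into (or vacuumed away from) SaaS market caps.
```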

Published Date : Jun 5 2021

Jerome Lecat, Scality and Chris Tinker, HPE | CUBE Conversation


 

(uplifting music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCUBE, here in Palo Alto, California. We've got two great remote guests to talk about some big news hitting with Scality and Hewlett Packard Enterprise: Jerome Lecat, CEO of Scality, and Chris Tinker, Distinguished Technologist from HPE, Hewlett Packard Enterprise. Jerome, Chris, great to see you both, Cube alumni from the original gangster days, as we'd say, back when we started almost 11 years ago. Great to see you both.

>> It's great to be back.

>> Good to see you, John.

>> So, really compelling news around this next generation, cloud native storage solution. It's really an impact on the next gen, I call it next gen, DevOps-meets-application, modern application world, and something we've been covering heavily. There's some big news here around Scality and HPE offering a pretty amazing product. You guys introduced essentially the next gen piece of it, Artesca, which we'll get into in a second. This is a game-changing announcement you guys made. This is an evolution continuing, I think it's more of a revolution, but, you know, storage has kind of been an evolution of abstractions toward this app-centric world. So talk about this environment we're in, and we'll get to the announcement, which is object store for modern workloads. But this whole shift is happening, Jerome. This is a game changer for storage, and customers are going to be deploying workloads.

>> Yeah, Scality really... I mean, I personally really started working on Scality more than 10 years ago, close to 15 now. And if we think about it, the cloud has really revolutionized IT, and within the cloud, we really see layers and layers of technology. I mean, it all started around 2006 with Amazon and Google and Facebook finding ways to do, initially, what was consumer IT at very large scale, very low cost, incredible reliability, and then it slowly creeped into the enterprise. And at the very beginning, I would say that everyone was kind of wizards, trying things and really coupling technologies together. And to some degree, we were some of the first wizards doing this. But we're now close to 15 years later, and there's a lot of knowledge and a lot of experience, a lot of tools. And this is really a new generation. I'll call it cloud native, or you can call it next gen, whatever, but there is now enough experience in the world, both at the development level and at the infrastructure level, to deliver truly distributed, automated systems that run on industry standard servers. Obviously, a good quality server delivers a better service than others, but there is now enough knowledge for this to truly go at scale. And call this cloud, or call this cloud native, really the core concept here is to deliver scalable IT at very low cost, very high levels of reliability, all based on software. We've participated in this motion, but we feel that now the breadth of what's coming is at a new level, and it was time for us to think, develop and launch a new product that's specifically adapted to that. And Chris, I will let you comment on this, because the customers, or some of them... you can add the customer view, if you would.

>> Well, you know, you're right. I've been like you, I've been in this industry for, well, a long time, 20 to 21 years at HPE in engineering. And look at how the actual landscape has changed with how we're doing scale-out, software-defined storage for particular workloads.
And where the catalyst has evolved here is in analytics. What was normally only done in the three-letter acronyms with massively scale-out, parallel-namespace file systems, parallel file systems, the application space has encroached into the enterprise world, where the enterprise world needed a way to look at, how do I simplify the operations? How do I bring about an application that can run in the public cloud, or on premise, or hybrid? How do I take a workload-optimized step that aligns the actual cost to the actual analytics that I'm going to be doing, the workload that I'm going to be doing, and be able to bridge those gaps, and be able to spin this up and simplify operations? And, you know, if you are familiar with these parallel file systems, which, by the way, we actually have in our portfolio, I do engineer those, they have their own unique challenges. But in the world of enterprise, where customers are looking to simplify operations and then take advantage of new application analytic workloads, whether it be smart (indistinct) or whatever it might be, right? I mean, if I want to spin up a MongoDB, or maybe, you know, an Elasticsearch capability, how do I actually take those technologies and embrace a modern scale-out storage stack that, without breaking the bank, also provides simple operations? And that's why we look to object storage capabilities, because it brings us this massive parallelization. Back to you, John.

>> Well, before we get into the product, I want to just touch on one thing, Jerome, you mentioned, and Chris, you brought up: the DevOps piece, next gen, next level, whatever term you use. It is cloud native. Cloud native has proven that DevOps, infrastructure as code, is not only legit, it's being operationalized in all enterprises, and add security in there, you have DevSecOps. This is the reality, and hybrid cloud in particular has been pretty much the consensus, the standard, or de facto standard, whatever you want to call it. That's happening. Multicloud is on the horizon. So these new workloads have these new architectural changes: cloud, on premises and edge. This is the number one story, and the number one challenge all enterprises are now working on. How do I build the architecture for the cloud, on premises and edge? This is forcing the DevOps teams to flex and build new apps. Can you guys talk about that particular trend, and is it relevant here?

>> Yeah, I now talk about really storage anywhere and cloud anywhere, and really the key concept is edge to core to cloud. I mean, we all understand now that the edge will host a lot of data, and the edge is many different things. It's obviously a smartphone, whatever that is, but it's also factories, it's also production, it's also, you know, moving machinery, trains, planes, satellites, that's all the edge, cars obviously. And a lot of data will be both produced and processed there. But from the edge, you will want to be able to send the data for analysis, for backup, for logging to a core, and that core could be regional; maybe not, you know, one core for the whole planet, but maybe one per corporate region, a state in the U.S. And then from there, you will also want to push some of the data to the public cloud. One of the things that we see more and more is that the DR site, the disaster recovery site, is not another physical data center.
It's actually the cloud, and that's a very efficient infrastructure, very cost efficient, especially. So really, it's changing the paradigm on how you think about storage, because you really need to integrate these three layers in a consistent approach, especially around the topic of security, because you want the data to be secure all along the way. And data is not just data. It's data, and who can access the data, who can modify the data, what are the conditions that allow modification or automatic erasure of the data? In some cases, it's super important that the data is automatically erased after 10 years, and all this needs to be carried from edge to core to cloud. So that's one of the aspects. Another aspect that resonates for me with what you said is a word you didn't say, but it's actually crucial to this whole revolution: Kubernetes. I mean, Kubernetes is now a mature technology, and it's just, you know, the next level of automated operation for distributed systems, which we didn't have 5 or 10 years ago. And that is so powerful that it's going to allow application developers to develop much faster systems that can be distributed, again, edge to core to cloud, because it's going to be an underlying technology that spans the three layers.

>> Chris, your thoughts? Hybrid cloud: I've been having questions with the HPE folks for, God, years and years on hybrid cloud. It's now here.

>> Right. (chuckles) Well, you know, it's an exciting landscape, right? So you look at, whether it be enterprise virtualization, that is, scale-out, general purpose virtualization workloads, whether it be analytic workloads, data protection is paramount to all of this, orchestration is paramount. If you look at that DevSecOps, absolutely. I mean, securing the actual data, the digital asset, is absolutely paramount. And if you look at how we do this, look at the investments we're making, and look at the collaborative platform development, which goes to our partnership with Scality. We're providing an integral aspect of everything we do, whether we're bringing in Ezmeral, which is our software we use for orchestration, look at the veneer of its control plane, controlling Kubernetes, being able to actually control the active clusters and the actual backing store for all the analytics that we just talked about. Whether it be a web-scale app that was traditionally using a POSIX namespace and has now been modernized to take advantage of newer technologies, running on NVMe burst buffers or hundred-gig networks, with Slingshot networks of 200 and 400 gigabit, looking at how we actually get the analytics, the workload, to the CPU and have it attached to the data at rest. Where's the data? How do we land the data? How do we align, essentially, locality of the actual asset to the compute? And this is where, you know, we can leverage whether it be Azure or Google or name your favorite hyperscaler, leverage those technologies, leveraging the actual persistent store. And this is where Scality, with this object store capability, has been an industry trendsetter, setting the actual landscape of how to provide an object store on premise and hybrid cloud, run it in a public cloud, but being able to facilitate data mobility and tie it back to an application.
And this is where a lot of things have changed in the world of analytics, because the newer technologies that are coming on the market have taken advantage of this particular protocol, S3, so they can do web-scale, massively parallel, concurrent workloads.

>> You know what, let's get into the announcement. I love cool and relevant products, and I think this hits the mark. Scality, you guys have Artesca, which was just announced, and obviously we reported on it: a lightweight, true enterprise-grade object store software for Kubernetes. This is the announcement. Jerome, tell us about it. What's the big deal? Cool and relevant, come on, this is cool. Right, tell us.

>> I'm super excited. I'm not sure if you can see it as well on the screen, but I'm super, super excited. You know, we introduced the RING 11 years ago, and this is our biggest announcement for the past 11 years. So yes, do pay attention. After looking at all these trends and understanding where we see the future going, we decided that it was time to embark (indistinct). So there's not one line of code that's the same as our previous generation product. They will both exist, they both have a space in the market, and Artesca was specifically designed for this cloud native era. And what we see is that people want something that's lightweight, especially because it has to go to the edge. They still want the enterprise grade that Scality is known for, and it has to be modern. What we really mean by modern is, we see object storage now being the primary storage for more and more applications, and so we have to be able to deliver the performance that primary storage expects. This idea of Scality serving primary storage is actually not completely new. When we launched Scality 10 years ago, the first application that we were supporting was consumer email, for which we were, and we are still today, the primary storage. So we know what it is to be the primary store. We know what level of reliability you need to hit. We know what latency means, and latency is different from throughput; you really need to optimize both. And I think that still today, we're the only object storage company that protects data with both replication and erasure coding, because we understand that replication is faster, but erasure coding is more efficient for large files, where latency doesn't matter so much. So we bring all that experience, but really rethinking the product for that new generation that really is here now. And so we're truly excited. I guess I'll tell people a bit more about the product. It's software; Scality is a software company, and that's why we love to partner with HPE, who's producing amazing servers. You know, for the record and the history, the very first deployment of Scality in 2010 was on HP servers. So this is a long love story here. And so, to come back to Artesca, it's lightweight in the sense that it's easy to use. We can start small, we can start from just one server or one VM. I mean, you can start really small, but it can grow infinitely. The fact that we start small, we didn't, you know, limit the technology because of that. So you can start from one to many, and it's cloud native in the sense that it's completely Kubernetes compatible, it's Kubernetes orchestrated. It will deploy on many Kubernetes distributions.
We're talking obviously with Ezmeral, we're also talking with (indistinct) and with all the other Kubernetes distributions. It will also be able to run in the cloud. Now, I'm not sure that there will be many true production deployments of Artesca in the cloud, because you already have really good object storage from the cloud providers, but when you are developing something and you want to test it, you know, just doing it in the cloud is very practical. So you'll be able to deploy it on a Kubernetes distribution in the cloud. And it's more than object storage, in the sense that it's application centric. A lot of our work is actually validating that our storage is fit for a single-purpose application, making sure that we understand the requirements of these applications, and that we can guide our customers on how to deploy. And it's really designed to be the primary storage for these new workloads.

>> The big part of the news is your relationship with Hewlett Packard Enterprise. There's some exclusivity here as part of this, and as you mentioned, the relationship goes back many, many years. We've covered your relationship in the past. Chris, also, you know, we cover HP like a blanket. This is big news for HPE as well.

>> This is very big news.

>> What is the relationship? Talk about this exclusivity. Could you share about the partnership and the exclusivity piece?

>> Well, the partnership expands into the pan-HPE portfolio. Look, we made a massive investment in edge IoT devices. So how do we align the cost to the demand? Our customers come to us wanting to think about what we're doing with GreenLake, consumption-based modeling. They want to be able to consume the asset without having to do a capital outlay out of the gate. Number two, look at, you know, how do you deploy technology? It really depends on the scale, right? So in a lot of your web-scale, scale-out technologies, putting them on a diet is challenging, meaning how skinny can you get it, getting it down into the 50 terabyte range. And then there are the complexities of those technologies as you take a day one implementation and scale it out over, you know, multiple iterations over quarters; the growth becomes a challenge. So working with Scality, we believe we've actually cracked this nut. We figured out, number one, how to start small, but not limit a customer's ability to scale it out incrementally or massively, depending on the quarter, the month, whatever the workload is. How do you actually align and be able to consume it? So now, whether it be on our Edgeline products, our DL products go right there, and of course, to what Jerome was talking about earlier, you know, we ship a server every few seconds, that won't be a problem. And then, of course, into our density-optimized compute with the Apollo products. And this is where our two companies have worked in an exclusivity, where the Scality software lands on the HPE ecosystem. And then we can, of course, provide our customers the ability to consume that through our GreenLake financial models or through a CapEx purchase.

>> Awesome. So Jerome and Chris, who's the customer here? Obviously there's an exclusive period. Talk about the target customer, and how the customers get the product and how they get the software. And how does this exclusivity with HP fit into it?
>> Yeah, so there are really three types of customers, and we've worked a lot with a company called UseDesign to optimize the user interface for each type of customer. So we really thought about each customer role and provided each of them with the best product. The first type of customer are application owners who are deploying an application that requires an object storage in the backend. They typically want a simple object store for one application, and they want it to be simple and to work. Honestly, they want no frills, just an object store that works. And they want to be able to start as small as they start with their application. Often, you know, the first deployment may be a small deployment: applications like backup, like Veeam or Rubrik, or analytics like (indistinct); file systems that are now available as software, you know, like CGI does a really great departmental NAS that works very well and needs an object store in the backend; or, for high performance computing, WekaIO is an amazing file system. We also have vertical applications, like Broadpeak, for example, who provides origin and view software for broadcasters. So all these are applications that require an object store in the backend, and you just need a simple, high-performance object store that works well, and Artesca is perfect for that. Now, the second type of people that we think will be interested in Artesca are essentially developers who are currently developing capabilities for cloud native applications, your next gen. And as part of their development stack, it's getting better and better, when you're developing a cloud native application, to really target an object storage rather than NFS as your persistence layer. Just, you know, think about generations of technologies: NFS and file systems were great 25 years ago, I mean, it's an amazing technology. Now, when you want to develop a distributed, scalable application, object storage is a better fit, because it's the same generation. And so, same thing: they're developing something, they need an object store that they can develop on. So they want it very lightweight, but they also want a product that their enterprise, or their customers, will be able to rely on for years and years. And Artesca is a really great fit to do that. The third type of customer are more the architects, I would say, the architects that are designing a system where they're going to have 50 factories, a thousand planes, a million cars. They're going to have some local storage, which they will want to replicate to the core, and possibly also to the cloud. And as they design these really new-generation workloads that are incredibly distributed, but with local storage, Artesca is really great for that.

>> And tell us about the HPE exclusive, Chris. How does that fit in? Do they buy through Scality? Can they get it through HP? Are you guys working together on how customers can procure it?

>> Both ways, yeah, both ways. They can procure it through Scality, they can procure it through HPE, and it's the software stack running on our density-optimized compute platforms, which you would choose, and we align those to provide an enterprise quality.
Because when it comes back to it, in all of these use cases, it's how do we align up into a true enterprise stack: bringing about multi-tenancy, bringing about, you know, if you look at, like, erasure coding, one of the things that they're bringing to it so that we can get down into the DL325. So with the exclusivity, you actually get choice, and that choice comes with our entire portfolio, whether it be the Edgeline platform, the DL325 AMD processing stack, the Intel-based DL380, or the Apollos. Like I said, there are so many ample choices there that facilitate this, and this allows us to align those two strategies.

>> Awesome. And I think the Kubernetes piece is really relevant, because, you know, I've been interviewing folks, practitioners, and Kubernetes is very much maturing fast. It's definitely the centerpiece of cloud native, both below the line, if you will, under the hood for the infrastructure, and then for the apps they want to program on top of it. That's critical. I mean, Jerome, this is the future.

>> Yeah, and if you don't mind, I'd like to come back for a minute on the exclusivity with HPE. So we did a six-month exclusive, and the very reason we could do this is because HPE has such breadth in their server portfolio. And so we can go from, you know, a really simple, very cheap DL380 machine, I mean, it's really a simple system, 50 terabytes, to the DL325 that Chris mentioned, which is really a powerhouse: all NVMe, flash storage over NVMe, very fast processors, to dense, large systems like the Apollo 4500. So it's a very large breadth of portfolio. We support the whole portfolio, and we work together on this. So I want to say, you know, I want to send kudos to HPE for the breadth of their server line, really. As mentioned, Artesca can be ordered from either company, hand in hand together. So anyway, you'll see both of us and our field working incredibly well together.

>> Well, just on that point, I think, just for clarification, was this co-designed by Scality and HPE? Because, Chris, you mentioned, you know, the configuration of your systems. Can you guys, Chris, quickly talk about the design?

>> From the code base, the software is entirely designed and developed by Scality. For testing and performance, this really was joint work, with HPE providing both hardware and manpower so that we could accelerate the testing phase.

>> You know, Chris, HPE has just been doing such a great job of really focusing on this. I know I've been covering it for years, before it was fashionable: the idea of apps working no matter where they live, public cloud, data center, edge. And you mentioned Edgeline's been around for a while. You know, app-centric, developer-friendly, cloud-first has been an HPE kind of guiding first principle for many, many years.

>> Well, it has. And, you know, as our CEO, Antonio Neri, intended, by 2022 everything will be able to be consumed as a service in our portfolio. And this stack allows us the simplicity and the consumability of the technology, and the granularity of it allows us to simplify the installation, simplify the actual deployment, bringing it into a cloud ecosystem. But more importantly, for the end customer, they simply get an enterprise-quality product running on an optimized stack that they can consume through an orchestrated, simplistic interface.
That's what customers are wanting today. They come to me and ask, hey, I've got this new app, this new project. And, you know, it goes back to who's actually coming: it's no longer the IT people who are coming to us, it's the lines of business, it's that entire dimension of business owners coming to us, going, this is my challenge, and how can you, HPE, help us? And we rely on our breadth of technology, but also our breadth of partners, to come together. And of course, Scality, hand in hand, and our collaborative business unit, our collaborative storage product engineering group, actually brought this to market. So we're very excited about this solution.

>> Chris, thanks for that input and great insight. Jerome, congratulations on a great partnership with HPE, obviously a great joint customer base. Congratulations on the product release here. Big, moving the ball down the field, as they say. New functionality, cloud native object store. Phenomenal. So to wrap up the interview, tell us your vision for Scality and the future of storage.

>> Yeah, I think, as I said, Scality is going to be an amazing leader; it is already. But, you know, I have three things that I think will govern how storage is going. And obviously, Marc Andreessen said it: software is everywhere and software is eating the world. So definitely that's going to be true in the data center, in storage in particular. But the three trends that are more specific are, first of all, I think that security, performance and agility are now basic expectations. It's not, you know, an additional feature, it's just basic table stakes: security, performance and agility. The second thing is, and we've talked about it during this conversation, edge to core to cloud. You need to think your platform with edge, core and cloud. You don't want to have separate systems, separate designs, separate interface points for the edge, and then think about the core, and then think about the cloud, and then think about data sovereignty. All this needs to be integrated in the design. And the third thing that I see as a major trend for the next 10 years is data sovereignty. More and more, you need to think about where the data resides, what the legal challenges are, what the level of protection is, against whom you are protected, what your independence strategy is. How do you keep, as a company, being independent from the people you need to be independent from? And I say companies, but this is also true for public services. So these, for me, are the three big trends. And I do believe that software-defined, distributed architectures are necessary for these trends, but you also need to think about being truly enterprise grade. And that has been one of our focuses with the design of Artesca: how do we combine a lightweight product with all of the security requirements and data sovereignty requirements that we expect to have in the next generation?

>> That's awesome. Congratulations on the news, Scality, Artesca, the big release with HPE, exclusive for six months. Chris Tinker, Distinguished Technologist at HPE, great to see you. Jerome Lecat, CEO of Scality, great to see you as well. Congratulations on the big news. I'm John Furrier from theCUBE. Thanks for watching. (uplifting music)
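Since so much of the conversation above turns on S3 compatibility as the interface for modern workloads, here is a minimal sketch of what "targeting an object store rather than NFS" looks like for a developer. It uses boto3 against a generic S3-compatible endpoint; the endpoint URL, credentials, bucket and key names are all hypothetical placeholders, not Artesca specifics:

```python
import boto3

# Any S3-compatible object store can be addressed this way by pointing
# the client at its endpoint. All values below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

s3.create_bucket(Bucket="app-data")

# Persist application state as objects instead of files on an NFS mount.
s3.put_object(Bucket="app-data", Key="models/v1/weights.bin", Body=b"...")

# Massively parallel readers simply issue concurrent GETs against keys.
for obj in s3.list_objects_v2(Bucket="app-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```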

Published Date : Apr 26 2021


A Day in the Life of an IT Admin | HPE Ezmeral Day 2021


 

>> Hi, everyone. Welcome to Ezmeral Day. My name is Yasir Joffey. I'm the director of systems engineering for Ezmeral at HPE. Today we're here, joined by my colleague Don Wake, a technical marketing engineer, who will talk to us about the day in the life of an IT administrator through the lens of the Ezmeral Container Platform. We'll be answering your questions in real time, so if you have any questions, please feel free to put them in the chat, and we should have some time at the end for some live Q&A. Don, want to go ahead and kick us off?

>> All right. Thanks a lot, Yasir. Yeah, my name is Don Wake. I'm the tech marketing guy, and welcome to Ezmeral Day, day in the life of an IT admin, and happy St. Patrick's Day at the same time. I hope you're wearing green; virtual pinch if you're not wearing green. You don't have to look that up if you don't know what I'm talking about. So we're just going to go through some quick things, talk about modern business IT needs to kind of set the stage, and go right into a demo. So what is the need here that we're trying to fulfill with the Ezmeral Container Platform? It's all rooted in analytics. Modern businesses are driven by data. They are also application centric, and the separation of applications and data has never been more important, or rather the relationship between the two. Applications are very data hungry these days; they consume data in all new ways. The applications themselves are virtualized, containerized, and distributed everywhere, and optimizing every decision and every application has become a huge problem to tackle for every enterprise. So we look at, for example, data science as one big use case here, and it's really a team sport. And today I'm wearing the hat of, perhaps, you know, the operations team, maybe a software engineer, a guy working on continuous integration, continuous delivery, integration with source control, and I'm supporting these data scientists and data analysts. And I also have some resource control: I can decide whether or not the data science team gets a particular cluster of compute and storage so that they can do their work. So this is the solution that I've been given as an IT admin, and that is the Ezmeral Container Platform. And just walking through this real quick: at the top, I'm trying, wherever possible, to not get involved in these guys' lives. So the data engineers, scientists, app developers, DevOps guys, they all have particular needs, and they can access their resources and spin up clusters, or just do work with a Jupyter notebook, or run Spark or Kafka or any of the, you know, popular analytics platforms, just using endpoints, web URLs that we can provide to them, and they're self-service. But in the backend, I can then, as the IT guy, make sure the Kubernetes clusters are up and running. I can assign particular access to particular roles, I can make sure the data's well protected, and I can connect them. I can import clusters from public clouds, I can, you know, put my clusters on premise if I want to, and I can do all this through this centralized control plane. So today I'm just going to show you I'm supporting some data scientists. One of our very own guys is actually doing a demo right now as well, called A Day in the Life of the Data Scientist.
>>He's on the opposite side, not caring about all the stuff I'm doing in the back end. He's training models, registering them, and working with data inside his Jupyter notebook, running inferences and Postman scripts. I'm in the background making sure he's got access to his cluster, his storage is protected, his training models are up, and he's got service endpoints connecting him to his source control. He's got a taxi-ride prediction model he's working on, with a Jupyter notebook and models. So why don't we get hands-on, and I'll jump right over.

>>This is the Ezmeral Container Platform. This is the web UI, the interface into the container platform, our centralized control plane, and I'm using my Active Directory credentials to log in.

>>When I log in, I've also been assigned a particular role with regard to how much of the resources I can access. In my case, I'm a site admin; you can see that in the upper right, and I have access to lots and lots of resources. The one I'm going to focus on today is a Kubernetes cluster. Let's say we have a new data scientist coming on board. I can give him his own resources so he can do whatever he wants, use some GPUs, and not affect other clusters. We have all these other clusters already created here; you can see this is a very busy production system, with some dev clusters over here.

>>I also see we have a production cluster. He needs to produce something for data scientists to use, so it has to be well protected and not treated like a development resource. Under this production environment, I decided to create a new Kubernetes cluster, and literally I just push a button: create Kubernetes cluster. I'll show you some of the screens; this is a live environment, and all my hosts are used up right now, but I would go in here, give it a name, select some hosts to use as the primary master controller and some workers, answer a few more questions, and once that's done, I've created a whole other Kubernetes cluster that I can also create tenants from.

>>Tenants are really Kubernetes namespaces. So in addition to taking hosts and making Kubernetes clusters, I can go to existing clusters and carve out a namespace. Looking at some of the clusters already created, here's an example of a tenant I could have created from that production cluster. To do that, in the namespace view I just hit create, and similar to how you create a cluster, you carve down from a given cluster, say the production cluster, and give it a name and a description. I can even flag this one as an AI/ML project, which really is our ML Ops license. So at the end of the day, I can create an ML Ops tenant from the cluster I created.

>>And I've already created one here for this demo, so I'm going to go into that Kubernetes namespace, which we also call a tenant.
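Under the hood, that tenant corresponds to a Kubernetes namespace. As a rough sketch of what the carve-out looks like in plain Kubernetes terms, here is the equivalent operation through the official `kubernetes` Python client; the namespace name and labels are illustrative, not taken from the demo:

```python
# Sketch: the plain-Kubernetes equivalent of carving out a tenant.
# Assumes the official `kubernetes` Python client and a kubeconfig
# pointing at the production cluster; all names are illustrative.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config by default
core = client.CoreV1Api()

# A "tenant" is ultimately a namespace carved out of the cluster.
tenant = client.V1Namespace(
    metadata=client.V1ObjectMeta(
        name="mlops-tenant",
        labels={"team": "data-science", "purpose": "mlops"},
    )
)
core.create_namespace(tenant)
print("created namespace:", tenant.metadata.name)
```

The platform's UI layers role assignments and quotas on top of this primitive, as the rest of the demo shows.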
Multitenancy essentially means we're carving out resources so that somebody can be isolated from another environment. At this point I can give access to this tenant, and only this tenant, to my data scientist. So the first thing I typically do is go in here and assign users. Right now it's just me, but if I wanted to give this to Terry, I could find him and assign him from this list, as long as he's got the proper credentials. You can see all these other users have Active Directory credentials; when we created the cluster itself, we made sure it integrated with our Active Directory so that only authorized users can get in.

>>Let's say the first thing I want to do is make sure that when Terry does Jupyter notebook work, he's connected straight up to the GitHub repository. He gives me a link to GitHub and says, hey, this is all of my cluster work, my source control, my scripts, my Python notebooks. So I create a configuration: here's the Git repo, here's the link to it, here's his username, and since this is a private repo I can use a token through the standard Git interface. And the cool thing after that: you can go in here and copy the authorization secret.

>>This gets into the Kubernetes world. If you want secure integration with things like your source control or your Active Directory, that's all maintained in secrets. So I take that secret, and when I create his notebook, I put it right in the launch YAML and say, hey, connect this Jupyter notebook up with this secret so he can log in. Once I've launched this Jupyter notebook cluster within my Kubernetes tenant, it's really a pod. If I want to, I can go right into a terminal for that tenant and run kubectl; this is standard, CNCF-certified Kubernetes. When I run kubectl get pods, it tells me all of the active pods, and within those pods, the containers that I'm running.

>>So I'm running quite a few pods and containers in this artificial intelligence and machine learning tenant, which is kind of cool. Also, if I want to, I can download the kubeconfig for kubectl and work from my own system where I'm more comfortable, running kubectl get pods from my laptop. I just had to refresh my kubeconfig with the endpoint's IP address and authorization information in order to connect from my laptop. From a CI/CD perspective, an IT admin usually wants to use the tools right on his own desktop. So here I am back in my web browser; I'm also on the dashboard of this Kubernetes tenant, and I can see how it's doing.

>>It looks like it's kind of busy. I can focus on a specific pod if I want to; I happen to know this pod is my Jupyter notebook pod.
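To ground the kubectl-from-my-laptop step, here is a minimal sketch of the same two operations, listing the tenant's pods and reading the Git-auth secret, using the downloaded kubeconfig and the official Python client. The file path, namespace, and secret name are assumptions for illustration:

```python
# Sketch: what "kubectl get pods" from a laptop looks like through the
# Python client, using the kubeconfig downloaded from the control plane.
# File path, namespace, and secret name are illustrative.
import base64
from kubernetes import client, config

config.load_kube_config(config_file="ezmeral-tenant.kubeconfig")
core = client.CoreV1Api()

# Equivalent of `kubectl get pods -n mlops-tenant`
for pod in core.list_namespaced_pod(namespace="mlops-tenant").items:
    print(pod.metadata.name, pod.status.phase)

# The GitHub token lives in a Kubernetes secret; values are base64-encoded.
secret = core.read_namespaced_secret("github-auth", "mlops-tenant")
token = base64.b64decode(secret.data["token"]).decode()
print("loaded token from secret:", secret.metadata.name)
```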
>>Now I'll show how I can enable my data scientist just by giving him a URL, what we call a notebook service endpoint. By clicking on this URL, or copying the link and emailing it to him, I can say, okay, here's your Jupyter notebook, just log in with your credentials. I've already logged in, so here's his Jupyter notebook, and you can see he's connected to his GitHub repo directly. He's got all the files he needs to run his data science project, and from here on in, this is really the data scientist's realm.

>>He has access to centralized storage and can copy the files from his GitHub repo to that centralized storage. These commands are kind of cool: they're little Jupyter magic commands, and we've got some of our own that handle the attachment to the cluster. If you run these commands, they're actually looking at the shared project repository managed by the container platform. Just to show you that again, I'll go back to the container platform; in fact, the data scientist could do the same thing from a notebook back to the platform. So here's this project repository, and this is the other big point: now putting on my storage admin hat, I've got a shared storage volume that is managed for me by the Ezmeral Data Fabric.

>>In here, you can see that the data scientist, from his Git repo, was able, directly through the Jupyter notebook, to copy his code, run his notebook, and create this XGBoost model. That file can then be registered in this AI/ML tenant, so he can go in here and register his model. This is really where the data scientist can self-serve: kick off his notebooks, even get a deployment endpoint so that he can run inference against his model. So here again is another URL that you could take and put into something like a Postman REST call and get answers. But let's say he's been doing all this work and I want to make sure his data is protected; how about creating a mirror?

>>To create a mirror of that data, I go back to this other piece: the data fabric embedded in a very special cluster called the Picasso cluster. It's a version of the Ezmeral Data Fabric that lets you launch what was formerly called MapR as a Kubernetes cluster. When you create this special cluster, every other cluster you create automatically gets things like the tenant storage I showed you, to create a shared workspace, and it's automatically managed by this data fabric. You're even given an endpoint to go into the data fabric and use all of its features. So I can log in here, and now I'm at the data fabric web UI to do some data protection and mirroring.

>>Let's go over here and say I want to create a mirror of that tenant. I forgot to note the name of my tenant's volume, so in my AI/ML tenant I'll go to my project repository, the one I want to protect, and I see that the Ezmeral Data Fabric has created tenant-30 as the volume.
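A quick aside on what the data scientist actually produced here: a minimal sketch of the kind of XGBoost model being trained and registered, using synthetic stand-in data. The shared-volume path is illustrative of a tenant's project repository, not a documented mount point:

```python
# Sketch of the kind of model the data scientist registers: an XGBoost
# regressor for taxi-ride prediction, trained on synthetic stand-in data.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.uniform(size=(1_000, 4))  # e.g. distance, hour, pickup lat/lon
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1_000)

model = xgb.XGBRegressor(n_estimators=50, max_depth=4)
model.fit(X, y)

# Persist to the shared project repository (illustrative path) so the
# model can be registered and served behind a deployment endpoint.
model.save_model("/bd-fs-mnt/project_repo/models/taxi_fare.json")
```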
>>So I'll go back to my data fabric here and look for tenant-30. If I want to, I can go into tenant-30, and down here I can look at the usage. I've used very little of the allocated storage so far, but let's go ahead and create a volume to mirror this one. It's a very simple web UI: I hit create volume, name it tenant-30-mirror, and set the volume type to mirror. I select my Picasso cluster and tenant-30 as the source, and that actually looks it up in the data fabric database, so it knows exactly which volume I mean. I can give it whatever name and path I want.

>>And that's a whole other demo: this could be in Tokyo, it could be mirrored to all kinds of places all over the world, because this is truly a global namespace, which is a huge differentiator for us. In this case I'm creating a local mirror. I can also add auditing and encryption, do access control, and change permissions; full-service interactivity here. And of course this is the web UI, but there are REST API interfaces as well. So that is pretty much the brunt of what I wanted to show you in the demo. Let me throw this up real quick and come back to Yasir to see if he's got any questions from anybody watching.
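Since Don notes there are REST interfaces alongside the web UI, here is a rough sketch of what the same mirror-volume creation could look like as a REST call, modeled on the MapR-derived volume API. The host, credentials, endpoint parameters, and volume names are assumptions and may vary by release:

```python
# Sketch: creating the mirror volume through the data fabric's REST
# interface instead of the web UI. Host, credentials, and parameter
# names are illustrative; check the release docs for exact arguments.
import requests

resp = requests.post(
    "https://picasso-node:8443/rest/volume/create",
    params={
        "name": "tenant-30-mirror",
        "path": "/tenant-30-mirror",
        "type": "mirror",
        "source": "tenant-30@picasso",  # source volume @ source cluster
    },
    auth=("admin", "admin-password"),
    verify=False,  # demo cluster with self-signed certs
)
resp.raise_for_status()
print(resp.json())
```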
>>Yeah, we've got a few questions; we can take some time and hopefully answer a few. So it does look like you can integrate or incorporate your existing GitHub to pull in shared code or repositories, correct?

>>Yeah, we have that built in, and it can be either GitHub or Bitbucket; it's a pretty standard interface. Just as you can clone a repo from any Git host into your local environment, we integrated that directly into the GUI, so you can point your AI/ML tenant, your Jupyter notebook, at your GitHub repo, and when you open the notebook it connects you straight up. That saves some steps, because Jupyter notebooks are designed to integrate with Git; we support GitHub and Bitbucket.

>>Another question around the file system: has the MapR file system that was carried over been modified in any way to run on top of Kubernetes?

>>So yes, the MapR file system, the data fabric I showed here, is the Kubernetes version of it. It gives you a lot of the same features, but if you need to, perhaps for performance reasons, you can also deploy it as a separate bare-metal instance of the data fabric. This is just one way to use it, integrated directly into Kubernetes; it really depends on the needs of the user. The data fabric has a lot of capabilities, but this version has the core file system capabilities, where you can do snapshots and mirrors, and it's of course striped across multiple disks and nodes. MapR data fabric has been around for years, and it's designed for integration with these analytic-type workloads.

>>Great. You showed us how you can manage Kubernetes clusters through the Ezmeral Container Platform UI, but the question is, can you control who accesses which tenant, that is, which namespace you created, and can you restrict or inject resource limitations for each individual namespace through the UI?

>>That's a great question, and the answer is yes to both. As a site admin, I have the authority to create clusters and go into any cluster I want, but for the data scientist example I used, I would create a user for him, and there are a couple of ways to do that; it's all role-based access control. I could create a local user and have the container platform authenticate him, or I can integrate directly with Active Directory or LDAP, including which groups he has access to. Then, in the user interface, as the site admin I can say he gets access to this tenant and only this tenant. You also asked about limitations. When you create the tenant, to prevent the noisy-neighbor problem, you can go in and create quotas.

>>I didn't show the process of actually creating a tenant, but integral to that flow is: I've defined which cluster I want to use, and I define how much memory I want to use; there's a quota right there. You can say, hey, how many CPUs am I taking from this pool? And that's one of the cool things about the platform: it abstracts all that away. You can create the cluster and select specific hosts, but once you've created it, it's now just a big pool of resources. So you can say Bob over here is only going to get 50 of the hundred CPUs available, only so many gigabytes of memory, and only this much storage he can consume. You can then safely hand something off knowing they're not going to take all the resources, especially the GPUs, which are expensive; you want to make sure one person doesn't hog them. So absolutely, quotas are built in there.

>>Fantastic. Well, I think we are out of time. We have a list of other questions, and we will absolutely reach out and get them answered for those of you who asked in the chat. Don, thank you very much. Thanks everyone else for joining. Don, will this recording be made available for those who couldn't make it today?

>>I believe so. Honestly, I'm not sure what the process is, but it's being recorded, so they must have done that for a reason.

>>Fantastic. Well, Don, thank you very much for your time, and thank everyone else for joining. Thank you.
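As a footnote to the quota answer above: in plain Kubernetes terms, the "Bob only gets 50 of the 100 CPUs" guardrail is a ResourceQuota on the tenant's namespace. A minimal sketch with the official Python client, with illustrative numbers:

```python
# Sketch: a per-tenant quota expressed as a plain Kubernetes
# ResourceQuota. Numbers and namespace name are illustrative.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="tenant-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "limits.cpu": "50",             # half of a 100-CPU pool
            "limits.memory": "256Gi",
            "requests.nvidia.com/gpu": "2", # GPUs are the scarce resource
        }
    ),
)
core.create_namespaced_resource_quota("mlops-tenant", quota)
```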

Published Date : Mar 17 2021


Boost Your Solutions with the HPE Ezmeral Ecosystem Program | HPE Ezmeral Day 2021


 

>> Hello. My name is Ron Kafka, and I'm the senior director for Partner Scale Initiatives for HPE Ezmeral. Thanks for joining us today at Analytics Unleashed. By now you've heard a lot about the Ezmeral portfolio and how it can help you accomplish objectives around big data analytics and containerization. I want to shift gears a bit and discuss our Ezmeral Technology Partner Program. I've got two great guest speakers here with me today, and together we're going to discuss how, jointly, we are solving data analytics challenges for our customers. Before I introduce them, I want to take a minute to provide a little more insight into our ecosystem program. We created the program with a realization, based on customer feedback, that even the most mature organizations are struggling with their data-driven transformation efforts. It turns out this is largely due to the pace of innovation of the application vendors, the ISVs, supporting data science and advanced analytic workloads: their advancements are simply outpacing organizations' ability to move workloads into production rapidly. Bottom line, organizations want a unified experience across environments where their entire application portfolio, in essence, provides a comprehensive application stack and not piece parts.

So let's talk about how our ecosystem program helps solve for this. For starters, we leveraged HPE's long track record of forging technology partnerships and created a best-in-class ISV partner program specific to the Ezmeral portfolio. We are doing this by developing an open-concept marketplace where customers and partners can explore, learn, engage, and collaborate with our strategic technology partners. This enables our customers to adopt and deploy validated applications from industry-leading software vendors on HPE Ezmeral with a high degree of confidence. It also provides a very deep bench of leading ISVs for other groups inside HPE to leverage for their solutioning efforts. Speaking of industry-leading ISVs, it's about time I introduce you to two of those industry leaders right now. Let me welcome Daniel Hladky from Dataiku, and Omri Geller from Run:AI. So I'd like to introduce Daniel Hladky. Daniel is with Dataiku and is a great partner for HPE. Daniel, welcome.

>> Thank you for having me here.

>> That's great. Hey, would you mind talking a bit about how your partnership journey has been with HPE?

>> Yes, with pleasure. The journey started about five years ago, and in 2018 we signed a worldwide reseller agreement with HPE. In 2020, we started to work jointly on the integration between the Dataiku Data Science Studio, called DSS, and the Ezmeral Container Platform, and it was a great success, driven by some clear customer projects.

>> It's been a long partnership journey with you for sure, and we welcome your partnership. Just a brief question about the Container Platform and what that's meant for Dataiku.

>> Yes, Ron, thanks. I'd like to quote Florian Douetteau, the CEO of Dataiku, who said that the combination of Dataiku with the HPE Ezmeral Container Platform will help customers successfully scale and put machine learning projects into production, and that this is going to deliver real impact for their business. So the combination of the two of us is a great success.

>> That's great.
Can you talk a bit more about what Dataiku is doing and how the HPE Ezmeral Container Platform fits into the solution offering?

>> Great. Dataiku DSS is our product, an end-to-end data science platform that brings value to customers' projects on their path to enterprise AI. In simple terms, it can be as simple as building data pipelines, but it can also be very complex, with machine learning and deep learning models at scale. The fast track to value is collaboration, orchestration of the underlying technologies, and models in production. All of that is part of the Data Science Studio, and Ezmeral fits perfectly into the part where we design and then put those projects at scale and into production.

>> That's perfect. Can you be a bit more specific about how you see HPE and Dataiku really tightening up a customer outcome and value proposition?

>> Yes. What we see is the challenge in the market that probably about 80% of use cases never make it to production. That is of course a big challenge, and we need to change it, and I think the combination of the two of us addresses exactly this need. As part of the MLOps approach, Dataiku and the Ezmeral Container Platform provide a frictionless experience, which means that without scripting and coding, customers can put those projects into the production environment, not have to worry anymore, and stay business oriented.

>> That's great. So you mentioned you're seeing customers be a lot more mature with their AI workloads and deployment. What do you suggest for the other customers out there who are just starting this journey, or just thinking about how to get started?

>> That's a very good question, Ron. What we see there is the challenge that people need to go down a path of maturity. It starts with simple data pipelines, et cetera, and then they move up the ladder and build large, complex projects. And here I see a very interesting offer coming from HPE called D3S, the data science startup pack, something I discussed together with HPE back in early 2020. It spans three stages, explore, experiment, and evolve, and builds quick MVPs for the customers. By doing so, you address the business objectives, lay out the proper architecture, and set up the proper organization around it. So this is a great combination from HPE and Dataiku through D3S.

>> And it's a perfect example of what I mentioned earlier about leveraging the ecosystem program that we built to do deeper solutioning efforts inside HPE, in this case with our AI business unit. So congratulations on that, and thanks for joining us today. I'm going to shift gears and bring in Omri Geller from Run:AI. Omri, welcome. It's great to have you. You guys are killing it out there in the market today, and I thought we could spend a few minutes talking about what is so unique and differentiated about your offerings.

>> Thank you, Ron. It's a pleasure to be here. Run:AI creates a virtualization and orchestration layer for AI infrastructure. We help organizations gain visibility and control over their GPU resources and help them deliver AI solutions to market faster. We do that by managing granular scheduling, prioritization, and allocation of compute power, together with the HPE Ezmeral Container Platform.

>> That's great.
And your partnership with HPE is a bit newer than Daniel's, right? Maybe about the last year or so we've been working together a lot more closely. Can you talk about the HPE partnership, what it's meant for you, and how you see it impacting your business?

>> Sure. First of all, Run:AI is excited to partner with the HPE Ezmeral Container Platform and help customers manage GPUs for their AI workloads. We chose HPE since HPE has years of experience partnering on AI use cases and outcomes with vendors who have a strong footprint in these markets. HPE works with many partners that are complementary to our use case, such as Nvidia, and the HPE Ezmeral Container Platform together with Run:AI and Nvidia delivers world-class solutions for AI-accelerated workloads. And as you can understand, for AI, speed is critical. Companies want to get important AI initiatives into production as soon as they can, and the HPE Ezmeral Container Platform, running our GPU orchestration solution, enables that through dynamic provisioning of GPUs, so that resources can be easily shared, efficiently orchestrated, and optimally used.

>> That's great. You talked a lot about the efficiency of the solution. What about from a customer perspective? What is the real benefit that our customers are going to gain from an HPE and Run:AI offering?

>> So first, it is important to understand how data scientists and AI researchers actually build solutions. They do it by running experiments, and if a data scientist is able to run more experiments per given time, they will get to the solution faster. With the HPE Ezmeral Container Platform and Run:AI, users such as data scientists can do exactly that: seamlessly and efficiently consume large amounts of GPU resources, run more experiments per given time, and therefore accelerate their research. Together, we actually saw a customer running almost 7,000 jobs in parallel over GPUs, with efficient utilization of those GPUs. And by running more experiments, those customers can be much more effective and efficient when it comes to bringing solutions to market.
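For a sense of what dynamic GPU provisioning means at the workload level, here is a rough sketch of a pod that requests one GPU and hands scheduling to a Run:AI-style scheduler, via the Kubernetes Python client. The scheduler name, image, and namespace are assumptions for illustration, not details taken from the discussion:

```python
# Sketch: a training pod that draws one GPU from the shared pool and
# defers placement to an external scheduler. Names are illustrative.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job", namespace="research"),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",  # assumed scheduler name
        restart_policy="Never",
        containers=[client.V1Container(
            name="trainer",
            image="nvcr.io/nvidia/pytorch:21.02-py3",
            command=["python", "train.py"],
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"}  # one GPU from the pool
            ),
        )],
    ),
)
core.create_namespaced_pod("research", pod)
```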
>> Couldn't agree more. And I think we're starting to see a lot of joint success together as we go out and tell the story. Hey, I want to thank you both one last time for being here with me today. It was very enlightening for our team to have you as part of the program, and I'm excited to extend this customer value proposition out to the rest of our communities. With that, I'd like to close today's session. I appreciate everyone's time, and keep an eye on our ISV marketplace for Ezmeral; we're continuing to expand and add new capabilities and new partners. We're excited to do a lot of great things and help you all be successful. Thanks for joining.

>> Thank you, Ron.

>> What a great panel discussion, and these partners really do have a good understanding of the possibilities of working on the platform. I hope and expect we'll see this ecosystem continue to grow. That concludes the main program, which means you can now pick one of three live demos to attend and chat live with experts. Those three are a day in the life of an IT admin, a day in the life of a data scientist, and a day in the life of the HPE Ezmeral Data Fabric, where you can see the many ways the data fabric is used in your life today. Wish you could attend all three? No worries: the recordings will be available on demand for you and your teams.

Moreover, the show doesn't stop here. HPE has a growing and thriving tech community you should check out. It's a solid starting point for learning more, talking to smart people about great ideas, and seeing how Ezmeral can be part of your own data journey. Again, thanks very much to all of you for joining. Until next time, keep unleashing the power of your data.

Published Date : Mar 17 2021


HPE Ezmeral Preview | HPE Ezmeral \\ Analytics Unleashed


 

>>On March 17th at 8 a.m. Pacific, theCUBE is hosting Ezmeral Day with support from Hewlett Packard Enterprise. I am really excited about Ezmeral. It's HPE's set of solutions that will allow containerized apps and workloads to run anywhere: on-prem, in the public cloud, across clouds, really anywhere, including the emerging edge. Think of it as a data fabric and a platform that lets you manage work across all these domains. That's Ezmeral Day. We have an exciting lineup of guests, including Kirk Borne, the famed astrophysicist and extraordinary data scientist from Booz Allen Hamilton. We'll also be joined by my longtime friend Kumar Sreekanti, who is CTO and head of software at HPE. In addition, you'll hear from Robert Christiansen of HPE, who will discuss data strategies that make sense for you, and we'll hear from customers and partners from around the globe who are using Ezmeral capabilities to create and deploy transformative products and solutions that are impacting lives every single day. We'll also give you a chance to join a few breakout rooms and go deeper on specific topics that are important to you, and we'll give you a demo toward the end, so you'll want to hang around. Most of all, we have a team of experts standing by to answer any questions you may have, so please do join in on the chat room. It's going to be a great event. So grab your coffee, your tea, or your favorite beverage, grab a notepad, and we'll see you there: March 17th, 8 a.m. Pacific, on theCUBE.

Published Date : Mar 11 2021



Robert Christiansen & Kumar Sreekanti | HPE Ezmeral Day 2021


 

>> Okay. Now we're going to dig deeper into HPE Ezmeral and try to better understand how it's going to impact customers. And with me to do that are Robert Christiansen, who is the Vice President of Strategy in the office of the CTO, and Kumar Sreekanti, who is the Chief Technology Officer and Head of Software, both, of course, with Hewlett Packard Enterprise. Gentlemen, welcome to the program. Thanks for coming on.

>> Good seeing you, Dave. Thanks for having us.

>> It's always good to see you guys.

>> Thanks for having us.

>> So, Ezmeral, kind of an interesting name, a catchy name, but Kumar, what exactly is HPE Ezmeral?

>> It's indeed a catchy name; our branding team has done a fantastic job. I believe it's actually derived from Esmeralda, the Spanish for emerald, which is supposed to have some mystical powers, and they derived Ezmeral from there. We all found it interesting when we first heard it. Ezmeral was our effort to take all the software and platform tools that HPE has, provide this modern operating platform to customers, and put it under one brand. It has a modern container platform, it does persistent storage with the data fabric, and it includes ML Ops, which many of our customers demand. So think of it as a modern container platform for modernization and digitization for the customers.

>> Yeah, it's interesting that you talk about platform; a lot of times people say product, but you're positioning it as a platform, so that has a broader implication.

>> That's very true. As customers are thinking about digitization and modernization, containers and microservices have, as you know, become table stakes. So it's actually a container orchestration platform, with proven open source going into it as well as the persistent storage.

>> So, by the way, Ezmeral, I think emerald in Spanish, I think in the culture it also has immunity powers. So immunity from lock-in, (Robert and Kumar laughing) and all those other terrible diseases; maybe it helps us with COVID too. Robert, when you talk to customers, what problems do you probe for that Ezmeral can do a good job solving?

>> Yeah, that's a really great question, because a lot of times they don't even know what it is that they're trying to solve for, other than a very narrow use case. But the idea here is to give them a platform by which they can bridge both the public and private environments for what they do in application development, specifically on the data side. When you're looking to bring containerization, which got started, or I should say became popular, in the public cloud and has now moved its way on premises, Ezmeral really opens the door to three fundamental things. One, how do I maintain an open architecture, like you're referring to, with low or no lock-in of my applications? Two, how do I gain a data fabric, a consistency of accessing the data, so I don't have to rewrite those applications when I do move them around? And lastly, where everybody's heading, the real value is in the AI/ML initiatives that companies are bringing, the value of their data, and unlocking that data where it is being generated and stored. The Ezmeral platform is those multiple pieces that Kumar was talking about, stacked together to deliver the solutions for the client.

>> So Kumar, how does it work? What's the sort of IP or the secret sauce behind it all?
What makes HPE different?

>> Yeah, continuing on that, it's a modern platform for optimizing data and workloads, but I would say there are three unique characteristics. Number one, it gives you the ability to run stateful and stateless workloads on the same platform. Number two, unlike other Kubernetes offerings, it uses pure open-source Kubernetes with our orchestration behind it, so you can provide the hybrid experience Robert was talking about. And then we actually built the workflows into it; for example, we announced Ezmeral ML Ops along with it, so customers can do workflow management around specific data workloads. The magic, if you want to see the secret, is all the effort that has gone into some of the IP acquisitions HPE has made over the years: BlueData, MapR, and Nimble. All these pieces are coming together and providing a modern digitization platform for the customers.

>> So these pieces, they all have a little bit of machine intelligence in them. People used to think of AI as this sort of separate thing, and the same thing with containers, right? But now it's getting embedded into the stack. What is the role of machine intelligence or machine learning in Ezmeral?

>> I would take a step back and say, you know, customers are very well aware of the amount of data that is being generated; 95% or 98% of it is machine generated. It has serious data gravity, it is sitting at the edge, and we are the only one with an edge-to-cloud data fabric built for it. So, number one, we are bringing compute, or the cloud, to the data rather than taking the data to the cloud, if you will; it's a cloud-like experience for the customer. AI is not much value to us if we don't harness the data. I said this in one of the blogs: we have gone from collecting the data to finding the insights in the data. People have used all sorts of analogies; data is the new oil. So there's the AI and the data, and then your applications have to be modernized, because nobody wants to write an application in a non-microservices fashion anymore. Bring these three together, data gravity with lots of data, AI applications, and modernized applications, and that's what I think we bring to the customer.

>> So, Robert, let's stay on customers for a minute. I want to understand the business impact, the business case. Why should the cloud developers have all the fun? You've mentioned it: you're bridging the cloud and on-prem. When you talk to customers, what are they seeing as the business impact? What are the real drivers?

>> That's a great question, because at the end of the day, I think recent surveys show that cost and performance are still the number one requirement, with agility, the speed at which they want to move, a real close second. Those two are top of mind every time.
But the thing we find with Ezmeral, which is so impactful, is that nobody brings together the silicon, the hardware, the platform, and all of that stack, combined, like Ezmeral does with our platforms. Specifically, we start getting 90, 92, 93% utilization out of AI/ML workloads on very expensive hardware, and that really is a competitive advantage over a public cloud offering, which does not offer those kinds of services, and where the cost models are significantly different. We do that by collapsing the stack: we take out as many software pieces as we can, so we are closest to the silicon and closest to the applications, meaning we can interleave applications and get to true multitenancy on a platform that delivers a cost-optimized solution. So when you talk about the money side, there's just nothing out there like it. Then on the second side, agility: one of the things we know today is that applications need to be built in pipelines, right? This has been established for quite some time, and now it's really making its way on premises. That's what Kumar was talking about with how we modernize. There will be some applications you want to break into microservices and containers, and some you don't. The ones that do get that speed and motion out of the gate, and customers can run that on premises, which is relatively new to the on-premises world. So we think both will be to their advantage.

>> Okay, I want to unpack that a little bit. So on the cost side, that's really 90-plus percent utilization.

>> Yes.

>> I mean, Kumar, even pre-virtualization, and even with virtualization, you never really got that high. People would talk about it, but are you really able to sustain that in real-world workloads?

>> Yeah. When you cut your executables up into smaller pieces, you can insert them into many areas. We had one customer running 18 containers on a single server, and each of those containers, as you know from the early days of BlueData, is what we'd consider a well-run container or microservice. If you build these microservices and you have the versioning all correct, you can pack these things extremely well. And we have seen this; again, it's not a guarantee, it all depends on your application, and as engineers we always want to state the caveats, but it is a very modern utilization of the platform with the data. Once you know where the data is, it becomes very easy to match the two.

>> Now, the other piece of the value proposition that I heard, Robert, is that it's basically an integrated stack, so I don't have to cobble together a bunch of open-source components; there are legal implications and obviously performance implications. I would imagine that resonates, particularly with the enterprise buyer, because they don't have the time to do all this integration.

>> That's a very good point. So there is an interesting tension: enterprises want open source so there is no lock-in, but they also need help to implement, deploy, and manage it, because they don't have the expertise.
>> Now, the other piece of the value proposition that I heard, Robert, is that it's basically an integrated stack, so I don't have to cobble together a bunch of open-source components, with the legal implications and obviously the performance implications. I would imagine that resonates particularly with the enterprise buyer, because they don't have the time to do all this integration. >> That's a very good point. There is an interesting tension: enterprises want open source so there is no lock-in, but they also need help to implement, deploy, and manage it, because they don't have the expertise. And we all know that Kubernetes has brought that API, the PaaS-layer standardization. So what we have done is give them the open source: you write to the Kubernetes API. But at the same time, the orchestration, the persistent storage, the data fabric, and the AI algorithms are all bolted into it. And on top of that, it's available both as licensed software on-prem, and the same software runs on GreenLake, so you can pay as you go, and we run it for them in a colo or in their own data center. >> Oh, good, that was one of my later questions. So I can get this as a service, pay by the drink; essentially I don't have to install a bunch of stuff on-prem and pay a perpetual license... >> Yes, the container platform and ML Ops services on GreenLake were announced at the last Discover and have now gone into production. So Ezmeral is available both ways: you can run it on-prem or in the cloud as a container platform, or you can run it as a service on GreenLake. >> Robert, are there any specific use-case patterns that you see emerging amongst customers? >> Yeah, absolutely, there are a couple of them. We have a really nice relationship with the Splunk operators out there today. Splunk containerized their operator, and that operator is the number one operator for Splunk, for example, on the IT operations side as well as on the security operations side. We've found that it runs highly effectively on top of Ezmeral, on top of the platforms Kumar just talked about. And I want to give a little background on that same operator platform: with Ezmeral we've been able to make it highly available, active-active, at five nines, for that same Splunk operator on premises, on open-source Kubernetes, which is, as far as I'm concerned, very high-end computer science work; you understand how difficult that is. That's number one. Number two, you'll see Spark workloads as a whole. Nobody handles Spark workloads like we do. We put a container around them, and we put them inside the pipeline, the basic AI/ML pipeline of getting a model built, trained, and then actually deployed through our ML Ops pipeline. This is key and fundamental for delivering value in the data space as well. And then lastly, and this is really important, when you think about the data fabric that we offer, the data fabric itself doesn't have to be bolted to the container platform. The actual data fabric can be deployed underneath a number of competitive platforms that don't handle data well. We know they don't handle it very well at all, and we get lots and lots of calls from people saying, "Hey, can you take your Ezmeral data fabric and solve my large-scale, highly challenging data problems?" And we say, "Yeah. And when you're ready for a real-world, enterprise-ready container platform, we'd be happy to prove that too."
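To ground the Spark and ML Ops point, here is a minimal sketch of what such a containerized training job can look like in PySpark. The fabric mount path, columns, and model choice are invented for illustration; this shows the shape of a job that reads from a shared path, trains, and publishes a model artifact, not HPE's actual pipeline code.

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("train-fraud-model").getOrCreate()

# The same logical path works wherever the containerized job is scheduled,
# because the data fabric exposes one namespace (here mounted at /mnt/fabric).
df = spark.read.parquet("/mnt/fabric/projects/fraud/training")

# Assemble hypothetical feature columns into the vector MLlib expects.
features = VectorAssembler(
    inputCols=["amount", "merchant_risk", "velocity"], outputCol="features"
).transform(df)

model = LogisticRegression(labelCol="is_fraud").fit(features)

# Persist the fitted model where the deployment stage expects to find it.
model.write().overwrite().save("/mnt/fabric/projects/fraud/models/latest")
spark.stop()
```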
>> So, if I'm inferring correctly, one of the values is that you're simplifying that whole data pipeline and the whole data science... science project, pun intended, I guess. (Robert and Kumar laughing) >> That's true. >> Absolutely. >> So where does a customer start? What are the engagements like? What's the starting point? >> HPE has been one of the most trusted and robust suppliers for many, many years, and we have a phenomenal workforce and a world-leading support organization, so there are many places to start. One is obviously all the services that are available on GreenLake, as we just talked about; customers can start on a pay-as-you-go basis. There are many customers, some from the early days of BlueData and MapR, that are already running and keep improving as they move into the next version of their modernization. You can start simply with the container platform, or with the data fabric storage and compute operations, and bring in the analytics to start working. And then finally, a big company like HPE offers financing and services, so it's very easy for customers to get support for their day-to-day operations. >> Thank you for watching, everybody. It's Dave Vellante for theCUBE. Keep it right there for more great content from Ezmeral.

Published Date : Mar 10 2021



Kirk Borne, Booz Allen | HPE Ezmeral Day 2021


 

>> Okay. Getting data right is one of the top priorities for organizations executing a digital strategy. So right now we're going to dig into the challenges customers face when trying to deploy enterprise-wide data strategies, and with me to unpack this topic is Kirk Borne, principal data scientist and executive advisor at Booz Allen Hamilton. Kirk, great to see you. Thank you, sir, for coming on the program. >> Great to be here, Dave. >> So, enterprise-scale data science and engineering initiatives are nontrivial. What do you see as some of the challenges in scaling data science and data engineering operations? >> Well, one of the first challenges is just getting it out of the sandbox, because so many organizations say, let's do cool things with data, but how do you take it out of that sort of play phase into an operational phase? Being able to do that is one of the biggest challenges. And then enabling it for many different use cases creates an enormous challenge, because do you replicate the technology and the team for each individual use case, or can you unify teams and technologies to satisfy all possible use cases? Those are really big challenges for companies and organizations everywhere to think about. >> What about the idea of industrializing those data operations? What does that mean to you? Is there a security connotation, a compliance one? How do you think about it? >> It's actually all of those. Industrialized, to me, means you don't make it a one-off; you make it a reproducible, solid, risk-compliant system that can be reproduced many times, using the same infrastructure and the same analytic tools and techniques, but for many different use cases, so you don't have to reinvent the wheel, reinvent the car, so to speak, every time you need a different type of vehicle. Whether you build a car, a truck, or a race car, there are fundamental principles common to all of them, and that's what industrialization is. It includes security and compliance with regulations, but it also means being able to scale out to new opportunities beyond the ones you dreamed of when you first invented the thing. >> You know, data, by its very nature, as you well know, is distributed, but you've been at this a while. For years we've been trying to shove everything into a monolithic architecture and harden infrastructure around it, and in many organizations that has become a blocker to actually getting stuff done. So how are you seeing things like the edge emerge? How do you think about the edge, how do you see it evolving, and how do you think customers should be dealing with edge data? >> Well, it's really kind of interesting. I spent many years at NASA working on data systems, and back in those days the idea was you would just put all the data in a big data center, and then individual scientists would retrieve that data and do their analysis on their local computer. You might call that edge analytics, so to speak, because they're doing analytics at their home computer. But that's not what edge means. It means actually doing the analytics, the insight discovery, at the point of data collection, and so that's really real-time business decision making.
You don't bring the data back and then try to figure out sometime in the future what to do, and I think an autonomous vehicle is a good example of why. If you collect data from all the cameras and radars and lidars on a self-driving car and you move that data back to a data cloud while the car is driving down the street, and let's say a child walks in front of the car: you send all the data back, it computes, does some object recognition and pattern detection, and 10 minutes later it sends a message to the car, hey, you need to put your brakes on. Well, it's a little late at that point. So you need to make those insight discoveries, those pattern discoveries, and hence the proper decisions from the patterns in the data, at the point of data collection. That's data analytics at the edge. And yes, you can bring the data back to a central cloud or a distributed cloud; it almost doesn't matter, because if your data is distributed so that any use case, any data scientist, or any analytic team in the business can access it, then what you really have is a data mesh or a data fabric that makes it accessible at the point you need it, whether that's at the edge or in some static post-event processing. For example, typical business quarterly reporting takes a long look at your last three months of business. That's fine in that use case, but you can't do that for a lot of other real-time analytic decision making.
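As a toy illustration of Kirk's point about deciding at the point of collection, the sketch below acts on every frame locally and ships only periodic summaries upstream. The sensor feed, threshold, and uplink are invented stand-ins, not a real vehicle stack.

```python
import time
import random

def read_sensor_frame():
    """Stand-in for a camera/radar/lidar frame; returns an obstacle score."""
    return {"ts": time.time(), "obstacle_score": random.random()}

def act_locally(frame):
    """The latency-critical decision happens at the edge, immediately."""
    if frame["obstacle_score"] > 0.9:
        print(f"{frame['ts']:.3f} BRAKE")  # cannot wait on a cloud round trip

def summarize(frames):
    """Compact summary for the cloud: insight, not raw terabytes."""
    scores = [f["obstacle_score"] for f in frames]
    return {"n": len(scores), "max": max(scores), "mean": sum(scores) / len(scores)}

buffer = []
for _ in range(1000):            # stand-in for the live sensor loop
    frame = read_sensor_frame()
    act_locally(frame)           # decision at the point of collection
    buffer.append(frame)
    if len(buffer) == 100:       # periodically send only the summary upstream
        print("uplink:", summarize(buffer))
        buffer.clear()
```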
>> That's interesting. It sounds like you think of the edge not as a place but as, you know, the first opportunity to process the data at low latency, where it needs to be low latency. Is that a good way to think about it? >> Absolutely. It's the latency that really matters. Sometimes we think we're going to solve that with things like 5G networks; we're going to be able to send data really fast across the wire. But again, the self-driving car is yet another example, because what if all of a sudden the network drops out? You still need to make the right decision with the network not even being there. >> That darn speed-of-light problem. And so you use this term data mesh, or data fabric. Double-click on that; what do you mean by it? >> Well, for me, it's sort of a unified way of thinking about all your data. When I think of mesh, I think of weaving on a loom: you're creating a blanket or a cloth, and you do all that cross-layering of the different threads, so different use cases, different applications, and different techniques can make use of this one fabric, no matter where it is in the business, or again whether it's at the edge or back at the office. One unified fabric, with a global namespace, so anyone can access the data they need uniformly, no matter where they're using it. It's a way of unifying all the data and use cases in a sort of virtual environment, so you no longer need to worry about what the actual file name is or what server this thing sits on; you can just use it for whatever use case you have. And I think it helps enterprises reach a stage I like to call the self-driving enterprise, modeled after the self-driving car. The self-driving enterprise, like the business leaders and the business itself, needs to make decisions, oftentimes in real time, and so you need to do predictive modeling and maintain cognitive awareness of the context of what's going on. All these different data sources enable you to do those things. For example, any kind of decision in a business, any kind of decision in life, I would say, is a prediction. You say to yourself, if I do this, such and such will happen; if I do that, this other thing will happen. So a decision is always based upon a prediction about outcomes, and you want to optimize that outcome, so both predictive and prescriptive analytics need to happen in the same stream of data, not statically afterwards. The self-driving enterprise is enabled by having access to data wherever and whenever you need it, and that's what the data fabric and data mesh provide, at least in my opinion. >> Well, so carrying that analogy of the self-driving vehicle: you're abstracting that complexity away in this metadata layer that understands whether the data is on-prem, in the public cloud, across clouds, or at the edge, and where the best place to process it is, what makes sense, whether it makes sense to move it or not; ideally I don't have to. Is that how you're thinking about it? Is that why we need this notion of a data fabric? >> Right. It really abstracts away all the complexity that the IT aspects of the job would require. Not every person in the business is going to have familiarity with the servers and the access protocols and all kinds of IT-related things, so you abstract that away. And that's in some sense what containers do: the containers abstract away all the information about servers and connectivity protocols and that kind of thing. You just want to deliver some data to an analytic module that delivers an insight or a prediction; you don't need to think about all those other things. So that abstraction really makes it empowering for the entire organization. You like to talk a lot about data democratization and analytics democratization; this really gives power to every person in the organization to do things without becoming an IT expert. >> So, last question we have time for. It sounds like, Kirk, the next 10 years of data are not going to be like the last 10 years; they'll be quite different. >> I think so. First of all, we're going to be focused way more on the why question: why are we doing this stuff? The more data we collect, the more we need to know why we're collecting it. And one of the phrases I've seen a lot in the past year, which I think is going to grow in importance over the next 10 years, is observability. Observability, to me, is not the same as monitoring. Some people say monitoring is what we do; what I like to say is, yeah, that's what you do, but why you do it is observability. You have to have a strategy: why am I collecting this data? Why am I collecting it here? Why am I collecting it at this time resolution? Getting focused on those why questions lets you create targeted analytic solutions for all kinds of different business problems, and it really focuses you on small data. The latest Gartner data and analytics trends report said we're going to see a lot more focus on small data in the near future. >> Kirk Borne, you're a dot connector. Thanks so much for coming on theCUBE and being part of the program.
>> My pleasure.

Published Date : Mar 10 2021



Tech for Good | Exascale Day


 

(plane engine roars) (upbeat music) >> They call me Dr. Goh. I'm Senior Vice President and Chief Technology Officer of AI at Hewlett Packard Enterprise, and today I'm in Munich, Germany, home to one and a half million people. Munich is famous for everything from BMW, to beer, to breathtaking architecture and festive markets. The Bavarian capital is the beating heart of Germany's automobile industry: over 50,000 of its residents work in automotive engineering, and to date Munich has allocated around 30 million euros to boost electric vehicles and the infrastructure for them. (upbeat music) >> Hello, everyone. My name is Dr. Jerome Baudry. I am a professor at the University of Alabama in Huntsville. Our mission is to use computational resources to accelerate the discovery of drugs that will be useful and efficient against the COVID-19 virus. On the one hand, there is this terrible crisis; on the other hand, there is this absolutely unique and rare global effort to fight it, and that, I think, is a very positive thing. I am working with the Cray HPE machine called Sentinel. This machine is so amazing that it can mimic the screening of hundreds of thousands, almost millions, of chemicals a day. What would take weeks, if not months or years, we can do in a matter of a few days, and that is really the key to accelerating the discovery of new drugs, new pharmaceuticals. We are all in this together. Thank you. (upbeat music) >> Hello, everyone. I'm so pleased to be here to interview Dr. Jerome Baudry of the University of Alabama in Huntsville. >> Hello, Dr. Goh. I'm very happy to be meeting with you here today. I have a lot of questions for you as well, and I'm looking forward to this conversation between us. >> Yes, yes, and I've got lots of COVID-19 and computational science questions lined up for you too, Jerome. So let's interview each other, then. >> Absolutely, let's do that, let's interview each other. I've got many questions for you. We have a lot in common, and yet a lot of things we are addressing from different points of view, so I'm very much looking forward to your ideas and insights. >> Yeah, especially now, with COVID-19, many of us have had to pivot a lot of our research and development work to address the most current issues. I watched your video, and I've seen that you're very much focused on drug discovery using supercomputing. The Sentinel work you did, I'm very excited about that. Can you tell us a bit more about how that works? >> Yes, I'd be happy to. In fact, I watched your video as well, on manufacturing, and it's actually surprisingly close: what we do with drugs, and what other people do with planes or cars or assembly lines. We are calculating forces on molecules, on drug candidates, when they hit parts of the viruses, and we essentially try to identify which small molecules will hit the virus or its components the hardest, to mess with its function, in a way. And that's not very different from what you are describing people in the automotive or transportation industry doing. So that's our problem, so to speak: we deal with a lot of small molecules and calculate a lot of forces. But that's not our main problem. Our main problem is to make intelligent choices about what to calculate: what kind of data should we incorporate in our calculations, and what kind of data should we give to the people who are going to do the testing? And that's really something I would like your help to understand better.
How do you see artificial intelligence helping us put our hands on the right data to start with, in order to produce the right results, with accuracy? >> Yeah, that's a great question, and it is a question we've been pondering a lot in our strategy as a company recently, because more and more we realize that the data is being generated at the far edge. By edge, I mean something that's outside of the cloud and the data center. For example, in more recent COVID-19 work, we are doing a lot of cryo-electron microscopy: trying to get high-resolution pictures of the virus at different angles, creating lots of movies under the electron microscope to build a 3D model of the virus. And we realized that's the edge, because that's where the microscope is, away from the data center, and massive amounts of data are generated there, terabytes and terabytes per day. We had to develop a workflow to get that data off the microscope and provide pre-processing and processing, so the scientists can get results without delay. We learned quite a few lessons there, especially about making the edge more intelligent, to deal with the onslaught of data coming in from these devices. >> It's fantastic that you're saying that, and that you're using this very example of cryo-EM, because that's the kind of data that feeds our computations. And indeed, we have found that it is very, very difficult to get the right cryo-EM data to us. We've been working with the HPE supercomputer Sentinel, as you know, for our COVID-19 work, so we have a lot of computational power, but we would be even faster and better, frankly, if we knew what kind of cryo-EM data to focus on. In fact, most of our discussions are based not so much on how to compute the forces on the molecules, which we do quite well on an HPE supercomputer, but on which three-dimensional cryo-EM space to look at, and that is becoming almost a bottleneck; we spend a lot of time on it. Do you envision a point where AI will be able to help us make this kind of data almost live, or at least as close to live as possible, as it comes from the edge? How to pack it, and not triage it but prioritize it, for the best possible computations on supercomputers? >> What a visionary question and desire: exactly the vision we have. Of course, ultimately you aim for the best, and that would be a real-time stream of processed data coming straight off the microscope, providing exactly what you need. We are far from there yet, but that's the aim: the ability to push more and more intelligence forward, so that by the time the data reaches you, it is what you need, without any further processing. And a lot of AI is applied there, particularly in cryo-EM, where they do particle picking: they take a lot of pictures and movies of the virus, rotate the virus a little bit, and then try to figure out, across all the different images in the movies, where to pick the particles. This is very much image processing, which AI is very good at. So AI is applied at many different stages. The key thing is to deal with data flowing at this speed, and to get the data to you in the right form, at the right time.
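To make the particle-picking idea concrete, here is a deliberately simplified sketch: smooth a synthetic micrograph, threshold it, and label the connected components as candidate particles. Production cryo-EM pickers are far more sophisticated, often CNN-based; the image and every value here are invented for illustration.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic "micrograph": Gaussian noise plus three bright blobs standing in
# for particles (real micrographs are far noisier and lower contrast).
yy, xx = np.mgrid[0:512, 0:512]
image = rng.normal(0.0, 1.0, (512, 512))
for y, x in [(100, 120), (300, 400), (420, 80)]:
    image += 4.0 * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * 6.0 ** 2))

smoothed = ndimage.gaussian_filter(image, sigma=3)        # suppress shot noise
mask = smoothed > smoothed.mean() + 4 * smoothed.std()    # keep bright regions
labels, n_particles = ndimage.label(mask)                 # connected components
centers = ndimage.center_of_mass(mask, labels, range(1, n_particles + 1))

print(f"picked {n_particles} candidate particles at {centers}")
```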
You'll be able to get things in a matter of weeks, instead of a matter of years to the colleague who will be doing the best day. If the AI can help me learn from a calculation that didn't exactly turn out the way we want it to be, that will be very, very helpful. I can see, I can envision AI being able to, live AI to be able to really revolutionize all the process, not only from the discovery, but all the way to the clinical, to the patient, to the hospital. >> Well, that's a great point. In fact, I caught on to your term live AI. That's actually what we are trying to achieve. Although I have not used that term before. Perhaps I'll borrow it for next time. >> Oh please, by all means. >> You see, yes, we have done, I've been doing also recent work on gene expression data. So a vaccine, clinical trial, they have the blood, they get the blood from the volunteers after the first day. And then to run very, very fast AI analytics on the gene expression data that the one, the transcription data, before translation to emit amino acid. The transcription data is enormous. We're talking 30,000, 60,000 different items, transcripts, and how to use that high dimensional data to predict on day one, whether this volunteer will get an adverse event or will have a good antibody outcome, right? For efficacy. So yes, how to do it so quickly, right? To get the blood, go through an SA, right, get the transcript, and then run the analytics and AI to produce an outcome. So that's exactly what we're trying to achieve, yeah. Yes, I always emphasize that, ultimately, the doctor makes that decision. Yeah, AI only suggests based on the data, this is the likely outcome based on all the previous data that the machine has learned from, yeah. >> Oh, I agree, we wouldn't want the machine to decide the fate of the patient, but to assist the doctor or nurse making the decision that will be invaluable? And are you aware of any kind of industry that already is using this kind of live AI? And then, is there anything in, I don't know in sport or crowd control? Or is there any kind of industry? I will be curious to see who is ahead of us in terms of making this kind of a minute based decisions using AI? Yes, in fact, this is very pertinent question. We as In fact, COVID-19, lots of effort working on it, right? But now, industries and different countries are starting to work on returning to work, right, returning to their offices, returning to the factories, returning to the manufacturing plants, but yet, the employers need to reassure the employees that things, appropriate measures are taken for safety, but yet maintain privacy, right? So our Aruba organization actually developed a solution called contact location tracing inside buildings, inside factories, right? Why they built this, and needed a lot of machine learning methods in there to do very, very well, as you say, live AI right? To offer a solution? Well, let me describe the problem. The problem is, in certain countries, and certain states, certain cities where regulations require that, if someone is ill, right, you actually have to go in and disinfect the area person has been to, is a requirement. But if you don't know precisely where the ill person has been to, you actually disinfect the whole factory. And if you have that, if you do that, it becomes impractical and cost prohibitive for the company to keep operating profitably. So what they are doing today with Aruba is, that they carry this Bluetooth Low Energy tag, which is a quarter size, right? 
>> Oh, I agree, we wouldn't want the machine to decide the fate of the patient, but to assist the doctor or nurse making the decision; that would be invaluable. Are you aware of any industry that is already using this kind of live AI? Is there anything in, I don't know, sports, or crowd control? I would be curious to see who is ahead of us in making these kinds of minute-by-minute decisions using AI. >> Yes, in fact, this is a very pertinent question. With COVID-19, a lot of effort has gone into fighting the virus, but now industries in different countries are starting to work on returning to work: returning to offices, to factories, to manufacturing plants. The employers need to reassure the employees that appropriate measures are taken for safety, yet maintain privacy. So our Aruba organization developed a solution called contact and location tracing for inside buildings and factories, and it needed a lot of machine learning methods to do well what you call live AI. Let me describe the problem. In certain countries, states, and cities, regulations require that if someone is ill, you go in and disinfect the areas the person has been to. But if you don't know precisely where the ill person has been, you have to disinfect the whole factory, and if you do that, it becomes impractical and cost-prohibitive to keep operating profitably. So what they are doing today with Aruba is carrying a Bluetooth Low Energy tag, about the size of a quarter. The reason for a separate tag is to decouple the tag from the person's identity while the system tracks all the employees; we have one company with 10,000 employees that tracks everybody with the tag. If a person falls ill, a floor plan immediately comes up with hotspots, and you just target the cleaning services there. Contact tracing is also produced automatically: a list comes up of anybody who has been in contact with this person within two meters for more than 15 minutes. Privacy is our focus here: there's a separation between the tag and the person, only restricted people are allowed to see the association, and places like washrooms are not tracked. So yes, live AI, trying to make very quick decisions, because this affects people. >> Another question for you, if you have a minute, on the same theme, though it's more a question about computer hardware, if I may. We're spending a lot of time number crunching on giant machines like Sentinel, for instance, which is a dream to use, but we also spend a lot of time moving data back and forth: from clouds, from storage, from AI processing to the computing cycles, back and forth, back and forth. Do you envision an architecture that would combine the hardware needed for the massively parallel calculations we are doing with very large storage and fast I/O, to be more AI-friendly, so to speak? Do you see on the horizon some kind of machine, maybe yet to be determined, where the AI plans ahead in passing the work to the massively parallel side? Does that make sense? >> Makes a lot of sense. And you ask it, I know, because it is a tough problem to solve. As we always say, computation capability is growing enormously, but bandwidth you have to pay for, and latency you sweat for. >> That's a very good way to put it. >> So moving data is ultimately going to be the problem. >> It is. >> And we move the data a lot of times. >> You move it back and forth so many times. >> Back and forth, back and forth: from the edge, where you try to pre-process it before you put it in storage; then once it arrives in storage, you move it to memory to do some work, bring it back, and move it to memory again, and that's just the HPC side; then you put it back into storage, and then the AI comes in and you do the learning, and the other way around also. So it's a tough problem to solve. But more and more we are looking at a new architecture. It was built for the AI side first, but we're now looking at how we can expand it, and that's the reason we announced the HPE Ezmeral Data Fabric. What it does is take care of the data all the way from the edge: the minute it is ingested at the edge, it is incorporated in the global namespace, so that regardless of where the data eventually lands, geographically, or by temperature, hot data, warm data, or cold data, the Data Fabric tracks everything in a global namespace, in a unified way. That's the first step. So the data is not seen as different pieces in different places; it is a unified view of all the data, starting from the minute it arrives at the edge.
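As an illustration of what a global namespace buys you, here is a small sketch, in no way the real Ezmeral API: applications keep addressing one logical path while a catalog records which tier, from edge to cold storage, currently holds the bytes. All names and URIs are invented.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    tier: str          # "edge", "hot", "warm", or "cold"
    physical_uri: str  # where the bytes actually live right now

class GlobalNamespace:
    """Toy catalog: one logical path, many possible physical homes."""

    def __init__(self):
        self._catalog = {}  # logical path -> Placement

    def ingest(self, logical_path, physical_uri, tier="edge"):
        # The minute data is ingested at the edge, it joins the namespace.
        self._catalog[logical_path] = Placement(tier, physical_uri)

    def migrate(self, logical_path, physical_uri, tier):
        # Data may move between tiers later; the logical path never changes.
        self._catalog[logical_path] = Placement(tier, physical_uri)

    def resolve(self, logical_path):
        return self._catalog[logical_path]

ns = GlobalNamespace()
ns.ingest("/fabric/microscope/run-0420", "edge-node-7:/data/run-0420")
print(ns.resolve("/fabric/microscope/run-0420"))   # lives at the edge today

ns.migrate("/fabric/microscope/run-0420", "s3://cold-bucket/run-0420", "cold")
print(ns.resolve("/fabric/microscope/run-0420"))   # same path, colder tier
```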
>> I think it's important that we communicate that AI is purposed for good. A lot of sci-fi movies, unfortunately, showcase psychotic computers or teams of evil scientists who want to take over the world. How can we communicate better that it's a tool for change, a tool for good? >> A key difference I always point out is that, at least for now, we have judgment relative to the machine. Part of the reason we have judgment is that our brain's logical center is automatically connected to our emotional center: whatever our logic says is tempered by emotion, and whatever our emotion wants to do is tempered by our logic. But an AI machine is what many call artificial specific intelligence: focused on one kind of decision making and not connected to other, more culturally or emotionally sensitive networks. They are focused networks, although there are people trying to build broader ones. That's the reason why, with judgment, I always use the phrase: what's correct is not always the right thing to do. There is a difference, and we need to be there as the final judge of what's right. >> Yeah. >> So that's one of the big things. The other one I bring up is that humans are different from machines in the sense that we are highly subtractive: we filter. A machine today is highly accumulative: an AI machine accumulates lots of data to tune its network. But our brains, as few people realize, and we've been working with brain researchers on this, go through a pruning process of our connections between three and 30 years old. So for those of us like me, after 30 it's done, right? (laughs) >> Wait till you reach my age. >> Keep the brain active, because it prunes away connections you don't use to conserve energy. I always remind our engineers about this point, about pruning for energy efficiency: a slice of pizza drives our brain for three hours. (laughs) That's why, sometimes, when I need to get my engineers to work longer, I just offer them pizza: three more hours. >> Pizza is the universal solution to our problems, absolutely. Indeed, indeed. There is always a need for a human consciousness. It's not just logic; it's not like Mr. Spock in "Star Trek," who always speaks about logic but forgets the humanity aspect of it. >> Yes. The connection between the logic centers and the emotional centers. >> You said it very well. And the thing is, sleep researchers are saying that when you don't get enough REM sleep, this connection is weakened, and therefore your decision making is affected if you don't get enough sleep. So I was thinking: people take a breathalyzer test before they are allowed to operate sensitive equipment or make sensitive decisions. Perhaps in the future you would have to check whether you've had enough REM sleep first. >> It is. This COVID-19 crisis is obviously problematic, and I wish it had never happened, but there is something I have never experienced before: the way people are talking to each other. People like you and me, we have a lot in common, but I now hear much more about the industry outside of my field.
And I talk a lot to people, like cryo-EM people or gene expression people, where before I would just have gotten the data and processed it. Now we have a dialogue across the board, in all aspects of industry, science, and society, and I think that could be something wonderful that we should keep after we finally fix this bug. >> Yes, yes, yes. >> Right? >> Yes, that's a great point. In fact, it's something I've been thinking about: for employees, things have changed because of COVID-19, but very likely the change will continue. >> Right. >> Yes, because there are a few positive outcomes. COVID-19 is a tough event, but there are positive sides, like communicating in this way, effectively. We were part of the consortium that developed a natural language processing AI system that allows scientists to do a query; I can share the link to that website. So say: tell me the latest on the binding energy between the SARS-CoV-2 spike protein and the ACE2 receptor. It will give you a list of 10 answers, with links to the papers behind those answers. If you key that in today, you see 315 points, around -13.7 kcal per mole, which is, I think, the general consensus answer, and you see a few that are highly out of range. When you go further, you realize those are the earlier papers. So I think this NLP system will be useful. (both chattering) >> I'm sorry, I didn't mean to interrupt, but I mentioned it yesterday because I have used it, and it's a game changer indeed; it is amazing. Many times, by using this kind of intelligent conceptual analysis that you are developing, I have found connections between facts, between clinical or pharmaceutical aspects of COVID-19, that I wasn't really aware of. So it's a tool for creativity as well, I find; it builds something. It doesn't just analyze what has been done; it creates the connections, it creates a network of knowledge and intelligence.
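A toy version of that kind of literature query can be built with plain TF-IDF retrieval, sketched below: rank snippets against a natural-language question and return the top answers with their sources. The four-document corpus is a stand-in; the real system Dr. Goh describes is far more capable, but the ranking idea is similar.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical paper snippets standing in for a real literature corpus.
corpus = [
    "Binding free energy of the spike protein to the ACE2 receptor estimated at -13.7 kcal/mol.",
    "Review of manufacturing supply chains during the pandemic.",
    "Docking study reports spike-ACE2 binding energy near -13 kcal/mol.",
    "Weather patterns over Munich in spring 2021.",
]

query = "latest binding energy between the spike protein and the ACE2 receptor"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)
query_vector = vectorizer.transform([query])

# Rank every snippet by cosine similarity to the question and show the top 3.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for rank, idx in enumerate(scores.argsort()[::-1][:3], start=1):
    print(f"{rank}. score={scores[idx]:.2f}  {corpus[idx]}")
```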
>> That's why: three to 30 years old, before it stops pruning. >> I know, I know. (laughs) But our children are amazing in that respect: they see things we don't see anymore, and they make connections we don't necessarily think of, because we're used to seeing things a certain way. The eyes of a child always bring something new, which I think is what AI could potentially bring here. So look, this is fascinating, really. >> Yes, yes, the difference between filtering, being subtractive, and the machine being accumulative. That's why I believe the two working together can have a stronger outcome, if used properly. >> Absolutely, and I think that's how AI will be a force for good indeed: it will see things we would have missed that end up being very important. Well, in our quest for drug discovery against COVID-19, we have been quite successful so far. We have accelerated the process by an order of magnitude, so we have molecules being tested against the virus now that would otherwise have taken maybe three or four years to reach that point. So first, we have been very fast. But we are also very interested in natural products, the chemicals that come from plants, essentially. We found a way to mine, I don't want to say exploit, but leverage, that knowledge from hundreds of years of people documenting, in a very historical way, what plants do against what diseases in different parts of the world. That has been not only very useful in our work, but a fantastic bridge to our common human history, basically. And second, yes, plants have chemicals, and of course we love chemicals; every living cell has chemicals. The chemicals in plants have been fine-tuned by evolution to have some biological function; they are not there just to look good, they have a role in the cell. If we try to come up with a new drug from scratch, which is also something we want to do, then we have to engineer a function that evolution hasn't already found a solution to in plants. So in a way, it's also artificial intelligence: we have natural solutions to our problems, so why don't we try to find them and see if they work for us, instead of having to reinvent the wheel each time? >> Hundreds of millions of years of evolution. >> Hundreds of millions of years. >> Many iterations. >> Yes, and millions of different plants with all kinds of chemical diversity. So we have a lot at our disposal here. If only we find the right way to analyze them and bring them to our supercomputers, we will really leverage this humongous amount of knowledge. Instead of reinventing the wheel each time we want to take a car, we'll find that there are cars whose wheels we should be borrowing instead of building new ones each time. Most of the keys are out there, if we can find them; they're at our disposal. >> Yeah, nature has done the work over hundreds of millions of years. >> Yes. (chattering) >> The work is to figure out which one it is. >> Exactly, exactly: hence the importance of biodiversity. >> Yeah, I think this is related to the knowledge graph, right? You have two objects and the linking parameter, and then you have hundreds of millions of these: a chemical, an outcome, and the link between them. >> Yes, that's exactly what it is, absolutely the kind of thing we're pursuing very much. >> Not only building the graph, but building the dynamics of the graph. In the future, if you eat too much creme brulee, or if you don't run enough, or if you sleep well, your cells will have different connections on this graph and will interact with that molecule in a different way than if you had more sleep, or didn't eat that much creme brulee, or exercised a bit more. >> So insightful, Dr. Baudry. Your span of knowledge impresses me, and it's so fascinating talking to you. (chattering) Hopefully next time we get together, we'll have a bit of creme brulee together. >> Yes, let's find out scientifically what it does; we'll have to do it double-blind and try it three times to make sure we get the right statistics. >> Three phases, three clinical trial phases, right? >> It's been a pleasure talking to you. Like we agreed, for all the problems of COVID-19, the way people talk to each other now is, I think, the thing I want to keep in our post-COVID-19 world. I appreciate your insight very much, and it's very encouraging the way you see things. So let's make it happen. >> We will work together, Dr. Baudry. Hope to see you soon, in person. >> Indeed, in person, yes. Thank you. >> Thank you, good talking to you.

Published Date : Oct 16 2020

