
Search Results for HPE Ezmeral:

Accelerating Your Data-Driven Journey: The HPE Ezmeral Strategic Road Ahead | HPE Ezmeral Day 2021


 

>>Okay. Now we're going to dig deeper into HPE Ezmeral and try to better understand how it's going to impact customers. With me to do that are Robert Christensen, vice president of strategy in the office of the CTO, and Kumar Sreekanti, chief technology officer and head of software, both, of course, with Hewlett Packard Enterprise. Gentlemen, welcome to the program. Thanks for coming on. >>Good seeing you. Thanks for having us. >>Always great to see you guys. So, Ezmeral, kind of an interesting name. Catchy name. But Kumar, what exactly is HPE Ezmeral? >>Yeah, it's indeed a catchy name; our branding team did a fantastic job. I believe it's actually a derivation from esmeralda, the Spanish for emerald, a stone supposed to have some very mystical powers, and they derived Ezmeral from there. We all thought it was interesting when we first heard it. Ezmeral is our effort to take all the software and platform tools that HPE has, provide this modern operating platform to customers, and put it under one brand. It has a modern container platform, it has persistent storage, it has a distributed data fabric, and it has InfoSight, as many of our customers are familiar with. So think of it as a container platform offering for the modernization and digitalization of our customers. >>Yeah, it's interesting that you talk about a platform. A lot of times people think product, but you're positioning it as a platform, so it has broader implications. >>That's very true. As customers are thinking about digitalization and modernization, containers and microservices, as you know, have become the staple. So it's actually a container orchestration platform: it offers proven open-source Kubernetes as well as persistent storage bolted to it. >>So, by the way, emerald, in Spain, I think in the culture it also has immunity powers. So immunity >>from >>lock-in and all those other terrible diseases. Maybe it helps us with COVID too. Robert, when you talk to customers, what problems do you probe for that Ezmeral can do a good job solving? >>Yeah, that's a really great question, because a lot of times they don't even know what it is they're trying to solve for, other than a very narrow use case. The idea here is to give them a platform by which they can bridge both the public and private environments for what they do in application development, specifically on the data side. They're looking to bring containerization, which originally got started in the public cloud, or I should say became popular in the public cloud, and has moved its way on premises. Now, Ezmeral really opens the door to three fundamental things. One, how do I maintain an open architecture, like you're referring to, with no lock-in of my applications? Two, how do I gain a data fabric, or consistency in accessing the data, so I don't have to rewrite those applications when I do move them around? And then, lastly, where everybody is heading, the real value is in the AI and ML initiatives, where companies are really bringing out the value of their data and unlocking it where the data is being generated and stored. So the Ezmeral platform is those multiple pieces I was talking about, stacked together, to deliver those solutions for the client. >>So, Kumar, how does it work? What's the sort of IP, the secret sauce, behind it all?
What makes HPE different? >>Continuing on the theme Robert was on: Ezmeral is a platform for optimizing data-intensive workloads, and I would say there are three unique characteristics. Number one, it provides the ability to run both stateful and stateless workloads on the same platform. Number two, it gives you pure open-source Kubernetes as well as the orchestration behind it, so you can provide the hybrid model that Robert was talking about. And then we actually built the workflows into it; for example, along with Ezmeral we announced ML Ops, where customers can do workflow management for their machine learning work. So the magic, if you want to see the secret sauce of Ezmeral, is all the effort that has gone into some of the IP acquisitions HPE has made over the years: BlueData, MapR, and Nimble's InfoSight. All these pieces are coming together and providing a modern digitalization platform for the customers. >>So these pieces all have a little bit of machine intelligence in them. People used to think of AI as a sort of separate thing, same with containers, right? But now it's getting embedded into the stack. What is the role of machine intelligence, or machine learning, in Ezmeral? >>I would take a step back and say, you know this very well, there's the customers' data: the amount of data being generated, where 95% or 98% of it is machine-generated, and it has a serious amount of gravity, and it is sitting at the edge. We are the only one with an edge-to-cloud data fabric built for that. So, number one, we are bringing compute, the cloud, to the data rather than taking the data to the cloud; it's a cloud-like experience that we provide the customer. Data is not much value to us if we don't harness it. I said this in one of my blogs: we have gone from the era of collecting the data to the era of finding insights in the data. People have used all sorts of analogies, data is the new oil, data is the new air. And now your applications have to be modernized; nobody wants to write an application in a non-microservices fashion, because you want to build in that modernization. So if you bring these three things together, I have data gravity and lots of data, I have to build AI applications, and I want modern microservices, those three things, I think, are what we bring together for the customers. >>So, Robert, let's stay on customers for a minute. I want to understand the business impact, the business case. I mean, why should the cloud developers have all the fun? You mentioned that you're bridging the cloud and on-prem. When you talk to customers, what do they see as the business impact? What are the real drivers for them? >>That's a great question, because at the end of the day the recent surveys show that cost and performance are still the number one requirement for these workloads. Second is agility, the speed at which they want to move. And so those two are top of mind every time.
But the thing we find with Ezmeral, which is so impactful, is that nobody else brings together the silicon, the hardware, and the platform, all stacked and combined, the way Ezmeral does with the platforms that we have. Specifically, when we start getting 90, 92, 93% utilization out of AI and ML workloads on very expensive hardware, it really is a competitive advantage over a public cloud offering, which does not offer those kinds of services, and the cost models are significantly different. We do that by collapsing the stack. We take out as much intellectual property, as many software pieces, as necessary, so we are closest to the silicon and closest to the applications on the hardware itself, meaning we can interleave the applications and you can get to true multi-tenancy on a particular platform, which allows you to deliver a cost-optimized solution. So when you talk about the money side, absolutely, there's just nothing out there. And then on the second side, which is agility: one of the things we know today is that applications need to be built in pipelines, right? This is something that has been established for quite some time, and now it's really making its way on premises. What Kumar was talking about was, how do we modernize? How do we do that? Well, there are some things you want to break into microservices and containers, and some things you don't. The ones that do that are going to get that speed and motion out of the gate, and they can put that on premises, which is relatively new these days to the on-premises world. So we think both will be the advantage. >>Okay, I want to unpack that a little bit. The cost argument clearly rests on 90-plus-percent utilization, and, I mean, come on: even pre-virtualization we know what it was like, and even with virtualization you never really got that high. People would talk about it, but are you really able to sustain that in real-world workloads? >>Yeah, I think when you make your exchangeable currency into small pieces, you can insert them into many areas. We had one customer running 18 containers on a single server, and each of those containers, as you know from the early days, is what you get when you modernize what used to be one monolithic application into containers of microservices. So if you build these microservices, and you have all the anti-affinity rules and the resourcing formulas set correctly, you can bin-pack these things extremely tightly. We have seen this again and again. It's not a guarantee; it all depends on your application, and as engineers we always want to understand how far it can be pushed. But it is a very modern utilization of the platform together with the data, and once you know where the data is, it becomes very easy to match the workloads to it. >>Now, the other piece of the value proposition that I heard, Robert, is that it's basically an integrated stack, so I don't have to cobble together a bunch of open-source components. It's there. There are legal implications, and obviously performance implications. I would imagine that resonates particularly with the enterprise buyer, because they don't have the time to do all this integration. >>That's a very good point. There is an interesting tension there: enterprises want open source, so there is no lock-in.
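To make the bin-packing point above concrete: in plain Kubernetes terms it comes down to explicit resource requests and affinity or anti-affinity rules on each workload. A minimal, purely illustrative sketch follows; the deployment name, image, replica count, and sizes are assumptions for the example, not anything taken from the interview.

```bash
# Illustrative sketch: small per-container resource requests plus a pod
# anti-affinity preference let the scheduler pack many containers per server
# while avoiding stacking replicas of the same service on one host.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scoring-svc                 # hypothetical microservice
spec:
  replicas: 6
  selector:
    matchLabels: { app: scoring-svc }
  template:
    metadata:
      labels: { app: scoring-svc }
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels: { app: scoring-svc }
              topologyKey: kubernetes.io/hostname
      containers:
      - name: scorer
        image: registry.example.com/scorer:1.0      # placeholder image
        resources:
          requests: { cpu: 500m, memory: 512Mi }
          limits:   { cpu: "1", memory: 1Gi }
EOF
```

Whether a given cluster actually sustains 90-plus-percent utilization still depends on the workload mix, as Kumar notes above.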
But they also need help to implement, deploy, and manage it, because they don't have the expertise. And we all know that Kubernetes has actually brought that API and platform-layer standardization. So what we have done is give you the open source, and you write to the Kubernetes APIs, but at the same time the orchestration, the persistent storage, the data fabric, and the AI algorithms are all bolted into it. On top of that, it's available both as licensed software you run on prem and as the same software running on GreenLake, so you can pay as you go and we run it for you in a colo or in your own data center. >>Oh, good, that was one of my later questions. So I can get this as a service, paid by the drink, essentially; I don't have to install a bunch of stuff on prem and pay >>a perpetual license. The container platform as a service and ML Ops as a service were announced at the last Discover, and they have now gone production. So both are available: you can run ML Ops on prem on top of the Ezmeral Container Platform, or you can run it inside GreenLake. >>Robert, are there any specific use-case patterns that you see emerging amongst customers? >>Yeah, absolutely, there are a couple of them. We have a really nice relationship with any of the Splunk operators out there today. Splunk containerized their operator, and that operator is the number one operator for Splunk on the IT operations side as well as on the security operations side. We found that it runs highly effectively on top of Ezmeral, on top of the platforms Kumar just talked about. I also want to give a little bit of background on that same operator platform: the Ezmeral platform has been able to make it highly available, active-active, at five nines, for that same Splunk operator, on premises, on open-source Kubernetes, which is, as far as I'm concerned, very high-end computer science work, if you understand how difficult that is. That's number one. Number two, you'll see Spark, just Spark workloads as a whole. Nobody handles Spark workloads like we do. We put a container around them, and we put them inside the pipeline of moving people through that basic ML and AI pipeline of getting a model through its system, through its training, and then actually deployed to our ML Ops pipeline. This is a key fundamental for delivering value in the data space as well. And then, lastly, and this is really important: when you think about the data fabric that we offer, the data fabric itself doesn't necessarily have to be bolted to the container platform. The data fabric can be deployed underneath a number of competitive platforms that don't handle data well, and we know they don't handle it very well at all. We get lots and lots of calls from people saying, hey, can you take your Ezmeral Data Fabric and solve my large-scale, highly challenging data problems? We say yes. And then, when they're ready for a real-world, enterprise-ready container platform, we'd be happy to provide it. >>So you're saying, if I'm inferring correctly, that one of the values is you're simplifying that whole data pipeline and the whole data science project, science project, no pun intended, I guess. >>Okay, >>absolutely. >>So where does the customer start? I mean, what are the engagements like?
What's the starting point? >>HPE is probably one of the most trusted enterprise suppliers and has been for many, many years, and we have a phenomenal workforce; HPE Pointnext is one of the world's leading support organizations. There are many places to start. The obvious one is that all of these services are available on GreenLake, as we just talked about, so customers can start on a pay-as-you-go basis. We have many customers, some grandfathered in from the early days of BlueData and MapR, that are already running, and they build on that as they move into their next-generation modernization. You can start with something as simple as the Ezmeral Container Platform with persistent storage and compute, and implement with as little as $10 to start working. And finally, there is a big company like HPE behind it as an enterprise company, with Pointnext services, so it's very easy for customers to get support on the day-two operations. >>Thank you for watching, everybody. This is Dave Vellante for theCUBE. Keep it right there for more great content from HPE Ezmeral Day.

Published Date : Mar 17 2021


A Day in the Life of an IT Admin | HPE Ezmeral Day 2021


 

>>Hi, everyone. Welcome to Ezmeral Day. My name is Yasmin Joffey. I'm the director of systems engineering for Ezmeral at HPE. Today we're joined by my colleague Don Wake, a technical marketing engineer, who will talk to us about a day in the life of an IT administrator through the lens of the Ezmeral Container Platform. We'll be answering your questions in real time, so if you have any questions, please feel free to put them in the chat, and we should have some time at the end for live Q&A. Don, want to go ahead and kick us off? >>All right. Thanks a lot, Yasmin. Yeah, my name is Don Wake. I'm the tech marketing guy. Welcome to Ezmeral Day and a day in the life of an IT admin, and happy St. Patrick's Day at the same time. I hope you're wearing green; a virtual pinch if you're not wearing green (you don't have to look that up if you don't know what I'm talking about). We're just going to go through some quick things, a discussion of modern business IT needs to set the stage, and then go right into a demo. So what is the need here that we're trying to fulfill with the Ezmeral Container Platform? It's all rooted in analytics. Modern businesses are driven by data. They are also application-centric, and the separation of applications and data, and the relationship between the two, has never been more important. Applications are very data-hungry these days; they consume data in all new ways. The applications themselves are virtualized, containerized, and distributed everywhere, and optimizing every decision and every application has become a huge problem to tackle for every enterprise. So we look at data science, for example, as one big use case here, and it's really a team sport. Today I'm wearing the hat of, perhaps, the operations team, maybe the software engineer working on continuous integration and continuous delivery and on integration with source control, and I'm supporting these data scientists and data analysts. I also have some resource control: I can decide whether or not the data science team gets a particular cluster of compute and storage so they can do their work. So this is the solution that I've been given as an IT admin, and that is the Ezmeral Container Platform. >>Just walking through this real quick: at the top, I'm trying, wherever possible, not to get involved in these folks' lives. The data engineers, scientists, app developers, and DevOps engineers all have particular needs, and they can access their resources and spin up clusters, or just do work with a Jupyter notebook, or run Spark or Kafka or any of the popular analytics platforms, simply by using endpoints, web URLs, that we provide to them, so they're self-service. But in the back end, I, as the IT guy, can make sure the Kubernetes clusters are up and running, assign particular access to particular roles, make sure the data is well protected, and connect them. I can import clusters from public clouds, I can put my own clusters on premises if I want to, and I can do all of this through a centralized control plane. So today I'm just going to show you how I support some data scientists. One of our very own is actually doing a demo right now as well, called A Day in the Life of a Data Scientist.
He's on the opposite side, not caring about all the stuff I'm doing in the back end; he's training models, registering models, and working with data inside his Jupyter notebook, running inferences and Postman scripts. I'm in the background here, making sure that he's got access to his cluster, that his storage is protected, that his training models are up, and that he's got service endpoints connecting him to his source control and to everything else he needs. He's got a taxi-ride prediction model that he's working on, along with a Jupyter notebook and models. So why don't we get hands-on, and I'll jump right over to it. >>This is the Ezmeral Container Platform. This is a web UI, the interface into the container platform, our centralized control plane, and I'm using my Active Directory credentials to log in here. >>When I log in, I've also been assigned a particular role with regard to how much of the resources I can access. In my case, I'm a site admin; you can see right up in the upper right-hand corner that I'm a site admin, and I have access to lots and lots of resources. The one I'm going to focus on today is a Kubernetes cluster. So I have a cluster, and let's say we have a new data scientist coming on board. I can give him his own resources so he can do whatever he wants, use some GPUs, and not affect other clusters. We have all these other clusters already created here; you can see this is a very busy production system, and there are some dev clusters over here. >>I see we have a production cluster. He needs to produce something for data scientists to use, so it has to be well protected and not treated like a development resource. So under this production environment, I decided to create a new Kubernetes cluster, and literally I just push a button: create Kubernetes cluster. I'll just show you some of the screens; this is a live environment, and all my hosts are used up right now, but otherwise I would be able to go in here, give it a name, select some hosts to use as the primary master controller and some as workers, and answer a few more questions. Once that's done, I have created a whole other Kubernetes cluster that I can also create tenants from. >>Tenants are really Kubernetes namespaces. So in addition to taking hosts and creating Kubernetes clusters, I can also go to existing clusters and carve out a namespace from them. Looking at some of the clusters that were already created, here is an example of a tenant that I could have created from that production cluster. To do that, in the namespace area I just hit create, and, similar to how you create a cluster, you can carve down from a given cluster, say the production cluster, give it a name and a description, and even tell it that I want this specific one to be an AI/ML project, which really is our ML Ops license. So at the end of the day I can say, okay, I'm going to create an ML Ops tenant from that cluster I created. >>I've already created it here for this demo, so I'm going to go into that Kubernetes namespace, which we also call a tenant.
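For readers following along outside the demo environment, the tenant carve-out described here corresponds to ordinary Kubernetes constructs: a dedicated namespace plus role-based access for the user who gets it. A minimal sketch with purely illustrative names (the platform drives this through its UI rather than expecting you to run these commands by hand):

```bash
# Minimal sketch of a "tenant" in plain Kubernetes terms: a dedicated namespace
# plus an RBAC binding that lets one AD-authenticated user work inside it and
# nowhere else. The namespace and user names are illustrative.
kubectl create namespace tenant-30

kubectl create rolebinding terry-edit \
  --namespace tenant-30 \
  --clusterrole=edit \
  --user=terry@corp.example.com
```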
I mean, it's multi-tenancy; the name essentially means we're carving out resources so that somebody can be isolated from another environment. At this point I could also give access to this tenant, and only this tenant, to my data scientist. So the first thing I typically do is go in here, where you can actually assign users. Right now it's just me, but if I wanted to give this to Terry, for example, I could go in here, find another user, and assign him from this list, as long as he's got the proper credentials. You can see all these other users have Active Directory credentials; when we created the cluster itself, we made sure it integrated with our Active Directory, so that only authorized users can get in. >>Let's say the first thing I want to do is make sure that when I do Jupyter notebook work, or when Terry does, he's connected straight up to the GitHub repository. He gives me a link to GitHub and says, hey, this is all the cluster work I've been doing; I've got my source control there, my scripts, my Python notebooks, my Jupyter notebooks. So I create a configuration: I say, okay, here's a Git repo, here's the link to it, I can use a token, here's his username, and I can now put in that token. This is actually a private repo, and we're using a token, a standard Git interface. And the cool thing after that is you can go in here and actually copy the authorization secret. >>This gets into the Kubernetes world. If you want secure integration with things like your source control, or perhaps your Active Directory, that's all maintained in secrets. So you can take that secret, and when I then create his notebook, I can put that secret right here in the launch YAML and say, hey, connect this Jupyter notebook up with this secret so he can log in. When I've launched this Jupyter notebook cluster, it is now, within my Kubernetes tenant, really a pod. If I want to, I can go right into a terminal for that Kubernetes tenant and run kubectl get pods; these are standard, CNCF-certified Kubernetes commands, and when I do this, it tells me all of the active pods and, within those pods, the containers I'm running. >>So I'm running quite a few pods and containers here in this artificial intelligence and machine learning tenant, which is kind of cool. Also, if I wanted to, I could download the kubeconfig for kubectl and then do something like this on my own system, where I'm more comfortable: kubectl get pods. This is running on my laptop, and I just had to refresh my kubectl configuration and give it the IP address and authorization information in order to connect from my laptop to that endpoint. From a CI/CD perspective, an IT admin usually wants to use tools right on the desktop. So here I am, back in my web browser; I'm also here on the dashboard of this Kubernetes tenant, and I can see how it's doing.
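The secret-and-kubeconfig workflow described above can also be pictured with standard kubectl commands. A rough sketch follows; the repository token, namespace, and file paths are placeholders rather than anything the platform actually generates.

```bash
# Rough sketch with placeholder values: store a GitHub personal-access token as
# a Kubernetes secret in the tenant namespace so a notebook pod can reference
# it, then use a downloaded kubeconfig from a laptop to inspect the tenant.
kubectl create secret generic github-token \
  --namespace tenant-30 \
  --from-literal=username=terry \
  --from-literal=token='ghp_XXXXXXXXXXXX'    # placeholder token

# After downloading the tenant kubeconfig from the control plane:
export KUBECONFIG=$HOME/Downloads/tenant-30-kubeconfig.yaml
kubectl get pods -n tenant-30                # lists the Jupyter notebook pod, among others
```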
So aren't, I show how, you know, I could enable my data scientists by just giving him the, uh, URL or what we call a notebook service end points or notebook end point. And just by clicking on this URL or copying it, copying, you know, it's a link, uh, and then emailing it to them and say, okay, here's your, uh, you know, here's your duper notebook. And I say, Hey, just log in with your credentials. I've already logged in. Um, and so then he's got his Jupiter notebook here and you can see that he's connected to his GitHub repo directly. He's got all of the files that he needs to run his data science project and within here, and this is really in the data science realm, data scientists realm. >>He can see that he can have access to centralized storage and he can copy the files from his GitHub repo to that centralized storage. And, you know, these, these commands, um, are kind of cool. They're a little Jupiter magic commands, and we've got some of our own that showed that attachment to the cluster. Um, but you can see here if you run these commands, they're actually looking at the shared project repository managed by the container platform. So, you know, just to show you that again, I'll go back to the container platform. And in fact, the data scientist, uh, could do the same thing. Attitude put a notebook back to platform. So here's this project repository. So this is other big point. So now putting on my storage admin hat, you know, I've got this shared, um, storage, um, volume that is managed for me by the ESMO data fabric. >>Um, in, in here, you can see that the data scientist, um, from his get repo is able to through Jupiter notebook directly, uh, copy his code. He was able to run as Jupiter notebook and create this XG boost, uh, model. So this file can then be registered in this AIML tenant. So he can go in here and register his model. So this is, you know, this is really where the data scientist guy can self-service kick off his notebooks, even get a deployment end point so that he can then inference his cluster. So here again, another URL that you could then take this and put it into like a postman rest URL and get answers. Um, but let's say he wants to, um, he's been doing all this work and I want to make sure that his, uh, data's protected, uh, how about creating a mirror. >>So if I want to create a mirror of that data, now I go back to this other, uh, and this is the, the, uh, data fabric embedded in a very special cluster called the Picasso cluster. And it's a version of the ASML data fabric that allows you to launch what was formerly called Matt bar as a Kubernetes cluster. And when you create this special cluster, every other cluster that you create is automatically, uh, gets things like that. Tenant storage. I showed you to create a shared workspace, and it's automatically managed by this, uh, data fabric. Uh, and you're even given an end point to go into the data fabric and then use all of the awesome features of ASML data fabric. So here I can just log in here. And now I'm at the, uh, data fabric, web UI to do some data protection and mirroring. >>So >>Let's go over here. Let's say I want to, uh, create a mirror of that tenant. So I forgot to note what the name of my tenant was. I'm going to go back to my tenant, the name of the volume that I'm playing with here. So in my AIML tenant, I'm going to go to my source, control my project repository that I want to protect. And I see that the ESMO data fabric has created 10 and 30 as a volume. 
So I'll go back to my data fabric here, and I'm going to look for tenant-30. If I want to, I can go into tenant-30. >>Down here I can look at the usage; I've used very little of the allocated storage. But let's go ahead and create a volume to mirror that one. It's a very simple web UI: I hit create volume, I say I want a tenant-30 mirror, a mirror volume, I want to use my Picasso cluster, and I want tenant-30 as the source. That actually looks the volume up in the data fabric, so it knows exactly which one I want to use. I can go in here and name it something like ext-hcp-tenant-30-mirror, whatever name I want, and set this path here. >>And that's a whole other demo: this could be in Tokyo. This could be mirrored to all kinds of places all over the world, because this is truly a global namespace, which is a huge differentiator for us. In this case I'm creating a local mirror, and down here I can add auditing and encryption, set access control, and change permissions; it's full-service interactivity. Of course this is using the web UI, but there are REST API interfaces as well. So that is pretty much the brunt of what I wanted to show you in the demo. We got hands-on, so I'm going to throw this up real quick and then come back to Yasmin to see what questions have come in from anybody watching. >>Yeah, we've got a few questions, and we can take some time to answer a few. So, it does look like you can integrate or incorporate your existing GitHub, to be able to pull in shared code or repositories, correct? >>Yeah, we have that built in, and it can be either GitHub or Bitbucket; it's a pretty standard interface. Just like you can go into any given GitHub repo, clone it, and pull it into your local environment, we integrated that directly into the GUI, so you can say to your AI/ML tenant, to your Jupyter notebook: here's my GitHub repo, and when you open my notebook, connect me straight up. It saves you some steps, because the Jupyter notebook is designed to be integrated with Git. So we have GitHub integrated in, or Bitbucket. >>Another question, around the file system: has the MapR file system that was carried over been modified in any way to run on top of Kubernetes? >>What I showed here is the Kubernetes version of the MapR file system, the data fabric. It gives you a lot of the same features, but if you need to, perhaps because you have performance concerns, you can also deploy it as a separate bare-metal instance of the data fabric. This is just one way you can use it, integrated directly into Kubernetes; it really depends on the needs of the user. The data fabric has a lot of different capabilities, but this version has the core file system capabilities: you can do snapshots and mirrors, and it's of course striped across multiple disks and nodes. And the MapR data fabric has been around for years.
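For teams that prefer automation over the web UI, the same mirror can in principle be driven from the data fabric's CLI or REST interfaces. The sketch below assumes the MapR-derived maprcli tool is available on a fabric node; the flag names are written from memory and the volume, path, and cluster names are illustrative, so treat it as a starting point to check against the data fabric CLI reference rather than a verified recipe.

```bash
# Unverified sketch: create a local mirror of the tenant-30 volume on the same
# (Picasso) data fabric cluster, then kick off the first mirror synchronization.
maprcli volume create \
  -name tenant-30-mirror \
  -path /mirrors/tenant-30 \
  -type mirror \
  -source tenant-30@picasso

maprcli volume mirror start -name tenant-30-mirror
```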
It's designed for integration with these analytic-type workloads. >>Great. You showed us how you can manage Kubernetes clusters through the Ezmeral Container Platform UI, but the question is: can you control who accesses which tenant, I guess namespace, that you created, and can you also restrict or inject resource limitations for each individual namespace through the UI? >>Oh yeah, that's a great question, and the answer is yes to both. As a site admin, I have lots of authority to create clusters and to go into any cluster I want, but typically, for the data scientist example I used, I would create a user for him. There are a couple of ways you can create users, and it's all role-based access control. I could create a local user and have the container platform authenticate him, or I can integrate directly with Active Directory or LDAP, even including which groups he has access to. Then, in the user interface, as the site admin I can say he gets access to this tenant and only this tenant. The other thing you asked about is limitations. When you create the tenant, to prevent that noisy-neighbor problem, you can go in and create quotas. I didn't show the process of actually creating a tenant, but integral to that flow is: okay, I've defined which cluster I want to use, I've defined how much memory I want to use, so there's a quota right there, and you can say, hey, how many CPUs am I taking from this pool? That's one of the cool things about the platform: it abstracts all of that away. You don't have to know exactly which host; you can create the cluster by selecting specific hosts, but once you've created it, it's just a big pool of resources. So you can say Bob over here is only going to get 50 of the hundred CPUs available, he's only going to get so many gigabytes of memory, and he's only going to get this much storage to consume. You can then safely hand something off and know they're not going to take all the resources, especially the GPUs, which are expensive, and make sure that one person doesn't hog everything. So absolutely, quotas are built in. >>Fantastic. Well, I think we are out of time. We have a list of other questions, and we will absolutely reach out and get all your questions answered for those of you who asked in the chat. Don, thank you very much, and thanks everyone else for joining. Don, will this recording be made available for those who couldn't make it today? >>I believe so. Honestly, I'm not sure what the process is, but yeah, it's being recorded, so they must have done that for a reason. >>Fantastic. Well, Don, thank you very much for your time, and thank everyone else for joining. Thank you.
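The noisy-neighbor protection described in that last answer maps naturally onto standard Kubernetes resource quotas. A minimal sketch with illustrative numbers, mirroring the 50-of-100-CPUs example (the platform configures this through its tenant settings rather than raw YAML):

```bash
# Illustrative sketch: a ResourceQuota that caps what one tenant namespace can
# consume, so a single team cannot exhaust the shared CPUs, memory, or GPUs.
kubectl apply -n tenant-30 -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-30-quota
spec:
  hard:
    requests.cpu: "50"              # e.g. 50 of the ~100 CPUs in the shared pool
    requests.memory: 256Gi
    limits.cpu: "50"
    limits.memory: 256Gi
    requests.nvidia.com/gpu: "4"    # GPUs are the scarce resource to protect
EOF
```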

Published Date : Mar 17 2021


A Day in the Life of Data with the HPE Ezmeral Data Fabric


 

>>Welcome everyone to a day in the life of data with HPE as well. Data fabric, the session is being recorded and will be available for replay at a later time. When you want to come back and view it again, feel free to add any questions that you have into the chat. And Chad and I joined stark. We'll, we'll be more than willing to answer your questions. And now let me turn it over to Jimmy Bates. >>Thanks. Uh, let me go ahead and share my screen here and we'll get started. >>Hey everyone. Uh, once again, my name is Jimmy Bates. I'm a director of solutions architecture here for HPS Merle in the Americas. Uh, today I'd like to walk you through a journey on how our everyday life is evolving, how everything about our world continues to grow more connected about, and about how here at HPE, how we support the data that represents that digital evolution for our customers, with the HPE as rural data fabric to start with, let's define that term data. The concept of that data can be simplified to a record of life's events. No matter if it's personal professional or mechanical in nature, data is just records that represent and describe what has happened, what is happening or what we think will happen. And it turns out the more complete record we have of these events, the easier it is to figure out what comes next. >>Um, I like to refer to that as the omnipotence protocol. Um, let's look at this from a personal perspective of two very different people. Um, let me introduce you to James. He's a native citizen of the digital world. He's, he's been, he's been a citizen of this, uh, an a career professional in the it world for years. He's always on always connected. He loves to get all the information he needs on a smartphone. He works constantly with analytics. He predicts what his customers need, what they want, where they are, uh, and how best to reach them. Um, he's fully embraced the use of data in his life. This is Sue SCA. She's, she's a bit of a, um, of an opposite to James. She's not yet immigrated to our digital world. She's been dealing with the changes that are prevalent in our times. And she started a new business that allows her customers, the option of, um, of expressing their personalities and the mask that they wear. She wants to make sure her customers can upload images, logos, and designs in order to deliver that customized mask, uh, to brighten their interactions with others while being safe as they go about their day. But she needs a crash course in digital and the digital journey. She's recently as, as most of us have as transitioned from an office culture to a work from home culture, and she wants to continue to grow that revenue venture on the side >>At the core of these personalities is a journey that is, that is representative common challenge that we're all facing today. Our world has been steadily shrinking as our ability to reach out to one another has steadily increased. We're all on that journey together to know more about what is happening to be connected to what our business is doing to be instantly responsive to our customer needs and to deliver that personalized service to every individual. 
And it as moral, we see this across every industry, the challenge of providing tailored experiences to potential customers in a connected world to provide constant information on deliveries that we requested or provide an easier commute to our destination to, to change the inventories, um, to the just-in-time arrival for our fabrications to identify quality issues in real time to alter the production of each product. So it's tailored to the request of the end user to deliver energy in, in smarter, more efficient ways, uh, without injury w while protecting the environment and to identify those, those, uh, medical emerging threats, and to deliver those personalized treatments safely. >>And at the core of all of these changes, all of these different industries is data. Um, if you look at the major technology trends, um, they've been evolving down this path for some time now, we're we're well into our cloud journey. The mobile platform world is, is now just part of our core strategies. IOT is feeding constant streams of data often over those mobile, uh, platforms. And the edge is increasingly just part of our core, all of this combined with the massive amounts of data that's becoming, becoming available through it is driving autonomous solutions with machine learning and AI. Uh, this is, this is just one aspect of this, this data journey that we're on, but for success, it's got, uh, sorry for success. It's got to be paired. Um, it's gotta be paired with action. >>Um, >>Well, when you look at the, uh, um, if we take a look at James and Cisco, right, we can start to see, um, with the investments in those actions, um, how their travel they're realizing >>Their goals, >>Services, efforts, you know, uh, focused, deliver new data-driven applications are done in new ways that are smaller in nature and kind of rapidly iterate, um, to respond to the digital needs of, of our new world, um, containerization to deploy and manage those apps anywhere in our connected world, they need to be secure we'll time streaming architecture, um, from, from the, from the beginning to allow for continual interactions with our changing customer demands and all of this, especially in our current environment, while running cost reduction initiatives. This is just the current world that, that our solutions must live in. Um, with that framework in mind, um, I'd like to take the remainder of our time and kind of walk through some of the use cases where, where we at HPE helped organizations through this journey with, with, with the ASML data fabrics, >>Let's >>Start with what's happening in the mobile world. In fact, the HPE as moral data fabric is being used by a number of companies to provide infinitely personalized experiences. In this case, it could be James could be sushi. It could be anyone that opens up their smartphone in the morning, uh, quickly checking what's transpiring in the world with a selection of curated, relative relevant articles, images, and videos provided by data-driven algorithm workloads, all that data, the logs, the recommendations, and the delivery of those recommendations are done through a variety of companies using HP as rural software, um, that provides a very personalized experience for our users. In addition, other companies monitor the service quality of those mobile devices to ensure optimize connectivity as they move throughout their day. 
The same is true for digital communication for that video communication, what we're doing right now, especially in these days where it's our primary method of connecting as we deal with limited physical engagements. Um, there's been a clear spike in the usage of these types of services. HPE, as Merle is helping a number of these companies deliver on real time telemetry analysis, predicting demand, latency, monitoring, user experience, and analyzing in real time, responding with autonomous adjustments to maintain pleasant experiences for all participants involved. >>Um, >>Another area, um, we're eight or HBS ML data fabric is playing a crucial role in the daily experience inside our automobiles. We invest a lot of ourselves in our cars. We expect tailored experiences that help us stay safe and connected as we move from one destination to another, in the areas of autonomous driving connected car, a number of major car companies in the world are using our data fabric to take autonomous driving to the next level where it should be effectively collecting all data from sensors and cameras, and then feeding that back into a global data fabric. So that engineers that develop cars can train next generation, future driving algorithms that make our driving experience safer and more autonomy going forward. >>Now let's take a look at a different mode of travel. Uh, the airline industry is being impaired. Varied is being impacted very differently today from, from the car companies, with our software, uh, we help airlines travel agencies, and even us as consumers deal with pricing, calculations and challenges, uh, with, um, air traffic services. We, we deal with, um, um, uh, delivering services around route predictions on time arrivals, weather patterns, and tagging and tracking luggage. We help people with flight connections and finding out what the figuring out what the best options are for your, for your travel. Uh, we collect mountains of data, secure it in a global data fabric, so it can provide, be provided back in an analyzed form with it. The stressed industry can contain some very interesting insights, provide competitive offerings and better services to us as travelers. >>This is also true for powering biometrics. At scale, we work with the biggest biometrics databases in the world, providing the back end for their enormous biometric authentication pursuit. Just to kind of give you a rough idea. A biometric authentication is done with a number of different data points from fingerprints. I re scans numerous facial features. All of these data points are captured for every individual and uploaded into the database, such that when the user is requesting services, their biometric metrics can be pooled and validated in seconds. From a scale perspective, they're onboarding 1 million people a day more than 200 million a year with a hundred percent business continuity and the options do multi-master and a global data fabric as needed ensuring that users will have no issues in securely accessing their pension payouts medical services or what other types of services. They may be guaranteed >>Pivoting >>To a very different industry. Even agriculture was being impacted in digital ways. Using HPE as well, data fabric, we help farmers become more digital. We help them predict weather patterns, optimize sea production. We even helped see producers create custom seed for very specific weather and ground conditions. 
We combine all of these things to help optimize production and ensure we can feed future generations. In some cases, all of these data sources collected at the edge can be provided back to insurance companies to help farmers issue claims when micro patterns affect farmers in negative ways, we all benefit from optimized farming and the HBS Modena fabric is there to assist in that journey. We provide the framework and the workload guidance to collect relevant data, analyze it and optimize food production. Our customers demonstrate the agricultural industry is most definitely my immigrating to our digital world. >>Now >>That we've got the food, we need to ship it along with everything else, all over the world, as well as offer can be found in action in many of the largest logistics companies in the world. I mean, just tracking things with greater efficiency can lead to astounding insights. What flights and ships did the package take? What Hans held it along its journey, what weather conditions did it encounter? What, what customs office did it go through and, and how much of it's requested and being delivered this along with hundreds of other telemetry points can be used to provide very accurate trade and economic predictions around what's going on with trade in the world. These data sets are being used very intensively to understand economy conditions and plan for future event consequences. We also help answer, uh, questions for shipping containers that are, that are more basic. Uh, like where is my container located at is my container still on the correct ship? Uh, surprisingly, uh, this helps cut down on those pesky little events like lost containers. >>Um, it's astounding the amount of data that's in DNA, and it's not just the pairs. It's, it's the never ending patterns found with other patterns that none of it can be fully understood unless the micro is maintained in context to the macro. You can't really understand these small patterns unless you maintain that overall understanding of the entire DNA structure to help the HVS mold data fabric can be found across every aspect of the medical field. Most recently was there providing the software framework to collect genomic sequencing, landing it in the data fabric, empowering connected availability for analysis to predict and find patterns of significance to shorten the effort it takes to identify those potential triggers and make things like vaccines become becoming available. In record time. >>Data is about people at HPE asthma. We keep people connected all around the world. We do this in a variety of ways. We we've already looked at several of the ways that that happens. We help you find data. You need, we help you get from point a to point B. We help make sure those birthday gifts show up on time. Some other interesting ways we connect people via recipes, through social platforms and online services. We help people connect to that new recipe that is unexpected, but may just be the kind of thing you need for dinner tonight at HPDs where we provide our customers with the power to deliver services that are tailored to the individual from edge to core, from containers to cloud. Many of the services you encounter everyday are delivered to you through an HV as oral global data fabric. You may not see it, but we're there in the morning in the morning when you get up and we're there in the evening. Um, when you wind down, um, at HPE as role, we make data globally available across everywhere that your business needs to go. 
Um, I'd like to thank everyone, uh, for the time that you've given us today. And I'd like to turn it back over and open up the floor for questions at this time, >>Jimmy, here's a question. What are the ways consumers can get started with HPS >>The fabric? Well, um, uh, there's several ways to get started, right? We, we, uh, first off we have software available that you can download that there's extensive documentation and use cases posted on our website. Um, uh, we have services that we offer, like, um, assessment services that can come in and help you assess the, the data challenges that you're having, whether you're, you're just dealing with a scale issue, a security issue, or trying to migrate to a more containerized approach. We have a services to help you come in, assess that aspect. Um, we have a getting started bundles, um, and we have, um, so there's all kinds of services that, that help you get started on your journey. So what >>Does a typical first deployment look like? >>Well, that's, that's a very, very interesting question. Um, a typical first deployment, it really kind of varies depending on where you're at in the material. Are you James? Are you, um, um, Cisco, right? It really depends on, on where you're at in your journey. Um, but a typical deployment, um, is, is, is involved. Uh, we, we like to come in, we we'd like to do workshops, really understand your specific challenges and problems so that we can determine what solutions are best for you. Um, that to take a look at when we kind of settle on that we, we, um, the first deployment, uh, is, um, there's typically, um, a deployment of, uh, a, uh, a service offering, um, w with a software to kind of get you started along the way we kind of bundle that aspect. Um, as you move forward, if you're more mature and you already have existing container solutions, you already have existing, large scale data aspects of it. Um, it's really about the specific use case of your current problem that you're dealing with. Um, every solution, um, is tailored towards the individual challenges and problems that, that each one of us are facing. >>I break, they mentioned as part of the asthma family. So how does data fabric pair with the other solutions within Israel? >>Well, so I like to say there's, um, there, there's, there's three main areas, um, from a software standpoint, um, for when you count some of our, um, offerings with the GreenLake solution, but there are, so there are really four main areas with ESMO. There's the data fabric offering, which is really focused on, on, on, on delivering that data at scale for AI ML workloads for big data workloads for containerized workloads. There is the ESMO container platform, which really solves a lot of, um, some of the same problems, but really focus more on a compute delivery, uh, and a hundred percent Kubernetes environment. We also have security offerings, um, which, which help you take in this containerized world, uh, that help you take the different aspects of, um, securing those applications. Um, so that when the application, the containerized applications move from one framework or one infrastructure from one to the other, it really helps those, the security go with those applications so that they can operate in a zero trust environment. And of course, all of this, uh, options of being available to you, where everything has a service, including the hardware through some of our GreenLake offerings. 
So those are the areas that pair with the HPE Ezmeral Data Fabric when you look at the entire Ezmeral portfolio.

>>Well, thanks, Jimmy, really appreciate it. That's all the questions we have right now. So is there anything you'd like to close with?

>>You know, I'm honored to be here at HPE, and I really find it amazing as we work with our customers solving some really challenging problems that are core to their business. It's always an interesting day in the office, because every problem is different; every problem is tailored to the specific challenges that our customers face. What we went over today is a lot of the general areas and the general concepts that we're all on together in this journey, but the devil's always in the details. It's about understanding the specific challenges in the organization, and Ezmeral software is designed to help adapt and empower growth in your company, so that you're focused on your business instead of the complexity of delivering services across this connected world. That's what Ezmeral takes off your plate, so that you don't have to worry about it. It just works, and you can focus on the things that impact your business more directly.

>>Okay. Well, we really thank everyone for coming today, and we hope you leave with an idea of how the data fabric can begin to help your business with its analytics. Thank you for coming. Thanks.
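One way to picture the pairing Jimmy describes, where containerized workloads on the container platform draw on fabric-backed storage, is to look at which persistent volume claims back the pods in a namespace. The sketch below uses only the standard Kubernetes Python client (pip install kubernetes) and a local kubeconfig; the namespace name is an assumption, and nothing here is an Ezmeral-specific API.

```python
# A minimal sketch (not an official HPE Ezmeral tool): list pods in a namespace
# and show which PersistentVolumeClaims back them, e.g. claims bound to a
# data-fabric storage class.
from kubernetes import client, config

def pods_and_claims(namespace: str = "analytics") -> None:
    config.load_kube_config()              # reads your local kubeconfig
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        claims = [
            v.persistent_volume_claim.claim_name
            for v in (pod.spec.volumes or [])
            if v.persistent_volume_claim is not None
        ]
        print(f"{pod.metadata.name}: {claims or 'no PVC-backed volumes'}")

if __name__ == "__main__":
    pods_and_claims()
```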

Published Date : Mar 17 2021


Boost Your Solutions with the HPE Ezmeral Ecosystem Program | HPE Ezmeral Day 2021


 

>> Hello. My name is Ron Kafka, and I'm the senior director for Partner Scale Initiatives for HBE Ezmeral. Thanks for joining us today at Analytics Unleashed. By now, you've heard a lot about the Ezmeral portfolio and how it can help you accomplish objectives around big data analytics and containerization. I want to shift gears a bit and then discuss our Ezmeral Technology Partner Program. I've got two great guest speakers here with me today. And together, We're going to discuss how jointly we are solving data analytic challenges for our customers. Before I introduce them, I want to take a minute to talk to provide a little bit more insight into our ecosystem program. We've created a program with a realization based on customer feedback that even the most mature organizations are struggling with their data-driven transformation efforts. It turns out this is largely due to the pace of innovation with application vendors or ICS supporting data science and advanced analytic workloads. Their advancements are simply outpacing organization's ability to move workloads into production rapidly. Bottom line, organizations want a unified experience across environments where their entire application portfolio in essence provide a comprehensive application stack and not piece parts. So, let's talk about how our ecosystem program helps solve for this. For starters, we were leveraging HPEs long track record of forging technology partnerships and it created a best in class ISB partner program specific for the Ezmeral portfolio. We were doing this by developing an open concept marketplace where customers and partners can explore, learn, engage and collaborate with our strategic technology partners. This enables our customers to adopt, deploy validated applications from industry leading software vendors on HPE Ezmeral with a high degree of confidence. Also, it provides a very deep bench of leading ISVs for other groups inside of HPE to leverage for their solutioning efforts. Speaking of industry leading ISV, it's about time and introduce you to two of those industry leaders right now. Let me welcome Daniel Hladky from Dataiku, and Omri Geller from Run:AI. So I'd like to introduce Daniel Hladky. Daniel is with Dataiku. He's a great partner for HPE. Daniel, welcome. >> Thank you for having me here. >> That's great. Hey, would you mind just talking a bit about how your partnership journey has been with HPE? >> Yes, pleasure. So the journey started about five years ago and in 2018 we signed a worldwide reseller agreement with HPE. And in 2020, we actually started to work jointly on the integration between the Dataiku Data Science Studio called DSS and integrated that with the Ezmeral Container platform, and was a great success. And it was on behalf of some clear customer projects. >> It's been a long partnership journey with you for sure with HPE. And we welcome your partnership extremely well. Just a brief question about the Container Platform and really what that's meant for Dataiku. >> Yes, Ron. Thanks. So, basically I'd like the quote here Florian Douetteau, which is the CEO of Dataiku, who said that the combination of Dataiku with the HPE Ezmeral Container Platform will help the customers to successfully scale and put machine learning projects into production. And this basically is going to deliver real impact for their business. So, the combination of the two of us is a great success. >> That's great. 
Can you talk about what Dataiku is doing and how HPE Ezmeral Container Platform fits in a solution offering a bit more?
>> Great. So basically Dataiku DSS is our product, which is an end-to-end data science platform, and it brings value to the projects of customers on their path to enterprise AI. In simple cases, it could be as simple as building data pipelines, but it could also be very complex, with machine and deep learning models at scale. So the fast track to value is by having collaboration, orchestration of the underlying technologies, and the models in production. All of that is part of the Data Science Studio, and Ezmeral fits perfectly into the part where we design and then put those projects at scale and into production.
>> That's perfect. Can you be a bit more specific about how you see HPE and Dataiku really tightening up a customer outcome and value proposition?
>> Yes. So what we see is also the challenge of the market: probably about 80% of the use cases really never make it to production. And this is of course a big challenge, and we need to change that. And I think the combination of the two of us is addressing exactly this need. As part of the MLOps approach, Dataiku and the Ezmeral Container Platform provide a frictionless approach, which means that without scripting and coding, customers can put all those projects into the productive environment, don't have to worry anymore, and can be more business oriented.
>> That's great. So you mentioned you're seeing customers be a lot more mature with their AI workloads and deployment. What do you suggest for the other customers out there that are just starting this journey or just thinking about how to get started?
>> Yeah. That's a very good question, Ron. What we see there is the challenge that people need to go on a path of maturity. This starts with simple data pipelines, et cetera, and then moves up the ladder to building large, complex projects. And here I see a very interesting offer coming now from HPE called D3S, the data science startup pack. That's something I discussed together with HPE back in early 2020. Basically, it covers three stages, which are explore, experiment and evolve, and it builds MVPs quickly for the customers. By doing so, you address the business objectives, lay out the proper architecture and also set up the proper organization around it. So, this is a great combination by HPE and Dataiku through the D3S.
>> And it's a perfect example of what I mentioned earlier about leveraging the ecosystem program that we built to do deeper solutioning efforts inside of HPE, in this case with our AI business unit. So, congratulations on that and thanks for joining us today. I'm going to shift gears. I'm going to bring in Omri Geller from Run:AI. Omri, welcome. It's great to have you. You guys are killing it out there in the market today. And I just thought we could spend a few minutes talking about what is so unique and differentiated about your offerings.
>> Thank you, Ron. It's a pleasure to be here. Run:AI creates a virtualization and orchestration layer for AI infrastructure. We help organizations gain visibility and control over their GPU resources and help them deliver AI solutions to market faster. And we do that by managing granular scheduling, prioritization and allocation of compute power, together with the HPE Ezmeral Container Platform.
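To ground the pipeline-to-production flow Daniel describes, here is a generic sketch of the kind of training step a data scientist might hand to an MLOps pipeline. It is not Dataiku DSS code, and the file names, column names and model choice are assumptions made purely for illustration.

```python
# A generic, illustrative pipeline step: train a model and export an artifact
# that a downstream deployment stage could pick up. Paths and columns are
# placeholders, not real project data.
import joblib
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def train_and_export(csv_path="features.csv", model_path="churn_model.joblib"):
    df = pd.read_csv(csv_path)
    X, y = df.drop(columns=["churned"]), df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    pipeline = Pipeline([
        ("scale", StandardScaler()),
        ("model", GradientBoostingClassifier()),
    ])
    pipeline.fit(X_train, y_train)
    print("holdout accuracy:", pipeline.score(X_test, y_test))

    joblib.dump(pipeline, model_path)   # the artifact the next stage deploys
    return model_path
```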
>> That's great. And your partnership with HPE is a bit newer than Daniel's, right? Maybe about the last year or so we've been working together a lot more closely. Can you just talk about the HPE partnership, what it's meant for you and how you see it impacting your business?
>> Sure. First of all, Run:AI is excited to partner with the HPE Ezmeral Container Platform and help customers manage GPUs for their AI workloads. We chose HPE since HPE has years of experience partnering on AI use cases and outcomes with vendors who have a strong footprint in this market. HPE works with many partners that are complementary for our use case, such as Nvidia, and the HPE Ezmeral Container Platform together with Run:AI and Nvidia delivers a world-class solution for AI-accelerated workloads. And as you can understand, for AI, speed is critical. Companies want to get important AI initiatives into production as soon as they can. And the HPE Ezmeral Container Platform, running the Run:AI GPU orchestration solution, enables that through dynamic provisioning of GPUs, so that resources can be easily shared, efficiently orchestrated and optimally used.
>> That's great. And you talked a lot about the efficiency of the solution. What about from a customer perspective? What is the real benefit that our customers are going to be able to gain from an HPE and Run:AI offering?
>> So first, it is important to understand how data scientists and AI researchers actually build solutions. They do it by running experiments. And if a data scientist is able to run more experiments per given time, they will get to the solution faster. With the HPE Ezmeral Container Platform, Run:AI and users such as data scientists can do exactly that: seamlessly and efficiently consume large amounts of GPU resources, run more experiments per given time and therefore accelerate their research. Together, we actually saw a customer that is running almost 7,000 jobs in parallel over GPUs, with efficient utilization of those GPUs. And by running more experiments, those customers can be much more effective and efficient when it comes to bringing solutions to market.
>> Couldn't agree more. And I think we're starting to see a lot of joint success together as we go out and tell the story. Hey, I want to thank you both one last time for being here with me today. It was very enlightening for our team to have you as part of the program. And I'm excited to extend this customer value proposition out to the rest of our communities. With that, I'd like to close today's session. I appreciate everyone's time. And keep an eye out on our ISV marketplace for Ezmeral. We're continuing to expand and add new capabilities and new partners to our marketplace. We're excited to do a lot of great things and help you guys all be successful. Thanks for joining.
>> Thank you, Ron.
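The "more experiments per unit of time" point lends itself to a small sketch: one single-GPU Kubernetes Job per hyperparameter trial, which a GPU orchestrator can then pack onto shared accelerators. This is plain Kubernetes expressed from Python (PyYAML assumed installed), not Run:AI's own API, and the image name and training script are placeholders.

```python
# Generate one single-GPU Kubernetes Job per hyperparameter trial so trials can
# run in parallel on shared GPUs. Image and command are placeholders.
import yaml

def gpu_job(name: str, learning_rate: float) -> dict:
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "train",
                        "image": "registry.example.com/train:latest",  # placeholder
                        "command": ["python", "train.py", f"--lr={learning_rate}"],
                        "resources": {"limits": {"nvidia.com/gpu": 1}},
                    }],
                }
            }
        },
    }

if __name__ == "__main__":
    trials = [gpu_job(f"trial-{i}", lr) for i, lr in enumerate([0.3, 0.1, 0.03, 0.01])]
    print(yaml.safe_dump_all(trials))   # pipe into kubectl apply -f -
```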
>> What a great panel discussion. These partners really do have a good understanding of the possibilities of working on the platform, and I hope and expect we'll see this ecosystem continue to grow. That concludes the main program, which means you can now pick one of three live demos to attend and chat live with experts. Those three include a day in the life of an IT admin, a day in the life of a data scientist, and even a day in the life of the HPE Ezmeral Data Fabric, where you can see the many ways the data fabric is used in your life today. Wish you could attend all three? No worries, the recordings will be available on demand for you and your teams. Moreover, the show doesn't stop here. HPE has a growing and thriving tech community, and you should check it out. It's really a solid starting point for learning more, talking to smart people about great ideas and seeing how Ezmeral can be part of your own data journey. Again, thanks very much to all of you for joining. Until next time, keep unleashing the power of your data.

Published Date : Mar 17 2021


The Data Drop: Industry Insights | HPE Ezmeral Day 2021


 

(upbeat music) >> Welcome friends to HPE Ezmeral's analytics unleashed. I couldn't be more excited to have you here today. We have a packed and informative agenda. It's going to give you not just a perspective on what HPE Ezmeral is and what it can do for your organization, but you should leave here with some insights and perspectives that will help you on your edge to cloud data journey in general. The lineup we have today is awesome. We have industry experts like Kirk Borne, who's going to talk about the shape this space will take to key customers and partners who are using Ezmeral technology as a fundamental part of their stack to solve really big, hairy, complex real data problems. We will hear from the execs who are leading this effort to understand the strategy and roadmap forward as well as give you a sneak peek into the new ISV ecosystem that is hosted in the Ezmeral marketplace. And finally, we have some live music being played in the form of three different demos. There's going to be a fun time so do jump in and chat with us at any time or engage with us on Twitter in real time. So grab some coffee, buckle up and let's get going. (upbeat music) Getting data right is one of the top priorities for organizations to affect digital strategy. So right now we're going to dig into the challenges customers face when trying to deploy enterprise wide data strategies and with me to unpack this topic is Kirk Borne, principal data scientist, and executive advisor, Booz Allen Hamilton. Kirk, great to see you. Thank you sir, for coming into the program. >> Great to be here, Dave. >> So hey, enterprise scale data science and engineering initiatives, they're non-trivial. What do you see as some of the challenges in scaling data science and data engineering ops? >> The first challenge is just getting it out of the sandbox because so many organizations, they, they say let's do cool things with data, but how do you take it out of that sort of play phase into an operational phase? And so being able to do that is one of the biggest challenges, and then being able to enable that for many different use cases then creates an enormous challenge because do you replicate the technology and the team for each individual use case or can you unify teams and technologies to satisfy all possible use cases. So those are really big challenges for companies organizations everywhere to about. >> What about the idea of, you know, industrializing those those data operations? I mean, what does that, what does that mean to you? Is that a security connotation, a compliance? How do you think about it? >> It's actually, all of those I'm industrialized to me is sort of like, how do you not make it a one-off but you make it a sort of a reproducible, solid risk compliant and so forth system that can be reproduced many different times. And again, using the same infrastructure and the same analytic tools and techniques but for many different use cases. So we don't have to rebuild the wheel, reinvent the wheel re reinvent the car. So to speak every time you need a different type of vehicle you need to build a car or a truck or a race car. There's some fundamental principles that are common to all of those. And that's what that industrialization is. And it includes security compliance with regulations and all those things but it also means just being able to scale it out to to new opportunities beyond the ones that you dreamed of when you first invented the thing. >> Yeah. 
Data, by its very nature, as you well know, is distributed, but you've been at this a while. For years we've been trying to shove everything into a monolithic architecture and harden infrastructures around that, and in many organizations it's become a block to actually getting stuff done. So how are you seeing things like the edge emerge? How do you think about the edge? How do you see that evolving, and how do you think customers should be dealing with edge and edge data?
>> Well, that's really kind of interesting. I had many years at NASA working on data systems, and back in those days the idea was you would just put all the data in a big data center, and then individual scientists would retrieve that data and do their analysis on their local computer. And you might say that's sort of like edge analytics, so to speak, because they're doing analytics at their home computer, but that's not what edge means. It means actually doing the analytics, the insights discovery, at the point of data collection. And so that's really real-time business decision-making: you don't bring the data back and then try to figure out some time in the future what to do. And I think autonomous vehicles are a good example of why you don't want to do that. If you collect data from all the cameras and radars and lidars on a self-driving car, and you move that data back to a data cloud while the car is driving down the street, and let's say a child walks in front of the car, you send all the data back, it computes and does some object recognition and pattern detection, and 10 minutes later it sends a message to the car: hey, you need to put your brakes on. Well, it's a little late at that point. So you need to make those insight discoveries, those pattern discoveries, and hence the proper decisions from the patterns in the data, at the point of data collection. And so that's data analytics at the edge. And yes, you can bring the data back to a central cloud or a distributed cloud; it almost doesn't even matter. If your data is distributed such that any use case, any data scientist or any analytic team in the business can access it, then what you really have is a data mesh or a data fabric that makes it accessible at the point that you need it, whether that's at the edge or in some static post-event processing. For example, typical business quarter reporting takes a long look at your last three months of business. Well, that's fine in that use case, but you can't do that for a lot of other real-time analytic decision making.
>> Well, that's interesting. I mean, it sounds like you think of the edge not as a place, but as, you know, where it makes sense to actually process the data, the first opportunity, if you will, to process the data at low latency where it needs to be low latency. Is that a good way to think about it?
>> Yeah, absolutely. It's the low latency that really matters. Sometimes we think we're going to solve that with things like 5G networks; we're going to be able to send data really fast across the wire. But again, that self-driving car is yet another example, because what if, all of a sudden, the network drops out? You still need to make the right decision with the network not even being available.
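A toy sketch of the pattern Kirk describes, acting at the point of collection instead of waiting on a round trip to a data center, might look like the following. The window size, threshold and "action" are arbitrary placeholders for whatever the edge device would actually do.

```python
# Keep a rolling window of readings and decide locally, with low latency,
# whether the latest reading needs an immediate action.
from collections import deque
from statistics import mean, stdev

WINDOW = deque(maxlen=100)

def on_reading(value: float, z_threshold: float = 4.0) -> str:
    WINDOW.append(value)
    if len(WINDOW) < 10:
        return "warming up"
    mu, sigma = mean(WINDOW), stdev(WINDOW)
    if sigma > 0 and abs(value - mu) / sigma > z_threshold:
        return "act now"   # local decision (e.g. brake, alert), no cloud round trip
    return "normal"

# Batches of readings can still be synced to the core later for training/reporting.
```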
>> That darn speed of light problem. And so you use this term data mesh or data fabric; double-click on that. What do you mean by that?
>> Well, for me, it's sort of a unified way of thinking about all your data. And when I think of mesh, I think of weaving on a loom, like you're creating a blanket or a cloth, with all that cross-layering of the different threads. So different use cases, different applications and different techniques can make use of this one fabric no matter where it is in the business, or again, whether it's at the edge or back at the office: one unified fabric, which has a global namespace. So anyone can access the data they need, uniformly, no matter where they're using it. It's a way of unifying all of the data and use cases in sort of a virtual environment, where there's nothing low-level you need to worry about, like what the actual file name is or what actual server this thing is on; you can just do that for whatever use case you have. And I think it helps enterprises reach a stage which I like to call the self-driving enterprise. It's modeled after the self-driving car. The self-driving enterprise, the business leaders and the business itself, needs to make decisions, oftentimes in real time. So you need to do predictive modeling and have cognitive awareness of the context of what's going on, and all of these different data sources enable you to do those things with data. For example, any kind of decision in a business, any kind of decision in life, I would say, is a prediction. You say to yourself, if I do this, such and such will happen; if I do that, this other thing will happen. So a decision is always based upon a prediction about outcomes, and you want to optimize that outcome. So both predictive and prescriptive analytics need to happen in this same stream of data, not statically afterwards. And so the self-driving enterprise is enabled by having access to data wherever and whenever you need it. And that's what that data fabric and data mesh provides for you, at least in my opinion.
>> Well, so, carrying that analogy of the self-driving vehicle, you're abstracting that complexity away in this metadata layer that understands whether it's on prem or in the public cloud or across clouds or at the edge, where the best place is to process that data, what makes sense, whether it makes sense to move it or not; ideally, I don't have to. Is that how you're thinking about it? Is that why we need this notion of a data fabric?
>> Right. It really abstracts away all of the complexity that the IT aspects of the job would require, because not every person in the business is going to have that familiarity with the servers and the access protocols and all kinds of IT-related things. So you abstract that away. That's in some sense what containers do: containers abstract away all the information about servers and connectivity and protocols and all of that. You just want to deliver some data to an analytic module that delivers an insight or a prediction; I don't need to think about all those other things. And so that abstraction really makes it empowering for the entire organization. We like to talk a lot about data democratization and analytics democratization. This really gives power to every person in the organization to do things without becoming an IT expert.
>> So, the last question we have time for here. It sounds like,
Kirk, the next 10 years of data are not going to be like the last 10 years, it'd be quite different. >> I think so. I think we're moving to this. Well, first of all, we're going to be focused way more on the why question, like, why are we doing this stuff? The more data we collect, we need to know why we're doing it. And what are the phrases I've seen a lot in the past year which I think is going to grow in importance in the next 10 years is observability. So observability to me is not the same as monitoring. Some people say monitoring is what we do. But what I like to say is, yeah, that's what you do but why you do it is observability. You have to have a strategy. Why, what, why am I collecting this data? Why am I collecting it here? Why am I collecting it at this time resolution? And so, so getting focused on those, why questions create be able to create targeted analytics solutions for all kinds of diff different business problems. And so it really focuses it on small data. So I think the latest Gartner data and analytics trending reports, so we're going to see a lot more focus on small data in the near future >> Kirk borne. You're a dot connector. Thanks so much for coming on the cube and being a part of the program. >> My pleasure (upbeat music) (relaxing upbeat music)
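Kirk's point that every decision is a prediction about outcomes can be reduced to a very small sketch: score each candidate action with a predicted outcome and pick the one that optimizes it. The actions, numbers and stand-in prediction function below are invented purely for illustration.

```python
# Decision-as-prediction in miniature: predict an outcome for each candidate
# action, then choose the action with the best predicted outcome.
def predict_outcome(action: str, context: dict) -> float:
    # Assumption: a real system would call a trained model here.
    baseline = {"discount": 0.12, "email": 0.05, "do_nothing": 0.01}
    return baseline[action] * (1.5 if context.get("high_value") else 1.0)

def best_action(context: dict, actions=("discount", "email", "do_nothing")) -> str:
    scored = {a: predict_outcome(a, context) for a in actions}
    return max(scored, key=scored.get)

print(best_action({"high_value": True}))   # -> "discount"
```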

Published Date : Mar 17 2021


HPE Ezmeral Preview | HPE Ezmeral \\ Analytics Unleashed


 

On March 17th at 8 a.m. Pacific, theCUBE is hosting Ezmeral Day with support from Hewlett Packard Enterprise. I am really excited about Ezmeral. It's HPE's set of solutions that will allow containerized apps and workloads to run anywhere: on prem, in the public cloud, across clouds, really anywhere, including the emergent edge you can think of, as well as a data fabric and a platform to allow you to manage work across all these domains. That is Ezmeral. At Ezmeral Day we have an exciting lineup of guests, including Kirk Borne, who is a famed astrophysicist and extraordinary data scientist; he's from Booz Allen Hamilton. We'll also be joined by my longtime friend Kumar Sreekanti, who is CTO and head of software at HPE. In addition, you'll hear from Robert Christiansen of HPE, who will discuss data strategies that make sense for you, and we'll hear from customers and partners from around the globe who are using Ezmeral capabilities to create and deploy transformative products and solutions that are impacting lives every single day. We'll also give you a chance to join a few breakout rooms and go deeper on specific topics that are important to you, and we'll give you a demo toward the end, so you'll want to hang around. Most of all, we have a team of experts standing by to answer any questions that you may have, so please do join in on the chat room. It's going to be a great event. So grab your coffee, your tea or your favorite beverage, grab a notepad, and we'll see you there: March 17th at 8 a.m. Pacific, on theCUBE.

Published Date : Mar 11 2021



Robert Christiansen & Kumar Sreekanti | HPE Ezmeral Day 2021


 

>> Okay. Now we're going to dig deeper into HPE Ezmeral and try to better understand how it's going to impact customers. And with me to do that are Robert Christiansen, who is the Vice President of Strategy in the office of the CTO, and Kumar Sreekanti, who is the Chief Technology Officer and Head of Software, both of course with Hewlett Packard Enterprise. Gentlemen, welcome to the program. Thanks for coming on.
>> Good seeing you, Dave. Thanks for having us.
>> It's always good to see you guys.
>> Thanks for having us.
>> So, Ezmeral, kind of an interesting name, catchy name, but Kumar, what exactly is HPE Ezmeral?
>> It's indeed a catchy name. Our branding team has done a fantastic job. I believe it's actually derived from Esmeralda, the Spanish for emerald, which is supposed to have some very mystical powers, and they derived Ezmeral from there. We all found it interesting when we first heard it. So, Ezmeral was our effort to take all the software, the platform tools that HPE has, and provide this modern operating platform to the customers and put it under one brand. It has a modern container platform, it has persistent storage with the data fabric, and it includes much more that many of our customers are familiar with. So, think of it as a modern container platform for modernization and digitization for the customers.
>> Yeah, it's interesting, you talk about platform, so it's not, you know, a lot of times people say product, but you're positioning it as a platform, so that has a broader implication.
>> That's very true. As customers are thinking about this digitization and modernization, containers and microservices have, as you know, become the staple. So it's actually a container orchestration platform, with all of the open source going into this, as well as the persistent storage.
>> So, by the way, Ezmeral, I think emerald in Spanish, I think in the culture, it also has immunity powers as well. So immunity from lock-in, (Robert and Kumar laughing) and all those other terrible diseases, maybe it helps us with COVID too. Robert, when you talk to customers, what problems do you probe for that Ezmeral can do a good job solving?
>> Yeah, that's a really great question, because a lot of times they don't even know what it is that they're trying to solve for, other than just a very narrow use case. But the idea here is to give them a platform by which they can bridge both the public and private environment for what they do, the application development, specifically on the data side. So, when you're looking to bring containerization, which originally got started on, or I should say became popular in, the public cloud and has moved its way on premises now, Ezmeral really opens the door to three fundamental things. One, how do I maintain an open architecture, like you're referring to, with low or no lock-in of my applications. Number two, how do I gain a data fabric, or a data consistency of accessing the data, so I don't have to rewrite those applications when I do move them around. And then lastly, where everybody's heading, the real value is in the AI and ML initiatives that companies are really bringing, and that value of their data, unlocking that data where it is being generated and stored. And so the Ezmeral platform is those multiple pieces that Kumar was talking about, stacked together to deliver the solutions for the client.
>> So Kumar, how does it work? What's the sort of IP or the secret sauce behind it all?
What makes HPE different?
>> Yeah. Continuing on that, it's a modern platform for optimizing data and workloads, but I would say there are three unique characteristics of this platform. Number one is that it actually provides you the ability to run both stateful and stateless workloads under the same platform. Number two is, unlike other Kubernetes offerings, it actually uses all open-source Kubernetes, with orchestration behind it, so you can provide this hybrid thing that Robert was talking about. And then we actually built the workflows into it; for example, we announced, along with Ezmeral, ML Ops, so the customers can actually do the workflow management around specific data workloads. So, if you want to see the secrets, the magic is all the effort that has gone into some of the IP acquisitions that HPE has done over the years: BlueData, MapR and Nimble. All these pieces are coming together and providing a modern digitization platform for the customers.
>> So these pieces, they all have a little bit of machine intelligence in them. You have people who used to think of AI as this sort of separate thing, and I mean the same thing with containers, right? But now it's getting embedded into the stack. What is the role of machine intelligence or machine learning in Ezmeral?
>> I would take a step back and say, as the customers know very well, the amount of data being generated is enormous, and 95% or 98% of the data is machine generated. And the data, as you know, has gravity, and it is sitting at the edge, and we are the only one that has an edge-to-cloud data fabric built for it. So, the number one thing is that we are bringing compute, or the cloud, to the data, rather than taking the data to the cloud, if you will. It's a cloud-like experience that we provide the customer. AI is not much value to us if we don't harness the data. I said this in one of the blogs: we have gone from collecting the data to finding the insights in the data. People have used all sorts of analogies; data is the new oil. So, the AI and the data. And then your applications have to be modernized, and nobody wants to write an application in a non-microservices fashion, because you want to build for modernization. So, if you bring these three things together, data gravity with lots of data, the ability to build AI applications, and modernization, those three things, I think, are what we bring to the customer.
>> So, Robert, let's stay on customers for a minute. I mean, I want to understand the business impact, the business case. I mean, why should all the cloud developers have all the fun? You've mentioned it, you're bridging the cloud and on-prem. Talk about, when you talk to customers, what they are seeing as the business impact. What's the real driver for that?
>> That's a great question, because at the end of the day, I think the recent surveys show that cost and performance are still the number one requirement for this, and a really close second is agility, the speed at which they want to move, and so those two are top of mind every time.
But the thing we find Ezmeral, which is so impactful is that nobody brings together the Silicon, the hardware, the platform, and all of that stack together work and combine like Ezmeral does with the platforms that we have and specifically, we start getting 90, 92, 93% utilization out of AI ML workloads on very expensive hardware, it really, really is a competitive advantage over a public cloud offering, which does not offer those kinds of services and the cost models are so significantly different. So, we do that by collapsing the stack, we take out as much intellectual property, excuse me, as much software pieces that are necessary so we are closest to the Silicon, closest to the applications, bring it to the hardware itself, meaning that we can interleave the applications, meaning that you can get to true multitenancy on a particular platform that allows you to deliver a cost optimized solution. So, when you talk about the money side, absolutely, there's just nothing out there and then on the second side, which is agility. One of the things that we know is today is that applications need to be built in pipelines, right, this is something that's been established now for quite some time. Now, that's really making its way on premises and what Kumar was talking about with, how do we modernize? How do we do that? Well, there's going to be some that you want to break into microservices containers, and there's some that you don't. Now, the ones that they're going to do that they're going to get that speed and motion, et cetera, out of the gate and they can put that on premises, which is relatively new these days to the on-premises world. So, we think both won't be the advantage. >> Okay. I want to unpack that a little bit. So, the cost is clearly really 90 plus percent utilization. >> Yes. >> I mean, Kumar, you know, even pre virtualization, we know that it was like, even with virtualization, you never really got that high. I mean, people would talk about it, but are you really able to sustain that in real world workloads? >> Yeah. I think when you make your exchangeable cut up into smaller pieces, you can insert them into many areas. We have one customer was running 18 containers on a single server and each of those containers, as you know, early days of new data, you actually modernize what we consider week run containers or microbiome. So, if you actually build these microservices, and you all and you have versioning all correctly, you can pack these things extremely well. And we have seen this, again, it's not a guarantee, it all depends on your application and your, I mean, as an engineer, we want to always understand all of these caveats work, but it is a very modern utilization of the platform with the data and once you know where the data is, and then it becomes very easy to match those two. >> Now, the other piece of the value proposition that I heard Robert is it's basically an integrated stack. So I don't have to cobble together a bunch of open source components, there's legal implications, there's obviously performance implications. I would imagine that resonates and particularly with the enterprise buyer because they don't have the time to do all this integration. >> That's a very good point. So there is an interesting question that enterprises, they want to have an open source so there is no lock-in, but they also need help to implement and deploy and manage it because they don't have the expertise. 
And we all know that the IKEA desk has actually brought that API, the past layer standardization. So what we have done is we have given the open source and you arrive to the Kubernetes API, but at the same time orchestration, persistent stories, the data fabric, the AI algorithms, all of them are bolted into it and on the top of that, it's available both as a licensed software on-prem, and the same software runs on the GreenLake. So you can actually pay as you go and then we run it for them in a colo or, or in their own data center. >> Oh, good. That was one of my latter questions. So, I can get this as a service pay by the drink, essentially I don't have to install a bunch of stuff on-prem and pay it perpetualized... >> There is a lot of containers and is the reason and the lapse of service in the last discover and knowledge gone production. So both Ezmeral is available, you can run it on-prem, on the cloud as well, a congenital platform, or you can run instead on GreenLake. >> Robert, are there any specific use case patterns that you see emerging amongst customers? >> Yeah, absolutely. So there's a couple of them. So we have a, a really nice relationship that we see with any of the Splunk operators that were out there today, right? So Splunk containerized, their operator, that operator is the number one operator, for example, for Splunk in the IT operation side or notifications as well as on the security operations side. So we've found that that runs highly effective on top of Ezmeral, on top of our platforms so we just talked about, that Kumar just talked about, but I want to also give a little bit of backgrounds to that same operator platform. The way that the Ezmeral platform has done is that we've been able to make it highly active, active with HA availability at nine, it's going to be at five nines for that same Splunk operator on premises, on the Kubernetes open source, which is as far as I'm concerned, a very, very high end computer science work. You understand how difficult that is, that's number one. Number two is you'll see just a spark workloads as a whole. All right. Nobody handles spark workloads like we do. So we put a container around them and we put them inside the pipeline of moving people through that basic, ML AI pipeline of getting a model through its system, through its trained, and then actually deployed to our ML ops pipeline. This is a key fundamental for delivering value in the data space as well. And then lastly, this is, this is really important when you think about the data fabric that we offer, the data fabric itself doesn't necessarily have to be bolted with the container platform, the container, the actual data fabric itself, can be deployed underneath a number of our, you know, for competitive platforms who don't handle data well. We know that, we know that they don't handle it very well at all. And we get lots and lots of calls for people saying, "Hey, can you take your Ezmeral data fabric "and solve my large scale, "highly challenging data problems?" And we say, "yeah, "and then when you're ready for a real world, "full time enterprise ready container platform, "we'd be happy to prove that too." >> So you're saying you're, if I'm inferring correctly, you're one of the values as you're simplifying that whole data pipeline and the whole data science, science project pun intended, I guess. (Robert and Kumar laughing) >> That's true. >> Absolutely. >> So, where does a customer start? I mean, what, what are the engagements like? What's the starting point? 
>> HPE has been one of the most trusted and robust suppliers for many, many years, and we have a phenomenal workforce, including a world-leading support organization, so there are many places to start. One is obviously the services available on GreenLake, as we just talked about, where customers can start on a pay-as-you-go basis. There are many customers, some from the early days of BlueData and MapR, who are already running the software and simply build on it as they move to the next version. You can also start simply with the container platform, or with the data fabric and separated storage and compute, and begin implementing analytics right away. And finally, because HPE is a large company with financing and services arms, it is very easy for customers to get support for their day-to-day operations. >> Thank you for watching, everybody. It's Dave Vellante for theCUBE. Keep it right there for more great content from Ezmeral.
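For readers who want to picture what "arriving at the Kubernetes API" means in practice, here is a minimal sketch using the open-source Kubernetes Python client against any conformant cluster endpoint; the namespace, image name, replica count, and resource sizes are illustrative only and not Ezmeral-specific.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig; inside a cluster you would
# use config.load_incluster_config() instead.
config.load_kube_config()

# A single containerized analytics worker with explicit resource requests,
# which is what lets the scheduler bin-pack workloads tightly on shared nodes.
container = client.V1Container(
    name="spark-worker",
    image="registry.example.com/spark-worker:3.0",   # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "2", "memory": "4Gi"},
        limits={"cpu": "2", "memory": "4Gi"},
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="spark-worker"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "spark-worker"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "spark-worker"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Because this goes through the standard apps/v1 API, the same call works
# against any conformant Kubernetes cluster, on-prem or in a public cloud.
client.AppsV1Api().create_namespaced_deployment(namespace="analytics", body=deployment)
```

The portability point made above comes down to that last call: nothing in the request is tied to where the cluster happens to run.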

Published Date : Mar 10 2021

SUMMARY :

Dave Vellante talks with Kumar Sreekanti, CTO and head of software at HPE, and Robert Christiansen, VP of strategy in the office of the CTO, about HPE Ezmeral, the container and data platform for modernizing applications on-prem and in the cloud. They cover the 90-plus percent utilization Ezmeral can drive for AI/ML workloads, the integrated open-source stack built around the Kubernetes API, availability as licensed software or as a service on GreenLake, and use cases such as the Splunk operator, Spark workloads, ML Ops pipelines, and the Ezmeral data fabric.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Kumar | PERSON | 0.99+
Robert | PERSON | 0.99+
90 | QUANTITY | 0.99+
Dave Vellante | PERSON | 0.99+
Robert Christiansen | PERSON | 0.99+
Kumar Sreekanti | PERSON | 0.99+
Splunk | ORGANIZATION | 0.99+
Ezmeral | PERSON | 0.99+
95% | QUANTITY | 0.99+
Dave | PERSON | 0.99+
HPE | ORGANIZATION | 0.99+
98% | QUANTITY | 0.99+
Hewlett Packard Enterprise | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
IKEA | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
MAPR | ORGANIZATION | 0.99+
one customer | QUANTITY | 0.99+
BlueData | ORGANIZATION | 0.99+
90 plus percent | QUANTITY | 0.99+
both | QUANTITY | 0.99+
each | QUANTITY | 0.99+
Nimble | ORGANIZATION | 0.99+
second side | QUANTITY | 0.98+
one | QUANTITY | 0.98+
GreenLake | ORGANIZATION | 0.98+
today | DATE | 0.98+
Ezmeral | ORGANIZATION | 0.97+
Emerald | PERSON | 0.97+
HPE Ezmeral | ORGANIZATION | 0.97+
three unique characteristics | QUANTITY | 0.96+
92 | QUANTITY | 0.95+
one brand | QUANTITY | 0.94+
Number one | QUANTITY | 0.94+
single server | QUANTITY | 0.93+
Spanish | OTHER | 0.92+
three things | QUANTITY | 0.92+
nine | QUANTITY | 0.9+
18 con | QUANTITY | 0.89+
number two | QUANTITY | 0.88+
Kubernetes | TITLE | 0.86+
93% | QUANTITY | 0.86+
Kubernetes | ORGANIZATION | 0.85+
Number two | QUANTITY | 0.83+
second | QUANTITY | 0.8+
COVID | OTHER | 0.79+
Ezmeral | TITLE | 0.77+
couple | QUANTITY | 0.75+
three fundamental things | QUANTITY | 0.75+
Kubernete | TITLE | 0.73+
GreenLake | TITLE | 0.7+

Kirk Borne, Booz Allen | HPE Ezmeral Day 2021


 

>>Okay. Getting data right is one of the top priorities for organizations executing a digital strategy, so right now we're going to dig into the challenges customers face when trying to deploy enterprise-wide data strategies. And with me to unpack this topic is Kirk Borne, principal data scientist and executive advisor at Booz Allen Hamilton. Kirk, great to see you. Thank you, sir, for coming on the program. >>Great to be here, Dave. >>So, enterprise-scale data science and engineering initiatives are nontrivial. What do you see as some of the challenges in scaling data science and data engineering operations? >>Well, one of the first challenges is just getting it out of the sandbox, because so many organizations say, let's do cool things with data, but how do you take it out of that play phase into an operational phase? Being able to do that is one of the biggest challenges. Then being able to enable it for many different use cases creates an enormous challenge: do you replicate the technology and the team for each individual use case, or can you unify teams and technologies to satisfy all possible use cases? Those are really big challenges for companies and organizations everywhere to think about. >>What about the idea of industrializing those data operations? What does that mean to you? Does it have a security connotation? A compliance one? How do you think about it? >>It's actually all of those. Industrialized, to me, means how do you not make it a one-off, but make it a reproducible, solid, risk-compliant system that can be reproduced many times, using the same infrastructure and the same analytic tools and techniques, across many different use cases, so we don't have to reinvent the wheel, or reinvent the car, so to speak. Every time you need a different type of vehicle, you build a car or a truck or a race car, but there are fundamental principles common to all of them, and that's where the industrialization is. It includes security and compliance with regulations, but it also means being able to scale it out to new opportunities beyond the ones you dreamed of when you first invented the thing. >>You know, data by its very nature, as you well know, is distributed, but you've been at this a while; for years we've been trying to shove everything into a monolithic architecture and harden infrastructure around it, and in many organizations that has become a blocker to actually getting stuff done. So how are you seeing the edge emerge? How do you think about the edge, how do you see it evolving, and how do you think customers should be dealing with edge data? >>Well, it's really kind of interesting. I spent many years at NASA working on data systems, and back in those days the idea was that you would put all the data in a big data center, and then individual scientists would retrieve that data and do their analysis on their local computer. You might call that edge analytics of a sort, because they were doing analytics at their home computer, but that's not what edge means. Edge means actually doing the analytics, the insight discovery, at the point of data collection, and that's real-time business decision making.
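As a purely illustrative sketch of that point, assuming a hypothetical stream of camera frames and placeholder functions (none of this comes from an actual autonomous-driving or HPE system), an edge control loop makes the safety decision locally and only queues raw data for the cloud afterwards:

```python
import time

LATENCY_BUDGET_S = 0.05  # a local decision has to land within ~50 ms to be useful

def detect_obstacle(frame: dict) -> bool:
    # Placeholder for an on-device model; a real system would run an
    # optimized object detector on local accelerator hardware.
    return frame.get("object_in_path", False)

def apply_brakes() -> None:
    print("braking now")

def enqueue_for_cloud_upload(frame: dict) -> None:
    # Batched, best-effort transfer for later model retraining; the safety
    # decision above never waits on this round trip.
    pass

def edge_control_loop(camera_frames) -> None:
    """Decide at the point of data collection instead of round-tripping to a cloud."""
    for frame in camera_frames:
        start = time.monotonic()
        if detect_obstacle(frame):
            apply_brakes()  # act immediately, locally
        assert time.monotonic() - start < LATENCY_BUDGET_S, "missed real-time budget"
        enqueue_for_cloud_upload(frame)

edge_control_loop([{"object_in_path": False}, {"object_in_path": True}])
```

The ordering is the whole argument: the decision happens before any network transfer is even attempted.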
You don't bring the data back and then try to figure out sometime in the future what to do. An autonomous vehicle is a good example of why you don't want to do that. If you collect data from all the cameras, radars, and lidars on a self-driving car, move that data back to a data cloud while the car is driving down the street, and a child walks in front of the car, you send all the data back, it computes, does some object recognition and pattern detection, and ten minutes later it sends a message to the car: hey, you need to put your brakes on. Well, that's a little late. So you need to make those insight discoveries, those pattern discoveries, and hence the proper decisions, from the patterns in the data at the point of data collection. That's data analytics at the edge. And yes, you can still bring the data back to a central or distributed cloud; it almost doesn't matter where the data sits, as long as any use case, any data scientist, any analytic team in the business can access it. Then what you really have is a data mesh or data fabric that makes the data accessible at the point you need it, whether that's at the edge or in some static, post-event processing. For example, typical quarterly business reporting takes a long look at your last three months of business; that's fine for that use case, but it doesn't work for a lot of other real-time analytic decision making. >>That's interesting. It sounds like you think of the edge not as a place but as the first opportunity, if you will, to process the data at low latency, where low latency is needed. Is that a good way to think about it? >>Absolutely. It's the latency that really matters. Sometimes we think we're going to solve that with things like 5G networks and sending data really fast across the wire, but the self-driving car is yet another example: if the network suddenly drops out, you still need to make the right decision with the network not even being there. >>That darn speed-of-light problem. So you use this term data mesh, or data fabric; double-click on that. What do you mean by it? >>Well, for me it's a unified way of thinking about all your data. When I think of mesh, I think of weaving on a loom, creating a blanket or a cloth, with all that cross-layering of different threads, so that different use cases, applications, and techniques can make use of this one fabric no matter where it is in the business, whether at the edge or back at the office. One unified fabric with a global namespace, so anyone can access the data they need uniformly, no matter where they're using it. It's a way of unifying all the data and use cases in a sort of virtual environment where you no longer need to worry about the actual file name or which server the thing is on; you just use it for whatever use case you have. And I think it helps enterprises reach a stage I like to call the self-driving enterprise, modeled after the self-driving car.
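As a minimal sketch of what that global namespace buys a data consumer, assuming a hypothetical fabric mount path and column names (a real deployment would expose its own paths and client libraries), the same logical path can be read wherever the job happens to run:

```python
import pandas as pd

# Hypothetical path under a data-fabric mount point; the fabric, not the
# application, resolves which site or server physically holds the data.
FABRIC_PATH = "/mnt/datafabric/telemetry/2021/03/sensors.parquet"

def daily_summary(path: str = FABRIC_PATH) -> pd.DataFrame:
    # The same call works unchanged at an edge site or in the core data center.
    df = pd.read_parquet(path)
    return df.groupby("sensor_id")["reading"].agg(["mean", "max"])

if __name__ == "__main__":
    print(daily_summary().head())
```

The design choice being described is that location resolution lives in the fabric layer rather than in every application.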
The self-driving enterprise, the business itself, needs to make decisions, often in real time. So you need predictive modeling and a cognitive awareness of the context of what's going on, and all these different data sources enable you to do those things with data. For example, any decision in a business, any decision in life, I would say, is a prediction: you say to yourself, if I do this, such-and-such will happen; if I do that, this other thing will happen. A decision is always based on a prediction about outcomes, and you want to optimize that outcome, so both predictive and prescriptive analytics need to happen in that same stream of data, not statically afterwards. The self-driving enterprise is enabled by having access to data wherever and whenever you need it, and that's what the data fabric and data mesh provide, at least in my opinion. >>Well, so, carrying the self-driving vehicle analogy further, you're abstracting that complexity away in a metadata layer that understands whether the data is on-prem, in the public cloud, across clouds, or at the edge, and where it makes the most sense to process it. Does it make sense to move it or not? Ideally I don't have to. Is that how you're thinking about it? Is that why we need this notion of a data fabric? >>Right. It abstracts away all the complexity that the IT aspects of the job would otherwise require. Not every person in the business is going to be familiar with the servers, the access protocols, and all the IT-related details, so you abstract that away, and in some sense that's what containers do. Containers abstract away all the information about servers and connectivity protocols; you just want to deliver some data to an analytic module that delivers an insight or a prediction, and you don't need to think about all those other things. That abstraction really empowers the entire organization. You like to talk a lot about data democratization and analytics democratization; this really gives every person in the organization the power to do things without becoming an IT expert. >>So, the last question we have time for: it sounds like, Kirk, the next ten years of data are not going to be like the last ten; they'll be quite different. >>I think so. First of all, we're going to be focused way more on the why question: why are we doing this? The more data we collect, the more we need to know why we're collecting it. One of the phrases I've seen a lot in the past year, and which I think is going to grow in importance over the next ten years, is observability. Observability, to me, is not the same as monitoring. Some people say monitoring is what we do, and I like to say, yes, that's what you do, but why you do it is observability. You have to have a strategy: why am I collecting this data, why am I collecting it here, why am I collecting it at this time resolution? Focusing on those why questions lets you create targeted analytic solutions for all kinds of different business problems, and it really focuses you on small data. The latest Gartner data and analytics trends report said we're going to see a lot more focus on small data in the near future. >>Kirk Borne, you're a dot connector. Thanks so much for coming on theCUBE and being part of the program.
>>My pleasure. Mm mm.

Published Date : Mar 10 2021

SUMMARY :

Dave Vellante talks with Kirk Borne, principal data scientist and executive advisor at Booz Allen Hamilton, about enterprise-wide data strategies: getting data science out of the sandbox, industrializing data operations into reproducible, compliant pipelines, doing analytics at the edge where latency matters, unifying data with a data mesh or data fabric and a global namespace, the idea of the self-driving enterprise, and the growing importance of observability and small data.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave | PERSON | 0.99+
NASA | ORGANIZATION | 0.99+
Kirk | PERSON | 0.99+
one | QUANTITY | 0.99+
both | QUANTITY | 0.98+
Booz Allen Hamilton | PERSON | 0.98+
Gartner | ORGANIZATION | 0.98+
first opportunity | QUANTITY | 0.98+
first challenge | QUANTITY | 0.97+
each individual | QUANTITY | 0.96+
Double | QUANTITY | 0.91+
Kirk Borne | PERSON | 0.91+
first | QUANTITY | 0.91+
Booz Allen | PERSON | 0.89+
next 10 years | DATE | 0.86+
10 minutes later | DATE | 0.86+
past year | DATE | 0.85+
five G | ORGANIZATION | 0.83+
last 10 years | DATE | 0.83+
Cuban | PERSON | 0.82+
one fabric | QUANTITY | 0.76+
next 10 years | DATE | 0.75+
2021 | DATE | 0.75+
case | QUANTITY | 0.73+
One unified | QUANTITY | 0.71+
HPE Ezmeral Day | EVENT | 0.56+
years | QUANTITY | 0.54+
three months | QUANTITY | 0.53+
last | DATE | 0.38+