
A Day in the Life of an IT Admin | HPE Ezmeral Day 2021


 

>>Hi, everyone. Welcome to Ezmeral Day. My name is Yasmin Joffey. I'm the director of systems engineering for Ezmeral at HPE. Today we're joined by my colleague Don Wake, a technical marketing engineer, who will walk us through a day in the life of an IT administrator through the lens of the Ezmeral Container Platform. We'll be answering your questions in real time, so if you have any, please feel free to put them in the chat, and we should have some time at the end for live Q&A. Don, want to go ahead and kick us off?

>>All right, thanks a lot, Yasmin. My name is Don Wake. I'm the tech marketing guy. Welcome to Ezmeral Day, a day in the life of an IT admin, and happy St. Patrick's Day at the same time. I hope you're wearing green; virtual pinch if you're not (you'll have to look that one up if you don't know what I'm talking about). We're going to run through a few quick things, a discussion of modern business IT needs to set the stage, and then go right into a demo. So what is the need we're trying to fulfill with the Ezmeral Container Platform? It's all rooted in analytics. Modern businesses are driven by data. They are also application-centric, and the separation of applications and data, and the relationship between the two, has never been more important: applications are very data hungry.

>>These days they consume data in all new ways. The applications themselves are virtualized, containerized, and distributed everywhere, and optimizing every decision in every application has become a huge problem for every enterprise to tackle. Take data science as one big use case: it's really a team sport. Today I'm wearing the hat of the operations team, maybe a software engineer working on continuous integration and continuous delivery, integrating with source control, and I'm supporting the data scientists and data analysts. I also have some resource control: I can decide whether or not the data science team gets a particular cluster of compute and storage so they can do their work. So this is the solution I've been given as an IT admin, and that is the Ezmeral Container Platform.

>>Walking through this quickly: at the top, I'm trying, wherever possible, to stay out of these folks' way. The data engineers, data scientists, app developers, and DevOps people all have particular needs, and they can access their resources and spin up clusters, or just do their work in a Jupyter notebook, or run Spark or Kafka or any of the popular analytics platforms, through endpoints (web URLs) that we provide to them. It's self-service. On the back end, as the IT guy, I make sure the Kubernetes clusters are up and running, I assign particular access to particular roles, I make sure the data is well protected, and I connect everything up. I can import clusters from public clouds, and I can put my own clusters on premises if I want to.

>>And I can do all of this through one centralized control plane. Today I'm going to show you how I support some data scientists. One of our own people is actually giving a demo right now as well, called A Day in the Life of the Data Scientist.
>>He's on the opposite side, not caring about all the stuff I'm doing on the back end. He's training models, registering models, and working with data inside his Jupyter notebook, running inferences, running Postman scripts. And I'm in the background making sure he's got access to his cluster, his storage is protected, his training models are up, and he's got service endpoints connecting him to his source control and everything else he needs. He's got a taxi ride prediction model he's working on, with a Jupyter notebook and models. So why don't we get hands-on, and I'll jump right over to it.

>>Here is the Ezmeral Container Platform. This is the web UI, the interface into the container platform, our centralized control plane, and I'm using my Active Directory credentials to log in.

>>When I log in, I've been assigned a particular role with regard to how much of the resources I can access. In my case, I'm a site admin (you can see that in the upper right), and I have access to lots and lots of resources. The one I'm going to focus on today is a Kubernetes cluster. Let's say we have a new data scientist coming on board. I can give him his own resources so he can do whatever he wants, use some GPUs, and not affect other clusters. We have all these other clusters already created here; you can see this is a very busy production system, and they've got some dev clusters over here.

>>I see we have a production cluster. It needs to produce something for data scientists to use, it has to be well protected, and it should not be treated like a development resource. So under this production environment, I decided to create a new Kubernetes cluster, and literally I just push a button: create Kubernetes cluster. I'll show you some of the screens. This is a live environment, and all my hosts are in use right now, but normally I would go in here, give the cluster a name, select some hosts to use as the primary master controller and some workers, answer a few more questions, and once that's done I have created a whole new Kubernetes cluster that I can also create tenants from.

>>Tenants are really Kubernetes namespaces. So in addition to taking hosts and building Kubernetes clusters, I can go to an existing cluster and carve out a namespace from it. Looking at some of the clusters that were already created, here is an example of a tenant that I could have created from that production cluster. To do that, here in the namespaces I just hit create, and similar to how you create a cluster, you carve down from a given cluster (say, the production cluster), give it a name and a description, and I can even mark this one as an AI/ML project, which is really our ML Ops license. So at the end of the day I can say: OK, I'm going to create an ML Ops tenant from the cluster I created.

>>I've already created it for this demo, so I'm going to go into that Kubernetes namespace, which we also call a tenant.
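Under the hood, a tenant is essentially a Kubernetes namespace with access controls and quotas attached. Here is a minimal sketch of the carve-out the platform automates for you, using the official Kubernetes Python client; the namespace name and quota values are placeholders, not anything the platform actually generates:

```python
# Rough illustration of the namespace-plus-quota carve-out behind a tenant.
# The platform does all of this through the UI; names and values are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the production cluster
core = client.CoreV1Api()

# Create the namespace that backs the tenant.
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="mlops-tenant")))

# Attach a resource quota so one tenant can't starve the others.
core.create_namespaced_resource_quota(
    namespace="mlops-tenant",
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="mlops-tenant-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "50", "requests.memory": "128Gi"})))
```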
Multitenancy essentially means we're carving out resources so that somebody can be isolated from another environment. At this point I could also give access to this tenant, and only this tenant, to my data scientist. So the first thing I typically do is go in here and assign users. Right now it's just me, but if I wanted to give this to Terry, for example, I could go in here, find another user, and assign him from this list, as long as he's got the proper credentials. You can see all these other users have Active Directory credentials; when we created the cluster itself, we made sure it integrated with our Active Directory, so that only authorized users can get in.

>>Let's say the first thing I want to do is make sure that when Terry does his Jupyter notebook work, he's connected straight up to the GitHub repository. He gives me a link to GitHub and says: hey, this is all of the cluster work I've been doing; my source control is there, with my scripts, my Python code, my Jupyter notebooks. So I create a configuration: here's a Git repo, here's the link to it, here's his username, and here's a token, because this is a private repo and a token is the standard Git interface for that. And the cool thing is that afterward you can go in here and copy the authorization secret.

>>This gets into the Kubernetes world. If you want secure integration with things like your source control, or perhaps your Active Directory, that's all maintained in secrets. So I can take that secret, and when I create his notebook, put it right into the launch YAML and say: connect this Jupyter notebook up with this secret so he can log in. Once I've launched this Jupyter notebook, it's running within my Kubernetes tenant; it's really a pod now. If I want to, I can go right into a terminal for that Kubernetes tenant and use kubectl (this is standard, CNCF-certified Kubernetes): kubectl get pods. That tells me all the active pods and, within those pods, the containers I'm running.

>>I'm running quite a few pods and containers here in this AI/ML tenant, which is kind of cool. Also, if I want to, I can download the kubeconfig for kubectl and do something like this on my own system, where I'm more comfortable: kubectl get pods. This is running on my laptop; I just had to refresh my kubectl config and give it the IP address and authorization information to connect from my laptop to that endpoint. From a CI/CD perspective, an IT admin usually wants to use tools right on his own desktop.

>>So here I am back in my web browser, on the dashboard of this Kubernetes tenant, where I can see how it's doing. It looks like it's kind of busy. I can focus on a specific pod if I want to, and I happen to know this pod is my Jupyter notebook pod.
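The laptop workflow described above also works programmatically. Here is a small sketch using the official Kubernetes Python client, assuming you've downloaded the tenant kubeconfig from the platform UI; the file name and namespace are placeholders:

```python
# List the tenant's pods from a laptop, the equivalent of "kubectl get pods".
# Assumes a kubeconfig downloaded from the container platform UI (hypothetical name).
from kubernetes import client, config

config.load_kube_config(config_file="hcp-tenant-kubeconfig.yaml")
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="mlops-tenant").items:
    print(pod.metadata.name, pod.status.phase)
```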
>>Now I'll show how I enable my data scientist just by giving him a URL, what we call a notebook service endpoint, or notebook endpoint. I can click on this URL, or copy the link and email it to him and say: here's your Jupyter notebook, just log in with your credentials. I've already logged in, so here is his Jupyter notebook, and you can see he's connected directly to his GitHub repo. He's got all the files he needs to run his data science project, and from here we're really in the data scientist's realm.

>>He can see that he has access to centralized storage, and he can copy the files from his GitHub repo to that centralized storage. These commands are kind of cool: they're little Jupyter magic commands, and we've got some of our own that show the attachment to the cluster. If you run these commands, they're actually looking at the shared project repository managed by the container platform. Just to show you that again, I'll go back to the container platform; the data scientist could do the same thing from his notebook back on the platform. So here is this project repository, and this is the other big point. Putting on my storage admin hat: I've got this shared storage volume that is managed for me by the Ezmeral Data Fabric.

>>In here you can see that the data scientist was able to copy his code from his Git repo directly through the Jupyter notebook. He ran his Jupyter notebook and created this XGBoost model, and that file can then be registered in this AI/ML tenant: he can go in here and register his model. This is really where the data scientist can self-serve, kick off his notebooks, and even get a deployment endpoint so he can run inference against his model. Here again is another URL that you could take and put into, say, a Postman REST call and get answers. But let's say he's been doing all this work and I want to make sure his data is protected. How about creating a mirror?

>>To create a mirror of that data, I go over to the data fabric, which is embedded in a very special cluster called the Picasso cluster. It's a version of the Ezmeral Data Fabric that lets you launch what was formerly called MapR as a Kubernetes cluster. When you create this special cluster, every other cluster you create automatically gets things like the tenant storage I showed you for the shared workspace, managed by this data fabric. You're even given an endpoint into the data fabric so you can use all of its features. So I log in here, and now I'm at the data fabric web UI to do some data protection and mirroring.

>>Let's say I want to create a mirror of that tenant's volume. I forgot to note the name of the volume I'm playing with, so I'll go back to my AI/ML tenant, to the project repository I want to protect, and I see that the Ezmeral Data Fabric has created tenant-30 as the volume.
>>Back in the data fabric, I look for tenant-30. If I want to, I can go into tenant-30, look at the usage down here (I've used very little of the allocated storage), and now let's go ahead and create a volume to mirror it. It's a very simple web UI: I hit create volume, name it tenant-30-mirror, and select mirror volume. I want to use my Picasso cluster and tenant-30 as the source; that actually looks it up in the data fabric, so it knows exactly which volume I mean. I can give the mirror whatever name I want, along with this path here.

>>And that path is a whole other demo: this could be in Tokyo, this could be mirrored to all kinds of places all over the world, because this is truly a global namespace, which is a huge differentiator for us. In this case I'm creating a local mirror. Down here I can add auditing and encryption, set access control, and change permissions: full-service interactivity. And of course this is the web UI, but there are REST API interfaces as well. So that is pretty much the brunt of what I wanted to show you in the demo. We got hands-on, so I'll throw this slide up real quick and come back to Yasmin to see what questions have come in from anybody watching. Feel free to ask new ones.

>>Yeah, we've got a few questions, so we can take some time to answer a few. It does look like you can integrate or incorporate your existing GitHub to be able to extract shared code or repositories, correct?

>>Yes, that's built in, and it can be either GitHub or Bitbucket; it's a pretty standard interface. Just like you can clone a repo from any Git host into your local environment, we've integrated that directly into the GUI, so you can tell your AI/ML tenant, your Jupyter notebook: here's my GitHub repo, and when you open my notebook, connect me straight up. It saves you some steps, because Jupyter notebooks are designed to integrate with Git. So we have GitHub or Bitbucket integrated in as well.

>>Another question, around the file system: has the MapR file system that was carried over been modified in any way to run on top of Kubernetes?

>>What I showed here is the Kubernetes version of the MapR file system, the data fabric. It gives you a lot of the same features, but if you need to, perhaps because you have performance concerns, you can also deploy data fabric as a separate bare-metal instance. This is just one way to use it, integrated directly into Kubernetes; it really depends on the needs of the user. The data fabric has a lot of different capabilities, and this version has the core file system capabilities, like snapshots and mirrors, and of course it's striped across multiple disks and nodes. The MapR data fabric has been around for years, and it's designed for integration with these analytic workloads.
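As an aside on those REST interfaces: the mirror-volume workflow shown above can also be scripted. Here is a hedged sketch; the host, credentials, and parameter names are illustrative, patterned on the data fabric's volume-create options, so check your release's REST documentation for the exact endpoint and fields:

```python
# Hypothetical sketch: create the same mirror volume over the data fabric REST API
# instead of the web UI. Host, credentials, and parameter names are assumptions.
import requests

resp = requests.post(
    "https://datafabric.example.com:8443/rest/volume/create",
    params={
        "name": "tenant-30-mirror",
        "path": "/tenant-30-mirror",
        "type": "mirror",
        "source": "tenant-30@picasso",  # source volume @ source cluster
    },
    auth=("admin", "<password>"),
    verify=False,  # demo clusters often use self-signed certificates
)
resp.raise_for_status()
print(resp.json())
```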
>>Great. You showed us how to manage Kubernetes clusters through the Ezmeral Container Platform UI, but the question is: can you control who accesses each tenant, or namespace, that you've created? And can you impose resource limitations on each individual namespace through the UI?

>>That's a great question, and the answer to both is yes. As a site admin I have the authority to create clusters and go into any cluster I want, but typically, for the data scientist example I used, I would create a user for him. There are a couple of ways to create users, and it's all role-based access control: I can create a local user and have the container platform authenticate him, or I can integrate directly with Active Directory or LDAP, even including which groups he has access to. Then, in the site admin's user interface, I can say he gets access to this tenant and only this tenant. You also asked about limitations. When you create the tenant, you can create quotas to prevent that noisy-neighbor problem.

>>I didn't show the process of actually creating a tenant, but integral to that flow is: I've defined which cluster I want to use, and I define how much memory I want to use. There's a quota right there. You can say: how many CPUs am I taking from this pool? That's one of the cool things about the platform; it abstracts all of that away. You don't have to know exactly which host. You select specific hosts when you create the cluster, but once it's created, it's now just a big pool of resources. So you can say Bob over here is only going to get 50 of the hundred CPUs available, only so many gigabytes of memory, and only this much storage to consume. You can safely hand something off and know they're not going to take all the resources, especially the GPUs, which are expensive, where you want to make sure one person doesn't hog everything. So yes, absolutely, quotas are built in.

>>Fantastic. Well, I think we are out of time. We have a list of other questions, and we will absolutely reach out and get your questions answered for those of you who asked in the chat. Don, thank you very much, and thanks everyone else for joining. Don, will this recording be made available for those who couldn't make it today?

>>I believe so. Honestly, I'm not sure what the process is, but it's being recorded, so they must have done that for a reason.

>>Fantastic. Well, Don, thank you very much for your time, and thank you everyone else for joining.

Published Date : Mar 17 2021


A Day in the Life of Data with the HPE Ezmeral Data Fabric


 

>>Welcome, everyone, to A Day in the Life of Data with the HPE Ezmeral Data Fabric. This session is being recorded and will be available for replay later, when you want to come back and view it again. Feel free to add any questions you have into the chat; Chad and I will be more than willing to answer them. And now let me turn it over to Jimmy Bates.

>>Thanks. Let me go ahead and share my screen here and we'll get started.

>>Hey, everyone. Once again, my name is Jimmy Bates. I'm a director of solutions architecture for HPE Ezmeral in the Americas. Today I'd like to walk you through a journey: how our everyday life is evolving, how everything about our world continues to grow more connected, and how here at HPE we support the data that represents that digital evolution for our customers with the HPE Ezmeral Data Fabric. To start, let's define the term data. Data can be simplified to a record of life's events. Whether it's personal, professional, or mechanical in nature, data is just records that represent and describe what has happened, what is happening, or what we think will happen. And it turns out that the more complete a record we have of these events, the easier it is to figure out what comes next.

>>I like to refer to that as the omnipotence protocol. Let's look at this from the personal perspective of two very different people. Let me introduce you to James. He's a native citizen of the digital world and has been a career professional in the IT world for years. He's always on, always connected. He loves to get all the information he needs on a smartphone. He works constantly with analytics: he predicts what his customers need, what they want, where they are, and how best to reach them. He has fully embraced the use of data in his life. And this is Suska, a bit of an opposite to James. She has not yet immigrated to our digital world. She's been dealing with the changes that are prevalent in our times, and she has started a new business that gives her customers the option of expressing their personalities in the masks they wear. She wants her customers to be able to upload images, logos, and designs so she can deliver customized masks that brighten their interactions with others while keeping them safe as they go about their day. But she needs a crash course in the digital journey. Like most of us, she has recently transitioned from an office culture to a work-from-home culture, and she wants to continue to grow that revenue venture on the side.

>>At the core of these personalities is a journey that represents a common challenge we're all facing today. Our world has been steadily shrinking as our ability to reach out to one another has steadily increased. We're all on that journey together: to know more about what is happening, to be connected to what our business is doing, to be instantly responsive to our customers' needs, and to deliver personalized service to every individual.
And at Ezmeral we see this across every industry: the challenge of providing tailored experiences to potential customers in a connected world; of providing constant information on the deliveries we've requested, or an easier commute to our destination; of adjusting inventories to just-in-time arrival for our fabrication; of identifying quality issues in real time to alter the production of each product so it's tailored to the request of the end user; of delivering energy in smarter, more efficient ways, without injury and while protecting the environment; and of identifying emerging medical threats and delivering personalized treatments safely.

>>At the core of all of these changes, across all of these industries, is data. If you look at the major technology trends, they've been evolving down this path for some time now. We're well into our cloud journey. The mobile platform world is now just part of our core strategies. IoT is feeding constant streams of data, often over those mobile platforms, and the edge is increasingly just part of the core. All of this, combined with the massive amounts of data becoming available, is driving autonomous solutions with machine learning and AI. But this is just one aspect of the data journey we're on; for success, it has to be paired with action.

>>When you look at James and Suska, you can see how investments in those actions help them realize their goals. Services efforts are focused on delivering new data-driven applications in new ways that are smaller in nature and rapidly iterated, to respond to the digital needs of our new world; containerization to deploy and manage those apps anywhere in our connected world; security; a real-time streaming architecture from the beginning, to allow continual interaction with changing customer demands; and all of this, especially in our current environment, while running cost-reduction initiatives. This is the world our solutions must live in.

>>With that framework in mind, I'd like to take the remainder of our time and walk through some of the use cases where we at HPE have helped organizations through this journey with the HPE Ezmeral Data Fabric.

>>Let's start with what's happening in the mobile world. The HPE Ezmeral Data Fabric is being used by a number of companies to provide infinitely personalized experiences. It could be James, it could be Suska, it could be anyone who opens up their smartphone in the morning, quickly checking what's transpiring in the world with a selection of curated, relevant articles, images, and videos provided by data-driven algorithmic workloads. All that data, the logs, the recommendations, and the delivery of those recommendations are handled by a variety of companies using HPE Ezmeral software to provide a very personalized experience for users. In addition, other companies monitor the service quality of those mobile devices to ensure optimized connectivity as users move throughout their day.
The same is true for digital communication, the video communication we're doing right now, especially in these days when it's our primary method of connecting as we deal with limited physical engagement. There has been a clear spike in the usage of these services, and HPE Ezmeral is helping a number of these companies deliver real-time telemetry analysis: predicting demand, monitoring latency and user experience, analyzing in real time, and responding with autonomous adjustments to maintain a pleasant experience for all participants involved.

>>Another area where the HPE Ezmeral Data Fabric is playing a crucial role is the daily experience inside our automobiles. We invest a lot of ourselves in our cars, and we expect tailored experiences that help us stay safe and connected as we move from one destination to another. In the areas of autonomous driving and the connected car, a number of major car companies are using our data fabric to take autonomous driving to the next level: effectively collecting all the data from sensors and cameras and feeding it back into a global data fabric, so that the engineers who develop the cars can train the next generation of driving algorithms and make our driving experience safer and more autonomous going forward.

>>Now let's take a look at a different mode of travel. The airline industry is being impacted very differently today than the car companies. With our software we help airlines, travel agencies, and even us as consumers deal with pricing calculations and challenges. Around air traffic services, we deliver services for route predictions, on-time arrivals, weather patterns, and tagging and tracking luggage. We help people with flight connections and with figuring out the best options for their travel. We collect mountains of data and secure it in a global data fabric so it can be provided back in analyzed form. With it, this stressed industry can obtain some very interesting insights and provide competitive offerings and better services to us as travelers.

>>This is also true for powering biometrics at scale. We work with the biggest biometrics databases in the world, providing the back end for an enormous biometric authentication pursuit. To give you a rough idea: biometric authentication is done with a number of different data points, from fingerprints to iris scans to numerous facial features. All of these data points are captured for every individual and uploaded into the database, so that when a user requests services, their biometrics can be pulled and validated in seconds. From a scale perspective, they're onboarding a million people a day, more than 200 million a year, with one hundred percent business continuity and the option to multi-master the global data fabric as needed, ensuring that users have no issues securely accessing their pension payouts, medical services, or whatever other services they may be guaranteed.

>>Pivoting to a very different industry: even agriculture is being impacted in digital ways. Using the HPE Ezmeral Data Fabric, we help farmers become more digital. We help them predict weather patterns and optimize seed production; we even help seed producers create custom seed for very specific weather and ground conditions.
We combine all of these things to help optimize production and ensure we can feed future generations. In some cases, all of this data collected at the edge can be provided back to insurance companies to help farmers file claims when micro weather patterns affect them in negative ways. We all benefit from optimized farming, and the HPE Ezmeral Data Fabric is there to assist in that journey: we provide the framework and the workload guidance to collect the relevant data, analyze it, and optimize food production. Our customers demonstrate that the agricultural industry is most definitely immigrating to our digital world.

>>Now that we've got the food, we need to ship it, along with everything else, all over the world. Ezmeral software can be found in action in many of the largest logistics companies in the world. Just tracking things with greater efficiency can lead to astounding insights. What flights and ships did the package take? What hands held it along its journey? What weather conditions did it encounter? What customs offices did it go through, and how much of it is requested and being delivered? This, along with hundreds of other telemetry points, can be used to provide very accurate trade and economic predictions about what's going on with trade in the world. These data sets are being used very intensively to understand economic conditions and plan for the consequences of future events. We also help answer more basic questions about shipping containers, like: where is my container located? Is my container still on the correct ship? Surprisingly, this helps cut down on those pesky little events like lost containers.

>>It's astounding how much data is in DNA, and it's not just the base pairs; it's the never-ending patterns found within other patterns. None of it can be fully understood unless the micro is maintained in context with the macro: you can't really understand the small patterns unless you maintain an overall understanding of the entire DNA structure. To help, the HPE Ezmeral Data Fabric can be found across every aspect of the medical field. Most recently it was there providing the software framework to collect genomic sequencing data, landing it in the data fabric and empowering connected availability for analysis, to predict and find patterns of significance and to shorten the effort it takes to identify potential triggers and make things like vaccines become available in record time.
Um, I'd like to thank everyone, uh, for the time that you've given us today. And I'd like to turn it back over and open up the floor for questions at this time, >>Jimmy, here's a question. What are the ways consumers can get started with HPS >>The fabric? Well, um, uh, there's several ways to get started, right? We, we, uh, first off we have software available that you can download that there's extensive documentation and use cases posted on our website. Um, uh, we have services that we offer, like, um, assessment services that can come in and help you assess the, the data challenges that you're having, whether you're, you're just dealing with a scale issue, a security issue, or trying to migrate to a more containerized approach. We have a services to help you come in, assess that aspect. Um, we have a getting started bundles, um, and we have, um, so there's all kinds of services that, that help you get started on your journey. So what >>Does a typical first deployment look like? >>Well, that's, that's a very, very interesting question. Um, a typical first deployment, it really kind of varies depending on where you're at in the material. Are you James? Are you, um, um, Cisco, right? It really depends on, on where you're at in your journey. Um, but a typical deployment, um, is, is, is involved. Uh, we, we like to come in, we we'd like to do workshops, really understand your specific challenges and problems so that we can determine what solutions are best for you. Um, that to take a look at when we kind of settle on that we, we, um, the first deployment, uh, is, um, there's typically, um, a deployment of, uh, a, uh, a service offering, um, w with a software to kind of get you started along the way we kind of bundle that aspect. Um, as you move forward, if you're more mature and you already have existing container solutions, you already have existing, large scale data aspects of it. Um, it's really about the specific use case of your current problem that you're dealing with. Um, every solution, um, is tailored towards the individual challenges and problems that, that each one of us are facing. >>I break, they mentioned as part of the asthma family. So how does data fabric pair with the other solutions within Israel? >>Well, so I like to say there's, um, there, there's, there's three main areas, um, from a software standpoint, um, for when you count some of our, um, offerings with the GreenLake solution, but there are, so there are really four main areas with ESMO. There's the data fabric offering, which is really focused on, on, on, on delivering that data at scale for AI ML workloads for big data workloads for containerized workloads. There is the ESMO container platform, which really solves a lot of, um, some of the same problems, but really focus more on a compute delivery, uh, and a hundred percent Kubernetes environment. We also have security offerings, um, which, which help you take in this containerized world, uh, that help you take the different aspects of, um, securing those applications. Um, so that when the application, the containerized applications move from one framework or one infrastructure from one to the other, it really helps those, the security go with those applications so that they can operate in a zero trust environment. And of course, all of this, uh, options of being available to you, where everything has a service, including the hardware through some of our GreenLake offerings. 
So those are kind of the areas that, uh, um, that pair with the HPE, um, data fabric, uh, when you look at the entire ESMO pro portfolio. >>Well, thanks, Jimmy really appreciate it. That's all the questions we have right now. So is there anything that you'd like to close with? >>Uh, you know, the, um, I I'm, I find it I'm very, uh, I'm honored to be here at HPE. Um, I, I really find it, it's amazing. Uh, as we work with our customers solving some really challenging problems that are core to their business, um, it's, it's always an interesting, um, interesting, um, day in the office because, uh, every problem is different because every problem is tailored to the specific challenges that our customers face. Um, while they're all will well, we will, what we went over today is a lot of the general areas and the general concepts that we're all on together in a journey, but the devil's always in the details. It's about understanding the specific challenges in the organization and, and as moral software is designed to help adapt, um, and, and empower your growth in your, in your company. So that you're focused on your business, in the complexity of delivering services across this connected world. That's what as will takes off your plate so that you don't have to worry about that. It just works, and you can focus on the things that impact your business more directly. >>Okay. Well, we really thank everyone for coming today and hope you learned, uh, an idea about how data fabric can begin to help your business with it. All of a sudden analytics, thank you for coming. Thanks.

Published Date : Mar 17 2021


A Day in the Life of a Data Scientist


 

>>Hello, everyone. Welcome to the A Day in the Life of a Data Scientist talk. My name is Terry Chang. I'm a data scientist on the Ezmeral Container Platform team, and with me in the chat room, moderating the chat, I have Matt MCO as well as Doug Tackett. We're going to dive straight into what we can do with the Ezmeral Container Platform and how it can support the role of a data scientist.
So just to preface, before we go into the data as small container platform, where we're going to look at is a machine learning example, problem that is, uh, trying to predict how long a specific taxi ride will take. So with a Jupiter notebook, the data scientists can take all of this data. They can do their data manipulation, train a model on a specific set of features, such as the location of a taxi ride, the duration of a taxi ride, and then model it to trying to figure out, you know, what, what kind of prediction we can get on a future taxi ride. >>So that's the example that we will talk through today. I'm going to hop out of my slides and jump into my web browser. So let me zoom in on this. So here I have a Jupiter environment and, um, this is all running on the container platform. All I need is actually this link and I can access my environment. So as a data scientist, I can grab this link from my it admin or my system administrator. And I could quickly start iterating and, and start coding. So on the left-hand side of the Jupiter, we actually have a file directory structure. So this is already synced up to my get repository, which I will show in a little bit on the container platform so quickly I can pull any files that are on my get hub repository. I can even push with a button here, but I can, uh, open up this Python notebook. >>And with all this, uh, unique features of the Jupiter environment, I can start coding. So each of these cells can run Python code and in specific the container at the ESMO container platform team, we've actually built our own in-house lime magic commands. So these are unique commands, um, that we can use to interact with the underlying infrastructure of the container platform. So the first line magic command that I want to mention is this command called percent attachments. When I run this command, I'll actually get the available training clusters that I can send training jobs to. So this specific notebook, uh, it's pretty much been created for me to quickly iterate and develop a model very quickly. I don't have to use all the resources. I don't have to allocate a full set of GPU boxes onto my little Jupiter environment. So with the training cluster, I can attach these individual data science notebooks to those training clusters and the data scientists can actually utilize those resources as a shared environment. >>So the, essentially the shared large eight GPU box can actually be shared. They don't have to be allocated to a single data scientist moving on. We have another line magic command, it's called percent percent Python training. This is how we're going to utilize that training cluster. So I will prepare the cell percent percent with the name of the training cluster. And this is going to tell this notebook to send this entire training cell, to be trained on those resources on that training cluster. So the data scientists can quickly iterate through a model. They can then format that model and all that code into a large cell and send it off to that training cluster. So because of that training cluster is actually located somewhere else. It has no context of what has been done locally in this notebook. So we're going to have to do and copy everything into one large cell. >>So as you see here, I'm going to be importing some libraries and I'm in a, you know, start defining some helper functions. I'm going to read in my dataset and with the typical data science modeling life cycle, we're going to have to take in the data. 
Then we have to do some data pre-processing. Maybe the data scientist does this, maybe the data engineer does, but they have access to that data. Here I'm reading the data in from the project repository; I'll talk about this a little later, but all of the clusters within the container platform have access to a project repository that is set up using the underlying data fabric. Then I have some data preprocessing: I cleanse the data where something is missing or looks funky.

>>Maybe the data types aren't correct; that all happens here in these cells. Once that's done, I print out that the data is done cleaning, and I can start training my model. We split the dataset into a test/train split, so we have some data for training the model and some for testing it. I create my XGBoost object to do the training (XGBoost is a decision-tree-based machine learning algorithm), fit my data to it, and then do some prediction. In addition, I track some of the metrics and print them out. These are common metrics data scientists want to see when they train an algorithm.

>>They want to see whether the accuracy is improving, whether the loss is improving, the mean absolute error, things like that. At the end of this training job I save the model back into the project repository, which we'll have access to, and print the end time. I've already executed this cell, so you'll see all of the print statements here: importing the libraries, the training run, reading in data, et cetera, all printed from that training job. And to keep track of it, we get an output with a unique history URL.

>>When we send the training job to the training cluster, the cluster sends back a unique URL, and we use the last line magic I want to talk about, %logs. %logs parses that response from the training cluster, so we can track in real time what is happening in the training job. So, quickly, the data scientist has a sandbox environment available to them. They have access to their Git repository and to a project repository where they can read in their data and save the model: a very quick, interactive environment for the data scientist to do all of their work, all provisioned on the Ezmeral Container Platform and all abstracted away. The history URL itself is surfaced through the container platform.
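To make that flow concrete, here is a rough sketch of what such a training cell might contain. On the platform it would be prefixed with the %%python cell magic plus the training-cluster name; the file paths, column names, and model parameters below are placeholders, not the actual demo values:

```python
# Hypothetical training cell: read data from the shared project repository,
# cleanse it, train an XGBoost regressor, report a metric, and save the model.
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Read the dataset from the project repository (a data-fabric mount; path assumed).
df = pd.read_csv("/bd-fs-mnt/project_repo/data/taxi_rides.csv")

# Basic cleansing: drop missing values and obviously bad durations.
df = df.dropna()
df = df[df["trip_duration"] > 0]

features = ["pickup_longitude", "pickup_latitude",
            "dropoff_longitude", "dropoff_latitude", "passenger_count"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["trip_duration"], test_size=0.2, random_state=42)

# Train a gradient-boosted tree regressor and report the mean absolute error.
model = xgb.XGBRegressor(n_estimators=200, max_depth=6)
model.fit(X_train, y_train)
preds = model.predict(X_test)
print("MAE (seconds):", mean_absolute_error(y_test, preds))

# Save the trained model back to the project repository for registration.
model.save_model("/bd-fs-mnt/project_repo/models/taxi_xgb.json")
```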
I'm going to log in as my user, and this is going to bring me to the, uh, view of the, uh, Emma lops tenant within the container platform. So this is where everything has been set up for me, the data scientist doesn't have to see this if they don't need to, but what I'll walk through now is kind of the topics that I mentioned previously that we would go back into. So first is the project repository. So this project deposited comes with each tenant that is created on the platform. >>So this is a more, nothing more than a shared collaborative workspace environment in which data scientist or any data scientist who is allocated to this tenant. They have this politics client that can visually see all their data of all, all of their code. And this is actually taking a piece of the underlying data fabric and using that for your project depository. So you can see here, I have some code I can create and see my scoring script. I can see the models that have been created within this tenant. So it's pretty much a powerful tool in which you can store your code store any of your data and have the ability to read and write from any of your Jupiter environments or any of your created clusters within this tenant. So a very cool ad here in which you can, uh, quickly interact with your data. >>The next thing I want to show is the source control. So here is where you would plug in all of your information for your source control. And if I edit this, you guys will actually see all the information that I've passed in to configure the source control. So on the backend, the container platform will take these credentials and connect the Jupiter notebooks you create within this tenant to that get repository. So this is the information that I've passed in. If GitHub is not of interest, we also have support for bit bucket here as well. So next I want to show you guys that we do have these notebook environments. So, um, the notebook environment was created here and you can see that I have a notebook called Teri notebook, and this is all running on the Kubernetes environment within the container platform. So either the data scientists can come here and create their notebook or their project admin can create the notebook. >>And all you'd have to do is come here to this notebook end points. And this, the container platform will actually map the container platform to a specific port in which you can just give this link to the data scientists. And this link will actually bring them to their own Jupiter environment and they can start doing all of their model just as I showed in that previous Jupiter environment. Next I want to show the training cluster. This is the training cluster that was created in which I can attach my notebook to start utilizing those training clusters. And then the last thing I want to show is the model, the deployment cluster. So once that model has been saved, we have a model registry in which we can register the model into the platform. And then the last step is to create a deployment clusters. So here on my screen, I have a deployment cluster called taxi deployment. >>And then all these serving end points have been configured for me. And most importantly, this endpoint model. So the deployment cluster is actually a wrap the, uh, train model with the flask wrapper and add a rest endpoint to it so quickly. I can operationalize my model by taking this end point and creating a curl command, or even a post request. So here I have my trusty postman tool in which I can format a post request. 
So I've taken that endpoint from the container platform, and I've formatted my body right here. These are some of the features I want to send to that model: I want to know how long this specific taxi ride, at this location, at this time of day, would take. So I can go ahead and send that request, and quickly I will get an output of the ride.

>>The duration will be about 2,600 seconds. So we've walked through how a data scientist can quickly interact with their notebook and train their model. Then, coming into the platform, we saw the project repository and the source control, we can register the model within the platform, and then we can quickly operationalize that model with our deployment cluster and have our model up and running and available for inference. So that wraps up the demo. I'm going to pass it back to Doug and Matt and see if they want to come off mute and whether there are any questions. Matt, Doug, are you there? Okay.

>>Yeah. Hey, hey Terry, sorry. Just had some trouble getting off mute there. No, that was an excellent presentation. And I think there are generally some questions that come up when I talk to customers around how integrated into the Kubernetes ecosystem this capability is, and where Ezmeral stops and the open source technologies, like Kubeflow as an example, begin.

>>Yeah, sure, Matt. So this is kind of one layer up. We have our ML Ops tenant, and this is all running on a piece of a Kubernetes cluster. If I log back out and go into the site admin view, this is where you would see all the Kubernetes clusters being created, and it's actually all abstracted away from the data scientists. They don't have to know Kubernetes; they just interact with the platform if they want to. But here in the site admin view, I have this Kubernetes dashboard, and here on the left-hand side I have all my Kubernetes sections. If I just add some compute hosts, whether they're VMs or cloud compute hosts, like EC2 hosts, we can have these resources abstracted away from us to then create a Kubernetes cluster. So moving on down, I have created this Kubernetes cluster utilizing those resources.

>>If I go ahead and edit this cluster, you'll actually see that I have these hosts, with a simple click-and-drag method: I can move different hosts around to configure my Kubernetes cluster. Once my Kubernetes cluster is configured, I can then create a Kubernetes tenant, or in this case, a namespace. Once I have this namespace available, I can then go into that tenant, and as my user, I don't actually see that it is running on Kubernetes. In addition, with our ML Ops tenants, you have the ability to bootstrap Kubeflow. Kubeflow is an open source machine learning framework that runs on Kubernetes, and we have the ability to link that up as well. So, coming back to my ML Ops tenant, I can log in; what I showed is the Ezmeral container platform version of ML Ops, but you see here, we've also integrated Kubeflow. So, a nod to HPE's contribution to utilizing open source: it's actually all configured within our platform. So, hopefully,

>>Yeah, actually, Terry, can you hear me? It's Doug. So there were a couple of other questions that came in, actually, about Kubeflow. I wonder whether you could just comment on why we've chosen Kubeflow,
because I know there was a question about MLflow instead, and what the differences are between MLflow and Kubeflow.

>>Yeah, sure. So, just to reiterate, there are some questions about Kubeflow, and I'm just,

>>Yeah, so obviously one of the people watching saw the Kubeflow dashboard there, I guess, and couldn't help but get excited about it. But there was another question about MLflow versus Kubeflow and what the difference is between them.

>>Yeah. So Kubeflow is an open source framework that Google developed. It's a very powerful framework that comes with a lot of unique tools on Kubernetes. With Kubeflow, you have the ability to launch notebooks, and you can utilize different Kubernetes operators, like the TensorFlow and PyTorch operators. You can use some of the frameworks within Kubeflow to do training, like Kubeflow Pipelines, which lets you visually follow your training jobs within Kubeflow. It also has a plethora of different serving mechanisms, such as Seldon, for deploying your machine learning models; you have KFServing, you have TF Serving. So Kubeflow is a very powerful tool for data scientists to utilize if they want a full end-to-end open source stack and know how to use Kubernetes. It's just another way to do your machine learning model development. MLflow, on the other hand, is a different piece of the machine learning pipeline: MLflow mainly focuses on model experimentation and comparing different models during training, and it can be used together with Kubeflow (a minimal sketch of that experiment-tracking style follows at the end of this session).

>>They're complementary; Terry, I think that's what you're saying. Sorry, I know we are dramatically running out of time now. That was a really fantastic demo. Thank you very much, indeed.

>>Exactly. Thank you. So yeah, I think that wraps it up. One last thing I want to mention: there is this slide I want to show in case you have any other questions. You can visit hpe.com/ezmeral, the Ezmeral Container Platform page, if you have any questions. And that wraps it up. So thank you guys.
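To close out the MLflow-versus-Kubeflow question above, here is a minimal sketch of the experiment-tracking role Terry describes for MLflow. It assumes the open source mlflow package; the run name, parameters, and metric values are hypothetical.

# Minimal sketch of MLflow-style experiment tracking, as contrasted with
# Kubeflow's broader train-and-serve tooling. Values are hypothetical.
import mlflow

with mlflow.start_run(run_name="xgb-taxi-baseline"):
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mae", 412.7)   # hypothetical result
    mlflow.log_metric("rmse", 603.2)  # hypothetical result

# Runs logged this way can be compared side by side in the MLflow UI,
# which is the model-comparison role contrasted with Kubeflow above.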

Published Date : Mar 17 2021


3 Quick Wins That Drive Big Gains in Enterprise Workloads


 

>>Hey, welcome to Analytics Unleashed. I'm Robert Christensen, your host. Thank you for joining us. Today we have three quick wins that drive big gains in enterprise workloads, and today we have Olaf from Ericsson, John from ORock, and Dragan from DXC. Welcome, and thank you for joining me, gentlemen.

>>Yeah, good to be here. Thank you.

>>Good to have you. Hey, Olaf, let's start off with you. What big problems are you trying to solve today for those quick wins? What's top of mind?

>>Yeah, when we started looking into microservices for our financial platform, we immediately saw the challenges that we had, and we wanted a strong partner. We had a good relationship with HPE before, so we turned to HPE, because we know they have the technical support that we need, the capabilities that we need in our platform to fulfill our requirements, and also the reliability that we would need.

>>So tell me, and I think this is really important: you guys are heading into the digital wallet space, is that correct?

>>Yeah, that's correct. We are in a financial platform, so we are spanning across the world and delivering our financial services to our end customers.

>>That's not classically what you hear about Ericsson diving into. What really started you down that path, and specifically these big wins around digitization?

>>What we could see early on was that we have mobile networks, so we have a strong user base within those networks. We started in the emerging markets, where normally there are a lot of unbanked people, and those were the people we wanted to target: so that instead of going down and using cash, for example, to buy your fruit or pay your electricity bill, you could use your mobile wallet. That's how it all started, and now we're also turning to the developed markets, like the Western parts of the world.

>>That's fantastic. Hey, I want to talk to John here. John's with ORock, and he's one of the early adopters of the container platform for the federal government here in the United States. Tell us a little bit about that program and what's going on with it, John.

>>Yeah, sure, absolutely, appreciate it. With ORock, we developed one of the first FedRAMP-authorized container platforms, which runs in our Moderate, and soon to be High, cloud. Building on the Ezmeral platform gave us the capability of offering customers, both commercial and federal, the capability and flexibility of running their workloads in an as-a-service model that they can customize. Typically, customers have to either build it internally or, if they go to the cloud, take what resources are available and tweak their designs to what they need. In this architecture, built on open source and on our own infrastructure, we offer very low-cost, zero-egress capability, but also the workload processing they would need to run data analytics, machine learning, and other types of high-performance processing that they typically need as we move forward in this computing age.

>>So John, you touched on a topic that I think is really critical, and you mentioned open source. Why is open source a key aspect of this transformation that we're seeing coming up in the next decade?

>>Yeah, sure. With open source, we shifted early on as a company to open source only, to offer flexibility; we didn't want to be tied to one particular platform to operate within. So we built the cloud infrastructure on open source as an open architecture that we can scale and grow within. Because of that, we were one of the very first FedRAMP authorizations built on open source, not on a specific platform. What we've seen from that is the increased performance capability, as well as the flexibility to add additional components that you typically don't get on other platforms. So it was a good move that we made, and one that the customer will definitely benefit from.

>>That's huge, actually, because performance leads to better cost, and better cost leads to better performance. I'm just super happy with all the advanced work you all are doing there; it's fantastic. And Dragan, you're in a space that I think is really interesting: you're dealing with what everybody likes to talk about, autonomous vehicles. You're working with automobile manufacturers, and you're dealing with data at a scale that is unprecedented. Can you open that door for us and talk about these big wins that you're trying to get over the line with these enterprises?

>>Yeah, absolutely, and thank you, Robert. We approached leveraging Ezmeral from the data fabric angle: we have fully integrated the Ezmeral Data Fabric into our Robotic Drive solution. Robotic Drive is a game changer, as you've mentioned, in accelerating the development of autonomous vehicles. It's an end-to-end, hyperscale machine learning and AI platform, based, as I mentioned, on the Ezmeral Data Fabric, and it's used by some of the largest manufacturers in the world for the development of their autonomous driving algorithms. We're all in technology following the same news and research across the globe in this area, so we're pretty proud to be one of the leaders in providing hyperscale machine learning platforms for car manufacturers. Some of them I cannot talk about, but BMW is one of the current manufacturers we provide these types of solutions to, and they have publicly spoken about their D3 platform, the data-driven development platform. Just to give you an idea of the scale, as Robert mentioned: daily, we collect over 1.5 petabytes of raw data.

>>Did you say daily?

>>Daily. The storage capacity is over 250 petabytes and growing. There are over 100,000 cores and over 200 GPUs on the compute side. Over 50 petabytes of data is delivered every two weeks into hardware-in-the-loop rigs for testing, and daily we have thousands of engineers and data scientists accessing the relevant data and developing machine learning models. Part of it is simulation: simulation cuts the cost, as well as the time, of developing the autonomous driving algorithms, and the simulations take up probably 75 percent of the research being done on this platform.

>>That's amazing, Dragan. The more I get involved with that, and I've been part of these conversations with a number of the folks involved, the more my computer-science geekiness comes out; my little propeller head starts spinning, and it just blows my mind. So I'm going to pivot back over to Olaf. Olaf, you're talking about a global network of financial services, correct?

>>Correct.

>>And the flow of transactional data, typically non-relational transactional data, the actual transactions going through: you have issues of potential fraud, you have issues of safety, and you have multi-geographic, regional problems with data and data privacy. How are you guys addressing that today?

>>So to answer that question: today we have managed to solve that using the container platform together with the data fabric. But as you say, we need to span different regions, and we need to keep the data as secure as possible, because we have a lot of legal aspects to look into: if our data disappears, your money also disappears. So security and the reliability of the platform are really important areas for us, and that's also why we went this way, to make sure we have a strong partner that could help us with this. Just looking at where we are deployed, in more than 23 countries today, we are processing more than 900 million US dollars per day in our systems currently. So it is a lot of money passing through, and you need to take security very seriously; it's a very important point.

>>It really is, it really is. And John, you're obviously dealing with a lot of folks that have three-letter acronyms, the government agencies, and they range in varying degrees of security. When you say FedRAMP, could you articulate why the Ezmeral platform was something you selected for that FedRAMP-compliant container platform? Because I think that speaks to the industrial strength of what we're talking about.

>>Yeah, it all comes down to being able to offer a product that's secure, that the customers can trust. FedRAMP has very stringent security requirements, with monthly POA&Ms, which are performance reviews and updates that need to be done, if not on a daily basis, then on a monthly basis. So there's a lot that goes on behind the scenes for the customers. By selecting the HPE Ezmeral platform for containers, one of the key strengths we looked at was the Ezmeral data fabric, and it's all about the data: securing the data, moving the data, transferring the data. From a customer's perspective, they want to operate in an environment they can trust, no different from being able to turn on their lights or knowing there's water in their utilities. Containers with the Ezmeral platform, built on ORock's infrastructure, give that capability. FedRAMP provides the security tied to the platform that we follow; it's government-guided, and it includes hundreds of controls that customers typically don't have the time or the capability to address. So our commercial customers benefit, and our federal customers are able to follow along and check the box to meet those requirements. The container platform gives us a capability where we're able to move files through the Ezmeral data fabric and then run the workloads in the containers themselves, with isolation. And the security element of FedRAMPing Ezmeral gave us the capability to get that environment FedRAMP-authorized, so that customers benefit from the security, have confidence in running their workloads with their data, and are able to focus on their core job at hand and not worry about their infrastructure.

>>That's the fundamental requirement, isn't it: that isolation between compute and storage, and going up a layer in a way that provides them a set of services they can, I wouldn't say set and forget, but really have confidence that what they're getting is the best performance for the dollars they're spending. John, my hat's off to the work that you all do there.

>>Thank you, we appreciate it.

>>Yeah. And Dragan, I want to pivot a little bit here, because you are primarily the operator of what I consider one of the largest data fabrics on the planet, for that matter. I just want to talk a little bit about the openness of our architecture, all the multiple protocols that we support, which allow people who may have selected a different set of application deployment models and virtualization models to plug into the data fabric. Can you talk a little bit about that?

>>Yeah. In my mind, to operate such a data fabric at scale, there were three key elements we were looking for, and we found them in the Ezmeral Data Fabric. The first one was speed, cost, and scalability. The second one was the globally distributed data lake, the ability to distribute data globally. And the third was certainly the strength of our partnership with HPE. If you look at the Ezmeral Data Fabric, it's fast, it's cost-effective, and it's certainly highly scalable, because, as you just mentioned, we stretch the capabilities of the data fabric to hundreds of petabytes and over a million data points, if you will. What was important for us was that the Ezmeral Data Fabric eliminates the need for multiple vendor solutions that would otherwise be required, because it provides an integrated file system, database, data lake, and the data management on top of it (a small sketch of that multi-protocol access pattern follows the session). Usually you would need to incorporate multiple tools from different vendors. And the file system itself is so important when you're working at a scale like this; honestly, in our research, there are maybe three file systems in the world that can support this kind of size of data fabric. The distributed data lake was also important to us, and the reason is that these large car manufacturers have test vehicles all around the world; they're not just testing locally around their own data centers. So collecting the data, this 1.5 petabytes a day in the BMW example, is really challenging unless you have the ability to leverage the data in a distributed-data-lake fashion, where data can reside in different data centers globally, or even on premises and in cloud environments. That became very important later, because a lot of these car manufacturers have OEMs that would like to get either portions of the data or access to the data in different environments, not necessarily in the manufacturer's data center. And truly, to build something at this scale, you need a strong partner, and we certainly had that in HPE. We got comprehensive support for the software, but more importantly, a partner that clearly understood the criticality of the data fabric and the need for very fast response to our clients. Jointly, I think we met all the challenges, and in so doing, I think we made the Ezmeral Data Fabric a much better and stronger product over the last few years.

>>That's fantastic. Thank you, Dragan, I appreciate it. Hey, so as we wrap up here, any last words, Olaf, that you want to share with us?

>>We're looking forward, from our perspective, to helping out with the COVID-19 situation that we have: enabling people to still be in the market without actually touching each other, maybe leaving the physical market and staying at home, et cetera, while still doing those transactions.

>>That's great, thank you. John, a last comment?

>>Yeah, thanks. Look for a joint offering announcement coming up between HPE and ORock, where we're going to be offering sandbox as a service for data analytics and machine learning, where people can actually test-drive the environment as a service, and if they like it, they can move into a production environment. So stay tuned for that.

>>That's great, John, thank you for that. And hey, Dragan, last words?

>>Yeah. We're pretty happy with what we have already done for car manufacturers, and we're taking this solution, in terms of the distributed data lake capabilities as well as the hyperscale machine learning and AI platform, to other industries, and we hope to do it jointly with you.

>>Well, we hope that you do it with us as well. So thank you very much, everybody; gentlemen, thank you so much for joining us, I appreciate it.

>>Thank you very much.

>>Hey, this is Robert Christensen with Analytics Unleashed. I want to thank all of our guests here today, and we'll catch you next time. Thank you for joining us.
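As a footnote to Dragan's point about multiple protocols against a single data fabric, here is a minimal sketch of the same file reached two ways: through an NFS/POSIX mount at a /mapr-style path, and through an S3-compatible object gateway. The mount path, endpoint, bucket name, and credentials are all hypothetical.

# Hypothetical sketch: one data-fabric file, two access protocols.
import pandas as pd
import boto3

# 1) File semantics: the fabric volume mounted like a local filesystem.
df = pd.read_csv("/mapr/prod-cluster/projects/drive/sensor_log.csv")

# 2) Object semantics: the same data through an S3-compatible gateway.
s3 = boto3.client(
    "s3",
    endpoint_url="https://fabric-gateway.example.com:9000",  # hypothetical
    aws_access_key_id="EXAMPLE",
    aws_secret_access_key="EXAMPLE",
)
s3.download_file("drive-data", "sensor_log.csv", "/tmp/sensor_log.csv")

The design point is that applications written for file semantics and applications written for object APIs can share one copy of the data instead of maintaining parallel stores.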

Published Date : Mar 17 2021
