
Steven Huels | KubeCon + CloudNativeCon NA 2021


 

(upbeat soft intro music)

>> Hey everyone, welcome back to theCube's live coverage from Los Angeles of KubeCon and CloudNativeCon 2021. Lisa Martin here with Dave Nicholson. Dave and I are pleased to welcome our next guest remotely. Steven Huels joins us, the senior director of Cloud Services at Red Hat. Steven, welcome to the program.

>> Steven: Thanks, Lisa. Good to be here with you and Dave.

>> Talk to me about where you're seeing traction from an AI/ML perspective. Where are you seeing that traction? What are you seeing?

>> It's a great starter question, right? AI/ML is really being employed everywhere, regardless of industry: financial services, telco, government, manufacturing, retail. Everyone at this point is finding a use for AI/ML. They're looking for ways to better take advantage of the data that they've been collecting for years. It wasn't all that long ago, when we were talking to customers about Kubernetes and containers, that AI/ML really wasn't a core topic where they were looking to use a Kubernetes platform for those types of workloads. But in the last couple of years, that's really skyrocketed. We're seeing a lot of interest from existing customers that are using Red Hat OpenShift, which is a Kubernetes-based platform, to take those AI/ML workloads from what they've traditionally been doing, experimentation, and really get them into production and start getting value out of them.

>> Is there a common theme? You mentioned a number of different verticals: telco, healthcare, financial services. Is there a common theme that you're seeing among these organizations across verticals?

>> There is. I mean, everyone has their own approach, like the type of technique that they're going to get the most value out of. But the common theme is really that everyone seems to have a really good handle on experimentation. They have a lot of very bright data scientists and model developers that are able to take their data and get value out of it. But where they're all looking for help is putting those models into production. So MLOps, right? How do I take what's been built on somebody's machine and put that into production in a repeatable way? And then once it's in production, how do I monitor it? What am I looking for as triggers to indicate that I need to retrain? And how do I iterate on this sequentially and rapidly, applying what would really be traditional DevOps software development life cycle methodologies to ML and AI models?

>> So Steve, we're joining you from KubeCon live at the moment. What's the connection with Kubernetes, and how does Kubernetes enable machine learning and artificial intelligence? How does it enable it, and what are some of the special considerations to keep in mind?

>> The immediate connection for Red Hat is that Red Hat OpenShift is basically enterprise-grade Kubernetes. And so the connection there is really how we're working with customers, and how customers in general are looking to take advantage of all the benefits you can get from the Kubernetes platform that they've been applying to their traditional software development over the years: the agility, the ability to scale up on demand, the ability to have shared resources and to make specialized hardware available to the individual communities. And they want to start applying those foundational elements to their AI/ML practices.
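To make the monitoring-and-retraining loop Steven describes a bit more concrete, here is a minimal, hypothetical sketch of one such trigger: track accuracy over a window of labeled production predictions and kick off retraining when it falls below an agreed threshold. The record type, window, and threshold are illustrative assumptions, not anything specific to Red Hat OpenShift.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PredictionRecord:
    """One production prediction paired with its eventual ground-truth label."""
    predicted: int
    actual: int


def recent_accuracy(records: List[PredictionRecord]) -> float:
    """Accuracy over a window of recent, labeled production predictions."""
    if not records:
        return 1.0
    correct = sum(1 for r in records if r.predicted == r.actual)
    return correct / len(records)


def needs_retraining(records: List[PredictionRecord], threshold: float = 0.9) -> bool:
    """Trigger retraining when live accuracy drops below the agreed threshold."""
    return recent_accuracy(records) < threshold


if __name__ == "__main__":
    window = [
        PredictionRecord(1, 1),
        PredictionRecord(0, 1),
        PredictionRecord(1, 1),
        PredictionRecord(0, 0),
    ]
    if needs_retraining(window, threshold=0.9):
        print("Accuracy below threshold -- kick off the retraining pipeline")
```

In practice a check like this would run as a scheduled step in the model pipeline, with the window size and threshold agreed between the data science and operations teams.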
A lot of data science work traditionally was done on high-powered, monolithic machines and systems. They weren't necessarily shared across development communities. So connecting something that was built by a data scientist to something that a software developer was then going to put into production was challenging. There wasn't a lot of repeatability in there, there wasn't a lot of scalability, there wasn't a lot of auditability, and these are all things that we know we need when talking about analytics and AI/ML. There's a lot of scrutiny put on the auditability of what you put into production, something that's making decisions that impact whether or not somebody gets a loan or is granted access to systems. And so the connection there is really around taking advantage of what has proven itself in Kubernetes to be a very effective development model, applying that to AI/ML, and getting the benefits of being able to put these things into production.

>> Dave: So Red Hat has been involved in enterprises for a long time. Are you seeing most of this, from a Kubernetes perspective, being net-new application environments, or are these extensions of what we would call legacy or traditional environments?

>> They tend to be net new, I guess. It's transitioned a little bit over time. When we first started talking to customers, there was a desire to try to do all of this in a single Kubernetes cluster, right? How can I take the same environment that had been doing our software development, beef it up a little bit, and have it apply to our data science environment? And over time Kubernetes advanced, right? So now you can actually add labels to different nodes and target workloads at specialized machinery and hardware accelerators. And so that has shifted now toward coming up with specialized data science environments, but still connecting the clusters, so that something that's built in that data science environment is deployed through a model pipeline into a software artifact that then makes its way into an application that goes live. And really, I think that's sensible, because we're constantly seeing a lot of evolution in the types of accelerators, the types of frameworks, the types of libraries that are being made available to data scientists. And so you want the ability to extend your data science cluster to take advantage of those things and to give data scientists access to those specialized environments, so they can try things out, determine if there's a better way to do what they're doing, and then, when they find out there is, be able to rapidly roll that into your production environment.
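As a rough illustration of the node-labeling approach Steven mentions, the sketch below uses the Kubernetes Python client to submit a training pod that is steered to accelerator nodes via a node selector and an explicit GPU resource limit. The label key, container image, and namespace are hypothetical placeholders, not part of any Red Hat product.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster

# A training pod steered to GPU nodes by a node label, and one that also
# requests a GPU explicitly so the scheduler only places it where one exists.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-fraud-model"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        # Hypothetical label the cluster admin applied to accelerator nodes,
        # e.g. `kubectl label node <node> workload-class=gpu`.
        node_selector={"workload-class": "gpu"},
        containers=[
            client.V1Container(
                name="trainer",
                image="quay.io/example/trainer:latest",  # placeholder image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "4", "memory": "16Gi"},
                    limits={"nvidia.com/gpu": "1"},  # one GPU via the device plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="data-science", body=pod)
```

Because the pod declares the resources it needs, the scheduler can only place it on nodes that actually expose a GPU, which is also the direction Steven points to later in the conversation: developers describe what they need and the platform matches it to available capacity.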
And when you look at systems that are used for customer interactions, whether they're deflecting customer cases or providing next-best-action type recommendations, AI/ML fits the bill there perfectly. So when they were looking to optimize, hey, where do we put our spend? What can help us accelerate and grow, even in this virtual world we're living in? AI/ML really floated to the top there. That's definitely a theme that we've seen.

>> Lisa: Is there a customer example that you could mention that really articulates that value?

>> You know, we've published one specifically around HCA Healthcare, and this had started actually before the pandemic, but I think it's especially applicable because of the nature of what a pandemic is. HCA was using AI/ML to essentially accelerate the diagnosis of sepsis, right? They were using it for disease diagnosis. That same type of diagnosis was being applied to looking at COVID cases as well. And there was one that we did in Canada, it's called "How's Your Flattening," which was basically being able to track and do some predictions around COVID cases in the Canadian provinces. And so that one's, I guess, kind of close to home, given the nature of the pandemic. But even within Red Hat, we started applying a lot more attention to how we could help with customer support cases, knowing that if folks were going to be out with any type of illness, we needed to be able to handle that case workload without negatively impacting work-life balance for other associates. So we looked at how we can apply AI/ML to help maintain and increase the quality of customer service we were providing.

>> It's a great use case. Did you have a keynote or a session here at KubeCon + CloudNativeCon?

>> I did, I did. And it really focused specifically on that whole MLOps and model ops pipeline. It was about evolving Kubernetes and embracing model ops, and it was for Kubernetes AI Day. I believe it aired on Wednesday of this week. Tuesday, maybe. It all kind of condenses in the virtual world.

>> Doesn't it? It does.

>> So one of the questions that Lisa and I have for folks where we sit here, I don't know, was it year seven or so of the dawn of Kubernetes, if I have that right? Where do you think we are in this wave of adoption? Coming from a Red Hat perspective, you have insight into what's been going on in enterprises for the last 20-plus years. Where are we in this wave?

>> That's a great question. It's sort of that cresting-wave analogy, right? When you get to the top of one wave, you notice the next wave is even bigger. I think we've certainly gotten to the point where organizations have accepted that Kubernetes is applicable across all the workloads that they're looking to put in production. Now the focus has shifted to optimizing those workloads, right? So what are the things that we need to run in our in-house data centers? What are the things that we need, or can benefit from, using commodity hardware from one of the hyperscalers? How do we connect those environments and more effectively target workloads? If I look at where things are going in the future: right now, we see a lot of things being targeted based on cluster. We say, hey, we have a data science cluster, it has characteristics because of X, Y, and Z, and we put all of our data science workloads into that cluster.
In the future, I think we want to see a more workload-specific type of categorization, so that we're able to match available hardware with workloads rather than targeting a workload at a specific cluster. So a developer or data scientist can say, hey, my particular algorithm here needs access to GPU acceleration and the following frameworks, and then the Kubernetes scheduler is able to determine, of the available environments, what's the capacity, what are the available resources, and match it up accordingly. So we get into a more dynamic environment where the developers, and those that are actually building on top of these platforms, have to know less and less about the clusters they're running on. They just have to know what types of resources they need access to.

>> Lisa: So sort of democratizing that. Steve, thank you for joining Dave and me on the program tonight, talking about the traction that you're seeing with AI/ML and Kubernetes as an enabler. We appreciate your time.

>> Thank you.

>> Thanks, Steve.

>> For Dave Nicholson, I'm Lisa Martin. You're watching theCube live from Los Angeles at KubeCon and CloudNativeCon '21. We'll be right back with our next guest.

(subtle music playing)

>> Lisa: I have been in the software and technology industry for over 12 years now, so I've had the opportunity as a marketer to really understand and interact with customers across the entire buyer's journey. Hi, I'm Lisa Martin, and I'm a host of theCube. Being a host on theCube has been a dream of mine for the last few years. I had the opportunity to meet Jeff and Dave and John at EMC World a few years ago and got the courage up to say, hey, I'm really interested in this. I love talking with customers...

Published Date: Oct 15, 2021
