

Jay Marshall, Neural Magic | AWS Startup Showcase S3E1


 

(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. It's great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company focuses. It's a feature presentation for the "Startup Showcase," and the machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing massive shift. This is really truly the beginning of the next-gen machine learning AI trend. It's really seeing ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You got the Edge. Everything's perfect for AI 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phone must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, my background, we've seen for the last 20-plus years. 
Even just getting enterprise applications kind of built and delivered at scale, obviously, amazing things with AWS and the cloud to help accelerate that. And we just kind of figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. Got development and operations teams. We even came up with DevOps, right? But now, we kind of have this new kind of persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big, generational models or foundational models, as we're calling them, they're great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the things, the benefits of OpenAI we saw, was not only is it open source, then you got also other models that are more proprietary, is that it shows the world that this is really happening, right? It's a whole nother level, and there's also new landscape kind of maps coming out. You got the generative AI, and you got the foundational models, large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up." This new AI infrastructure and applications, where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered kind of cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models. 
So your YOLOs, your BERTs, you know, your DistilBERTs and what have you, so we work to help optimize those, which have gotten great performance and great value for customers trying to get those into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have kind of been right in the trenches with those. So kind of the GPT open source equivalent being OPT, being able to actually take, you know, a multi-hundred-billion-parameter model and sparsify that or optimize that down, shaving away a ton of parameters, and being able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve as well. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today. >> Jay, I really appreciate you coming on theCUBE, and before we came on camera, you said you just were on a customer call. I know you got a lot of activity. What specific things are you helping enterprises solve? What kind of problems? Take us through the spectrum from the beginning, people jumping in the deep end of the pool, some people kind of coming in, starting out slow. What's the scale? Can you scope the kind of use cases and problems that are emerging that people are calling you for? >> Absolutely, so I think if I break it down to kind of, like, your startup, or I maybe call 'em AI native to kind of steal from cloud native years ago, that group, it's pretty much, you know, part and parcel for how that group already runs.
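To make the sparsification idea concrete, here is a toy magnitude-pruning sketch in plain Python. This illustrates only the general concept of shaving away low-impact parameters; it is not Neural Magic's actual algorithm, which prunes gradually during training and recovers accuracy with fine-tuning.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    A toy illustration of unstructured pruning on a flat list of weights.
    Real sparsification pipelines operate layer by layer on tensors and
    interleave pruning with retraining.
    """
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002, 0.3, -0.02]
pruned = magnitude_prune(weights, 0.5)
print(pruned)  # half the weights (the smallest in magnitude) are now zero
```

The zeroed weights need not be stored or multiplied at inference time, which is where the "run it on smaller infrastructure" savings come from.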
So if you have a data science team and an ML engineering team, you're building models, you're training models, you're deploying models. You're seeing firsthand the expense of starting to try to do that at scale. So it's really just a pure operational efficiency play. They kind of speak natively to our tools, which we're doing in the open source. So it's really helping, again, with the optimization of the models they've built, and then, again, giving them an alternative to expensive proprietary hardware accelerators to have to run them. Now, on the enterprise side, it varies, right? You have some kind of AI native folks there that already have these teams, but you also have kind of, like, AI curious, right? Like, they want to do it, but they don't really know where to start, and so for there, we actually have an open source toolkit that can help you get into this optimization, and then again, that runtime, that inferencing runtime, purpose-built for CPUs. It allows you to not have to worry, again, about do I have a hardware accelerator available? How do I integrate that into my application stack? If I don't already know how to build this into my infrastructure, do my ITOps teams know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now, you have startups that are born-in-AI companies. So I think you have this kind of cloud kind of vibe going on. You have lift and shift was a big discussion. Then you had cloud native, kind of in the cloud, kind of making it all work. Is there an existing set of things? People will throw on this hat, and then what's the difference between AI native and kind of providing it to existing stuff?
'Cause a lot of people take some of these tools and apply them to existing stuff almost, and it's not really a lift and shift, but it's kind of like bolting on AI to something else, and then starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think where I'd probably pull back to is kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks already have data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned and hyperparameterization and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is not only is that a neural network team, people who have been focused on that, but also, if you look at some of the DataOps lately, AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing, they kind of been in this, right? They're, like, been experiencing that. >> No doubt. I think it's funny the data lake concept, right? And you got data oceans now.
Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shift. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even ones that say they can, like, you still have to do implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use it for?" I think it's kind of similar here. And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting, pre-processing, you got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether or not you're going to be AI native as you're making your way to kind of, you know, on that journey, you know, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming and, you know, around data meshes was talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, when your customers and the people that are prospects for you guys, they're probably asking a lot of questions because I think the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but, like, as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model.
Now, I want to really optimize that model. And then on the runtime side when you want to deploy it, you know, we run that optimized model. And so that's where we're able to provide. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying that, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extend beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, you know, again, partners, which are always kind of key to this. So you are spot on. I think what's happened here, they talk about a hockey stick. This is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native. And so that's definitely where the partner ecosystem comes in so hard 'cause there just isn't anybody or any teams out there that'll literally go from, "Here's my blank database, and I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model to delivery piece. >> Well, you guys are obviously a featured company in this space. Talk about the expertise. A lot of companies are like, I won't say faking it till they make it. You can't really fake security. You can't really fake AI, right? So there's going to be a learning curve. There'll be a few startups who'll come out of the gate early. You guys are one of 'em. Talk about what you guys have as expertise as a company, why you're successful, and what problems do you solve for customers?
>> No, appreciate that. Yeah, we love to tell the story of our founder, Nir Shavit. He's a 20-year professor at MIT. He was actually doing a lot of work on kind of multicore processing before there were even physical multicores, and even did a stint in computational neurobiology in the 2010s. And the impetus for this whole technology, he has a great talk on YouTube about it, is that his work there made him realize that the way neural networks encode, and how they're executed by kind of ramming data layer by layer through these kind of HPC-style platforms, actually was not analogous to how the human brain actually works. So on one side, we're building neural networks and we're trying to emulate neurons, but we're not really executing them that way. So for our team, with one of the co-founders, also ex-MIT, that was kind of the birth of: why can't we leverage this super-performant CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? So there are a lot of amazing talks and stuff that show kind of the magic, if you will, part of the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer quote where it's a large retailer, and it's a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So from a one-to-one perspective to two-to-one, business leaders usually like that math, right?
So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So what we're trying to do, if I dumb it down, is better, faster, cheaper, but from a commodity perspective. That's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about on our CUBE conversation prior to this interview. It delivers ML models through software, so the hardware allows for a decoupling, right? >> Yep. >> Which is going to drive probably a cost advantage. Also, it's also probably from a deployment standpoint it must be easier. Can you share the benefits? Is it a cost side? Is it more of a deployment? What are the benefits of the DeepSparse when you guys decouple the software from the hardware on the ML models? >> No, you actually hit 'em both, 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I'm doing Java development, WebSphere, WebLogic, Tomcat open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app in a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package. So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features.
So when you think about that kind of a world where you have everything from real-time inferencing to kind of after hours batch processing inferencing, the fact that you can auto scale that hardware up and down and it's CPU based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes in the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost and again, and many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even like the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there's some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect Neural Magic, what problem do I have or when do I know I need you guys? When do I call you in and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely. 
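The by-the-minute versus by-the-hour point can be sketched with some back-of-the-envelope arithmetic. The prices and the workload shape below are invented for illustration; real cloud billing granularity and rates vary by instance type.

```python
import math

def cost_by_minute(minutes, price_per_minute):
    """Fine-grained billing: pay only for the minutes actually used."""
    return minutes * price_per_minute

def cost_by_hour(minutes, price_per_hour):
    """Coarse billing: every started hour is charged in full."""
    return math.ceil(minutes / 60) * price_per_hour

# Hypothetical prices: a CPU node at $0.002/min vs. a GPU node at $1.50/hr.
# A bursty workload that needs 10 minutes of inference in each of 24 hours:
cpu_total = sum(cost_by_minute(10, 0.002) for _ in range(24))
gpu_total = sum(cost_by_hour(10, 1.50) for _ in range(24))
print(f"CPU: ${cpu_total:.2f}/day  GPU: ${gpu_total:.2f}/day")
```

The gap comes entirely from billing granularity on a bursty load; for a saturated 24/7 workload the comparison would instead hinge on raw price-performance.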
So I think in general, any neural network, you know, the process I mentioned before called sparsification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparsified. So I think if it's a deep-learning neural network type model, if you're trying to get AI into production and you have cost concerns, even performance concerns, I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really in this world right now, if it's a neural network and it's something where you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale, performant, deployable solution for deep learning models. >> So neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category, you hear about transformers a lot, or I mentioned YOLO, the YOLO family of computer vision models, or natural language processing models like BERT. If you have a data science team or even developers, some even regular, I used to call myself a nine-to-five developer 'cause I worked in the enterprise, right? So like, hey, we found a new open source framework, you know, I used to use Spring back in the day and I had to go figure it out. There are developers that are pulling these models down and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety, that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute.
I have all these videos, like all these transcripts, I have all these people that we've interviewed, CUBE alumni, and I say to my team, "Let's AI-ify, sparsify theCUBE." >> Yep. >> What do I do? I mean, do I just like, my developers got to get involved and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made theCUBE example up, but we do have a lot of data. We have large data models and we have people and connect to the internet and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that that kind of sea, or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing from kind of the raw data to kind of prepare it into the format that say a YOLO would actually use, or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing path where we would create that into the file format that BERT, the machine learning model, would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. Transfer learning is a very popular method of doing training with existing models.
So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing with to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto the DeepSparse runtime so that now you can ask that model whatever questions, or I should say pass, you're not going to ask it those kinds of questions like ChatGPT, although we can do that too. But you're going to pass text through the BERT model and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the AI bot, you know, from our previous guests. >> Well, and I will tell you, using that as an example. So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, I may have mentioned earlier, we've been able to sparsify that over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call and we'll actually have an interactive Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software-delivered AI, a topic we chatted about on theCUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John, I will tell you what's interesting.
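As a toy sketch of the pre-processing step described here — turning raw interview transcripts into the labeled examples a text classifier like BERT is fine-tuned on — this assumes theCUBE's `>>` speaker-turn delimiter; the labeling scheme is invented for illustration, and a real pipeline would also tokenize and serialize to the trainer's file format.

```python
def transcript_to_examples(transcript, label):
    """Split a raw interview transcript into per-speaker-turn training
    examples of the {"text": ..., "label": ...} form a text classifier
    is typically fine-tuned on.

    Assumes turns are delimited with '>>' as in theCUBE transcripts;
    attaching one label to every turn is a simplification.
    """
    turns = [t.strip() for t in transcript.split(">>") if t.strip()]
    return [{"text": t, "label": label} for t in turns]

raw = "Welcome to the show. >> Thanks for having me. >> Let's talk AI."
examples = transcript_to_examples(raw, "positive")
print(len(examples))  # 3 examples, one per speaker turn
```

From there, the examples would feed a transfer-learning step and the resulting model would be deployed behind an inference runtime.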
And again, folks don't always think of it this way, you know, the AI magical goodness is now getting pushed in the middle where the developers and IT are operating. And again, that paradigm, although for some folks it seems obvious, again, if you've been around for 20 years, all that plumbing is a thing, right? And so what we basically help with is, when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo where we actually publish pre-optimized or pre-sparsified models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting, DevOps was infrastructure as code, and we had, last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you John for teeing me up. So I'm going to try to put this in, like, you know, the vein of an AWS main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure.
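The "grab a pre-optimized model off the shelf" workflow can be pictured as a simple registry lookup. This is a toy stand-in with invented names and entries, not the actual SparseZoo API.

```python
# A toy stand-in for a model zoo: a registry mapping task names to
# pre-optimized model artifacts. Names and entries are invented.
ZOO = {
    "sentiment-analysis": {"model": "bert-base-pruned90", "sparsity": 0.90},
    "image-detection":    {"model": "yolov5s-pruned75",  "sparsity": 0.75},
}

def fetch_model(task):
    """Look up a pre-sparsified model for a task, the way a developer
    would pull one off the shelf instead of training from scratch."""
    if task not in ZOO:
        raise KeyError(f"no pre-optimized model for task {task!r}")
    return ZOO[task]

entry = fetch_model("sentiment-analysis")
print(entry["model"])  # bert-base-pruned90
```

The point of the pattern is that the expensive optimization work is done once, upstream, and developers consume the result as a named artifact in a container.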
But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variants are very compelling, both cost performance-wise and also obviously with Edge. And they wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got to work, and, you know, it's a hard problem to solve 'cause the instruction set for ARM is very different than the instruction set for x86, and our deep tensor column technology has to be able to work with that lower level instruction spec. But working really hard, the engineering team's been at it, and we are happy to announce here at the "AWS Startup Showcase" that DeepSparse inference, or inference runtime, now has support for AWS Graviton instances. So it's no longer just x86, it is also ARM, and that obviously also opens up the door to Edge and further out the stack, so that optimize-once-run-anywhere story, we're now going to open up. So it is an early access. So if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We get a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AI ops now with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much.
So yeah, join us at neuralmagic.com. You know, part of what we didn't spend a lot of time on here, our optimization tools, we are doing all of that in the open source. It's called SparseML, and I mentioned SparseZoo briefly. So we really want the data science community and ML engineering community to join us out there. And again, the DeepSparse runtime, it's actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS Marketplace. So push button, deploy, come try us out and reach out to us on neuralmagic.com. And again, sign up for the Graviton early access.

Published Date: Mar 9, 2023


SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Jay | PERSON | 0.99+
Jay Marshall | PERSON | 0.99+
John Furrier | PERSON | 0.99+
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
five | QUANTITY | 0.99+
Nir Shavit | PERSON | 0.99+
20-year | QUANTITY | 0.99+
Alexa | TITLE | 0.99+
2010s | DATE | 0.99+
seven | QUANTITY | 0.99+
Python | TITLE | 0.99+
MIT | ORGANIZATION | 0.99+
each core | QUANTITY | 0.99+
Neural Magic | ORGANIZATION | 0.99+
Java | TITLE | 0.99+
YouTube | ORGANIZATION | 0.99+
Today | DATE | 0.99+
nine years | QUANTITY | 0.98+
both | QUANTITY | 0.98+
BERT | TITLE | 0.98+
theCUBE | ORGANIZATION | 0.98+
ChatGPT | TITLE | 0.98+
20 years | QUANTITY | 0.98+
over 50% | QUANTITY | 0.97+
second nature | QUANTITY | 0.96+
today | DATE | 0.96+
ARM | ORGANIZATION | 0.96+
one | QUANTITY | 0.95+
DeepSparse | TITLE | 0.94+
neuralmagic.com/graviton | OTHER | 0.94+
SiliconANGLE | ORGANIZATION | 0.94+
WebSphere | TITLE | 0.94+
nine | QUANTITY | 0.94+
first | QUANTITY | 0.93+
Startup Showcase | EVENT | 0.93+
five milliseconds | QUANTITY | 0.92+
AWS Startup Showcase | EVENT | 0.91+
two | QUANTITY | 0.9+
YOLO | ORGANIZATION | 0.89+
CUBE | ORGANIZATION | 0.88+
OPT | TITLE | 0.88+
last six months | DATE | 0.88+
season three | QUANTITY | 0.86+
double | QUANTITY | 0.86+
one customer | QUANTITY | 0.86+
Supercloud | EVENT | 0.86+
one side | QUANTITY | 0.85+
Vice | PERSON | 0.85+
x86 | OTHER | 0.83+
AI/ML: Top Startups Building Foundational Models | TITLE | 0.82+
ECS | TITLE | 0.81+
$100 billion | QUANTITY | 0.81+
DevOps | TITLE | 0.81+
WebLogic | TITLE | 0.8+
EKS | TITLE | 0.8+
a minute | QUANTITY | 0.8+
neuralmagic.com | OTHER | 0.79+

Steven Hillion & Jeff Fletcher, Astronomer | AWS Startup Showcase S3E1


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase AI/ML Top Startups Building Foundation Model Infrastructure. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem to talk about data and analytics. I'm your host, Lisa Martin, and today we're excited to be joined by two guests from Astronomer. Steven Hillion joins us, Astronomer's Chief Data Officer, and Jeff Fletcher, its Director of ML. They're here to talk about machine learning and data orchestration. Guys, thank you so much for joining us today. >> Thank you. >> It's great to be here. >> Before we get into machine learning let's give the audience an overview of Astronomer. Talk about what that is, Steven. Talk about what you mean by data orchestration. >> Yeah, let's start with Astronomer. We're the Airflow company, basically. The commercial developer behind the open-source project, Apache Airflow. I don't know if you've heard of Airflow. It's sort of the de-facto standard these days for orchestrating data pipelines, data engineering pipelines, and as we'll talk about later, machine learning pipelines. It really is the de-facto standard. I think we're up to about 12 million downloads a month. That's actually as an open-source project. I think at this point it's more popular by some measures than Slack. Airflow was created by Airbnb some years ago to manage all of their data pipelines and manage all of their workflows, and now it powers the data ecosystem for organizations as diverse as Electronic Arts, Conde Nast is one of our big customers, a big user of Airflow. And also, not to mention, the biggest banks on Wall Street use Airflow and Astronomer to power the flow of data throughout their organizations. >> Talk about that a little bit more, Steven, in terms of the business impact. You mentioned some great customer names there. What is the business impact or outcomes that a data orchestration strategy enables businesses to achieve?
>> Yeah, I mean, at the heart of it, it is quite simply scheduling and managing data pipelines. And so if you have some enormous retailer who's managing the flow of information throughout their organization, they may literally have thousands or even tens of thousands of data pipelines that need to execute every day, to do things as simple as delivering metrics for the executives to consume at the end of the day, to producing on a weekly basis new machine learning models that can be used to drive product recommendations. One of our customers, for example, is a British food delivery service. And you get those recommendations in your application that says, "Well, maybe you want to have samosas with your curry." That sort of thing is powered by machine learning models that they train on a regular basis to reflect changing conditions in the market. And those are produced through Airflow and through the Astronomer platform, which is essentially a managed platform for running Airflow. So at its simplest it really is just scheduling and managing those workflows. But that's easier said than done, of course. I mean, if you have tens of thousands of those things, then you need to make sure that they all run, that they all have sufficient compute resources. If things fail, how do you track those down across those 10,000 workflows? How easy is it for an average data scientist or data engineer to contribute their code, their Python notebooks or their SQL code, into a production environment? And then you've got reproducibility, governance, auditing. Like, managing data flows across an organization, which we think of as orchestrating them, is much more than just scheduling. It becomes really complicated pretty quickly. >> I imagine there's a fair amount of complexity there. Jeff, let's bring you into the conversation. Talk a little bit about Astronomer through your lens, data orchestration and how it applies to MLOps.
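At its core, the "scheduling and managing" Steven describes is dependency-ordered task execution. Here is a stdlib-only toy sketch of that idea — illustrative only, not how Airflow or Astronomer is actually implemented:

```python
# Toy illustration of what a workflow orchestrator does at its core:
# resolve task dependencies and execute each task in a valid order.
# Stdlib-only sketch; a real orchestrator adds retries, alerting, and
# compute placement around each task run.
from graphlib import TopologicalSorter

def run_pipeline(tasks: dict, deps: dict) -> list:
    """Execute callables in dependency order; return the execution log."""
    order = TopologicalSorter(deps).static_order()
    log = []
    for name in order:
        tasks[name]()
        log.append(name)
    return log

results = {}
tasks = {
    "extract":   lambda: results.update(raw=[3, 1, 2]),
    "transform": lambda: results.update(clean=sorted(results["raw"])),
    "load":      lambda: results.update(loaded=len(results["clean"])),
}
# "transform" depends on "extract"; "load" depends on "transform"
deps = {"transform": {"extract"}, "load": {"transform"}}

executed = run_pipeline(tasks, deps)
print(executed)  # ['extract', 'transform', 'load']
```

The hard parts Steven lists — tracking failures across 10,000 workflows, resourcing, auditing — are exactly what sits around that simple loop in a production system.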
>> So I come from a machine learning background, and for me the interesting part is that machine learning requires the expansion into orchestration. A lot of the same things that you're using to go and develop and build pipelines in a standard data orchestration space apply equally well in a machine learning orchestration space. What you're doing is you're moving data between different locations, between different tools, and then tasking different types of tools to act on that data. So extending it made logical sense from an implementation perspective. And a lot of my focus at Astronomer is really to explain how Airflow can be used well in a machine learning context. It is being used well, it is being used a lot by the customers that we have, and also by users of the open source version. But it's really being able to explain to people why it's a natural extension for it and how well it fits into that. And a lot of it is also extending some of the infrastructure capabilities that Astronomer provides to those customers, for them to be able to run some of the more platform-specific requirements that come with doing machine learning pipelines. >> Let's get into some of the things that make Astronomer unique. Jeff, sticking with you, when you're in customer conversations, what are some of the key differentiators that you articulate to customers? >> So a lot of it is that we are not specific to one cloud provider. So we have the ability to operate across all of the big cloud providers. I know, I'm certain, we have the best developers that understand how best-practice implementations for data orchestration work. So we spend a lot of time talking not just to the business users of the product about business outcomes, but also to the technical people, about how to help them better implement things that they may have come across on a Stack Overflow article or not necessarily just grown with how the product has migrated.
So it's the ability to run it wherever you need to run it, and also our ability to help you, the customer, better implement and understand those workflows, that I think are two of the primary differentiators that we have. >> Lisa: Got it. >> I'll add another one if you don't mind. >> You can go ahead, Steven. >> It's lineage and dependencies between workflows. One thing we've done is to augment core Airflow with Lineage services. So using the Open Lineage framework, another open source framework for tracking datasets as they move from one workflow to another, one team to another, one data source to another, is a really key component of what we do, and we bundle that within the service so that as a developer or as a production engineer, you really don't have to worry about lineage, it just happens. Jeff may show us some of this later, that you can actually see as data flows from source through to a data warehouse out through a Python notebook to produce a predictive model or a dashboard. Can you see how those data products relate to each other? And when something goes wrong, figure out what upstream maybe caused the problem, or if you're about to change something, figure out what the impact is going to be on the rest of the organization. So Lineage is a big deal for us. >> Got it. >> And just to add on to that, the other thing to think about is that traditional Airflow is actually a complicated implementation. It required quite a lot of time spent understanding, or was almost a bespoke language that you needed to be able to develop in to write these DAGs, which are like fundamental pipelines. So part of what we are focusing on is tooling that makes it more accessible to, say, a data analyst or a data scientist who doesn't have, or really needs to gain, the necessary background in how the semantics of Airflow DAGs work, to still be able to get the benefit of what Airflow can do.
So there are new features and capabilities built into the Astronomer cloud platform that effectively obfuscate and remove the need to understand some of the deep work that goes on. But you can still do it, you still have that capability, but we are expanding it to be able to have orchestrated and repeatable processes accessible to more teams within the business. >> In terms of accessibility to more teams in the business, you talked about data scientists, data analysts, developers. Steven, I want to talk to you as the chief data officer: are you having more and more conversations with that role, and how is it emerging and evolving within your customer base? >> Hmm. That's a good question, and it is evolving, because I think if you look historically at the way that Airflow has been used, it's often from the ground up. You have individual data engineers or maybe single data engineering teams who adopt Airflow 'cause it's very popular. Lots of people know how to use it, and they bring it into an organization and say, "Hey, let's use this to run our data pipelines." But then increasingly, as you turn from pure workflow management and job scheduling to the larger topic of orchestration, you realize it gets pretty complicated. You want to have coordination across teams, and you want to have standardization for the way that you manage your data pipelines. And so having a managed service for Airflow that exists in the cloud is easy to spin up as you expand usage across the organization. And thinking long term about that in the context of orchestration, that's where I think the chief data officer or the head of analytics tends to get involved, because they really want to think of this as a strategic investment that they're making. Not just per-team individual Airflow deployments, but a network of data orchestrators. >> That network is key. Every company these days has to be a data company. We talk about companies being data driven. It's a common word, but it's true.
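Jeff's point about making DAG authoring approachable to analysts is the idea behind decorator-style pipeline APIs (Airflow's TaskFlow API is in this spirit): you write plain functions, and passing one function's output into another is what implies the dependency. A hypothetical stdlib-only sketch — the `registry` and `task` helper are invented for illustration, not Airflow's actual machinery:

```python
# Stdlib-only sketch of decorator-style pipeline authoring: an analyst writes
# ordinary functions, and the framework records each run so dependencies and
# execution order can be inspected later. Invented helper, not Airflow code.
registry = []

def task(fn):
    """Wrap a function so every call is recorded, mimicking tracked task runs."""
    def wrapper(*args, **kwargs):
        registry.append(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@task
def fetch_numbers():
    return [4, 8, 15, 16, 23, 42]

@task
def summarize(numbers):
    return {"count": len(numbers), "total": sum(numbers)}

# passing fetch_numbers() into summarize() is what implies the dependency
report = summarize(fetch_numbers())
print(registry, report["total"])  # ['fetch_numbers', 'summarize'] 108
```

The appeal for a data analyst is exactly what Jeff describes: no bespoke DAG language, just functions with the orchestration semantics handled underneath.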
Whether it's a grocer or a bank or a hospital, they've got to be data companies. So talk to me a little bit about Astronomer's business model. How is this available? How do customers get their hands on it? >> Jeff, go ahead. >> Yeah, yeah. So we have a managed cloud service, and we have two modes of operation. One, you can bring your own cloud infrastructure. So you can say here is an account in, say, AWS or Azure, and we can go and deploy the necessary infrastructure into that, or alternatively we can host everything for you. So it becomes a full SaaS offering. But we then provide a platform that connects at the backend to your internal IDP process. So however you are authenticating users, we make sure that the correct people are accessing the services that they need, with role-based access control. From there we are deploying, through Kubernetes, the different services and capabilities into either your cloud account or into an account that we host. And from there Airflow does what Airflow does, which is its ability to then reach out to different data systems and data platforms and to then run the orchestration. We make sure we do it securely, we have all the necessary compliance certifications required for GDPR in Europe and HIPAA based out of the US, and a whole host of others. So it is a secure platform that can run in a place that you need it to run, but it is a managed Airflow that includes a lot of the extra capabilities, like the cloud developer environment and the open lineage services, to enhance the overall Airflow experience. >> Enhance the overall experience. So Steven, going back to you, if I'm a Conde Nast or another organization, what are some of the key business outcomes that I can expect? As one of the things I think we've learned during the pandemic is access to realtime data is no longer a nice-to-have for organizations. It's really an imperative.
It's that demanding consumer that wants to have that personalized, customized, instant access to a product or a service. So if I'm a Conde Nast or I'm one of your customers, what can I expect my business to be able to achieve as a result of data orchestration? >> Yeah, I think in a nutshell it's about providing a reliable, scalable, and easy to use service for developing and running data workflows. And talking of demanding customers, I mean, I'm actually a customer myself, as you mentioned, I'm the head of data for Astronomer. You won't be surprised to hear that we actually use Astronomer and Airflow to run all of our data pipelines. And so I can actually talk about my experience. When I started I was of course familiar with Airflow, but it always seemed a little bit unapproachable to me if I was introducing that to a new team of data scientists. They don't necessarily want to have to think about learning something new. But I think because of the layers that Astronomer has provided with our Astro service around Airflow it was pretty easy for me to get up and running. Of course I've got an incentive for doing that. I work for the Airflow company, but we went from about, at the beginning of last year, about 500 data tasks that we were running on a daily basis to about 15,000 every day. We run something like a million data operations every month within my team. And so as one outcome, just the ability to spin up new production workflows essentially in a single day you go from an idea in the morning to a new dashboard or a new model in the afternoon, that's really the business outcome is just removing that friction to operationalizing your machine learning and data workflows. >> And I imagine too, oh, go ahead, Jeff. >> Yeah, I think to add to that, one of the things that becomes part of the business cycle is a repeatable capabilities for things like reporting, for things like new machine learning models. 
And the impediment that has existed is that it's difficult to take that from an analyst team or a data science team, who then provide it to the data engineering team, who have to work the workflow all the way through. What we're trying to unlock is the ability for those teams to directly get access to scheduling and orchestrating capabilities, so that a business analyst can have a new report for C-suite execs that needs to be done once a week, but the time to repeatability for that report is much shorter. So it is then immediately in the hands of the person that needs to see it. It doesn't have to go into a long list of to-dos for a data engineering team that's already overworked, where they eventually get to it in a month's time. So that is also part of it: orchestration, I think, is fairly well understood, and a lot of people get the benefit of being able to orchestrate things within a business, but having more people be able to do it, and shortening the time to that repeatability, is one of the main benefits of good managed orchestration. >> So a lot of workforce productivity improvements in what you're doing to simplify things, giving more people access to data to be able to make those faster decisions, which ultimately helps the end user on the other end to get that product or the service that they're expecting. Jeff, I understand you have a demo that you can share so we can kind of dig into this. >> Yeah, let me take you through a quick look of how the whole thing works. So our starting point is our cloud infrastructure. This is the login. You go to the portal. You can see there's a bunch of workspaces that are available. Workspaces are like individual places for people to operate in.
I'm not going to delve into all the deep technical details here, but the starting point for a lot of our data science customers is we have what we call our Cloud IDE, which is a web-based development environment for writing and building out DAGs without actually having to know how the underpinnings of Airflow work. This is an internal one, something that we use. You have a notebook-like interface that lets you write Python code and SQL code and a bunch of specific bespoke types of blocks if you want. They all get pulled together and create a workflow. So this is a workflow, which gets compiled to something that looks like a complicated set of Python code, which is the DAG. I then have a CI/CD process pipeline where I commit this through to my GitHub repo. So this comes to a repo here, which is where these DAGs that I created in the previous step exist. I can then go and say, all right, I want to see how those particular DAGs have been running. We then get to the actual Airflow part. So this is the managed Airflow component. So we add the ability for teams to fairly easily bring up an Airflow instance and write code inside our notebook-like environment to get it into that instance. So you can see it's been running. That same process that we built here, that graph, ends up here inside this, but you don't need to know how the fundamentals of Airflow work in order to get this going. Then we can run one of these, it runs in the background and we can manage how it goes. And from there, every time this runs, it's emitting to a process underneath, which is the open lineage service, which is the lineage integration that allows me to come in here and have a look and see that this was that same graph that we built, but now it's the historic version. So I know where things started, where things are going, and how it ran. And then I can also do a comparison.
So if I want to see how this particular run worked compared to one historically, I can grab one from a previous date and it will show me the comparison between the two. So that combination of managed Airflow, getting Airflow up and running very quickly, plus the Cloud IDE that lets you write code without having to know how to get something into a repeatable format, get that into Airflow, and have that attached to the lineage process, adds up to a complete end-to-end orchestration process for any business looking to get the benefit from orchestration.
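Conceptually, the lineage service in Jeff's demo records, for each task run, which datasets were read and which were produced, so you can later walk upstream from any artifact to its sources. A toy sketch with an invented event shape — not the actual OpenLineage schema:

```python
# Conceptual sketch of what a lineage service captures: per task run, the
# datasets read and produced, so an artifact can be traced back upstream.
# The event dictionaries below are invented for illustration only.
events = [
    {"task": "ingest", "inputs": [],              "outputs": ["raw_orders"]},
    {"task": "clean",  "inputs": ["raw_orders"],  "outputs": ["orders"]},
    {"task": "train",  "inputs": ["orders"],      "outputs": ["rec_model"]},
]

def upstream(dataset, events):
    """Return every dataset the given one transitively depends on."""
    producers = {out: e["inputs"] for e in events for out in e["outputs"]}
    seen, stack = set(), list(producers.get(dataset, []))
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(producers.get(d, []))
    return seen

print(sorted(upstream("rec_model", events)))  # ['orders', 'raw_orders']
```

This is the mechanical core of the "what upstream maybe caused the problem" question Steven raised earlier: with input/output events recorded per run, impact analysis is a graph walk.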
>> I think a lot of the development around the data awareness components. So one of the things that's traditionally been complicated with orchestration is you leave your data in the place that you're operating on, and we're starting to have more data processing capability being built into Airflow. And from an Astronomer perspective, we are adding more capabilities around working with larger datasets, doing bigger data manipulation inside the Airflow process itself. And that lends itself to better machine learning implementation. So as we start to grow and as we start to get better in the machine learning context, well, in the data awareness context, it unlocks a lot more capability to do and implement proper machine learning pipelines. >> Awesome guys. Exciting stuff. Thank you so much for talking to me about Astronomer, machine learning, data orchestration, and really the value in it for your customers. Steve and Jeff, we appreciate your time. >> Thank you. >> My pleasure, thanks. >> And we thank you for watching. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem. I'm your host, Lisa Martin. You're watching theCUBE, the leader in live tech coverage. (upbeat music)

Published Date : Mar 9 2023


Steven Huels | KubeCon + CloudNativeCon NA 2021


 

(upbeat soft intro music) >> Hey everyone. Welcome back to theCube's live coverage from Los Angeles of KubeCon and CloudNativeCon 2021. Lisa Martin with Dave Nicholson, Dave and I are pleased to welcome our next guest remotely. Steven Huels joins us, the senior director of Cloud Services at Red Hat. Steven, welcome to the program. >> Steven: Thanks, Lisa. Good to be here with you and Dave. >> Talk to me about where you're seeing traction from an AI/ML perspective. Where are you seeing that traction? What are you seeing? >> It's a great starter question here, right? Like, AI/ML is really being employed everywhere, right? Regardless of industry. So financial services, telco, governments, manufacturing, retail. Everyone at this point is finding a use for AI/ML. They're looking for ways to better take advantage of the data that they've been collecting all these years. It really, it wasn't all that long ago when we were talking to customers about Kubernetes and containers, you know, AI/ML really wasn't a core topic where they were looking to use a Kubernetes platform to address those types of workloads. But in the last couple of years, that's really skyrocketed. We're seeing a lot of interest from existing customers that are using Red Hat OpenShift, which is a Kubernetes-based platform, to take those AI/ML workloads and take them from what they've been doing traditionally, for experimentation, and really get them into production and start getting value out of them at the end of it. >> Is there a common theme? You mentioned a number of different verticals, telco, healthcare, financial services. Is there a common theme that you're seeing among these organizations across verticals? >> There is. I mean, everyone has their own approach, like the type of technique that they're going to get the most value out of. But the common theme is really that everyone seems to have a really good handle on experimentation.
They have a lot of very bright data scientists and model developers that are able to take their data and get value out of it, but where they're all looking for help is to put those models into production. So ML ops, right. So how do I take what's been built on somebody's machine and put that into production in a repeatable way? And then once it's in production, how do I monitor it? What am I looking for as triggers to indicate that I need to retrain, and how do I iterate on this sequentially and rapidly, applying what would really be traditional DevOps software development life cycle methodologies to ML and AI models? >> So Steve, we're joining you from KubeCon live at the moment. What's the connection with Kubernetes, and how does Kubernetes enable machine learning and artificial intelligence? How does it enable it, and what are some of the special considerations to keep in mind? >> So the immediate connection for Red Hat is Red Hat's OpenShift, which is basically an enterprise-grade Kubernetes. And so the connection there is really how we're working with customers and how customers in general are looking to take advantage of all the benefits that you can get from the Kubernetes platform that they've been applying to their traditional software development over the years, right? The agility, the ability to scale up on demand, the ability to have shared resources, to make specialized hardware available to the individual communities. And they want to start applying those foundational elements to their AI/ML practices. A lot of data science work traditionally was done with high-powered monolithic machines and systems. They weren't necessarily shared across development communities. So connecting something that was built by a data scientist to something that then a software developer was going to put into production was challenging. There wasn't a lot of repeatability in there.
There wasn't a lot of scalability, there wasn't a lot of auditability, and these are all things that we know we need when talking about analytics and AI/ML. There's a lot of scrutiny put on the auditability of what you put into production, something that's making decisions that impact whether or not somebody gets a loan, or whether or not somebody is granted access to systems, or decisions that are made. And so the connection there is really around taking advantage of what has proven itself in Kubernetes to be a very effective development model and applying that to AI/ML, and getting the benefits in being able to put these things into production. >> Dave: So, Red Hat has been involved in enterprises for a long time. Are you seeing most of this, from a Kubernetes perspective, being net new application environments, or are these extensions of what we would call legacy or traditional environments?
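One concrete form of the "triggers to retrain" Steven mentioned earlier is distribution drift on a model input. A hedged stdlib-only sketch — the z-score check and threshold here are an illustrative choice, not a recommendation; production ML-ops monitoring uses richer statistics:

```python
# Sketch of a retraining trigger: compare a live feature's mean against the
# training-time baseline and flag drift when the shift is too large.
# Threshold and data are invented for illustration.
from statistics import mean, stdev

def needs_retraining(baseline, live, z_threshold=2.0):
    """Flag drift when the live mean sits too many baseline stdevs away."""
    shift = abs(mean(live) - mean(baseline)) / stdev(baseline)
    return shift > z_threshold

baseline = [10.0, 11.0, 9.5, 10.5, 10.0, 9.0, 11.0]  # feature values at training time
steady   = [10.2, 9.8, 10.4, 10.1]                   # production traffic, no drift
drifted  = [14.0, 15.2, 13.8, 14.9]                  # production traffic after a shift

print(needs_retraining(baseline, steady))   # False
print(needs_retraining(baseline, drifted))  # True
```

In the repeatable-pipeline setup Steven describes, a check like this would gate an automated retraining run rather than a human ticket.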
Because we're constantly seeing a lot of evolution in, in the types of accelerators, the types of frameworks, the types of libraries that are being made available to data scientists. And so you want the ability to extend your data science cluster to take advantage of those things and to give data scientists access to that those specialized environments. So they can try things out, determine if there's a better way to, to do what they're doing. And then when they find out there is, be able to rapidly roll that into your production environment. >> You mentioned the word acceleration, and that's one of the words that we talk about when we talk about 2020, and even 2021, the acceleration in digital transformation that was necessary really a year and a half ago, for companies to survive. And now to be able to pivot and thrive. What are you seeing in terms of customers appetites for, for adopting AI/ML based solutions? Has it accelerated as the pandemic has accelerated digital transformation. >> It's definitely accelerated. And I think, you know, the pandemic probably put more of a focus for businesses on where can they start to drive more value? How can they start to do more with less? And when you look at systems that are used for customer interactions, whether they're deflecting customer cases or providing next best action type recommendations, AI/ML fits the bill there perfectly. So when they were looking to optimize, Hey, where do we put our spend? What can help us accelerate and grow? Even in this virtual world we're living in, AI/ML really floated to the top there, that's definitely a theme that we've seen. >> Lisa: Is there a customer example that you think that you could mention that really articulates the value over that? 
You know, I think a lot of it, you know, we've published one specifically around HCA Healthcare, and this had started actually before the pandemic, but I think especially it's applicable because of the nature of what a pandemic is, where HCA was using AI/ML to essentially accelerate diagnosis of sepsis, right. They were using it for disease diagnoses. That same type of diagnosis was being applied to looking at COVID cases as well. And so there was one that we did in Canada, it's called 'how's your flattening', which was basically being able to track and do some predictions around COVID cases in the Canadian provinces. And so that one's particularly, I guess, kind of close to home, given the nature of the pandemic. But even within Red Hat, we started applying a lot more attention to how we could help with customer support cases, right. Knowing that if folks were going to be out with any type of illness, we needed to be able to handle that case workload without negatively impacting work-life balance for other associates. So we looked at how we could apply AI/ML to help, you know, maintain and increase the quality of customer service we were providing. >> It's a great use case. Did you have a keynote or a session here at KubeCon CloudNative? >> I did. I did. And it really focused specifically on that whole ML ops and model ops pipeline. It was called evolving Kubernetes and embracing model ops. It was for Kubernetes AI Day. I believe it aired on Wednesday of this week. Tuesday, maybe. It all kind of condenses in the virtual world. >> Doesn't it? It does. >> So one of the questions that Lisa and I have for folks, where we sit here, I don't know, was it year seven or so of the dawn of Kubernetes, if I have that right. Where do you think we are in this wave of adoption? Coming from a Red Hat perspective, you have insight into what's been going on in enterprises for the last 20-plus years. Where are we in this wave?
>> That's a great question. Every time, it's sort of that cresting wave analogy, right? When you get to the top of one wave, you notice the next wave is even bigger. I think we've certainly gotten to the point where organizations have accepted that Kubernetes is applicable across all the workloads that they're looking to put in production. Now the focus has shifted to optimizing those workloads, right? So what are the things that we need to run in our in-house data centers? What are the things that we need, or can benefit from, using commodity hardware from one of the hyperscalers? How do we connect those environments and more effectively target workloads? So if I look at where things are going in the future: right now, we see a lot of things being targeted based on cluster, right? We say, hey, we have a data science cluster. It has characteristics because of X, Y, and Z. And we put all of our data science workloads into that cluster. In the future, I think we want to see a more workload-specific type of categorization, so that we're able to match available hardware with workloads, rather than targeting a workload at a specific cluster. So a developer or data scientist can say, hey, my particular algorithm here needs access to GPU acceleration and the following frameworks. And then the Kubernetes scheduler is able to determine, of the available environments, what the capacity is, what the available resources are, and match it up accordingly. So we get into a more dynamic environment where the developers, and those that are actually building on top of these platforms, have to know less and less about the clusters they're running on. They just have to know what types of resources they need access to. >> Lisa: So sort of democratizing that. Steve, thank you for joining Dave and me on the program tonight, talking about the traction that you're seeing with AI/ML, and Kubernetes as an enabler. We appreciate your time.
>> Thank you. >> Thanks, Steve. >> For Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from Los Angeles at KubeCon + CloudNativeCon 2021. We'll be right back with our next guest. (subtle music playing) >> Lisa: I have been in the software and technology industry for over 12 years now, so I've had the opportunity as a marketer to really understand and interact with customers across the entire buyer's journey. Hi, I'm Lisa Martin, and I'm a host of theCUBE. Being a host on theCUBE has been a dream of mine for the last few years. I had the opportunity to meet Jeff and Dave and John at EMC World a few years ago and got the courage up to say, hey, I'm really interested in this. I love talking with customers...
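An editor's aside on the workload-to-hardware matching Steve describes above: the idea of a data scientist declaring "my algorithm needs GPU acceleration" and letting the Kubernetes scheduler find a node can be sketched as a pod spec. This is an illustrative assumption, not anything Red Hat or the guest specifically ships; the pod name and image are placeholders, and the `nvidia.com/gpu` extended resource assumes the NVIDIA device plugin is installed on the cluster.

```yaml
# Hypothetical pod spec: the data scientist declares resource needs,
# and the scheduler picks a node with matching available capacity --
# no cluster or node is named anywhere in the spec.
apiVersion: v1
kind: Pod
metadata:
  name: training-job                     # illustrative name
spec:
  containers:
    - name: trainer
      image: example.com/train:latest    # placeholder image
      resources:
        requests:
          cpu: "4"
          memory: 16Gi
        limits:
          nvidia.com/gpu: 1              # requires the NVIDIA device plugin
```

With a spec like this, the scheduler matches the declared resources against node capacity on its own, which is the "know less and less about the clusters they're running on" dynamic described in the interview.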

Published Date : Oct 15 2021



Sandy Carter, AWS | AWS Summit DC 2021


 

>> Text, you know, a consumer opens up their iPhone and says, oh my gosh, I love the technology behind my eyes. What's it been like being on Shark Tank? You know, filming is fun, hanging out is fun, and it's fun to be a celebrity. At first your head gets really big and you get good tables at restaurants. Who says Texas has got a little... possesses more skin in the game today, in charge of his destiny, Robert Herjavec. No stars. He is a CUBE alumni. Yeah, okay. >> Hi, I'm John Furrier, the co-founder of SiliconANGLE Media and co-host of theCUBE. I've been in the tech business since I was 19, first programming on minicomputers in a large enterprise, and then worked at IBM and Hewlett Packard, a total of nine years in the enterprise, in various jobs from programming, training, consulting, and ultimately as an executive salesperson, and then started my first company in 1997 and moved to Silicon Valley in 1999. I've been here ever since. I've always loved technology, and I love covering, you know, emerging technology. I was trained as a software developer, and I love business, and I love the impact of software and technology on business. To me, creating technology that starts a company and creates value and jobs is probably the most rewarding thing I've ever been involved in. And I bring that energy to theCUBE, because theCUBE is where all the ideas are and where the experts are, where the people are. And I think what's most exciting about theCUBE is that we get to talk to people who are making things happen: entrepreneurs, CEOs of companies, venture capitalists, people who are really, on a day-in and day-out basis, building great companies. And in the technology business there's just not a lot of real-time, live TV coverage, and theCUBE is a nonlinear TV operation. We do everything that the TV guys on cable don't do. We do longer interviews. We ask tougher questions. We ask sometimes some light questions. We talk about the person and what they feel about. It's not prompted and scripted.
It's a conversation, authentic, and for shows that have theCUBE coverage, it makes the show buzz. It creates excitement. More importantly, it creates great content, great digital assets that can be shared instantaneously to the world. Over 31 million people have viewed theCUBE, and that is the result: great content, great conversations. And I'm so proud to be part of it with a great team. Hi, I'm John Furrier. Thanks for watching theCUBE. >> Hello and welcome to theCUBE. We are here live on the ground, on the expo floor of a live event, the AWS Public Sector Summit. I'm John Furrier, your host of theCUBE. We're here for the next two days, wall-to-wall coverage. I'm here with Sandy Carter to kick off the event, Vice President of Partners on AWS public sector. Great to see you, Sandy. >> So great to see you, John, live and in person, right? >> I'm excited. I'm jumping out of my chair, because I did a Twitter Periscope yesterday and said "a live event," and all the comments are, oh my God, an expo floor, a real event. Congratulations. >> True. Yeah. We're so excited. Yesterday we had our partner day, and we sold out the event. It was rocking and packed, and we had to turn people away. So what a great experience, right? >> Well, I'm excited. People are actually happy. We tried covering Mobile World Congress in Barcelona. Still, people were there, people felt good. Here, same vibe. People are excited to be in person. You get all your partners here. You guys have had an amazing year. Congratulations. We did a couple of awards shows with you guys. But I think the big story is the Amazon services for the partners. Public sector has been a real game changer. I mean, we talked about it before, but again, it continues to happen. What's the update? >> Yeah, well, there are lots of announcements. So let me start out with some really cool growth things, because I know you're a big growth guy.
So we announced here at the conference yesterday that our government competency program for partners is now the number one industry competency in AWS. That's a huge deal. Government is growing so fast. We saw that during the pandemic, everybody was moving to the cloud, and it's just affirmation, with the government competency now taking that number one position across AWS, so not just across public sector, but across AWS. And then one of our fastest growing areas as well is healthcare. So we now have an ATO, Authority to Operate, for HIPAA and HITRUST, and that's now our fastest growing area, with 85% growth. So I love that new news about the growth that we're seeing in public sector, and all the energy that's going into the cloud and beyond. >> You know, one of the things that we talked about before in another CUBE interview with you, but I want to get your reaction to now, is the current state of the art. In the moment, the pandemic has highlighted the antiquated, outdated systems and highlighted how inadequate they are. Cloud: you guys have done an amazing job to stand up value quickly. Now we're in a hybrid world. So you've got hybrid, automation, AI driving a complete change, and it's happening pretty quick. What are the new things that you guys are seeing that are emerging? Obviously a steady state of more growth, but what are the big success programs that you're seeing right now? >> Well, there are a few new programs that we're seeing that have really taken off. So one is called ProServe Ready. We announced yesterday that it's now GA in the U.S. and EMEA. And why that's so important is that our ProServe team, a lot of times when they're doing contracts, they run out of resources, and so they need to tap some partners on the shoulder to come and help them. And the customers told us that they wanted them to be ProServe Ready, so to have that badge of honor, if you would, that they're using the same templates, the same best practices that we use as well.
And so we're seeing that as a big value creator for our partners, but also for our customers, because now those partners are being trained by us and really being mentored with on-the-job training as they go. Very powerful program. >> Well, one of the things I'm really impressed by, and I've talked to some of your MSP partners on the floor here as they walk by and see theCUBE: they're all doing well. They're all happy. They've got a spring in their step. And the thing is that these public-private partnerships are a real trend we've been talking about for a while. More people in the public sector are saying, hey, I want, I need a commercial relationship, not the old school, you know, "we're public, we have all these rules." There's more collaboration. Can you share your thoughts on how you see that evolving? Because now the partners and the public sector are partnering closer than ever before. >> Yeah, I think it's really fascinating, because a lot of our new partners are actually commercial partners that are now choosing to add a public sector practice. And I think a lot of that is because of these public and private partnerships. So let me give you an example: space. We were at the Space Symposium, our first time ever for AWS at the Space Symposium, and what we found was there were partners there, like Orbital Insight, who's bringing data from satellites. They're a public sector partner, but that data is being used for insurance companies, being used for agriculture, being used to impact the environment. So I think a lot of those public-private partnerships are strengthening as we go through COVID, or hopefully are getting out of it. And we do see a lot of push in that area. >> Talk about healthcare, because healthcare is again changing radically. We talk to customers all the time. They have a lot of legacy systems, but they can't just throw them away. So cloud native aligns well with healthcare. >> It does.
And in fact, you know, if you think about healthcare, most healthcare organizations don't build solutions themselves; they depend on partners to build them. So the customer does the buy, and the partner does the build. So it's a great and exciting area for our partners. We just launched a new program called the Mission Accelerator Program. It's in beta, and that program is really fascinating, because our healthcare partners, our government partners, and more can now use these accelerators that isolate a common area, like digital analytics for healthcare, and they can reuse those. So I think it's really exciting as we think about the potential for healthcare and beyond. >> You know, one of the challenges I always thought you had, and that you guys do a good job on, and I'd love to get your reaction to now, is that there are more and more people who want to partner with you than ever before. And sometimes it hasn't always been easy in the old days, like getting FedRAMP certified or even dealing with public sector if you were a commercial vendor. You guys have done a lot with accelerating certifications. Where are you on that spectrum now? What's next? What's the next wave of partner onboarding, or what are the partner trends around the opportunities in public sector? >> Well, one of the new things that we announced, we have tested out in the U.S. You know, that's the Amazon way, right? Andy's way: you test, you experiment, and if it works, you roll it out. We have a concierge program now to help a lot of those new partners get onboarded into public sector. And so it's basically, I'm going to hold your hand, just like at a hotel I would go up and say, hey, can you direct me to the right restaurant or to the right museum. We do the same thing; we hand-hold people through that process. If you don't want to do that, we also have a new program called Navigate, which is built for brand new partners.
And what that enables our partners to do is to kind of be guided through that process. So you are right. We have so many partners now who want to come and grow with us that it's really essential that we provide a great partner experience and a how-to on board. >> Yeah. And the APN, the Amazon Partner Network, also has a lot of crossover. You see a lot of that going on, because with the cloud you can do both. >> Absolutely. And I think, you know, we leverage all of the APN programs that exist today. So for example, there was just a new program that was put out for a growth rebate, and that was driven by the APN, and we're leveraging and using that in public sector too. So there's a lot of work going on to make it easier for our partners to do business with us. >> So I have to ask you, on a personal note: I know we've talked before about how comfortable you are in the virtual, now hybrid, space. How's your team doing? What does the structure look like? What are your goals? What are you excited about? >> Well, I think I have the greatest team ever, so of course I'm excited about our team, and we are working in this new hybrid world. So it is a change for everybody. The other day we had some people in the office and some people calling in virtually, so how to manage that, right, was really quite interesting. The goals that we align our whole team around, and we talked a little bit about this yesterday, are around mission, which is the solution areas; migration, so getting everything to the cloud; and then, in the cloud, we talk about modernization. Are you going to use AI/ML or IoT? And we actually just announced a new program around that to help our IoT partners really build and understand that data that's coming in from IoT. IDC says that IoT data increased by four times during the COVID period. So there are so many more partners who need help.
>> There's a huge shift going on, and you know, we always try to explain it on theCUBE. Dave and I talk about it a lot: it's replatforming with the cloud, which is not just lift and shift. You kind of move and then replatform, then refactor your business. And there's a nuance there between replatforming, which is great, take advantage of cloud scale, and the refactoring, which allows for this unique advantage of these high-level services. >> That's right. >> And this is where people are winning. What's your reaction to that? >> Oh, I completely agree. I think this whole area of modernizing your application... like, we have a lot of folks who are doing mainframe migrations, and to your point, if they just lift what they had in COBOL and move it to AWS, there's really not a lot of value there. But when they rewrite the code, when they refactor the code, that's where we're seeing tremendous breakthrough momentum with our partner community. You know, Deloitte is one of our top partners with our mainframe migration; they have both our technology and our consulting mainframe migration competency there. One of the other things I think you would be interested in is, in our session yesterday, we just completed some research with our CTOs, and we talked about the next megatrends that are coming around Web 3.0. And I'm sure you've been hearing a lot about Web 3.0, right? >> Yeah, 3.0, 4.0, it's all moving too fast. I mean, it's moving fast. >> And so some of the things we talked to our partners about yesterday are like the metaverse that's coming. So you talked about healthcare: yesterday Electronic Caregiver announced an entire application for virtual caregivers in the metaverse. We talked about blockchain, you know, and the rise of blockchain. Yesterday we had a whole set of meetings, and everybody was talking about blockchain, because now you've got El Salvador, Panama, Ukraine, who have all adopted Bitcoin, which is built on the blockchain.
So there are some really exciting things going on in technology and public sector. >> It's a societal shift, and I think it's the confluence of tech, user experience, data, new decentralized ways of changing society. You're in the middle of it. >> We are, and our partners are in the middle of it, and data, data, data, that's what I would say. Everybody is using data. You and I even talked about how you guys are using data. Data is really a hot topic, and we're really trying to help our partners figure out not just how to migrate the data to the cloud, but also how to use analytics and machine learning on it too. >> Well, thanks for sharing the data here in our opening segment, the insights we'll be getting out of it. Great to see you, Sandy. We've got a couple more interviews with you. Thanks for coming on. I appreciate you, and thanks for all your support. You guys are doing great. Your partners are happy. You're on a great wave. Congratulations. >> Thank you, John. Appreciate it. >> More coverage from theCUBE here at the AWS Public Sector Summit. We'll be right back. (upbeat music) >> Robert Herjavec. People obviously know you from Shark Tank

Published Date : Sep 28 2021



Sandy Carter | AWS Global Public Sector Partner Awards 2021


 

(upbeat music) >> Welcome to this special CUBE presentation of the AWS Global Public Sector Partner Awards program. I'm here with the leader of the partner program, Sandy Carter, Vice President, AWS, Amazon Web Services, @Sandy_Carter on Twitter, prolific on social and a great leader. Sandy, great to see you again. And congratulations on this great program we're having here. In fact, thanks for coming out for this keynote. >> Well, thank you, John, for having me. You guys always talk about the coolest things, so we had to be part of it. >> Well, one of the things that I've been really loving about this success of public sector, as we talked about before, is that as we start coming out of the pandemic, it's becoming very clear that the cloud has helped a lot of people, and your team has done amazing work. I just want to give you props for that and say congratulations. And what a great time to talk about the winners, because everyone's been working really hard in public sector because of the pandemic. The internet didn't break, and everyone stepped up with cloud scale and solved some problems. So take us through the award winners and talk about them. Give us an overview of what it is, the criteria, and all the specifics. >> Yeah, you got it. So we do this annually, and it's for our public sector partners overall, to really recognize the very best of the best. Now, we love all of our partners, John, as you know, but every year we like to really hone in on a couple who really leverage their skills and their ability to deliver a great customer solution. They demonstrate those Amazon leadership principles, like working backwards from the customer and having a bias for action. They've engaged with AWS in very unique ways, and as well, they've contributed to our customer success, which is so very important to us and to our customers. >> That's awesome. Hey, can we put up a slide? I know we have a slide on the winners. I want to look at them, with the tiles here.
So here's a list of some of the winners. I see nice little stars on there. Look at the gold stars. I know IronNet, CrowdStrike. That's General Keith Alexander's company. I mean, super relevant. Presidio, we've interviewed them before many times. We've got Palantir in there. And is there another one? I want to take a look at some of the other names here. >> Overall we had 21 categories. You know, we have over 1,900 public sector partners today. So you'll notice that in the awards we did a big focus on mission: things like government, education, healthcare. We spotlighted some of the brand new technologies, like containers, artificial intelligence, Amazon Connect. And we also added in awards this year for innovative use of our programs, like Think Big for Small Business and P2P as well. >> Yeah, well, great roundup there. Looking forward to hearing more about those companies. I have to ask you, because this always comes up: we're seeing more and more ecosystem discussions when we talk about the future of cloud. And obviously, you know, theCUBE will be at Mobile World Congress, back in physical form again, (indistinct) will continue to go on. The notion of ecosystem is becoming a key competitive advantage for companies and missions. So I have to ask you, why are partners so important to your public sector team? Talk about the importance of partners in the context of your mission. >> Yeah, you know, our partners are critical. We drive most of our business in public sector through partners. They have great relationships, they've got great skills, and they have, you know, that really unique ability to meet the customer needs. If I just highlighted a couple of things, even using some of our partners who won awards, the first is, you know, migrations are so critical. Andy talked at re:Invent about still 96% of applications sitting on premises. So anybody who can help us with the velocity of migrations is really critical.
And I don't know if you knew, John, but 80% of our migrations are led by partners. So for example, we gave awards to Collibra and Databricks for best migration for data, as well as Datacom for best data-led migration. And that's because they increase the velocity of migrations, which increases customer satisfaction. They also bring great subject matter expertise, in particular around that mission that you're talking about. So for instance, GDIT won Best Mission Solution for Federal, and they had just an amazing solution: a secure virtual desktop that reduced a federal agency's deployment process from months to days. And then finally, you know, our partners drive new opportunities and innovate on behalf of our customers. So we did an award this year for P2P, Partnering to Partner, which is a really big element of ecosystems, and it was won by four points and in quizon, and they were able to work together to implement a data lake and an AI/ML solution. And then, you just did the startup showcase; we have a best startup delivering innovation too, and that was EduTech (indistinct) Central America. And they won for implementing an amazing student registration and early warning system to alert on risks that may impact a student's educational achievement. So those are just some of the reasons why partners are important. I could go on and on. As you know, I'm so passionate about my partners. >> I know you're going to talk for an hour; we have to cut you off a little there. (indistinct) love your partners so much. You have to focus on this mission thing. There was a strong mission focus in the awards this year. Why are customers requiring much more of a mission focus? Is it part of the criteria? I mean, we're seeing mission being big. Why is that the case? >> Well, you know, IDC said that IT spend for a mission, or something with a purpose or line of business, was five times greater than IT.
We also recently did our CTO study, where we surveyed thousands of CTOs. And the biggest and most changing element today is really not around the technology, but around the industry: healthcare, space that we talked about earlier, or government. So those are really important. So for instance, New Reburial won Best Mission for Healthcare, and they did that because of their new smart diagnostic system. And then we had a partner win, PA Consulting, for Best Amazon Connect Solution, around a mission of providing support for those most at risk: the elderly population, those who already had pre-existing conditions, and really making sure they were doing what they called risk shielding during COVID. Really exciting, and a big, strong focus on mission. >> Yeah, and it's also, you know, we've been covering a lot on this: people want to work for a company that has purpose and that has missions. I think that's going to be part of the table stakes going forward. I've got to ask you about the secrets of success. I love asking this question, because, you know, we're starting to see the playbooks of what I call post-COVID and cloud scale 2.0, whatever you want to call it. You're starting to see this new modern era of success formulas: obviously large-scale value creation, mission. These are points we're hearing in conversations across the board. What do you see as the secrets of success for these partners? I mean, obviously, it's indirect for Amazon, I get that, but they also have their customers, and their customers' customers. That's been around for a while, but there's a new model emerging. What are the secrets of success from your standpoint? >> You know, it's so interesting, John, that you asked me this, because this is the number one question that I get from partners too. I would say the first secret is being able to work backwards from your customer, not just technology. So take one of our award winners, Cognizant.
They won for their digital tolling solution, and they worked backwards from the customer on how to modernize that. Or Pariveda, who is one of our best energy solution winners: again, they looked at some of these major capital projects that oil companies were doing, working backwards from what the customer needed. I think that's number one, working backwards from the customer. Two is having that mission expertise. So given that, you have to have technology, but you've also got to have that expertise in the area. We see that as a big secret of our public sector partners. So on education, (indistinct) won for education, Effectual won for government and not-for-profit, and Accenture won, really leveraging and showcasing their global expansion around public safety and disaster response. Very important as well. And then I would say the last secret of success is building repeatable solutions using those strong skills. So Deloitte: they have a great solution for migration, including mainframes. And then you mentioned early on CrowdStrike and IronNet; just think about the skill sets that they have there for repeatable solutions around security. So I think it's really around working backwards from the customer, having that mission expertise, and then building a repeatable solution leveraging your skill sets. >> That's a great formula for success. You mentioned IronNet, and cybersecurity. One of the things that's coming up is, in addition to having those best practices, there are also real problems to solve. Like, ransomware is now becoming a government and commercial problem, right? So (indistinct) seeing that happen a lot in DC, that's a front burner. That's a societal impact issue. That's like a cybersecurity, kind of national security defense issue, but it's also a technical one.
And also in public sector, through my interviews, I can tell you, over the past year and a half there's been a lot of creativity: new solutions, new problems, or new opportunities that are not yet identified as problems, and I'd love to get your thoughts on it. I was with Jeff Barr yesterday from AWS, who's been blogging all the news, and he is a leader in the community. He was saying that he sees, like, 5G and the edge as new opportunities where it's creative. He compared it to going to the home improvement store, where he just goes to buy one thing and then he does other things. And so there's a builder culture. And I think this is something that's coming out of your group more, because the pandemic forced these problems, and it forced new opportunities to be creative, and to build. What's your thoughts? >> Yeah, so I see that too. So if you think about builders, you know, we had a Partner Executive Council yesterday; we had 900 executives sign up from all of our partners. And we asked some survey questions like, what are you building with today? And the number one thing was artificial intelligence and machine learning. And I think that's such a new builder's tool today, John. And, you know, one of our partners who won an award for the most innovative AI/ML was Kablamo. And what they did was they used AI/ML to do a risk assessment on bushfires, or wildfires, in Australia. But I think it goes beyond that. I think it's building for that need. And this goes back to, we always talk about #techforgood. Presidio, I love this award that they won, for best nonprofit, with the Cherokee Nation, which is one of our, you know, Native American heritage nations. They were worried about their language going out, like completely out, like no one being able to speak it. And so they came to Presidio, and they asked, how could we have a virtual classroom platform for the Cherokee Nation?
And they created this game that's available on your phone, so innovative, so much of a builder's culture to capture that young generation, so they don't lose their language. So I do agree. I mean, we're seeing builders everywhere, we're seeing them use artificial intelligence, containers, security. And we're even starting with quantum, so it is pretty powerful what you can do as a public sector partner. >> I think the partner equation is just so wide open, because it's always been based on value, adding value, right? So adding value is just what they do. And by the way, you make money doing it if you do a good job of adding value. And, again, I just love riffing on this, because Dave and I talk about this on theCUBE all the time, and it comes up all the time in cloud conversations. The lock-in isn't proprietary technology anymore, it's value and scale. So you're starting to see builders thrive in that environment. So really good points. Great best practices. And I'm very bullish on the partner ecosystems in general; if people do it right, there's a lot of upside. I've got to ask you, though, going forward, because this is the big post-COVID kind of conversation. And last time we talked on theCUBE about this, you know, people want to have a growth strategy coming out of COVID. They want to have a tailwind, they want to be on the right side of history. No one wants to be on the losing end of all this. So last year and in 2021, your goals were very clear: mission, migrations, modernization. What's the focus for the partners beyond 2021? What are you guys thinking to enable them? 2021 is going to be a nice on-ramp to this post-COVID growth strategy. What's the focus beyond 2021 for you and your partners? >> Yeah, it's really interesting. We're going to actually continue to focus on those three Ms: mission, migration and modernization. But we'll bring in different elements of it.
So for example, on mission, we see a couple of new areas that are really rising to the top: smart cities, now that everybody's going back to work and (indistinct) down; operations and maintenance; and global defense, using gaming and simulation. I mean, think about that digital twin strategy and how you're doing that. For migration, one of the big ones we see emerging today is data-led migration. You know, we have been focused on applications and mainframes, but data has gravity, and so we are seeing so many partners and our customers demanding to get their data from on-premises to the cloud so that now they can make real-time business decisions. And then on modernization, you know, we talked a lot about artificial intelligence and machine learning. Containers are wicked hot right now; they provide you portability and performance. I was with a startup last night that just moved everything they're doing to ECS, our container service. And then we're also seeing, you know, crypto, quantum, blockchain, no code, low code. So the same big focus, mission, migration, modernization, but the underpinnings are going to shift a little bit beyond 2021. >> That's great stuff. And, you know, first of all, people might not know that your group, partners in Amazon Web Services public sector, has a big surface area. You're talking about government, healthcare, space. So I have to ask you, you guys announced in March the space accelerator, and you recently announced that you selected 10 companies to participate in the accelerator program. So, I mean, this is space-centric, you know, targeting everything from low-Earth-orbit satellites to exploring the surface of the Moon and Mars, which people love. And because space is cool; let's say tech and space, they kind of go together, right? So take us through, what's this all about? How's that going?
What's the selection? Give us a quick update, while you're here, on the space accelerator selection, because (indistinct) had a big blog post that went out (indistinct). >> Yeah, I would be thrilled to do that. So I don't know if you know this, but when I was young, I wanted to be an astronaut. We just helped, through (indistinct), one of our partners, reach Mars. So Clint, who is a retired general, and myself got together, and we decided we needed to do something to help startups accelerate in their space mission. And so we decided to announce a competition for 10 startups to get extra help, both from us as well as a partner, Sarafem, on space. And so we announced it; everybody expected the companies to come from the US, John; they came from 44 different countries. We had hundreds of startups enter, and we took them through this six-week classroom education. So we had our General Clint, you know, helping and teaching them in space, which he's done his whole life; we provided them with AWS credits; they had mentoring by our partner, Sarafem. And we just down-selected to 10 startups; that was what Werner's blog post was. If you haven't read it, you should look at some of the amazing things that they're going to do, from, you know, farming asteroids to, you know, helping with some of the, you know, using small vehicles to connect to larger vehicles when we all get to space. It's very exciting. Very exciting, indeed. >> You have so much good content area and partners to explore; it's a very wide vertical, or sector, that you're managing. Is there any pattern? Well, I want to get your thoughts on post-COVID success again. Are there any patterns that you're seeing in terms of the partner ecosystem? You know, whether it's business model, or team makeup, or mindset, or just how they're organizing, that's been successful? Do you see a trend? Is there a certain thing? I've got the working-backwards thing, I get that.
But, like, are there any other observations? Because I think people really want to know, am I doing it right? Am I being a good manager when, you know, people are going to be working remotely more? We're seeing more of that. And there's going to be now virtual events, hybrid events, physical events; the world's coming back to normal, but it's never going to be the same. Do you see any patterns? >> Yeah, you know, we're seeing a lot of small partners that are making an entrance and solving some really difficult problems. And because they're so focused on a niche, it's really having an impact. So I really believe that's going to be one of the things that we see, a focus on individual creators and companies who are really tightly aligned and not trying to do everything, if you will. I think that's one of the big trends. I think the second, we talked about it a little bit, John, is you're going to see a lot of focus on mission, because of that purpose. You know, we've talked about #techforgood; with everything going on in the world, as people have been working from home, they've been reevaluating who they are and what they stand for, and people want to work for a company that cares about people. I just posted my human footer on LinkedIn, and I got over a million hits on LinkedIn, just by posting this human footer, saying, you know what, reply to me at a time that's convenient for you, not necessarily for me. So I think we're going to see a lot of this purpose-driven mission that's going to come out as well. >> Yeah, and I also noticed that; I was on LinkedIn, and I got a similar reaction when I started trying to create more of a community model, not so much having people attend our events because we need butts in the seats. It was much more personal; like, we wanted you to join us, not attend and be like a number. You know, people want to be part of something. This seems to be the new mission. >> Yeah, I completely agree with that.
I think that, you know, people do want to be part of something, and they want to be part of the meaning of something too, right? Not just be part of something overall, but to have an impact themselves, personally and individually, not just as a company. And I think, you know, one of the other trends that we saw coming up too was the focus on technology. And I think low code, no code is giving a lot of people entry into doing things they never thought they could do. So I do think that technology, artificial intelligence, containers, low code, no code, blockchain, those are going to enable us to do even greater mission-based solutions. >> Low code, no code reduces the friction to create more value; again, back to the value proposition. Adding value is the key to success, and your partners are doing it. And of course, being part of something great, like the Global Public Sector Partner Awards list, is a good one. And that's what we're talking about here. Sandy, great to see you. Thank you for coming on and sharing your insights and an update, and talking more about the 2021 Global Public Sector Partner Awards. Thanks for coming on. >> Thank you, John, always a pleasure. >> Okay, the global leaders here presented on theCUBE; again, award winners doing great work in mission, modernization, again, adding value. That's what it's all about. That's the new competitive advantage. This is theCUBE. I'm John Furrier, your host, thanks for watching. (upbeat music)

Published Date : Jun 17 2021


Abhinav Joshi & Tushar Katarki, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> Announcer: From around the globe, it's theCUBE with coverage of KubeCon + CloudNativeCon Europe 2020 Virtual brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem partners. >> Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of KubeCon + CloudNativeCon Europe 2020, the virtual event. Of course, when we talk about Cloud Native we talk about Kubernetes; there's a lot that's happening to modernize the infrastructure, but a very important thing that we're going to talk about today is also what's happening up the stack, what sits on top of it, and some of the new use cases and applications that are enabled by all of this modern environment, and for that we're going to talk about artificial intelligence and machine learning, or AI and ML as we tend to say in the industry. So, happy to welcome to the program two first-time guests joining us from Red Hat: we have Abhinav Joshi and Tushar Katarki; they are both senior managers, part of the OpenShift group. Abhinav is in product marketing and Tushar is in product management. Abhinav and Tushar, thank you so much for joining us. >> Thanks a lot, Stu, we're glad to be here. >> Thanks Stu, and glad to be here at KubeCon. >> All right, so Abhinav, I mentioned in the intro here, modernization of the infrastructure is awesome, but really it's an enabler. We know... I'm an infrastructure person; the whole reason we have infrastructure is to be able to drive those applications, interact with my data and the like, and of course, AI and ML are exciting, a lot going on there, but they can also be challenging. So, Abhinav, if I could start with you, bring us inside your customers that you're talking to. What are the challenges, the opportunities? What are they seeing in this space? Maybe, what's been holding them back from really unlocking the value that is expected? >> Yup, that's a very good question to kick off the conversation.
So what we are seeing is, organizations typically face a lot of challenges when they're trying to build an AI/ML environment, right? And the first one is a talent shortage. There is a limited amount of AI/ML expertise in the market, and especially the data scientists that are responsible for building out the machine learning and the deep learning models. So yeah, it's hard to find them and to be able to retain them, and also other talent like data engineers or app dev and ops folks as well, and the lack of talent can actually stall the project. And the second key challenge that we see is the lack of readily usable data. So the businesses collect a lot of data, but they must find the right data and make it ready for the data scientists to be able to build out, to be able to test and train the machine learning models. If you don't have the right kind of data, the predictions that your model is going to make in the real world are only going to be so good. So that becomes a challenge as well, to be able to find and be able to wrangle the right kind of data. And the third key challenge that we see is the lack of rapid availability of the compute infrastructure, the data and machine learning, and the app dev tools for the various personas, like a data scientist or data engineer, the software developers and so on; that can also slow down the project, right? Because if all your teams are waiting on the infrastructure and the tooling of their choice to be provisioned on a recurring basis, and they don't get it in a timely manner, it can stall the projects. And then the next one is the lack of collaboration.
So you have all these kinds of teams that are involved in the AI project, and they have to collaborate with each other, because the work one team does has a dependency on a different team. Like, say, for example, the data scientists are responsible for building the machine learning models, and then what they have to do is work with the app dev teams to make sure the models get integrated as part of the app dev processes and ultimately rolled out into production. So if all these teams are operating in, say, silos, and there is a lack of collaboration between the teams, this can stall the projects as well. And finally, what we see is the data scientists typically start the machine learning modeling on their individual PCs or laptops, and they don't focus on the operational aspects of the solution. So what this means is, when the IT teams have to roll all this out into a production kind of deployment, they get challenged to take all the work that has been done by the individuals and then be able to make sense out of it, be able to make sure that it can be seamlessly brought up in a production environment in a consistent way, be it on-premises, be it in the cloud, or be it, say, at the edge. So these are some of the key challenges that we see the organizations are facing as they try to take the AI projects from pilot to production. >> Well, some of those things seem like a repetition of what we've had in the past. Obviously silos have been the bane of IT moving forward, and of course, for many years we've been talking about that gap between developers and what's happening on the operations side. So Tushar, help us connect the dots: containers, Kubernetes, the whole DevOps movement. How is this setting us up to actually be successful for solutions like AI and ML?
>> Sure, Stu. In fact, you said it right. In the world of software, in the world of microservices, in the world of app modernization, in the world of DevOps, in the past 10, 15 years we have seen this evolution, revolution happen with containers and Kubernetes driving more DevOps behavior, driving more agile behavior, so this, in fact, is what we are trying to say here can ease the path for AI/ML also. So the same containers, Kubernetes, DevOps and OpenShift for software development is directly applicable for AI projects, to make them more agile, to get them into production, to make them more valuable to the organization so that they can realize the full potential of AI. We already touched upon a few personas, so it's useful to think about who the users are, who the personas are. Abhinav talked about data scientists; these are the people who obviously do the machine learning itself, do the modeling. Then there are data engineers who do the plumbing, who provide the essential data. Data is so essential to machine learning and deep learning, and so there are data engineers; then there are app developers who in some ways will then use the output of what the data scientists have produced in terms of models, and then incorporate them into services. And of course, none of these things are purely cast in stone; there's a lot of overlap. You could find that data scientists are app developers as well, and you'll see some app developers being data scientists, or data engineers. So it's a continuum rather than strict boundaries. But regardless, what all of these personas, these groups of experts, need is self-service to their preferred tools and compute and storage resources to be productive, and then let's not forget the IT, engineering and operations teams that need to make all this happen in an easy, reliable, available manner and something that is really safe and secure.
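That handoff between personas, where a data scientist produces a model artifact that an app developer wraps as a service, can be sketched in miniature. This is an illustrative stand-in only (a least-squares line fit in place of a real model; every name here is made up), not anything specific to OpenShift or Kubeflow:

```python
import pickle

def train(xs, ys):
    """Data-scientist step: fit a least-squares line (stand-in for a real model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return {"slope": slope, "intercept": my - slope * mx}

def package(model):
    """Handoff point: serialize the model into an artifact another team can consume."""
    return pickle.dumps(model)

def serve(artifact):
    """App-developer step: load the artifact and expose a prediction function."""
    model = pickle.loads(artifact)
    return lambda x: model["slope"] * x + model["intercept"]

model = train([1, 2, 3, 4], [2, 4, 6, 8])  # exactly y = 2x
predict = serve(package(model))
print(predict(5))  # -> 10.0
```

In an MLOps pipeline the same three steps still exist, but each is automated and versioned: training runs as a job, the artifact lands in a model registry, and serving is a containerized service.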
So containers help you; they help you quickly and easily deploy a broad set of machine learning tools and data tools across the cloud, the hybrid cloud, from data center to public cloud to the edge, in a very consistent way. Teams can therefore iteratively modify and change shared container images and machine learning models with (indistinct) and track changes. And this could be applicable to both containers as well as to the data, by the way, and be transparent, and transparency helps in collaboration, but it could also help with regulatory reasons later on in the process. And then with containers, because of the inherent process isolation, resource control and protection from threats, they can also be very secure. Now, Kubernetes takes it to the next level. First of all, it forms a cluster of all your compute and data resources, and it helps you to run your containerized tools, and whatever you develop on them, in a consistent way, with access to these shared and centralized compute, storage and networking resources from the data center, the edge or the public cloud. It provides things like resource management, workload scheduling, multi-tenancy controls so that you can be proper neighbors, if you will, and quota enforcement, right? Now that's Kubernetes. Now if you want to up-level it further, if you want to enhance what Kubernetes offers, then you go into, how do you write applications? How do you actually make those models into services? And how do you lifecycle them? And that's where the power of Helm and, furthermore, Kubernetes operators really comes into the picture, and while Helm helps in installing some of this, for a complete life cycle experience a Kubernetes operator is the way to go, and they simplify the acceleration and deployment and life cycle management, from end to end, of your entire AI/ML tool chain.
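The multi-tenancy and quota enforcement mentioned above are plain Kubernetes objects. A minimal sketch of a quota for a data-science team's namespace might look like the following; the namespace, name and limits are all hypothetical, and the GPU line assumes the NVIDIA device plugin's extended resource name:

```yaml
# Illustrative only: caps the aggregate compute a team's namespace can request,
# so tenants sharing a cluster stay "proper neighbors".
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ds-team-quota        # hypothetical name
  namespace: data-science    # hypothetical team namespace
spec:
  hard:
    requests.cpu: "40"
    requests.memory: 256Gi
    requests.nvidia.com/gpu: "4"   # extended-resource quota for GPUs
    pods: "50"
```

Once applied, any pod that would push the namespace past these totals is rejected at admission time.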
So all in all, organizations, you'll see, need to develop and deploy models rapidly, just like applications; that's how they get value out of it quickly. There is a lack of collaboration across teams, as Abhinav pointed out earlier; as you noticed, that happened in the world of software also. So we're talking about how you bring those best practices here to AI/ML: DevOps approaches for machine learning operations, or what many analysts and others have started calling MLOps. So how do you bring DevOps to machine learning, foster better collaboration between teams, application developers and IT operations, and create this feedback loop so that the time to production, and the ability to take more machine learning and ML-powered applications into production, increases significantly. So that's kind of where I wanted to shine the light on what you were referring to earlier, Stu. >> All right, Abhinav, of course one of the good things about OpenShift is you have quite a lot of customers that have deployed the solution over the years. Bring us inside some of your customers: what are they doing for AI/ML, and help us understand really what differentiates OpenShift in the marketplace for this solution set. >> Yeah, absolutely, that's a very good question as well, and we're seeing a lot of traction in terms of all kinds of industries, right? Be it financial services, healthcare, automotive, insurance, oil and gas, manufacturing and so on, for a wide variety of use cases. And what we are seeing is, at the end of the day, all these deployments are focused on helping improve the customer experience, being able to automate the business processes, and then being able to help them increase revenue, serve their customers better, and also be able to save costs.
If you go to openshift.com/ai-ml, it's got a lot of customer stories in there, but today I will touch on three of the customers we have, in terms of the different industries. The first one is the Royal Bank of Canada. So they are a top global financial institution based out of Canada, and they have more than 17 million clients globally. They recently announced that they built out an AI-powered private cloud platform that was based on OpenShift as well as the NVIDIA DGX AI compute system, and this whole solution is actually helping them transform the customer banking experience by being able to deliver AI-powered intelligent apps, and also at the same time being able to improve the operational efficiency of their organization. And now with this kind of a solution, what they're able to do is run thousands of simulations and analyze millions of data points in a fraction of the time as compared to the solution that they had before. Yeah, so a lot of great work going on there. Now the next one is HCA Healthcare. So HCA is one of the leading healthcare providers in the country, and they're based out of Nashville, Tennessee. They have more than 184 hospitals as well as more than 2,000 sites of care in the U.S. as well as in the UK. So what they did was they developed a very innovative, machine-learning-powered data platform on top of our OpenShift to help save lives. The first use case was to help with the early detection of sepsis, which is a life-threatening condition, and then more recently they've been able to use OpenShift in the same kind of stack to be able to roll out new applications that are powered by machine learning and deep learning, let's say, to help them fight COVID-19. And recently they did a webinar as well that had all the details on the challenges they had, like how did they go about it, the people, process and technology, and then what the outcomes are.
And we are proud to be a partner in the solution to help with such a noble cause. And the third example I want to share here is the BMW Group and our partner DXC Technology. What they've done is they've actually developed a very high-performing, data-driven development platform based on OpenShift to be able to analyze the massive amount of data from the test fleet, at the scale and speed they need, to help speed up the autonomous driving initiatives. And what they've also done is they've redesigned the ConnectedDrive capability that they have on top of OpenShift; that's actually helping them provide various use cases to help improve the customer experience, where the customers are able to leverage a lot of different value-add services directly from within their own cars. And then, last year at the Red Hat Summit they had a keynote as well, and this year at Summit, they were one of the Innovation Award winners. And we have a lot more stories, but these are the three that I thought are actually compelling that I should talk about here on theCUBE. >> Yeah, Abhinav, just a quick follow-up for you. One of the things of course we're looking at in 2020 is, with the COVID-19 pandemic and people working from home, how has that impacted projects? I have to think that AI and ML are among those projects that take a little bit longer to deploy. Is it something that you see accelerating? Are they putting things on pause, or are new projects kicking off? Anything you can share from customers you're hearing right now as to the impact that they're seeing this year? >> Yeah, what we are seeing is that the customers are now even more keen to be able to roll out the digital (indistinct), and we see a lot of customers now on an accelerated timeline to be able to, say, complete the AI/ML projects. So yeah, it's picking up a lot of momentum, and we talk to a lot of analysts as well, and they are reporting the same thing.
But the interest is actually ramping up on AI/ML projects across the customer base. So yeah, it's the right time to be looking at the innovative services that can help improve the customer experience in the new virtual world that we live in now with COVID-19. >> All right, Tushar, you mentioned that there's a few projects involved, and of course we know at this conference there's a very large ecosystem. Red Hat is a strong contributor to many, many open source projects. Give us a little bit of a view as to, in the AI/ML space, who's involved, which pieces are important, and how Red Hat looks at this entire ecosystem? >> Thank you, Stu. So as you know, technology partnerships and the power of open is really what is driving the technology world these days, in many ways, and particularly in the AI ecosystem. And that is mainly because machine learning has been bootstrapped in the past 10 years or so, and a lot of that emerging technology, built to take advantage of the emerging data as well as compute power, has been built on the Linux ecosystem, with openness and popular languages like Python, et cetera. And of course there's tons of technology based in Java, but the point really here is that the ecosystem plays a big role, and open plays a big role, and that's kind of Red Hat's cup of tea, if you will. And Red Hat really has played a leadership role in the open ecosystem. So if we take your question and put it into two parts, what we are doing in the community, and then what we are doing in terms of partnerships themselves, commercial partnerships, technology partnerships, we'll take it one step at a time. In terms of the community itself, over the past three years we worked with other vendors and users, including Google and NVIDIA and H2O and Seldon, et cetera, both startups and big companies, to develop this Kubeflow ecosystem.
Kubeflow is an upstream community that is focused on developing MLOps, as we talked about earlier, end-to-end machine learning on top of Kubernetes. So Kubeflow hit 1.0 a few months ago, and now it's actually at 1.1; you'll see that at KubeCon here. So that's the Kubeflow community, and in addition to that we are augmenting it with the Open Data Hub community, which is something that extends the capabilities of the Kubeflow community to also add some of the data pipelining stuff and some of the data stuff that I talked about, and forms a reference architecture on how to run some of this on top of OpenShift. So the Open Data Hub community also has a great way of including partners from a technology partnership perspective, and then we tie that with something that I mentioned earlier, which is the idea of Kubernetes operators. Now, if you take a step back, as I mentioned earlier, Kubernetes operators help manage the life cycle of the entire application or containerized application, including not only the configuration on day one but also day-two activities like updates, backups, restores, et cetera; whatever the application needs for proper functioning, that is what an "operator" makes sure of. So anyways, the Kubernetes operators ecosystem is also flourishing, and we have surfaced that with OperatorHub.io, which is a community marketplace, if you will; I don't call it a marketplace, it's a community hub, because it's just comprised of community operators. So the Open Data Hub actually can take community operators and can show you how to run them on top of OpenShift and manage the life cycle. Now that's the reference architecture. Now, the other aspect of it really is, as I mentioned earlier, the commercial aspect of it. It is, from a customer point of view, how do I get certified, supported software? And to that extent, what we have is, at the top of the...
from a user experience point of view, is certified operators and certified applications from the AI/ML ISV community in the Red Hat Marketplace. And the Red Hat Marketplace is where it becomes easy for end users to deploy these ISVs and manage the complete life cycle, as I said. Some examples of these kinds of ISVs include startups like H2O, although H2O is kind of well known in certain sectors, PerceptiLabs, Cnvrg, Seldon, Starburst, et cetera, and then on the other side we do have the big giants also in this, which includes partnerships with NVIDIA, Cloudera, et cetera that we have announced, including also SAS, I've got to mention. So anyways, these create that rich ecosystem for data scientists to take advantage of. At Red Hat Summit back in April, we, along with Cloudera, SAS, and Anaconda, showcased a live demo that shows all these things working together on top of OpenShift with this operator idea that I talked about. So I welcome people to go and take a look at openshift.com/ai-ml, which Abhinav already referenced; it should have a link to that, and a simple Google search will find it if you need some of that. But anyways, the other part of it is really our work with the hardware OEMs, right? And so obviously NVIDIA GPUs, that hardware acceleration is really important in this world, but we are also working with other OEM partners like HP and Dell to produce this accelerated AI platform, turnkey solutions to create this open AI platform for the "private cloud" or the data center. The other thing obviously is IBM; IBM Cloud Pak for Data is based on OpenShift, has been around for some time, and is seeing very good traction. If you think about a very turnkey solution, IBM Cloud Pak is definitely well ahead in that. And then finally, Red Hat is about driving innovation in the open-source community.
So, as I said earlier, we are doing the Open Data Hub, which is that reference architecture that showcases a combination of upstream open source projects and all these ISV ecosystems coming together. So I welcome you to take a look at that at opendatahub.io. So I think that would be kind of the sum total of how we are not only doing open and community building but also doing certifications and providing to our customers that assurance that they can run these tools in production with the help of a rich certified ecosystem. >> And the customer is always key to us, so the other thing is that the goal here is to provide our customers with a choice, right? They can go with open source or they can go with a commercial solution as well. So we want to make sure that they get the best-in-class cloud experience on top of OpenShift and our broader portfolio as well. >> All right, great, great note to end on. Abhinav, thank you so much, and Tushar, great to see the maturation in this space, such an important use case. Really appreciate you sharing this with theCUBE and the KubeCon community. >> Thank you, Stu. >> Thank you, Stu. >> Okay, thank you, thanks a lot, and have a great rest of the show. Thanks everyone, stay safe. >> Thank you, and stay with us for a lot more coverage from KubeCon + CloudNativeCon Europe 2020, the virtual edition. I'm Stu Miniman, and thank you as always for watching theCUBE. (soft upbeat music plays)
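The day-one/day-two operator life cycle Tushar describes boils down to a reconcile loop: an operator compares the desired state declared in a custom resource with the state it observes in the cluster, and acts to close the gap. The sketch below is purely schematic, not Red Hat's or any real operator's code, and all names in it are hypothetical; it only illustrates the reconcile idea in plain Python.

```python
# Schematic illustration of the Kubernetes operator reconcile-loop idea:
# given desired state (the custom resource's spec) and observed state,
# decide which life-cycle actions to take. Hypothetical names throughout.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions an operator would take to converge observed toward desired."""
    actions = []
    if not observed.get("deployed"):
        # Day-one: nothing exists yet, so do the initial install/configuration.
        actions.append("deploy")
    elif observed.get("version") != desired.get("version"):
        # Day-two: the app is running but out of date, so upgrade it.
        actions.append("update to " + desired["version"])
    if desired.get("backup") and not observed.get("last_backup"):
        # Day-two: backups are requested but none has been taken.
        actions.append("backup")
    return actions

# Day-one: fresh cluster, nothing deployed.
print(reconcile({"version": "1.1"}, {}))  # ['deploy']
# Day-two: running at 1.0, desired 1.1 with backups enabled.
print(reconcile({"version": "1.1", "backup": True},
                {"deployed": True, "version": "1.0"}))  # ['update to 1.1', 'backup']
```

A real operator runs this comparison continuously against the Kubernetes API rather than once against dictionaries, which is what lets it handle updates, backups, and restores without manual intervention.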

Published Date : Aug 18 2020

