
Brian Stevens, Neural Magic | Cube Conversation


 

>> John: Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We've got a great conversation on making machine learning easier and more affordable in an era where everybody wants more machine learning and AI. We're featuring Neural Magic, whose CEO, Brian Stevens, is also a Cube alumni. Brian, great to see you. Thanks for coming on this cube conversation to talk about machine learning. >> Brian: Hey John, happy to be here again. >> John: What a buzz going on right now. Machine learning, one of the hottest topics, AI front and center, kind of going mainstream. We're seeing the success of the kind of NextGen capabilities in the enterprise and in apps. It's a really exciting time, so perfect timing. Great to have this conversation. Let's start with taking a minute to explain what you guys are doing over there at Neural Magic. I know there's some history there, neural networks, MIT. But the convergence of what's going on, this big wave hitting, it's an exciting time for you guys. Take a minute to explain the company and your mission. >> Brian: Sure. So, as you said, the company's Neural Magic, and it spun out of MIT four plus years ago, along with some people and some intellectual property. And you summarized it better than I can, 'cause you said we're just trying to make AI that much easier. But another level of specificity around it is: in the world you have a lot of data scientists really focusing on making AI work for whatever their use case is. And then the next phase of that is they're looking at optimizing the models that they built. And then it's not good enough just to work on models, you've got to put 'em into production. So what we do is we make it easier to optimize the models that have been developed and trained, and then we try to make it super simple when it comes time to deploying those in production and managing them. >> John: You know, we've seen this movie before with the cloud. You start to see abstractions come out. Data science, we saw, was like the secret art of being a data scientist; now there's democratization of data. You're kind of seeing a similar wave with machine learning models, foundational models, some call it, and developers are getting involved. Model complexity's still there, but it's getting easier. There's almost like a democratization happening. You've got complexity, you've got deployment challenges, cost, you've got developers involved. So it's like, how do you grow it? How do you get more horsepower? And then how do you make developers productive, right? So this seems to be the thread. So where do you see this going? Because there's going to be a massive demand for "I want to do more with my machine learning." But what's the data source? What's the formatting? There's kind of a stack developing. What are you guys doing to address this? Can you take us through and demystify this wave that's hitting that everyone's seeing? >> Brian: Yeah. Like you said, the democratization of all of it. And that brings me all the way back to the roots of open source, right? When you think about it, back in the day you had to build your own tech stack yourself. A lot of people probably don't remember that. And then you got to where you were always starting on a body of code or a module that was out there with open source.
And I think that's what I equate to where AI has gotten to, with what you were talking about: the foundational models that didn't really exist years ago. So you really were putting the layers of your models together and the formulas, and it was a lot of heavy lifting. And so there was so much time spent on development, with far too few success cases, you know, getting into production to solve a business or technical need. But what's happening is, as these models are becoming foundational, it means people don't have to start from scratch. The avant-garde now is to start with an existing model that almost does what you want, and then apply your data set to it. So it's really the industry moving forward. And the best thing about it is open source plays a new dimension, but this time in the realm of AI. And so to us, though, I've spent a career focusing on not just the technical side, but the consumption of the technology, and how it's still way too hard for somebody to actually operationalize the technology that all those vendors throw at them. So I've always been empathetic to the user around, you know, what their job is once you give them great technology. And it's still too difficult, even with the foundational models, because what happens is there's really this impedance mismatch between the development of the model and then where the model has to live and run and be deployed, and the life cycle of the model, if you will. And so what we've done in our research is we've developed techniques to introduce what's known as sparsity into a machine learning model that's already been developed and trained. And what that sparsity does is unlock things by making that model so much smaller. In many cases we can make a model 90 to 95% smaller, even smaller than that in research. And we do that in a way that preserves all the accuracy of the foundational model, as you talked about. So now all of a sudden you get this much smaller model that's just as accurate. And then the even more exciting part about it is we developed a software-based engine called DeepSparse. And what that inference runtime does is take that now-sparsified model and run it, but because you sparsified it, it only needs a fraction of the compute that it would've needed otherwise. So what we've done is make these models much faster, much smaller, and then, by pairing that with an inference runtime, you now can actually deploy that model anywhere you want on commodity hardware, right? So x86 in the cloud, x86 in the data center, Arm at the edge. It's like this massive unlock that happens, because you get the state-of-the-art models, but you get 'em on the IT assets and the commodity infrastructure that is where all the applications are running today. >> John: I want to get into the inference piece and the DeepSparse engine you mentioned, but I first have to ask, you mentioned open source. Dave and I, with some fellow Cube alumni, were having a chat about, you know, the iPhone and Android moment, where you've got proprietary versus open source. You've got a similar thing happening with some of these machine learning models, where there's a lot of proprietary things happening and the open source movement is growing. So is there a balance there? Are they all trying to do the same thing?
Is it more like a chip, you know, silicon's involved, all kinds of things going on that are really fascinating from a science standpoint. What's your reaction to that? >> Brian: I think it's like anything. The way we talk about AI, you'd think it had been around for decades, but the reality is, with some of the deep learning models, when we first started taking models that the Brain team was working on at Google and building APIs around them on Google Cloud, the first cloud to even have AI services, that was 2015, 2016. So when you think about it, it's really been, what, six years since this thing even got liftoff. So I think with that, everybody's throwing everything at it. You know, there's tons of funded hardware thrown at specialty silicon for training or inference, new companies. There's legacy companies that are getting into AI now, whether it's, you know, a CPU company that's now building specialized ASICs for training. There's new tech stacks, proprietary software, and a ton of as-a-service offerings. So what was nascent 8 years ago really is the wild, wild west out there. So there's a little bit of everything right now, and I think that makes sense, because the early part of any industry becomes really specialized. And that's, you know, showing my age: in the early part of the two thousands at Red Hat, people weren't running x86 in the enterprise back then. They thought it was a toy, and they certainly weren't running open source. And it made sense that they weren't, because it didn't deliver what they needed at that time. They needed specialty stacks, they needed expensive hardware that did what an Oracle database needed to do, they needed proprietary software. But what happens is that commoditizes, through both hardware and through open source, and the same thing's really just starting with AI. >> John: Yeah. And I think that's a great point to call out, because in any industry timing's everything, right? I mean, I remember back in the late 80s and 90s, AI stuff was going on and there just wasn't enough horsepower, there wasn't enough tech. >> Brian: Yep. >> John: You mentioned some of the processing. So AI is this industry that has all these experts who have been scratching that itch for decades. And now, with cloud and custom silicon, the tech fundamentals at the lower end of the stack, if you will, on the performance side, are significantly more performant. It's there, you've got more capabilities. >> Brian: Yeah. >> John: Now you're kicking into more software, faster software. So it just seems like we're at a tipping point where finally it's here, that AI or machine learning moment, and now data is involved. So this is where I see organizations really jumping in with the CEO mandate: hey team, make ML work for us, go figure it out, it's got to be an advantage for us. >> Brian: Yeah. >> John: So now they go, okay boss, we will. So what do they do? What steps does an enterprise take to get machine learning into their organization? 'Cause, you know, it's coming down from the boards. How does this work for them? >> Brian: Yeah. What we're seeing is, it's like anything: whether that was open source adoption or whether that was cloud adoption, it usually starts with one person.
And increasingly it is the CEO, who realizes they're getting further behind the competition because they're not leaning in faster. But typically it really comes down to a really strong practitioner inside the organization, right? One that realizes that the number one goal isn't doing more and just training more models and necessarily being proprietary about it. It's really around understanding the art of the possible: what deep learning can do today and what business outcomes you can deliver if you can employ it. And then there are well-proven paths through that. It's just that, because of where it's been, it's not that industrialized today. It's very much ML project by ML project, very snowflakey, right? And that was kind of the early days of open source as well. And so we're just starting to get to the point where it's getting easier, it's getting more industrialized, there's less steps, there's less burden on developers, there's less burden on the deployment side. And we're trying to bring that whole last mile by saying, you know what? Deploying deep learning and AI models should be as easy as it is to deploy your application, right? You shouldn't have to take an extra step to deploy an AI model. It shouldn't require new hardware, it shouldn't require a new process or a new DevOps model. It should be as simple as what you're already doing. >> John: What is the best practice for companies to effectively bring an acceptable level of machine learning and performance into their organizations? >> Brian: Yeah, I think the number one start, like what you hinted at before, is they have to know the use case. In most cases, you're going to find, across every industry, that that problem's been tackled by some company, right? And the best practices around fine-tuning already exist. So it's fine tuning that existing model, that foundational model, on your unique dataset. You know, if you are in medical instruments, it's not good enough to identify that it's a medical instrument in the picture; you've got to know what type of medical instrument. So there's always a fine tuning step. And so we've created open source tools that make it easy for you to do two things at once. You can fine tune that existing foundational model, whether that's in the language space or the vision space, on your dataset. And at the same time you get an optimized model that comes out the other end. So you get both things. We're freeing you from worrying about the complexity of that transfer learning, if you will. And we're freeing you from worrying about, well, where am I going to deploy the model? Where does it need to be? Does it need to be on a device, at the edge, in a data center, at a cloud edge? What kind of hardware is it? Is there enough hardware there? We're liberating you from all of that. Because what you can count on is there'll always be commodity capability, commodity CPUs where you want to deploy, in abundance, 'cause that's where your application is. And so all of a sudden we're just freeing you of that whole step.
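To make that fine-tuning step concrete, here's a rough sketch of what "start with a foundational model, apply your dataset" looks like in plain PyTorch for the vision case Brian mentions. This is generic transfer learning, not Neural Magic's own tooling, and the dataset path and training-loop details are hypothetical placeholders:

```python
# Transfer learning sketch: adapt a pretrained vision model to a custom
# dataset (e.g., types of medical instruments). Generic PyTorch/torchvision;
# the data directory below is a placeholder with one subfolder per class.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

data = datasets.ImageFolder(
    "data/instruments/train",  # hypothetical path
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():  # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:  # one epoch shown; repeat as needed
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

The language-space version follows the same pattern, swapping the pretrained backbone and the data loading; the tooling Brian describes folds the model optimization into this same training pass.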
>> John: Okay. Let's get into DeepSparse, because you mentioned that earlier. What inspired the creation of DeepSparse, and how does it differ from any other solutions in the market that are out there? >> Brian: Sure. So where is it unique? It starts with two things. One is, what the industry's pretty good at on the optimization side is this thing called quantization, which turns big numbers into small numbers, lower precision: so a 32 bit representation of an AI weight into 8 bits. And they're good at cutting out layers, which also takes away accuracy. What we've figured out is how to take those industry techniques that are best practice, but combine them with unstructured sparsity. So reducing that model by 90 to 95% in size is great, because it's made it smaller. But we've paired that with the DeepSparse engine, which, when you deploy it, looks at that model and says: because it's so much smaller, I no longer have to run the part of the model that's been essentially sparsified away. So what that's meant is that you no longer need a supercomputer to run models, because there's not nearly as much math and processing as there was before the model was optimized. So now every CPU platform out there has an enormous amount of compute, because we've sparsified the rest of it away. You can pick your laptop and you have enough compute to run state-of-the-art models. And you need a software engine to do that, 'cause it ignores the parts of the model it doesn't need to run, which is what specialized hardware can't do. The second part is it's then turned into a memory efficiency problem. So it's really around just getting the model loaded into the cache of the computer and keeping it there, never having to go back out to memory. So our techniques are both: we reduce the model size, then we only run the part of the model that matters, and then we keep it all in cache. And what that does is get us to these low, low latencies, faster, and we're able to increase the CPU processing by an order of magnitude.
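The unstructured sparsity Brian describes can be illustrated with PyTorch's built-in pruning utilities. This is a toy sketch of the idea, zeroing roughly 90% of a layer's weights by magnitude, and says nothing about Neural Magic's actual recipes, which combine sparsity with quantization and accuracy-preserving retraining:

```python
# Toy illustration of unstructured (weight-level) sparsity: zero out the
# 90% of weights with the smallest magnitudes.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(1024, 1024)
prune.l1_unstructured(layer, name="weight", amount=0.9)  # mask 90% of weights
prune.remove(layer, "weight")  # bake the mask into the weight tensor

total = layer.weight.numel()
zeros = int((layer.weight == 0).sum())
print(f"sparsity: {zeros / total:.0%} of {total} weights are zero")
```

By itself this only writes zeros; a dense runtime still multiplies by them. The speedup Brian describes comes from pairing the sparse model with an engine that skips the zeroed computation and keeps the now-smaller working set resident in CPU cache.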
>> John: Yeah, that low latency is key. And you've got developers, you know, coding super fast. We'll get to the developer angle in a second. I want to just follow up on the motivation behind DeepSparse, because, you know, as we were talking earlier before we came on camera about the old days, I mean, not too long ago, virtualization and VMware abstracted the OS away from the hardware, right? And server virtualization changed the game. >> Brian: Yeah. >> John: And that basically invented cloud computing as we know it today. So we see that abstraction. >> Brian: Yeah. >> John: There seems to be a motivation behind abstracting the machine learning models away from the hardware. And that seems to be bringing advantages to the AI growth. Can you elaborate: is that true? What's your comment? >> Brian: It's true. I think it's true for us. I don't think the industry's there yet, honestly, 'cause I think the industry is still of the mindset that if it took these expensive GPUs to train my model, then I want to run my model on those same expensive GPUs. Because there's often no separation between the people that are developing AI and the people that have to manage and deploy it where you need it. So the reality is, that's everything that we're after. Do we decrease the cost? Yes. Do we make the models smaller? Yes. Do we make them faster? Also yes. But I think the most amazing power is that we've turned AI into a Docker-based microservice. And so, who in the industry wants to deploy their apps the old way, on an OS without virtualization, without Docker, without Kubernetes, without microservices, without service mesh, without serverless? You want all those tools for your apps. By converting AI models so they can be run inside a Docker container, with no apologies around latency and performance 'cause it's faster, you get the best of that whole world you just talked about, which is what we're calling software-delivered AI. So now the AI lives in the same world: organizations that have gone through that digital cloud transformation with their app infrastructure, AI fits into that world. >> John: And this is where the abstraction concepts matter. When you have these inflection points, the convergence of compute, data, and the machine learning that powers AI, it really becomes a developer opportunity. Because now applications and businesses, when they actually go through the digital transformation, their businesses are completely transformed. There is no IT. Developers are the application. They are the company, right? So AI will be part of whatever business or app will be out there. So there is an application developer angle here. Brian, can you explain >> Brian: Oh, completely. >> John: how they're going to use this? Because you mentioned Docker container microservices. I mean, this really is an insane flipping of the script for developers. >> Brian: Yeah. >> John: So what's that look like? >> Brian: Well, it's because AI's kind of, I mean, again, it's come so fast. So you figure there's my app team and here's my AI team, right? And they're in different places, and the AI team is dragging in specialized infrastructure in support of that as well. And that's not how app developers think. They've run on fungible infrastructure that's been abstracted and virtualized forever, right? And so what we've done is, in addition to fitting into that world that they like, we've also made it simple for them: they don't have to be a machine learning engineer to be able to experiment with these foundational models and transfer learn 'em. We've done that. So they can do that in a couple of commands, and it has a simple API that they can either link to their application directly as a library, to make inference calls, or they can stand it up as a standalone, you know, scale-up, scale-out inference server. They get two choices. But it really fits into the world of the modern developer, whether they're just using Python or C or otherwise; we made it just simple. So as opposed to going and learning something else, they kind of don't have to. In a way, though, it's almost made it hard, because people expect, when we talk to 'em for the first time, the old way. Like, how do you look like a piece of hardware? Are you compatible with my existing hardware that runs ML? Like, no, we're not. Because you don't need that stack anymore. All you need is a library call to make your prediction, and that's it. That's it. >> John: Well, I mean, we were joking on Twitter the other day with someone saying, is AI a pet or cattle? Right? Because they love their AI bots right now. So I'd say pet there. But you look at it, there's going to be a lot of AI.
So on a more serious note, you mentioned microservices. Will DeepSparse have an API for developers? And what does that look like? What do I do? >> Brian: Yeah. >> John: Tell me, as a developer, what's the roadmap look like? >> Brian: Yeah, it really can go in both modes. It can go in a standalone server mode, where it handles, you know, a REST API, and it can scale out with K8s as the workload comes up and scale back. And try to make hardware do that: hardware may scale back, but it's just sitting there dormant, you know. So with this, it scales the same way your application needs to. And then for a developer, they basically just pip install deepsparse, you know, one command to do an install, and then they do two calls, really. The first call is a library call that the app makes to create the model, and the model's really already trained, but it's called a model-create call. And the second command is a call to do a prediction. And it's as simple as that. So AI is as simple as using any other library that the developers are already using, which sounds hard to fathom, because it is just so simplified.
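As a rough sketch, that two-call flow might look like the following with the deepsparse Python package. The exact function names and the ONNX model path are assumptions based on the project's public examples at the time, so treat this as illustrative rather than authoritative:

```python
# pip install deepsparse
# Sketch of the "two calls" described above: one to create (compile) the
# model, one to run a prediction. The model file and input shape are
# placeholders; exact API names may vary by release.
import numpy as np
from deepsparse import compile_model

engine = compile_model("model.onnx", batch_size=1)  # call 1: model create
batch = [np.random.rand(1, 3, 224, 224).astype(np.float32)]  # dummy input
outputs = engine.run(batch)  # call 2: predict
print(outputs[0].shape)
```

The standalone scale-out mode he mentions is the same engine fronted by a REST server process instead of an in-process library call.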
>> John: Software-delivered AI. Okay, that's a cool thing. I believe in it personally. I think that's the way to go. I think there's going to be plenty of hardware options: if you look at the advances of the cloud players, there's more silicon coming out, more GPUs, more instance types, everything's out there right now. So the question is, how does that evolve in your mind? Because that seems to be key. You have open source projects emerging. What path does this take? Is there a parallel mental model that you see, Brian, that is similar? You mentioned open source earlier. Is it more like a VMware virtualization thing, or is it more of a cloud thing? Is it going to evolve in a trajectory that looks similar to what we might've seen in the past? >> Brian: Yeah. You know, when I got involved with the company, when I thought about it and was reasoning about it, like we all do when we want to join something full-time, I thought about where the industry will eventually get to, right? To fully realize the value of deep learning, and what's plausible as it evolves. And to me, I know it's the old adage of, you know, software eats hardware, but it truly was: we can solve these problems in software. There's nothing special happening at the hardware layer in processing AI. The reality is that it's just early in the industry. So the view that we had was, the best place the industry will eventually be is the liberation of being able to run AI anywhere. You're really not democratizing otherwise. You democratize the model, but if you can't run the model anywhere you want, because these models are getting bigger and bigger with these large language models, then you're kind of not democratizing, if you've got to go and, like, buy a cluster to run this thing on. So the democratization comes when all of a sudden that model can be consumed anywhere, on demand, without planning, without provisioning, wherever infrastructure is. And so I think, with or without Neural Magic, that's where the industry will go and will get to. I think we're the leaders in getting it there, because we're more advanced on these techniques. >> John: Yeah. And your background too. You've seen OpenStack, pre-cloud, you saw open source grow, and it's still exponentially growing. And so you have the same similar dynamic with machine learning models growing. And they're also segmenting into almost an ML stack, or foundational models, as we talked about. So you're starting to see the formation of tooling, inference. So a lot of components coming. It's almost a stack; it literally is like an operating system problem space, you know? How do you run things, how do you link things, how do you bring things together? Is that what's going on here? Is this like a data-modeling operating environment, kind of a Red Hat type thing going on? >> Brian: Yeah, I think there is, you know, I thought about that too. And I think there is the role of distribution, because the industrialization of this isn't happening fast enough. Every customer, every user does it in their own kind of way; everyone's a little bit of a snowflake. And I think that's okay. There's definitely plenty of companies that want to come in and say, well, this is the way it's going to be, and we'll industrialize it as long as you do it our way. The reality is, technology doesn't get industrialized by one company just saying, do it our way. And so that's why we've taken the approach through open source, by saying: you haven't really industrialized it if you said, we made it simple, but you always have to run AI here. Right? You only really industrialize it if you break it down into components that are simple to use and that work integrated in the stack the way you want them to. And so to me, the first principle was getting things into microservices and Docker containers that could be run on VMware, on OpenShift, on the cloud, at the edge. And so that's the real part that we're working on. The other part, and I do agree, is I think it's going to quickly move into less about the model, less about the training of the model and the transfer learning, you know, the data set of the model. We're taking away the complexity of optimization and liberating deployment to be anywhere. And I think the last mile, John, is going to be around the MLOps around that. Because now that we've turned it into a software problem, it's easy to think of software as kind of a point release, but that's not the reality, right? It's a life cycle. And so ML very much brings in the question of what the life cycle of that deployment is. And, you know, you get into more interesting conversations, to be honest, once you've deployed in a Docker container: around model drift and accuracy, as the dataset changes and the users change, how do you, from an ML perspective, send a signal back for retraining? And that's where I think more of the innovation is going to start to move. >> John: Yeah. And the software problem, the software opportunity as well, is developer-focused. And if you look at the cloud native landscape now, similar stacks are developing: a lot of components, a lot of things to stitch together, a lot of things automating under the hood, a lot of developer productivity conversations. I think this is going to go down that same road.
I want to get your thoughts, because developers will set the pace. And this is something that's clear in this next wave: developer productivity. They're the de facto standards bodies. They will decide what microservices check, API check. Now, the skill gap is going to be a problem, because it's relatively new. So model sprawl, model sizes, proprietary versus open. There has to be a way to crunch that down into, like, a DevOps thing: just make it work, get the developer out of the muck. So what's your view? Are we early days like that? Or, what's the young kid in college studying CS, or whatever degree, who comes into this with both feet, what are they doing? >> Brian: I'll probably give the non-popular answer to that, a little bit, which is: it's happening so fast that it's going to get kind of boring fast. Meaning, yeah, you could go to school, go to MIT, and you could go deep, like becoming a model architect, inventing the next model, right? The layers, and combining 'em, et cetera, et cetera, and the operators, and building a model that's bigger than the last one and trains faster, right? And there will be those people; they're building the engines. The same way, you know, I grew up as an infrastructure software developer, and there's not a lot of companies that hire those anymore, because they're all sitting inside of three big clouds. Right? So you'd better be a good app developer. But I think what you're going to see is: before, you had to be everything. If you were going to use infrastructure, you had to know how to build infrastructure. And the same thing is quickly exiting ML. To be able to use ML in your company, you used to have to be great at every aspect of ML, including every intricacy inside of the model and every operation it's doing. That's quickly changing. You're going to start with a starting point. You know, in the future you're not going to be cracking open these GPT models; you're going to just be pulling them off the shelf, fine tuning 'em, and go. You don't have to invent it. You don't have to understand it. And I think that's going to be a pivot point in the industry around, you know, what does the future of a data scientist, ML engineer, researcher look like? >> John: I think that's where the outcome's going to be determined. I mean, you mentioned, you know, doing it yourself; what an SRE is for Google, where the server scale is huge. So yeah, it might have to get boring at the beginning, you get obsolete quickly, but that means it's progressing. The scale becomes huge. And that's where I think it's going to be interesting, when we see that scale. >> Brian: Yep. Yeah, I think that's right. And what I've always said, and much of it, again, is the directive to my ML team, is that I want every developer, a non-ML engineer, to be adept at taking advantage of ML, right? It's got to be that simple. And I think it's getting there. I really do. >> John: Well, Brian, great to have you on theCUBE here on this cube conversation, as part of the startup showcase that's coming up. You, or your company, will be featured on the upcoming AWS Startup Showcase, on making machine learning easier and more affordable as more machine learning models come in. You guys have got DeepSparse and some great technology.
We're going to dig into that next time. I'll give you the final word right now. What do you see for the company? What are you guys looking for? Give a plug for the company right now. >> Brian: Oh, give a plug that I haven't already worked in as the plug. >> John: You're hiring engineers, I assume, from MIT and other places. >> Brian: Yep. I think the biggest thing is, we're on the developer side. We're here to make this easy. The majority of inference today is on CPUs already, believe it or not, as much as we like to talk about hardware and specialized hardware. The majority is already on CPUs. We're basically bringing 95% cost savings to CPUs through this acceleration. But we're trying to do it in a way that makes it community first. So I think the shout-out would be: come find the Neural Magic community and engage with us, and you'll find, you know, a thousand other like-minded people in Slack that are willing to help you, as well as our engineers. And let's go take on some successful AI deployments. >> John: Exciting times. This is, I think, one of the pivotal moments: NextGen data, machine learning, and now starting to see AI not be that chat bot, just, you know, customer support or some basic natural language processing thing. You're starting to see real innovation. Brian Stevens, CEO of Neural Magic, bringing the magic here. Thanks for the time. Great conversation. >> Brian: Thanks, John. >> John: Thanks for joining me. >> Brian: Cheers. Thank you. >> John: Okay. I'm John Furrier, host of theCUBE here in Palo Alto, California, for this cube conversation with Brian Stevens. Thanks for watching.

Published Date : Feb 13 2023



Gou Rao, Portworx & Julio Tapia, Red Hat | KubeCon + CloudNativeCon 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back to theCUBE here in San Diego for KubeCon CloudNativeCon. With John Troyer, I'm Stu Miniman, and happy to welcome to the program two guests, first time guests, I believe. Julio Tapia, who's the director of Cloud BU partners and community with Red Hat, and Gou Rao, who's the founder and CEO at Portworx. Gentlemen, thanks so much for joining us. >> Thank you, happy to be here. >> Thanks for having us. >> Alright, let's start with community, ecosystem, it's a big theme we have here at the show. Tell us your main focus, what the team's doing here. >> Sure, so I'm part of a product team. We're responsible for OpenShift, OpenStack, and Red Hat Virtualization. And my responsibility is to build a partner ecosystem and to do our community development. On the partner front, we work with a lot of different partners. We work with ISVs, we work with OEMs, SIs, cloud providers, telco partners. And my role is to help evangelize, to help on integrations, a lot of joint solutions, and then do a little bit of go-to-market as well. And on the community side, it's to evangelize with upstream projects, with customers, with developers, and so forth. >> Alright, so, Gou, actually, it's not luck, but I had a chance to catch up with the Red Hat storage team. Back when I was on the vendor side I partnered with them. Red Hat doesn't sell gear, they're a software company, everything open source, and when it comes to data and storage, obviously they're working with partners. So put Portworx into the mix and tell us about the relationship and what you both do together. >> Sure, yeah, we're a Red Hat OpenShift partner. We've been working with them for quite some time now, and partner with IBM as well. But yeah, Portworx, we focus on enabling cloud native storage, right? So we complement the OpenShift ecosystem. Essentially we enable people to run stateful services in OpenShift with a lot of agility, and we bring DR and backup functionality to OpenShift. I'm sure you're familiar with this, but when people deploy OpenShift, they're running fleets of OpenShift clusters. So multi-cluster management and data accessibility across clusters is a big topic. >> Yeah, if you could: I hear the term cloud native storage, what does that really mean? You know, back a few years ago, containers were stateless, I didn't have my persistent storage, it was super challenging as to how we deal with this. And now we have some options, but what is the goal of what we're doing here? >> There really is no notion of a stateless application, right? Especially when it comes to enterprise applications. What cloud native storage means is, to us at least, it signifies a couple of things. First of all, the consumer of storage is not a machine anymore, right? Typical storage systems are designed to provide storage to either a virtual machine or a hardware server. The consumer of storage is now a container that's running inside of a machine. And in fact, an application is never just one container, it's many containers running on different systems, so it's a distributed problem. So cloud native storage means the following things.
Providing container-granular data services, being application-aware, meaning that you're providing services to many containers that are running on different systems, and facilitating the data life cycle management of those applications in a Kubernetes way, right? The user experience is now driven through Kubernetes, as opposed to a storage admin driving that functionality. So it's these three things that make a platform cloud native. >> I want to dig into the Operator concept for a little bit here, as it applies to storage. So, first, Operators. I first heard of this a couple years back with the CoreOS folks, who are now part of Red Hat, and it's a piece of technology that came into the Kubernetes ecosystem, seems to be very well adopted, they talked about it today in the keynote. And I'd love to hear a little bit more about the ecosystem. But first I want to figure out what it is, and in my head I didn't quite understand it. I'm like, well, okay, automation and life cycle, I get it. There's a bunch of things: Puppet and Chef and Ansible and all sorts of things there. There's also things that know about cloud, like Terraform, or CloudFormation, or Pulumi, all these sorts of things here. But this seems like a framework around life cycle; it might be a little higher at the semantic level, or knows a little bit more about what's going on inside Kubernetes. >> I'll just touch on this. So Operators, it's a way to codify business logic into the application: how to install and how to manage the life cycle of the application on top of the Kubernetes cluster. So it's a way of automating. >> Right, but- >> And just to add to that, you mentioned Ansible, Salt, right? So, as engineers, we're always trying to make our lives easier. And so infrastructure automation certainly is a concept here. What Operators do is elevate those same needs to more of an application-construct level, right? So it's a piece of intelligent software that is watching the entire run-time of an application, as opposed to provisioning infrastructure and stepping out of the way. Think of it as a living being: it is constantly running and reacting to what the application is doing and what its needs are. So, on one hand you have automation that sets things up, and then the job is done. Here the job is never done; you're right there as a sidecar along with the application. >> Nice, but for any sort of life cycle, or any sort of project like this, you have to have code sharing and contributing, right? And so, Julio, can you tell us a little about that? >> What we do is, we're obviously all in on Operators. And so we've invested a great deal in terms of documentation and training and workshops. We have certification programs; we're really helping create the ecosystem and facilitate the whole process. You may be familiar, we announced the Operator Framework a year ago; it includes Operator SDKs. So we have an Operator SDK for Helm, for Ansible, for Go. We also have announced the Operator Lifecycle Manager, which does the install, the maintenance, and the whole life cycle management process. And then earlier this year we also introduced OperatorHub.io, which is a community of our Operators; we have about 150 Operators as part of that.
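For a concrete feel of the "constantly watching, never done" pattern Gou describes, here's a minimal sketch of an Operator-style controller in Python using the kopf library. This illustrates the pattern only; the Operator SDK flavors Julio mentions (Helm, Ansible, Go) are the Operator Framework's supported routes, and the custom resource group and kind below are made up:

```python
# pip install kopf; run with: kopf run handlers.py
# Minimal sketch of the Operator pattern: watch a custom resource and keep
# reconciling it over its whole life cycle, not just at install time.
# The API group/version/plural here are hypothetical.
import kopf

@kopf.on.create("example.com", "v1", "databases")
def create_fn(spec, name, namespace, logger, **kwargs):
    replicas = spec.get("replicas", 1)
    logger.info(f"Provisioning database {name} in {namespace} with {replicas} replicas")
    # ...create the StatefulSets, Services, and Secrets the app needs...

@kopf.on.update("example.com", "v1", "databases")
def update_fn(spec, status, logger, **kwargs):
    logger.info("Reconciling: drive the running state toward the new spec")
    # ...resize, upgrade, rotate credentials, and keep watching...
```

The point is that the handlers keep firing across the resource's life cycle, so the operational knowledge (install, resize, upgrade, recover) lives as code running next to the cluster rather than in a runbook.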
>> How does the Operator Framework relate to OpenShift versus upstream Kubernetes? Is it an OpenShift and Red Hat specific thing, or? >> Yes, so, OperatorHub.io is a listing of Operators that includes community Operators, and then we also have certified Operators. The community Operators run on any Kubernetes instance. With the certified Operators, we make sure they run on OpenShift specifically. So that's kind of the distinction between those two. >> I remember a Red Hat Summit where you talked about some bits. So, give us a little walk around the show, some of the highlights from Operators, the ecosystem; obviously, we've got Portworx here, but there's a broad ecosystem. >> Yeah, so we have a huge ecosystem. The ISVs play a big part of this. So we've got Operators from database partners, security partners, app monitoring partners, storage partners. Yesterday we had an OpenShift Commons event; we showcased five of our big Operator partnerships, with Couchbase, with MongoDB, with Portworx obviously, with StorageOS, and with Dynatrace. But we have a lot of partners in a lot of different areas that are creating these Operators, certifying them, and they're starting to get a lot of use with customers, so it's pretty exciting stuff. >> Gou, I'd love your viewpoint on this, because of course Portworx is a good Red Hat partner, but you need to work with all the Kubernetes options out there. So, what's the importance of Operators to your business? >> Yeah, you know, OpenShift, obviously, is one of the leading platforms for Kubernetes out there, and the reason is the expectations that it sets for an enterprise customer. It's that Red Hat experience behind it. And so the notion of having an Operator that's certified by Red Hat, with Red Hat going through the vetting process and making sure that all of the components it's recommending from its ecosystem that you're putting onto OpenShift hold up, that whole process gives a whole new level of enterprise experience. So for us, that's been really good, right? Working with Red Hat, going through the process with them, and making sure that they are actually double-clicking on everything we submit; we iterate with them. So the quality of the product that's put out there within OpenShift is very high. So we've deployed these Operators now, the Operator that Portworx just announced, right? We have it running in customers' hands, so these are real end users; you'll be talking to Ford later on today, and Harvard, for example. And the level of automation that it has provided to them in their platform is quite high. >> I was kind of curious to shift maybe to the conference here, where you both, organizationally and personally, have a long history in the Kubernetes world and cloud native world. We're here at KubeCon CloudNativeCon, North America, 2019. It's pretty big. And I see a lot of folks here, a lot of vendors, a lot of engineers: a huge conference, 12,000 people. I mean, any perspective? >> So I've been at Red Hat a little over six years, and I was at the very first KubeCon many years ago in San Francisco; I think we had about 200 people there. So this show has really grown over the years. And we're obviously big supporters: we've participated in KubeCon in Shanghai and Barcelona, and we're obviously here. We're just super excited about seeing the ecosystem and the whole community grow and expand, so, very exciting. >> Gou? >> Yeah, I mean, like Julio mentioned, right? All the way from DockerCon to where we are today, and I think last year was 8,000 people in Seattle, and this year I've heard numbers like 12,000. So it's also equally interesting to see the maturity of the products around Kubernetes, and that level of consistency and lack of fracture, right?
From mainstream Kubernetes to how it's being adopted in OpenShift, there's consistency across the different Kubernetes platforms. Also, it's very interesting to see how on-prem and public cloud Kubernetes are coexisting. Four years ago we were kind of worried about how that would turn out, but I think it's enabling those hybrid-cloud workloads, and at this KubeCon we see a lot of people talking about that and having interest around it. >> That's a really great point there. Julio, want to give you the final word: for people that aren't yet engaged in the ecosystem of Operators, how can they learn more and get involved? >> Yeah, so we're excited to work with everybody. Our ecosystem includes customers, partners, contributors, so as long as you're all in on Operators, we're ready to help. We've got tools, we've got documentation, we have workshops, we have training, we have certification programs. And we also can help you with go-to-market. We're very fortunate to have a huge customer footprint, and so for those partners that have solutions, databases, storage solutions, there's a lot of joint opportunities out there that we can participate in. So, really excited to do that. >> Julio, Gou, thank you so much. Gou, you have a final word? >> I was just going to say, to follow up on the Operator comment and the certification that Julio mentioned earlier: with the Operator that we have, we were able to achieve level five certification. Level five signifies the amount of automation that's built into it. So the concept of having Operators help people deploy these complex applications, that's a very important concept in Kubernetes itself. So, glad to be a Red Hat partner. >> That's actually a really good point. We have an Operator maturity model: levels one, two, three, four, five. Levels one and two are more your installations and upgrades. But the really highly capable ones, the fours and fives, are really to be commended. And Portworx is one of those partners. So we're excited to be here with them. >> That is a powerful statement. We talk about the complexity and how many pieces are in there. Everybody's looking to really help cross that chasm, get to the vast majority of people. We need to allow environments to have more automation, more simplicity, a story I heard loud and clear at AnsibleFest earlier this year and through the partner ecosystem. It's good to see progress, so congratulations, and thank you both for joining us. >> Thank you, thank you. >> Thank you. >> All right, for John Troyer, I'm Stu Miniman, back with lots more here from KubeCon CloudNativeCon 2019. Thanks for watching theCUBE. (electronic music)

Published Date : Nov 19 2019



Dave Malik, Cisco | Cisco Live US 2019


 

>> Narrator: Live from San Diego, California, it's theCUBE, covering Cisco Live US 2019. Brought to you by Cisco and its ecosystem partners. >> Welcome back to San Diego, everybody. You're watching Cisco Live 2019. This is theCUBE, the leader in live tech coverage. This is day three of our wall-to-wall coverage. We go out to the events, we extract the signal from the noise. My name is Dave Vellante. Stu Miniman is here. Our third host, Lisa Martin, is also in the house. Dave Malik is here. He's a Fellow and Chief Architect at Cisco. David, good to see you. >> Oh, glad to be here. >> Thanks for coming on. First of all, congratulations on being a Fellow. What does that mean, a Cisco Fellow? What do you have to go through to achieve that status? >> It's a pretty arduous task. It's one of the highest technical designations in Cisco, but we work across multiple architectures and technologies, as well as with our partners, to drive corporate-wide strategy. >> So you've been talking to customers here, you've been presenting. I think you said you gave three presentations here? Multi-cloud, blockchain, and some stuff on machine intelligence, ML. >> Yes. >> Let's hit those. Kind of summarize the overall themes, and then we'll maybe get into each, and then we've got a zillion questions for you. >> Sure, excellent. So on multi-cloud, I think one of the things we're clearly hearing from customers is: how do we get a universal policy model and connectivity model, and how do you orchestrate workloads seamlessly? And those are some of the challenges that we're trying to address at this conference. On blockchain, there's a lot of buzz out there. We're not talking about Bitcoin or cryptocurrency; it's really about leveraging blockchain from a networking perspective, for identity and encryption, and providing a uniform ledger that is pervasive across infrastructure. And then ML, I think it's at the heart of every conversation: how do we take pervasive analytics and bring them into the network so we can drive actionable insights into automation? >> So let's start with the third one. When you talk about ML, was your talk on machine learning? Did it spill into artificial intelligence? What's the difference to you from a technology perspective? >> Machine learning is really taking a lot of the data and looking at repetitive patterns in a very common fashion, and doing a massive correlation across multiple domains. So you may have some things happening in the branch, the data center, or the WAN and cloud, but the whole idea is, how do you put them together to drive insight? And through artificial intelligence and algorithms, we can try to take those insights and automate them and push them back into the infrastructure or to the application layer. So now you're driving intelligence not just for consumers or devices, but also for humans, to drive insight. >> All right. So Dave, I wonder if you'd help connect with us what you were talking about there, and we'll get to the multicloud piece, because I was at an Amazon show last week. Amazon was talking about how, when they look at all the technologies that they use to get packages out, their fulfillment centers, everything that they do as a business, ML and AI, they said, is underneath that, and AWS is what's driving that technology from that standpoint. Now, multicloud: AWS is a partner of yours. >> Yes. >> Can you give us how you work in multicloud, and is the ML and AI Cisco-specific?
Are you working with some of the standards out there to connect all those pieces? Help us look at some of the big picture of those items. >> So we believe we're agnostic. Whether you connect to Amazon, Azure, Google, et cetera, we believe in a uniform policy model and connectivity model, which is very, very arduous today. You shouldn't have to have a specific policy model, connectivity model, or security model, for that matter, for each provider. So we're normalizing that plane completely, which is awesome. Then, at a workload level, regardless of whether your workload is spun up or spun down, it should have the same security posture and visibility. We have certain customers that are running single applications across multiple clouds: your data is going to be obviously on-prem, you may be running analytics in TensorFlow, compute in EC2, and connecting to O365, and that's one app. And where we're seeing the models go is: are you leveraging technology such as this? Do you offer service mesh? How do we tie a lot of these microservices together and then be able to layer workload orchestration on top? So regardless of where your workload sits, and one key point that we keep hearing from our customers is around governance: how do we provide cloud-based governance regardless of where the workload is? And that's something we're doing in a very large fashion with customers that have a multicloud strategy. >> So Stu, I think there's still some confusion around multicloud generally, and maybe Cisco's strategy. I wonder if we could maybe clear it up a little bit. >> Dave, it's that big elephant in the room, and I always feel like everybody describes multicloud from a different angle. >> So let's dig into this a little bit, and let's hear from Cisco's perspective. So you've got, to my count, five companies really going after this space: Cisco, VMware, IBM/Red Hat, Microsoft, and Google with Anthos. Probably all those guys are partners of yours. >> Yes. >> Okay, but you guys want to provide the bromide of the single pane of glass, okay. I'm hearing open and agnostic. That's a differentiator. Security: you're in a good position to make an argument that you can make things secure. You've got the network and so forth. High-performance network, and cost-effective. Everybody's going to make that argument relative to having multiple stovepipes, but that's part of your story as well. So the question: why Cisco? What's the key differentiator, and what gives you confidence that you can really help win in this marketplace? >> So our core competencies are networking and security. Whether it's cloud-based security or on-prem security, it's uniform. From a security perspective, we have a universal architecture. Whether it's the endpoint, the edge, or the cloud, they're all sharing information and intelligence. That's really important. Instead of having bespoke products, these products and solutions need to communicate with each other, so if something's sick in one area, we're informing the others. So threat intelligence and network intelligence are huge. Then, more importantly, working with Google, Microsoft, and Amazon, we have on-prem solutions as well, so as customers go on their multicloud journey, and eventually the workload transitions, you have the same management experience and security experience.
So Anthos was a recent announcement, AWS as well, where you can run on-prem Kubernetes, and you can take the same workload and move it to AWS or GCP, but the management model and the control plane model are extremely similar, and you don't have to learn anything new from a training perspective. >> Okay, but I used the term agnostic, oh, no. You did agnostic, I said open. But you don't care if it's Anthos or VMware, or OpenShift, you don't care. >> Don't care. >> And, architecturally, how is it that you can successfully not care? >> Because the underlying, fundamental principle is that you can load any workload you want on this: bare metal, virtualized, or Kubernetes-based containers, they all need the same things. For example, everyone needs bread and water. It's not different. So why should a workload be discriminated against if it's on OpenShift or Pivotal Cloud Foundry, for example? It's the same model: all applications still need security, visibility, networking, and management, and those should not be different across clouds, which is traditionally what you're seeing from the other vendors in the market. They're very unique to their stovepipe, and we want to break down those stovepipes across the board, regardless of what app and what workload you have. >> Dave, talk a little bit about the automation that Cisco's delivering to help enable this, because there are skill-set challenges, and the scale of these environments is more than humans alone can take care of. So how does that automation work? I know you're heavily involved in the CX piece of Cisco. How does that all tie together? >> So we're working on a lot of automation projects with our large enterprises and SPs, I mean, you see Rakuten being fairly prominent at the show, but more importantly, we understand not everyone's building a greenfield environment, and not everything is purely public cloud. We have to deal with brownfield, we have to deal with third-party ecosystem partners, so you can't have a vertically tight single-vendor solution. So again, to your point, it's completely open. Then we have frameworks, meaning you have orchestrators that can talk down to the device through programmatic interfaces. That's why you see DevNet surrounding us, but then more importantly, we're looking at services that have workflows that can span on-prem, off-prem, and third-party; it doesn't really matter. And we stitch a lot of those workflows southbound, but more importantly, northbound to security and ITSM systems. Those frameworks are coming to life, whether you're a telecom cloud provider or a large enterprise. And they slowly fall into those workflows as they become more multi-domain. You saw David Goeckeler the other day, talking about SD-WAN, ACI, and campus wired and wireless. These domains are coming together, and that's where we're driving a lot of the automation work. >> So automation is a linchpin to what business outcome? Ultimately, what are customers trying to achieve through automation? >> There are a couple of things. Mean time to value: if you're a service provider, to your internal customers or external ones, time to value and speed and agility are key. The other ones are mean time to repair and mean time to detect. If I can shorten the time to detect and shorten the time to react, then I can take proactive and preemptive action in situations that may happen. So time to value is really, really important.
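The portability point made above about Anthos and AWS is worth grounding in code. Below is a minimal sketch using the official Kubernetes Python client to push one Deployment manifest to two clusters; the kubeconfig context names "onprem" and "aws" are placeholders for whatever clusters you actually have.

```python
# A minimal sketch of the portability point: the same Kubernetes Deployment
# manifest pushed to an on-prem cluster and a public-cloud cluster.
# Requires the official `kubernetes` Python client and a kubeconfig that
# defines both contexts.

from kubernetes import client, config

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-app"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {"containers": [{"name": "demo-app",
                                     "image": "nginx:1.25"}]},
        },
    },
}

# One manifest, two clusters: the management model stays the same.
for context in ["onprem", "aws"]:
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(namespace="default", body=deployment)
    print(f"Deployed demo-app to context '{context}'")
```

Nothing in the manifest names a provider, which is exactly why the same spec, and the same training, carries across clusters.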
Cost is a play, obviously, 'cause when you have more and more machines doing your work, your OPEX will come down, but it's really not purely a cost play. Agility and speed are really driving automation at that scale as we work with folks like Rakuten and others. >> What do you see, Dave, as the big challenges of achieving automation? First of all, 10, 15 years ago people were afraid of automation. Some still are. But I think they understand that, as part of a digital transformation, they've got to automate. So what are the challenges they're having, and how are you helping them solve them? >> So typically, what people have thought about automation has been more network-centric, but as we just discussed with multicloud, automation is extending all the way to the public cloud, at the workload level or at the functional level if you're running in Lambda, for example. And then, more importantly, customers have traditionally been leveraging Python scripts and things of that nature, but while the scripts are still there, they cannot scale. You need a model-driven framework, and you need model-driven telemetry to get insight. So I think the learning curve of customers moving to a model-driven mindset is extremely important, and it's not just about the network alone, it's also about the application. That's why we're driving a lot of our frameworks and education and training. And talent's a big gap that we're helping close with our training programs. >> Okay, so you're talking about insights. There's a lot of data. The saying goes, "data is plentiful, insights aren't." So how do you get from data to insights? Is that where the machine intelligence comes in? Maybe you can explain that. >> There's a combination. Machines can process much faster than humans can, but more importantly, somebody has to bring the 30 or 40 years of experience that Cisco has from our tech, our architects and CX, and our customers and the community we're developing through DevNet. So we take trusted expertise from humans, from all that knowledge base, and combine it with machine learning, so we get the best of both worlds. 'Cause you need that experience. And that is driving insight, so we can filter the signal from the noise, and then, more importantly, take that signal and, in an automated fashion, push it down to an intent-based architecture across the board. >> Dave, can you take us inside your touchpoints into customers a little bit? In the old days, it was a CCIE: his job, his title, it was equipment that he would touch. Today, with this multicloud and automation, it's very dispersed as to who owns it; much of what they're managing is not something under their direct purview, so the touchpoints you have into the company, and the relationships you have, have changed a lot in the last three to five years. >> Absolutely, 'cause the buying center is also changing, because folks are getting more and more centered around the line of business and the outcomes they want to drive for their clients. So the cloud architecture teams that are being built are more horizontal now.
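To ground the model-driven telemetry point from a moment ago: the contrast with ad hoc scripting is easiest to see in a small example. This sketch uses the open-source ncclient NETCONF library to pull YANG-modeled interface state; the device address, credentials, and filter are placeholders, and a production setup would typically use streaming telemetry subscriptions rather than a one-off poll.

```python
# A minimal sketch of model-driven data collection with ncclient, instead of
# screen-scraping CLI output with ad hoc scripts. Host and credentials below
# are placeholders for a lab device that speaks NETCONF on port 830.

from ncclient import manager

IF_FILTER = """
<filter>
  <interfaces-state xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"/>
</filter>
"""

with manager.connect(host="192.0.2.10", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as session:
    # The reply is structured XML conforming to a YANG model, so it can be
    # parsed and correlated at scale, unlike `show` command text, which
    # breaks whenever the output formatting changes.
    reply = session.get(IF_FILTER)
    print(reply.data_xml)
```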
You'll have a security person, an application person, networking, and operations, for example. And what we're actually pioneering with a lot of the enterprises and SPs is building site reliability engineering teams, or SREs, a concept Google has obviously pioneered, and we're bringing those concepts and teams through a CX framework to telcos and some of the high-end enterprises initially, and you'll see more around that over the coming months. As for SRE jobs, if you go on LinkedIn, you'll probably see hundreds of them out there now. >> One of the other things we've been watching is that Cisco has a very broad portfolio. This whole CX piece has to make sure that, from a customer's standpoint, no matter where in the portfolio, whether core, edge, or IoT, across all these various devices, I should have a simplified experience today, which isn't necessarily, my words, Cisco's legacy. How do you make sure of that? Is software a unifying factor inside the company? Give us a little bit about those dynamics inside. >> Absolutely, so we take a life-cycle approach. It's not one and done. From the time there's a concept, where you want to build out a blueprint for the transformation journey, we have to make sure we walk the client through preparation, planning, design, and architecture optimization, but then also make sure they actually adopt and get the true value. So we're working with our customers to make sure they go around the entire life cycle, from end to end, from cradle to grave, and are able to constantly optimize. You're hearing the word continuous pretty much everywhere. It's kind of the fundamental of CI/CD, so we believe in a continuous life-cycle approach, walking customers end to end to make sure that, from the point of purchase to the point of decommissioning, they're getting the most value out of the solutions they're getting from Cisco. >> All right Dave, we'll give you the last word on Cisco Live 2019. Thoughts? Takeaways? >> I think there's just amazing energy here, and there's a lot more to come. Come down to the CX booth and we'll show you some more gadgets and solutions, and where we're taking our customers forward. >> Great. David, thank you very much for coming to The Cube. >> Pleasure, thank you. >> All right, 28,000 people and The Cube bringing it to you live. This is Dave Vellante with Stu Miniman. Lisa Martin is also in the house. We'll be right back from Cisco Live San Diego 2019, Day 3. You're watching The Cube.

Published Date: Jun 12, 2019



Ashesh Badani, Red Hat | Red Hat Summit 2017


 

>> Man: Live, from Boston, Massachusetts, it's The Cube, covering Red Hat Summit 2017, brought to you by Red Hat. >> Welcome back to The Cube's coverage of the Red Hat Summit, here in Boston, Massachusetts. I'm your host, Rebecca Knight, along with my co-host Stu Miniman. We're joined by Ashesh Badani. He is the Vice President and General Manager of OpenShift here at Red Hat. Thanks so much, Ashesh. >> Thanks for having me on yet again. >> Yes, you are a Cube veteran, so welcome back. We're always happy to talk to you. You're also an OpenShift veteran. You've been there five years, and before the cameras were rolling you were talking about how we really are at a tipping point here with OpenShift, and we're seeing widespread adoption and embrace of containers. Can you share the context with us? >> Sure, so I think we've spent a fair amount of time in this market talking about how important containers are: the value of containers, DevOps, microservices. I think at this Red Hat Summit, we've spent a fair amount of time trying to ensure that people understand, one, that containers are real, in terms of, you know, the adoption levels that we're seeing. They're being run in production and at scale, and across a variety of industries, right. So, just at this summit we've had over 30 customers from across the world, across industries like financial services, government, transportation, tech, telco, a variety of different industries, talking about how they've been deploying and using containers. At our keynotes we had Macquarie Bank from Australia and Barclays Bank from the U.K. We had UnitedHealth slash OPTUM. All talking about, you know, mission-critical applications, how their developers are running applications, both new applications, right, microservice-style applications, but also existing legacy applications, on the OpenShift platform. >> Ashesh, I've been watching this for a few years, we've talked to you many times, we talked about containers. Maybe I'm oversimplifying it, but let me know. It feels like OpenShift is your delivery mechanism to take some things that might be hard if I tried to do them myself and make them a lot simpler. Kind of like what Red Hat did for Linux. I have containers, I have Kubernetes, I have OpenStack, and all three of those I didn't hear a ton about at the show; I heard a lot about OpenShift and the OpenShift family, because underneath OpenShift are those pieces. Am I gettin' it right, or is there more nuance you need-- >> Great observation, great observation, yeah, and we're seeing that from our customers, too. So, when they're making a strategic choice, they're asking, you know, how can I find a container platform to run at scale? When they make their choice, they're thinking about, well, what are the existing development tools I've got? Can it integrate with the ones I have in place? What's the underlying infrastructure it can run on? OpenStack of course is a great one, right. We have many customers, Santander and BBVA Bank are just two examples, but then also, can I run OpenShift in a hybrid cloud, or I guess what we're calling a multi-cloud world now: Amazon, Google, Azure, and so on. But actually, interestingly enough, we made some announcements with Amazon as well at the show with regard to making sure some AWS services are able to be integrated into the OpenShift platform. So, we find customers today finding a lot of value in the flexibility of the deployment platforms they have in place and the integration with various developer tools.
I think my colleague Harry Mower was on earlier talking about OpenShift.io, again, you know, super interesting, super exciting from our perspective with regard to giving developers more choice. And in addition to that, you know, the other parts of the portfolio, right, going to your point earlier, we're trying to attach those increasingly as options for customers around OpenShift. Storage is a great example. So we announced some work we've been doing with regard to container storage, with our Gluster file system for OpenShift. >> So you're talking about simplification, and that does seem to be a real theme here. Once you've solved that problem, what's next? What are some of the other customer issues that you need to resolve and help them overcome, to make their lives easier? >> Yeah, so, the rate of change in technology, as you well know, you've been following this now for a while, is just dramatic. I think it's probably faster than we've ever seen in a long, long time. I was having a conversation with a large franchise customer with regard to, you know, just as we feel like we're getting people to adopt Hadoop, everyone seems to have moved on to Spark. And now we're on Spark and people are talking about, oh, maybe Flink is next. Now that we get to Flink, they're saying AI and ML are next. It's just like, well, where does this stop, right? So I don't think it stops. The question is, you know, at what point in time do you sort of jump in. Embrace the change, right; that's sort of what DevOps is all about, right: continuous change, you know, embrace it, be able to evolve with it, fail fast, pick yourself up, and then have the organization be in this sort of continuous-learning, kaizen environment. >> Yeah, Ashesh, from day one the keynote talked about the platforms, and you know Red Hat Enterprise Linux was kind of the first big platform that can live in a lot of environments. It seems OpenShift is a second platform, and the scope of it seems to be growing. We talked to Harry about OpenShift.io. He alluded to the fact that we might see expansion of the family there. You said that innovation and, you know, change keep coming. What are the boundaries of what OpenShift's going to cover? Where do you see it today, and where does the vision go moving forward? >> Yeah, so (laughs) great question, a double-edged sword, right. Because on the one hand, of course, we want to make sure OpenShift is a foundation for doing a lot of stuff. But then there's also the Linux philosophy: do one thing, do it well, right. And so there's always this temptation with regard to keeping on wanting to take new things on, right; I mean, for a long time people have said, hey, why aren't we in the database business? You know, why aren't you doing more? Well, the question is, you know, how many things can we do well? Because anything we commit to, as you well know, Red Hat will invest a significant amount of engineering effort in upstream, in the community, to help drive it forward, right. We've done that on the Linux container front. We're doing that in Kubernetes. Obviously we do that with RHEL, and we've done that with the JBoss technologies. So, we're very, very cognizant of making sure that we provide an environment, and basically an ecosystem around us, that can grow and be able to attach the momentum we have in place. As a result of that, we announced the container health index at this conference, right.
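To make the grading idea concrete: the container health index letter-grades images based on the age and criticality of security errata that have not yet been applied. The sketch below captures the shape of such a scheme; the thresholds are invented for illustration and are not Red Hat's actual formula.

```python
# A minimal sketch of letter-grading a container image by the staleness of
# its security fixes. The thresholds are invented for illustration; the real
# container health index uses Red Hat's own errata data and grading rules.

from typing import Optional

def grade_image(days_since_oldest_unapplied_critical: Optional[int]) -> str:
    """Grade an image: fresher security content earns a better grade."""
    if days_since_oldest_unapplied_critical is None:
        return "A"  # no outstanding critical/important errata
    if days_since_oldest_unapplied_critical <= 30:
        return "B"
    if days_since_oldest_unapplied_critical <= 90:
        return "C"
    return "F"      # critical fixes left unapplied for months

print(grade_image(None))  # -> "A"
print(grade_image(45))    # -> "C"
```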
Mostly because, you know, there's just no way for one company to provide all the services that are possible, right. So being able to grade applications that come in, to give customers confidence that, you know, these can be certified and will work in our environment, and then being able to expand out that ecosystem, is going to be really important going forward. >> Yeah, Ashesh, that's an interesting one, the container health index. I'm going to play with the term there: what's the health of the container industry? We at The Cube were at DockerCon a couple weeks ago and had a couple of Red Hatters on the program. There was kind of a reshuffling, you know. The Moby project, open source, we've got Docker CE, Docker EE; Docker actually referenced, you know, Fedora and CentOS and RHEL as something similar to what they did. What's your take on the announcements there? >> Sure, sure. I'll probably butcher this quote tremendously, but it was Mark Twain or someone who said, "The rumors of my whatever are greatly exaggerated," so. You know, there's always, you know, some amount of change that sort of happens, especially with new technology, and you've got so many players sort of jumping in, right. I mean, of course there's Docker Inc. There's Red Hat, but there's, you know, Google and IBM and Microsoft and Amazon, and there's a lot of companies, right, that all look at this as a way of advancing the number of workloads that come onto their platforms. You know, we've seen some of the challenges, if you will, that Docker Inc. has been facing, as well as the great work it's been doing to help drive the community forward, right. Those are both interesting things. And they've got a business to run. We've seen the changes announced with regard to the renaming and Moby, and I think there's still a lot more detail that needs to be fleshed out. And so we're going to wait for the dust to settle. I think we want to make sure our customers are confident that whatever direction, you know, we go in, we will continue supporting that technology. We will stand behind it. We will make sure we're putting in upstream engineers to help drive the community in a way that will provide the greatest value for customers. >> Ashesh, you're one of the judges for the Innovation Awards here. Can you tell us a little bit more about the secret sauce that you're looking for? First of all, how do you choose these winners, and what is it you're looking for? >> Yeah, so I'm really proud of the work we do to help support the judging of the Innovation Awards. You know, I think it's a fantastic thing we do to recognize people, and I was telling Stu earlier, you know, we could probably have done a dozen more awards, right; the entries that are coming in are just fantastic. We try to change up the categories a little bit every year to kind of match the changes in the industry. Like, for example, you know, DevOps: Macquarie Bank was a great example of enterprise transformation. You know, they had this great line in their keynote, right, where their ambition I think really impressed a lot of the judges: hey, our competition is not necessarily the other financial services companies, it's the last app you opened. That's a remarkable thing, right. Especially for an existing, traditional financial services company.
So, I think what we look for is scope, ambition, and vision, but also how you're executing against them, and what demonstrable results you have. And so you probably saw that as, you know, we talked about all the various Innovation Awards we gave, right, whether it's Macquarie Bank or, you know, the Province of British Columbia empowering individuals, right, so the whole notion of celebrating the impact on the individual, and creating an exchange for them to engage with the wider civic body. That's really important for us. >> Ashesh, one of the Innovation Award winners, OPTUM, we talked to; they're an OpenShift customer. They're really excited about the AWS announcement. We've been chewing on it, talking to a lot of people. We think it's the most significant news coming out of the show. As you said, there are certain details that need to bake out when we look at some of these things. By the time we get to AWS re:Invent we'll probably understand a little bit of the pricing and, you know, some of the other pieces, and it'll be there. But, you know, bring us through your viewpoint: from an OpenShift standpoint, what does this mean as an extension of the product line and for your customers? >> Yeah, so, at this show alone you had over 30 customers presenting about their use of OpenShift. And we typically find them deploying OpenShift in a variety of different environments, including AWS. So for example Swiss Rail, right, obviously out of Switzerland, is taking advantage of, you know, running it in their own data center, and taking advantage of AWS as well. When they're doing that, they want to make sure that they can consume services from Amazon just as if they were running it on Amazon, right. They like the container platform that OpenShift provides, and they like the abstraction level that it puts in place. Of course they have different choices, right. They can choose to run it on OpenStack, they can choose to run OpenShift in some other public cloud provider, yet there are many services that Amazon's releasing that are extremely interesting and valuable to their customers. Being able to have a relationship with Amazon, and have an almost native experience of those services with regard to OpenShift, regardless of the underlying infrastructure OpenShift runs on, is a very powerful value proposition, definitely for our customers. It's a great one for Amazon, because it allows their services to be used across a multitude of environments. And we feel good about it because we're creating value for our customers, and of course not precluding them from using other services as well. >> I'm wondering if you could shed a little light on the financials, and how you think about things. I mean, you made this great point about the banks saying our competition is the last app you opened. How do you think, with OpenShift, which is free, how do you view your competition, and how do you think about the way companies are making their decisions about where they're putting their money in IT investments? >> Right, so OpenShift isn't free, so I'll just make sure-- (all laugh) >> OpenShift.io. >> OpenShift.io, I'm sorry, I'm sorry, yes. >> So, consider OpenShift.io a great gateway into the OpenShift experience, right. It's a cloud-based web environment that allows you to develop in the browser and collaborate with other developers.
There's actually a really cool part of the tech, I don't know if Harry talked about it, right, which is an almost machine-learning aspect that's in play: you know, if this is the code you're using, here's what other users are doing with it, making recommendations, and so on. So it's a really modern, integrated, you know, development environment that we're introducing. That of course doesn't mean customers can't use the existing ones they have in place. So this is just giving customers more choice. By doing that, we're basically expanding the span of options customers have. We also introduced something called OpenShift Application Runtimes at this conference, which supports existing Java tools and frameworks, right, whether it's JBoss EAP, Vert.x, WildFly, or Spring Boot, but also newer ones like Node.js, so again, in the spirit of, let's give you choices, let's have you use what you most want to use, and then, from our perspective, right, you know, we will create value when it's deployed at scale. >> Ashesh, at the beginning of the event you guys ran something called OpenShift Commons. There's some deep education, and a lot of it is very interactive. I'm curious if there's anything that kind of surprised you, or interesting nuggets you got from the users: either stuff where they were further ahead or further behind, or just something that's grabbin' their attention that you could share with our users. >> Well, what I've been really happy to see with the OpenShift Commons, well, there are a couple things, right. One is we try our best to make it literally a community event, right, so we call it OpenShift Commons, but it is a community event. So in the past, and even now, we have providers of technologies, even though they might compete with Red Hat and OpenShift, available to talk to customers, users of our technology, right; we want it to be an open, welcoming environment for various providers. Second, we're seeing more and more customers wanting to come out and share their experiences, right. So at this OpenShift Commons, I think we had maybe over 10 customers present on, you know, how they were using OpenShift, and share that with other customers. Number three, this really attracts other customers. I just had a large financial services institution come and say, you know, we attended OpenShift Commons for the first time; this is a fantastic community; how can we become a part of this? You know, get us involved. There's no cost to join, right, it's free and open, and now our numbers are pretty significant. And then, when that's in place, right, the ecosystem forms around it. So now we have several different ISVs and global system integrators who are all sort of, you know, coalescing to provide additional services. >> Ashesh, thanks so much for your time, we appreciate it. It's always a pleasure to have you on the program. >> Ashesh: Thanks again, see you all next time. >> I'm Rebecca Knight, for Stu Miniman. There'll be more from the Red Hat Summit after this. (relaxed digital beats)

Published Date: May 4, 2017

