
Luis Ceze, OctoML | Amazon re:MARS 2022


 

(upbeat music)
>> Welcome back, everyone, to theCUBE's coverage here live on the floor at AWS re:MARS 2022. I'm John Furrier, host for theCUBE. Great event: machine learning, automation, robotics, space, that's MARS. It's part of the re-series of events; re:Invent's the big event at the end of the year, re:Inforce is security, and re:MARS is really the intersection of the future of space, industrial automation, which is very heavily DevOps and machine learning, and of course machine learning, which is AI. We have Luis Ceze here, who's the CEO and co-founder of OctoML. Welcome to theCUBE.
>> Thank you very much for having me on the show, John.
>> So we've been following you guys. You guys are a growing startup funded by Madrona Venture Capital, one of your backers. You guys are here at the show. This is, I would say, a small show relative to what it's going to be, but a lot of robotics, a lot of space, a lot of industrial kind of edge, but machine learning is the centerpiece of this trend. You guys are in the middle of it. Tell us your story.
>> Absolutely, yeah. So our mission is to make machine learning sustainable and accessible to everyone. I say sustainable because it means we're going to make it faster and more efficient, you know, use less human effort; and accessible to everyone means accessible to as many developers as possible, and also accessible on any device. So, we started from an open source project that began at the University of Washington, where I'm a professor, and several of the co-founders were PhD students there. We started with this open source project called Apache TVM that actually had contributions and collaborations from Amazon and a bunch of other big tech companies. And that allows you to get a machine learning model and run it on any hardware: CPUs, GPUs of various kinds, accelerators, and so on. It was the kernel of our company, and the project's been around for about six years or so. The company is about three years old. And we grew from Apache TVM into a whole platform that essentially supports any model on any hardware, cloud and edge.
>> So is the thesis that, when it first started, that you want to be agnostic on platform?
>> Agnostic on hardware, that's right.
>> Hardware, hardware.
>> Yeah.
>> What was it like back then? What kind of hardware were you talking about back then? 'Cause a lot's changed, certainly on the silicon side.
>> Luis: Absolutely, yeah.
>> So take me through the journey, 'cause I can see the progression. I'm connecting the dots here.
>> So once upon a time, yeah, no... (both chuckling)
>> I walked in the snow with my bare feet.
>> You have to be careful, because if you wake up the professor in me, then you're going to be here for two hours, you know.
>> Fast forward.
>> The short version here is that machine learning has clearly shown it can solve real, interesting, high-value problems. And where machine learning runs, in the end, it becomes code that runs on different hardware, right? And when we started Apache TVM, which stands for tensor virtual machine, at that time people were just beginning to use GPUs for machine learning. We already saw that, with a bunch of machine learning models popping up and CPUs and GPUs starting to be used for machine learning, it was clear that an opportunity was coming to run everywhere.
>> And GPUs were coming fast.
>> GPUs were coming, and there's a huge diversity of CPUs, GPUs, and accelerators now, and the ecosystem and the system software that maps models to hardware is still very fragmented today. So hardware vendors have their own specific stacks: Nvidia has its own software stack, and so does Intel, AMD. And honestly, I hope I'm not being, you know, too controversial here to say that it kind of looks like the mainframe era. We had tight coupling between hardware and software. You know, if you bought IBM hardware, you had to buy the IBM OS and the IBM database, IBM applications; it was all tightly coupled. And if you wanted to use IBM software, you had to buy IBM hardware. So that's kind of what machine learning systems look like today. If you buy a certain big-name GPU, you've got to use their software. Even if you use their software, which is pretty good, you have to buy their GPUs, right? So, you know, we wanted to help peel away the model and the software infrastructure from the hardware, to give people choice, the ability to run the models where it best suits them. Right? So that includes picking the best instance in the cloud, the one that's going to give you the right, you know, cost properties, performance properties, or you might want to run it on the edge. You might run it on an accelerator.
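To make the "any model on any hardware" idea concrete, here is a minimal sketch of compiling a model with Apache TVM's Python API. The model file name, input name, and shape are illustrative assumptions, and the calls assume a Relay-era TVM release (roughly 0.8 and later):

```python
import onnx
import tvm
from tvm import relay

# Load a trained model (the file name is illustrative).
onnx_model = onnx.load("resnet50.onnx")

# The input name and shape are assumptions about the model's graph.
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# The same model compiles for different hardware by changing the target:
# "llvm" for CPU, "cuda" for Nvidia GPUs, and so on.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
```

The design point is the one Luis makes above: the model stays the same, and only the `target` changes.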
>> What year was that, roughly, when you were doing this?
>> We started that project in 2015, 2016.
>> Yeah. So that was pre-conventional wisdom. I think TensorFlow wasn't even around yet.
>> Luis: No, it wasn't.
>> It was, I'm thinking, like 2017 or so.
>> Luis: Right.
>> So that was the beginning of, okay, this is an opportunity. AWS, I don't think they had released some of the Nitro stuff that Hamilton was working on. So, they were already kind of going that way. It's kind of like converging.
>> Luis: Yeah.
>> The space was happening, exploding.
>> Right. And the way that was dealt with, and to this day, you know, to a large extent as well, is by backing machine learning models with a bunch of hardware-specific libraries. And we were some of the first ones to say, like, you know what, let's take a compilation approach: take a model and compile it to very efficient code for that specific hardware. And what underpins all of that is using machine learning for machine learning code optimization. Right? But that was way back when. We can talk about where we are today.
>> No, let's fast forward.
>> That's the beginning of the open source project.
>> But that was a fundamental belief, a worldview there. I mean, you had a worldview that was logical when you compare it to the mainframe, but not obvious to the machine learning community. Okay, good call, check. Now let's fast forward, okay. Evolution, we'll go through the speed of the years. More chips are coming, you've got GPUs, and seeing what's going on in AWS. Wow! Now it's booming. Now I've got unlimited processors, I've got silicon on chips, I've got it everywhere.
>> Yeah. And what's interesting is that the ecosystem got even more complex, in fact. Because now there's a cross product between machine learning models, frameworks like TensorFlow, PyTorch, Keras, and so on, and then hardware targets. So how do you navigate that? Our vision is to say: folks should focus on making the machine learning models do what they want them to do, solving a problem of high value to them. Right? And the deployment should be completely automatic. Today, it's very, very manual, to a large extent.
So once you're serious about deploying a machine learning model, you've got a good understanding of where you're going to deploy it and how you're going to deploy it, and then, you know, you pick out the right libraries and compilers; we automated the whole thing in our platform. This is why you see the tagline, the booth is right there, like bringing DevOps agility for machine learning, because our mission is to make that fully transparent.
>> Well, first of all, I use that line here, 'cause I'm looking at it here live on camera. People can't see, but I use it in a couple of my interviews because the word agility is very interesting, because that's kind of the test of any kind of approach these days. Agility could be, and I talked to the robotics guys, just having their product be more agile. I talked to Pepsi here just before you came on; they had this large-scale data environment because they built an architecture, and that fostered agility. So again, this is an architectural concept, it's a systems view of agility being the output, and removing dependencies, which I think is what you guys are trying to do.
>> Only part of what we do. Right? So agility means a bunch of things. First, you know--
>> Yeah, explain.
>> Today it takes a couple of months to get a model from when the model's ready to production. Why not turn that into two hours? Agile, literally, physically agile, in terms of walk-off time. Right? And then the other thing is giving you the flexibility to choose where your model should run. So, in our deployment, between the demo and the platform expansion that we announced yesterday, you know, we give you the ability to get your model and, you know, get it compiled, get it optimized for any instance in the cloud, and automatically move it around. Today, that's not the case. You have to pick one instance, and that's what you do. And then you might auto-scale with that one instance. So we give you the agility of actually running and scaling the model the way you want, and the way that gives you the right SLAs.
>> Yeah, I think Swami was mentioning that, not specifically that use case for you, but that use case generally: that scale being moving things around, making them faster, not having to do that integration work.
>> Scale, and run the models where they need to run. Like, some day you want to have a large-scale deployment in the cloud. You're going to have models on the edge for various reasons, because the speed of light is limited. We cannot make light faster. So, you know, you've got to have some, that's physics there you cannot change. There are privacy reasons. You want to keep data local, not send it around, and run the model locally. So anyways, giving you the flexibility.
>> Let me jump in real quick. I want to ask this specific question, because you made me think of something. So we were just having a data mesh conversation. And one of the comments that's come out of a few of these data-as-code conversations is that data's the product now. So if you can move data to the edge, which everyone's talking about, you know, why move data if you don't have to? But I can move a machine learning algorithm to the edge. 'Cause it's costly to move data. I can move compute, everyone knows that. But now I can move machine learning anywhere else and not worry about integrating on the fly. So the model is the code.
>> It is the product.
>> Yeah. And since you said the model is the code, okay, now we're talking even more here. So machine learning models today are not treated as code, by the way. They do not have any of the typical properties of code. Whenever you write a piece of code and you run that code, you don't even think about what CPU it runs on, what kind of instance it runs on. But with a machine learning model, you do. So what we did is create this fully transparent, automated way of allowing you to treat your machine learning models as if they were a regular function that you call, and then that function could run anywhere.
>> Yeah.
>> Right.
>> That's why--
>> That's better.
>> Bringing DevOps agility--
>> That's better.
>> Yeah. And you can use existing--
>> That's better, because I can run it on the Artemis too, in space.
>> You could, yeah.
>> If they have the hardware. (both laugh)
>> And that allows you to continue to use your existing DevOps infrastructure and your existing people.
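A sketch of what "model as a regular function" can look like, continuing the TVM example above; the input name and the use of the graph executor are illustrative assumptions, not OctoML's actual implementation:

```python
import numpy as np
from tvm.contrib import graph_executor

# Wrap the compiled library in a runtime module on the chosen device
# (tvm, target, and lib come from the compilation sketch earlier).
dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))

def predict(image: np.ndarray) -> np.ndarray:
    """Callable like any other function; the hardware is a detail."""
    module.set_input("input", image)  # "input" assumed from the model graph
    module.run()
    return module.get_output(0).numpy()
```

The caller just invokes `predict(...)`; whether it dispatches to a CPU, a GPU, or an accelerator is decided at compile time, not in the application code.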
>> So I have to ask you, 'cause since you're a professor, this is like a masterclass on theCUBE. Thank you for coming on, Professor. (Luis laughing) I'm a hardware guy. I'm building hardware for Boston Dynamics, Spot, the dog; that's the diversity in hardware, it tends to be purpose-driven. I've got a spaceship, I'm going to have hardware on there.
>> Luis: Right.
>> It's generally viewed in the community here, by everyone I talk to and in other communities, that open source is going to drive all software. That's a check. But the scale and integration is super important. And they're also recognizing that hardware is really about the software. And they even said on stage here: hardware is not about the hardware, it's about the software. So if you believe that to be true, then your model checks all the boxes. Are people getting this?
>> I think they're starting to. Here is why, right. A lot of companies that were hardware-first, that thought about software too late, aren't making it. Right? There's a large number of hardware companies, AI chip companies, that aren't making it, and probably some of them that won't make it, unfortunately, just because they started thinking about software too late. I'm so glad to see a lot of the early, I hope I'm not just tooting our own horn here, but Apache TVM, the infrastructure that we built to map models to different hardware, is very flexible. So we see a lot of emerging chip companies, like SiMa.ai, which has been doing fantastic work, and they use Apache TVM to map algorithms to their hardware. And there's a bunch of others that are also using Apache TVM. That's because you have, you know, an open infrastructure that keeps up to date with all the machine learning frameworks and models and allows you to extend to the chips that you want. So these companies paying attention that early gives them a much higher fighting chance, I'd say.
>> Well, first of all, not only are you backable by the VCs, 'cause you have pedigree, you're a professor, you're smart, and you get good recruiting--
>> Luis: I don't know about the smart part.
>> And you get good recruiting for PhDs out of the University of Washington, which is not too shabby a computer science department. But they want to make money. The VCs want to make money.
>> Right.
>> So you have to make money. So what's the pitch? What's the business model?
>> Yeah. Absolutely.
>> Share with us what you're thinking there.
>> Yeah. The value of using our solution is shorter time to value for your model, from months to hours. Second, you shrink OpEx, because you don't need a specialized, expensive team.
Talk about expensive: expensive engineers who understand machine learning, hardware, and software engineering well enough to deploy models. You don't need those teams if you use this automated solution, right? So you reduce that. And also, in the process of actually getting a model specialized to the hardware, making it hardware-aware, we're talking about a very significant performance improvement that leads to lower cost of deployment in the cloud. We're talking about a very significant reduction in cost in cloud deployment. And also enabling new applications on the edge that weren't possible before. It creates, you know, latent value opportunities. Right? So, that's the high-level value pitch. But how do we make money? Well, we charge for access to the platform. Right?
>> Usage. Consumption.
>> Yeah, and value-based. So it's consumption- and value-based. It depends on the scale of the deployment. If you're going to deploy a machine learning model at a larger scale, chances are that it produces a lot of value. So then we'll capture some of that value in our pricing scale.
>> So, you have a direct sales force, then, to work those deals.
>> Exactly.
>> Got it. How many customers do you have? Just curious.
>> So, the SaaS platform just launched now, so we've started onboarding customers. We've been building this for a while. We have a bunch of, you know, partners that we can talk about openly, like, you know, revenue-generating partners, that's fair to say. We work closely with Qualcomm to enable Snapdragon on TVM, and hence our platform. We're close with AMD as well, enabling AMD hardware on the platform. We've been working closely with two hyperscaler cloud providers that--
>> I wonder who they are.
>> I don't know who they are, right.
>> Both start with the letter A.
>> And they're both here, right. What is that?
>> They both start with the letter A.
>> Oh, that's right.
>> I won't give it away. (laughing)
>> Don't give it away.
>> One has three, one has four. (both laugh)
>> I'm guessing, by the way.
>> Then we have customers, actually, early customers who have been using the platform from the beginning: in the consumer electronics space in Japan, and, you know, in self-driving car technology as well. As well as some AI-first companies whose core value, whose core business, comes from AI models.
>> So, serious, serious customers. They've got deep tech chops. They're integrating, and they see this as a strategic part of their architecture.
>> That's what I call AI-native, exactly. But now we have several enterprise customers in line that we've been talking to. Of course, because now we've launched the platform, we've started onboarding and exploring how we're going to serve it to these customers. But it's pretty clear that our technology can solve a lot of other pain points right now. And we're going to work with them as early customers to go and refine it.
>> So, do you sell to the little guys, like us? Could we be customers if we wanted to be?
>> You could, absolutely, yeah.
>> What would we have to do, have machine learning folks on staff?
>> So, here's what you're going to have to do. Since you can see the booth, others can't. No, but they can certainly, you can try our demo.
>> OctoML.
>> And you should look at the transparent AI app that's compiled and optimized with our flow, and deployed and built with our flow. It allows you to take your image and do style transfer. You know, you can get you and a pineapple and see what you look like with a pineapple texture.
>> We've got a lot of transcript and video data.
>> Right. Yeah. Right, exactly. So, you can use that. Then there's a very clear--
>> But I could use it. You're not blocking me from using it. It's pretty much democratized.
>> You can try the demo, and then you can request access to the platform.
>> But you get a lot of more serious, deeper customers. But you can serve anybody, is what you're saying.
>> Luis: We can serve anybody, yeah.
>> All right, so what's the vision going forward? Let me ask this. When did people start getting the epiphany of decoupling the machine learning from the hardware? Was it recently, a couple years ago?
>> Well, on the research side, we helped start that trend a while ago. I don't need to repeat that. But I think the vision that's important here, that I want the audience to take away, is that there's a lot of progress being made in creating machine learning models. There are fantastic tools to deal with training data, creating the models, and so on. And now there's a bunch of models that can solve real problems. The question is: how do you very easily integrate them into your intelligent applications? Madrona Venture Group has been very vocal and investing heavily in intelligent applications, both end-user applications as well as enablers. So we see ourselves as an enabler of that, because it's so easy to use our flow to get a model integrated into your application. Now any regular software developer can integrate that. And that's just the beginning, right? Because, you know, now we have CI/CD integration to keep your models up to date, to continue to integrate, and then there's more downstream support for other features that you normally have in regular software development.
>> I've been thinking about this for a long, long time. And I think this whole code, no one thinks about code. Like, I write code, I'm deploying it. I think this idea of machine learning as code, independent of other dependencies, is really amazing. It's so obvious now that you say it. What are the choices now? Let's just say that I buy it, I love it, I'm using it. Now what do I have to do if I want to deploy it? Do I have to pick processors? Are there verified platforms that you support? Is there a short list? Is there every piece of hardware?
>> We can actually help you. I hope we're not saying we can do everything in the world here, but we can help you with that. So, here's how. When you have the model in the platform, you can actually see how that model runs on any instance of any cloud, by the way. So we support all three major cloud providers. And then you can make decisions. For example, if you care about latency, your model has to run in, at most, 50 milliseconds, because you're going to have interactivity. Beyond that, you don't care if it's faster. All you care about is whether it's going to run cheaply enough. So we can help you navigate. And we're also going to make it automatic.
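The navigation Luis describes, picking the cheapest instance that still meets a latency SLA, reduces to a simple selection once per-instance benchmarks exist. A toy sketch, with made-up latency and price numbers:

```python
# (instance type, measured p95 latency in ms, price in $/hour) -- numbers illustrative
benchmarks = [
    ("c5.xlarge",   62.0, 0.17),
    ("c6g.2xlarge", 41.0, 0.27),
    ("g4dn.xlarge",  9.5, 0.53),
]

def cheapest_within_sla(results, sla_ms=50.0):
    """Return the lowest-cost instance whose measured latency meets the SLA."""
    eligible = [r for r in results if r[1] <= sla_ms]
    return min(eligible, key=lambda r: r[2]) if eligible else None

print(cheapest_within_sla(benchmarks))  # -> ('c6g.2xlarge', 41.0, 0.27)
```

The GPU instance is fastest but the CPU instance that still clears the 50 ms budget wins on cost, which is exactly the trade-off being described.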
>> It's like tire kicking in the dealer showroom.
>> Right.
>> You can test everything out, you can see the simulation. Are they simulations, or are they real tests?
>> Oh no, we run it all on real hardware. So, as I said, we support any instance on any of the major clouds. We actually run on the cloud. But we also support a select number of edge devices today, like Arms and Nvidia Jetsons. And we have the OctoML cloud, which is a bunch of racks with a bunch of Raspberry Pis and Nvidia Jetsons, and very soon a bunch of mobile phones there too, that can actually run on the real hardware, validate it, and test it out, so you can see that your model runs performantly and economically enough in the cloud, and that it can run on the edge devices--
>> You're machine learning as a service. Would that be accurate?
>> That's part of it, because we're not doing the machine learning model itself. You come with a model, and we make it deployable and ready to deploy. So, here's why it's important. Let me try. There's a large number of really interesting companies that do API models, as in API as a service. You have an NLP model, you have computer vision models, where you call an API endpoint in the cloud. You send an image and you get a description back, for example. But that is using a third party. Now, if you want to have your model on your infrastructure, but with the same convenience as an API, you can use our service. So, today, chances are that if you have a model that you know you want to deploy, there might not be an API for it; we actually automatically create the API for you.
>> Okay, so that's why, I get it, DevOps agility for machine learning is a better description. 'Cause you're not providing the service itself. You're providing the service of deploying it, like DevOps infrastructure as code. You're now ML as code.
>> It's your model, your API, your infrastructure, but with all of the convenience of having it ready to go, fully automatic, hands off.
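From the caller's side, "your model, your API" looks like any other web service. A hedged sketch; the endpoint, route, and response shape below are hypothetical, not OctoML's actual API:

```python
import requests

# Hypothetical endpoint for an automatically generated model API.
resp = requests.post(
    "https://models.example.com/v1/resnet50/predict",
    files={"image": open("photo.jpg", "rb")},
)
print(resp.json())  # e.g. {"label": "pineapple", "confidence": 0.97} -- shape assumed
```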
>> 'Cause I think what's interesting about this is that it brings the craftsmanship back to machine learning. 'Cause it's a craft. I mean, let's face it.
>> Yeah. I want human brains, which are very precious resources, to focus on building those models that are going to solve business problems. I don't want these very smart human brains figuring out how to actually get this to run the right way. This should be automatic. That's why we use machine learning, for machine learning, to solve that.
>> Here's an idea for you. We should write a book called The Lean Machine Learning. 'Cause the Lean Startup was all about DevOps.
>> Luis: We'd call it machine leaning. No, that's not going to work. (laughs)
>> Remember when iteration was the big mantra? Oh yeah, iterate. You know, that was from DevOps.
>> Yeah, that's right.
>> This code allowed for standing up stuff fast, double down, we all know the history, what it turned out. That was a good value for developers.
>> I couldn't agree more. If you don't mind me building on that point: something we see at OctoML, but we also see at Madrona as well, is that there's a trend towards best-in-breed for each one of the stages of getting a model deployed. From the data aspect of creating the data, to the model creation aspect, to model deployment, and even model monitoring. Right? We develop integrations with all the major pieces of the ecosystem, such that you can integrate, say, with model monitoring to go and monitor how a model is doing, just like you monitor how code is doing in deployment in the cloud.
>> It's evolution. I think it's a great step. And again, I love the analogy to the mainframe. I lived during those days. I remember the monolithic proprietary era, and then, you know, the OSI model kind of blew it open. But that OSI stack never went full stack; it only stopped at TCP/IP. So, I think the same thing's going on here. You see some scalability around it to try to uncouple it, free it.
>> Absolutely. And sustainability and accessibility, to make it run faster and make it run on any device that you want, by any developer. So, that's the tagline.
>> Luis Ceze, thanks for coming on, Professor.
>> Thank you.
>> I didn't know you were a professor. That's great to have you on. It was a masterclass in DevOps agility for machine learning. Thanks for coming on. Appreciate it.
>> Thank you very much. Thank you.
>> Congratulations, again. All right. OctoML here on theCUBE. Really important: uncoupling the machine learning from the hardware specifically. That's only going to make space faster and safer, and more reliable. And that's what the whole theme of re:MARS is about. Let's see how they fit in. I'm John Furrier for theCUBE. Thanks for watching. More coverage after this short break.
>> Luis: Thank you.
(gentle music)

Published Date : Jun 24 2022


Bala Rajaraman, IBM | IBM Think 2019


 

>> Live from San Francisco, it's theCUBE, covering IBM Think 2019. Brought to you by IBM.
>> Welcome back to Moscone North. You're watching theCUBE's live coverage of IBM Think 2019. This is day three of four days of coverage. I'm Stu Miniman, my cohost is Dave Vellante. We've been talking so much about multicloud this week that a pineapple express has hit San Francisco, heavy winds and rains, but we're safe and dry inside. They're handing out ponchos and making sure that everybody can still get all the information that they came for. Happy to welcome back to the program Bala Rajaraman, who's an IBM Fellow and vice president with the IBM Cloud Group. Bala, thanks so much for joining us.
>> Very nice to meet you guys, and thank you again. Very good to see you guys.
>> So, it's always, and I mean this, an honor to be able to talk to the IBM Fellows. I've had the pleasure of working with a number of IBM Fellows, and, of course, we've had many of them on theCUBE. It is not just an honorific. It means you've done the work; you've been with IBM for more years than we'll mention on camera.
>> (chuckles loudly)
>> I did protect you there. But, Bala, we had you on the program a year ago, I think. Give us the update on what you've been working on. As we're speaking right now, the IBM Research keynote is going on, and I love the connection between what happens at IBM, you know, the pure research, what happens at universities, and that funnel of innovation that happens through the company.
>> Oh, that's a great question. I'm glad to be back here, and it's been a fairly eventful year, as you guys know. I worked on our public cloud, we worked with a lot of clients, and we looked at the dynamics of the market and what the transition is to take advantage of cloud technologies, and there were certain, not just barriers, but certain opportunities in terms of looking at things like private cloud, and you guys have done some really good work on some of the research there. So, private clouds became a point of focus for me, and over the last year, working with a lot of clients, the notion of hybrid became really important. And hybrid is not just a cloud structure; it is how you actually build applications on top of it. So, when you look at some of the announcements around things like Watson Anywhere, it is not driven by just having Watson in different places but by the use cases it addresses. Things like manufacturing, where you're bringing more intelligence to the edge, to the manufacturing floor, but you take advantage of big data analytics in the cloud. How does that work together? How do you address a lot of the technical movement of data, etc.? And so that was really the great opportunity and insight that we saw, and that drove our multicloud and public cloud strategy.
>> You bring up a really good point. I mean, the application is, you know, the reason infrastructure exists: to run the application. And the data's important. I think back 10 years ago, it was like, well, am I going to burst applications? Are they going to stretch between them? And the dialogue has changed quite a bit. Now, with microservices architectures, it's not that my application's spread; it's that pieces of applications could live in different places. They can live in a multicloud; I sometimes might be splitting it up by geography or time. So, IBM has strong ties, it has lots of applications that it delivers, and it's working in all of these developer microservices environments.
Tell us where the work's happening and what you're hearing from users.
>> You know, it's a really good question. So, I think we really see three movements here. We accept the fact, and the market has validated it, in terms of hybrid cloud: you've got pieces running on-prem, you've got pieces running on the edge, you've got pieces running on one or more cloud providers. So the hybrid multicloud landscape is really the preferred architecture. But that architecture also brings complexity, and the three dimensions of complexity that I see are, one, around programming models and integration. How do all of these components integrate together from a programming perspective? Because you're choosing different clouds for different reasons, so how do those capabilities integrate together? The second element is data. You've got data moving to different clouds, you've got compute moving to data. How does data governance work, how does data integration work? And Rob Thomas talked a lot about some of our different initiatives there. The third element is managing the environment: from a security perspective, from a compliance perspective, from a configuration consistency perspective, from an upgrade perspective, from an availability and monitoring perspective. Those are the three dimensions, and the amount of work we're doing in that context, not just in terms of the existing portfolio around integration, but when you look at the complexity of microservices and the number of entities, you really start bringing elements of AI into the discussion. So, how do you enable operations with AI? How do you enable data placement, categorization, and governance with AI? Even though they might seem like different technologies, I think bringing them together to solve this problem is perhaps one of the most exciting things that we can provide to the market.
>> So, Bala, when it was becoming clear that public cloud was going to be a force, way back when, people with large estates on-prem started talking about hybrid. We use that term now; maybe they didn't use it then. But the notion, as Stu was describing, was that you'd have some parts of the workload in public, some parts in private, maybe there's bursting. This was long before the edge and the ascendancy of microservices and Docker and CoreOS and the like, and then it became pretty obvious to a lot of users: wow, this is really complicated, and the use cases just don't warrant the business case. So, these things have changed. We've seen the ascendancy of these other services. You just laid out three complexities: the programming models, the data movement, which is huge, and then how you manage all that. So, how are the use cases evolving? Is the business case more compelling now, today, than it was, say, 10, 12 years ago?
>> Yes, and I think that's a really, really good question, because it takes the problem to the next level. The need for hybrid always existed. It was impractical to look at very, very large, complex workloads and transactional needs and say that there is one solution that fits it all, that I can move it all somewhere. Expanding to and taking advantage of different cloud capabilities is a much more realistic scenario, more pragmatic and cost-effective, and it meets many of the business cases.
>> And that's how we got to the 20 percent though--
>> Exactly.
>> Which (mumbles) would call chapter one.
>> Yep. So, now we have chapter two. Now, why is chapter two realistic?
Your question was very apropos, meaning that there's complexity, and when you open up the aperture to more choices, the complexity expands exponentially. What has been really central to it is the notion of: what degree of consistency can I get across all of these elements? And open source, the emergence of things like containers and Kubernetes, not just from a runtime perspective but from a manageability and orchestration perspective, giving you a foundation on which to build that consistency, has been the fundamental revolution of the last two years. It has made that once-intractable problem, with multiple choices and the complexity therein, much more feasible. And so, if you look at our strategy, underpinning those three dimensions of programming models and integration, data, and management, which are not just complexities but realistic needs for enterprises taking things into production, the notion of an underlying open, multicloud hybrid platform based on technologies like containers and Kubernetes, and orchestrating across that, is the fundamental transformation that has happened. And that is the exciting part. If it's open, you create an ecosystem, and you really address enterprise concerns, from how do I build stuff in a consistent way and leverage skills in the market, all the way to how can I manage it to production goals and security goals. I think we are on the cusp of something that can really transform the way enterprises build applications, and that's what Ginni was mentioning when she said that we are very well positioned to take advantage of the hybrid transformation and the markets behind it. That is the technical underpinning of why we think we can do it.
>> I'm glad you brought up ecosystem, because it's vitally important, and you've got a few larger companies. I mean, wouldn't it be nice if we could just say, "Oh, I'll just use one cloud"? Well, that's not going to happen. That's not practical. You'd love it to be IBM's cloud; Amazon would love it to be their cloud. It's just not going to happen. So, you have this complexity. Ecosystem is critical. You've only got a few companies that really have the resources to deliver what you described and to attract the ecosystem. So, specifically, can you talk about the ecosystem and how that's evolving, from IBM's perspective?
>> So, we're just peeling the onion, and I think we're going through a good progression. When you look at the development of an ecosystem, the ability to provide choice to an enterprise and the foundations on which the ecosystem is built are very critical. Now, if you look at the history of ecosystems, they've been built on certain standard programming models, certain APIs. So, Arvind keeps talking about how things like TCP/IP were the foundation of why the internet became a platform. In a similar vein, when you look at things like Kubernetes, the open standards around it, and the ability, through all of these orchestration and runtime capabilities, to create a variety of choices, where the set of choices work together and can be managed together, that is going to create an immense ecosystem. We are already seeing pieces of it, right? I mean, Kubernetes is becoming a model in which many providers are providing the same component across different clouds. You see the adoption of Kubernetes across different clouds. So, rather than looking at an individual part of the ecosystem, it is: how can we create a broad ecosystem based on open standards, open capabilities, and interoperable standards, whether they are formal standards or de facto standards? That is what is exciting about this environment.
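One way to see the consistency Bala is pointing at: the same Kubernetes API calls work against any conformant cluster, whichever provider runs it. A small sketch using the official Python client; the kubeconfig context names are illustrative:

```python
from kubernetes import client, config

# Each kubeconfig context points at a cluster on a different provider.
for ctx in ["ibm-cluster", "aws-cluster", "on-prem-cluster"]:
    api = client.AppsV1Api(
        api_client=config.new_client_from_config(context=ctx))
    # The identical call works on every conformant cluster.
    for dep in api.list_namespaced_deployment("default").items:
        print(ctx, dep.metadata.name, dep.status.ready_replicas)
```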
>> And you're essentially saying that Kubernetes is sort of that analog to old reliable TCP/IP here, or is that--
>> Yes, to a certain extent. I mean, if I combine TCP/IP, HTTP, DNS, how things work together, how things can be managed together, you're moving up to the next level of coherent standards across every provider. And that set of standards, the things that made the internet work, Kubernetes makes applications work. So, networks work together, and now applications work together and data works together, which is really nice.
>> That's a rat hole, Stu, but those were largely government-funded standards, which, after a while, dried up because people said, "Okay, hey, we're there," and now you've got open source as the sort of new--
>> Open source is the engine for innovation, and I think it's a circuitous way to get to that pithy phrase that says "open source is the engine of innovation," but that is really the progressive logic that gets you to the fact that it's important.
>> So, Bala, if we have a solid foundational layer, one of the things, if I think back in my career 10 or even 20 years, is automation and intelligence in my environment; we've been talking about it for a long time. Can you explain why now, in 2019, it's different, and how some of these are actually coming to reality more than some of the efforts we've made in the past?
>> That's a great point, because there are two interesting trends happening. One of them is that the ability to build intelligent systems at scale is being enabled by the cloud. You have the emergence of standard platforms. Now it becomes an application game: how can I leverage the scale, the availability, and the models of innovation to solve really tricky problems? Whether it is supply chains that are globally distributed or enterprises that need survivability in different ways, all the way from the clouds to the edge, what new architectures are possible? But this distribution also causes complexity, and when you have complexity you have to bring some of these new technologies into play, like AI and so on and so forth. And so the combination of these three things, the cloud, the emergence of open standards that span multiple clouds, and the complexity it creates along with the answers to that complexity that have also emerged, is, to me, a very critical point for innovation. I think the landscape is going to look completely different going forward.
>> And I don't think you had the business case for automation, right? Do you remember, people were afraid of automation. It's like, "Well, why should we really do this? We can handle this manually." But today, with digital transformation, data, machine intelligence, and the cloud, you can actually make a significant business case to transform your business and drive competitive advantage, which you couldn't make 20 years ago.
>> You have no choice but to look at automation--
>> I think that.
>> Because of the scale, and everything's there.
>> And go back to the notion of microservices.
You're taking something that you could fence, and you could apply certain prescriptive measures to keep it under control; now you have microservices, you have SaaS systems, you have data being dispersed, you have computing being dispersed. The only way to take advantage of that agility is to create a different level of being able to understand the systems and secure the systems, and that is going to be driven by new technologies, completely new technologies.
>> Alright, so, Bala, you mentioned one of my favorite words, innovation. So what are you seeing in the cloud, both from IBM, from your customers, from your partners? Where is the incubation for some of those next trends? You and I, if we were prepped for this, thinking about Bell Labs back in the day, or the space race: where do we get those ancillary innovations that help transform industries? How will cloud impact that?
>> I think there are two interesting questions there. One is how cloud will impact innovation, but more importantly, how innovation will impact cloud. Right? And both of these directions are important. So, cloud really gives you the ability to innovate, and again, I put "cloud" kind of in quotes because it includes a variety of things: easy access to resources, the open source innovation, the ecosystem that gets built; all of them are drivers of innovation. And it gives you a way to easily exploit that innovation. I see that as the fundamental value of cloud. Now, the interesting part is there's a bunch of other innovations, whether you look at Debater from Watson, or quantum technologies, or some of the Watson capabilities around conversation. How do those start transforming existing processes? So, to me, one of the exciting things about Debater is that you can process incredible amounts of information, not only to provide insight but to provide rational, rationalizable insights. It is a tremendous innovation. Can that be applied to topics like why my network is having a problem? Can you actually debate with a system to isolate the problem? The amount of possibilities, when you look at how they transform how you run your clouds, how you run applications in the cloud, how you work across the ecosystem, I think there's a tremendous amount of potential. And obviously, with things like quantum solving a different class of problems, making it easily accessible, solving different kinds of security issues, the potential is... The accessibility of innovation, the innovation itself, and how it impacts the foundation that delivers that innovation: I think there's a great marriage right there.
>> Bala, I want to give you the final word. Lots going on here at IBM Think. A year ago, we were five or six different shows pulled together; now we're here at the renovated Moscone Center, thousands of people walking around, going to so many different sessions, diversity. Give us a key takeaway that you want people to have when they walk away from IBM Think 2019.
>> So, to me, the two key takeaways are: one, your observation that everything is coming together is really symptomatic of the change in IBM. We are bringing things together to address complexity, to make complexity simple for our clients, and to bring innovation to our clients. So that's number one.
And that has to be done in an open ecosystem, across not just providers but a whole resource ecosystem, an open source ecosystem; and the drivers of innovation that we are participating in, and how we are going to influence that, are something that I look forward to as well. So that's the combination.
>> And it's got to be done through code. I mean, it can't just be services, and I know IBM knows this, right?
>> Oh, yes.
>> It's built this company, this recent chapter, on top of services, but that's a huge opportunity for IBM: to take its deep industry expertise, codify it through software and code, and deliver on that vision. This is an enormous opportunity.
>> Exactly, and the opportunities for code are great, because now it's really transforming what new code, what the potential of code, in this ecosystem can be.
>> Well, Bala, we really appreciate you coming back and sharing the body of work that's happening to help pull together and simplify this hybrid multicloud environment.
>> Great, thank you very much, guys.
>> Great to have you again.
>> Thanks. Alright, we're here for another two days, helping to break down all the complexities, go through the nuances, and speak to the thought leaders, the customers, and the partners. Dave Vellante is my cohost for this segment. John Furrier's here, Lisa Martin's here, and I'm Stu Miniman. As always, thank you for watching theCUBE. (music)

Published Date : Feb 13 2019
