
Search Results for IBM Machine Learning Launch 2017:

Jean Francois Puget, IBM | IBM Machine Learning Launch 2017


 

>> Announcer: Live from New York, it's theCUBE, covering the IBM machine learning launch event. Brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. >> Alright, we're back. Jean Francois Puget is here, he's the distinguished engineer for machine learning and optimization at IBM Analytics, CUBE alum. Good to see you again. >> Yes. >> Thanks very much for coming on, big day for you guys. >> Jean Francois: Indeed. >> It's like giving birth every time you guys launch one of these products. We saw you a little bit in the analyst meeting, pretty well attended. Give us the highlights from your standpoint. What are the key things that we should be focused on in this announcement? >> For most people, machine learning equals machine learning algorithms. Algorithms, when you look at newspapers or blogs, social media, it's all about algorithms. Our view is that, sure, you need algorithms for machine learning, but you need steps before you run algorithms, and after. So before, you need to get data, to transform it, to make it usable for machine learning. And then, you run algorithms. These produce models, and then, you need to move your models into a production environment. For instance, you use an algorithm to learn from past credit card transaction fraud. You can learn models, patterns, that correspond to fraud. Then, you want to use those models, those patterns, in your payment system. And moving from where you run the algorithm to the operational system is a nightmare today, so our value is to automate what you do before you run algorithms, and then what you do after. That's our differentiator. >> I've had some folks on theCUBE in the past who said, years ago actually, "You know what, algorithms are plentiful." I think he made the statement, I remember my friend Avi Mehta, "Algorithms are free. It's what you do with them that matters." >> Exactly. That's, I believe, honestly, where open source won for machine learning algorithms. Now the future is with open source, clearly. But it solves only a part of the problem you're facing if you want to put machine learning into action. So, exactly what you said. What you do with the results of an algorithm is key. And open source people don't care much about it, for good reasons. They are focusing on producing the best algorithm. We are focusing on creating value for our customers. It's different. >> In terms of, you mentioned open source a couple times, in terms of customer choice, what's your philosophy with regard to the various tooling and platforms for open source, how do you go about selecting which to support? >> Machine learning is fascinating. It's overhyped, maybe, but it's also moving very quickly. Every year there is new cool stuff. Five years ago, nobody spoke about deep learning. Now it's everywhere. Who knows what will happen next year? Our take is to support open source, to support the top open source packages. We don't know which one will win in the future. We don't even know if one will be enough for all needs. We believe one size does not fit all, so our take is to support a curated list of major open source packages. We start with Spark ML for many reasons, but we won't stop at Spark ML. >> Okay, I wonder if we can talk use cases. Two of my favorite, well, let's just start with fraud. Fraud has become much, much better over the past certainly 10 years, but still not perfect. I don't know if perfection is achievable, but there are a lot of false positives. How will machine learning affect that?
Can we expect as consumers even better fraud detection in more real time? >> If we think of the full life cycle going from data to value, we will provide a better answer. We still use a machine learning algorithm to create models, but a model does not tell you what to do. It will tell you, okay, this credit card transaction coming in has a high probability of being fraud. Or this one has a lower probability. But then it's up to the designer of the overall application to make decisions, so what we recommend is to use machine learning predictions, but not only those, and then use, maybe, (murmuring). For instance, if your machine learning model tells you this is a fraud with a high probability, say 90%, and this is a customer you know very well, it's a 10-year customer you know very well, then you can be confident that it's a fraud. Then the next one tells you this is a 70% probability, but it's a customer of only one week. In a week, we don't know the customer, so the confidence we can get in the machine learning should be low, and there you will not reject the transaction immediately. Maybe you don't approve it automatically, maybe you send a one-time passcode, or you route it to a secondary verification system, but you don't reject it outright. Really, the idea is to use machine learning predictions as yet another input for making decisions. You're making decisions informed by what you could learn from your past. But it's not replacing human decision-making. Our approach at IBM, you don't see IBM speak much about artificial intelligence in general because we don't believe we're here to replace humans. We're here to assist humans, so we say augmented intelligence, or assistance. That's the role we see for machine learning. It will give you additional data so that you make better decisions. >> It's not the concept that you object to, it's the term artificial intelligence. It's really machine intelligence, it's not fake. >> I started my career with a PhD in artificial intelligence, I won't say when, but long enough ago. At that time, there were already promises that we would have Terminator in the next decade and this and that. And the same happened in the '60s, or just after the '60s. And then, there was an AI winter, and we have a risk here of another AI winter because some people are just raising red flags that are not substantiated, I believe. I don't think the technology is here that we can replace human decision-making altogether any time soon, but we can help. We can certainly make some professions more efficient, more productive with machine learning. >> Having said that, there are a lot of cognitive functions that are getting replaced, maybe not by so-called artificial intelligence, but certainly by machines and automation. >> Yes, so we're automating a number of things, and maybe we won't need to have people do quality checks and can just have an automated vision system detect defects. Sure, so we're automating more and more, but this is not new, it has been going on for centuries. >> Well, the list evolves. So, what can humans do that machines can't, and how would you expect that to change? >> We're moving away from IBM machine learning, but it is interesting. You know, each time there is a capability that a machine can automate, we basically redefine intelligence to exclude it, so you know. That's what I foresee. >> Yeah, well, robots a while ago, Stu, couldn't climb stairs, and now, look at that. >> Do we feel threatened because a robot can climb a stair faster than us?
Not necessarily. >> No, it doesn't bother us, right. Okay, question? >> Yeah, so I guess, bringing it back down to the solution that we're talking about today, if I'm now doing the analytics, the machine learning on the mainframe, how do we make sure that we don't overrun and blow out all our MIPS? >> We recommend, so we are not using the mainframe's base compute system. We recommend using zIIPs, the additional specialty processors, so as to not overload it, so it's a very important point. We claim, okay, if you do everything on the mainframe, you can learn from operational data. You don't want to disturb, and "you don't want to disturb" takes a lot of different meanings. One that you just said: you don't want to slow down your operational processing because you're going to hurt your business. But you also want to be careful. Say we have a payment system where there is a machine learning model predicting fraud probability as part of the system. You don't want a young, bright data scientist to decide that he has a great idea, a great model, and he wants to push his model into production without asking anyone. So you want to control that. That's why we insist, we are providing governance that includes a lot of things, like keeping track of how models were created from which data sets, so lineage. We also want to have access control and not allow just anyone to deploy a new model because we make it easy to deploy, so we want to have role-based access, and only someone with some executive role, well, it depends on the customer, but not everybody can update the production system, and we want to support that. And that's something that differentiates us from open source. Open source developers, they don't care about governance. It's not their problem, but it is our customers' problem, so this solution will come with all the governance and integrity constraints you can expect from us. >> Can you speak to, the first solution's going to be on z/OS, what does the roadmap look like and what are some of those challenges of rolling this out to other private cloud solutions? >> We are going to ship IBM Machine Learning for z this quarter. It starts with Spark ML as the base open source. This is interesting, but it's not all there is for machine learning. So that's how we start. We're going to add more in the future. Last week we announced we will ship Anaconda, which is a major distribution for the Python ecosystem, and it includes a number of machine learning open source packages. We announced it for next quarter. >> I believe in the press release it said down the road things like TensorFlow are coming, H2O. >> Anaconda was announced for next quarter, so we will leverage it when it's out. Then indeed, we have a roadmap to include major open source, so the major open source packages are the ones from Anaconda (murmuring), mostly. Key deep learning, so TensorFlow and probably one or two additional, we're still discussing. One that I'm very keen on, it's called XGBoost, in one word. People don't speak about it in newspapers, but this is what wins all Kaggle competitions. Kaggle is a machine learning competition site. When I say all, all that are not image recognition competitions. >> Dave: And that was ex-- >> XGBoost, X-G-B-O-O-S-T. >> Dave: XGBoost, okay. >> XGBoost, and it's-- >> Dave: X-ray gamma, right? >> It's really a package. When I say we don't know which package will win, XGBoost was introduced a year ago also, or maybe a bit more, but not so long ago, and now, if you have structured data, it is the best choice today.
It's a really fast-moving field, but we will support major deep learning packages and major classical machine learning packages like the ones from Anaconda or XGBoost. The other thing is we start with z. We announced in the analyst session that we will have a Power version and a private cloud, meaning x86, version as well. I can't tell you when because it's not firm, but it will come. >> And in public cloud as well, I guess, you've got components in the public cloud today like the Watson Data Platform that you've extracted and put here. >> We have extracted part of the Data Science Experience, so we've extracted notebooks and a graphical tool called ModelBuilder from DSX as part of IBM Machine Learning now, and we're going to add more of DSX as we go. But the goal is to really share code and function across private cloud and public cloud. As Rob Thomas defined it, we want private cloud to offer all the features and functionality of public cloud, except that it would run inside a firewall. We are really developing IBM Machine Learning and Watson Machine Learning on a common code base. It's an internal open source project. We share code, and then we ship on different platforms. >> I mean, you haven't, just now, used the word hybrid. Every now and then IBM does, but do you see that so-called hybrid use case as viable, or do you see it more, some workloads should run on prem, some should run in the cloud, and maybe they'll never come together? >> Machine learning, you basically have two phases: one is training and the other is scoring. I see people moving training to cloud quite easily, unless there is some regulation about data privacy. Training is a good fit for cloud because usually you need a large computing system but only for a limited time, so elasticity's great. But then deployment: if you want to score a transaction in a CICS transaction, it has to run beside CICS, not in the cloud. If you want to score data on an IoT gateway, you want to score on the gateway, not in a data center. I would say that may not be what people think of first, but what will really drive the split between public cloud, private, and on prem is where you want to apply your machine learning models, where you want to score. For instance, smart watches are turning into fitness measurement systems. You want to score your health data on the watch, not on the internet somewhere. >> Right, and in that CICS example that you gave, you'd essentially be bringing the model to the CICS data, is that right? >> Yes, that's what we do. That's the value of machine learning for z: if you want to score transactions happening on z, you need to be running on z. So it's clear, mainframe people, they don't want to hear about public cloud, so they will be the last ones moving. They have their reasons, but they like the mainframe because it is really, really secure and private. >> Dave: Public cloud's a dirty word. >> Yes, yes, for z users. At least that's what I was told, and I could check with many people. But we know that in general the move is to public cloud, so we want to help people depending on where they are in their journey to the cloud. >> You've got one of those, too. Jean Francois, thanks very much for coming on theCUBE, it was really a pleasure having you back. >> Thank you. >> You're welcome. Alright, keep it right there, everybody. We'll be back with our next guest. This is theCUBE, we're live from the Waldorf Astoria. IBM's machine learning announcement, be right back. (electronic keyboard music)
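
Editor's note: the fraud-decisioning pattern Puget sketches above, where a model's probability is only one input alongside business context such as customer tenure, boils down to a few lines of decision logic. Here is a minimal, hypothetical sketch in Python; the thresholds, tenure cutoff, and function names are illustrative assumptions, not IBM's actual rules.

```python
def decide(fraud_probability: float, customer_tenure_days: int) -> str:
    """Combine a model's fraud score with business context.

    Thresholds and the tenure cutoff below are illustrative only.
    """
    long_standing = customer_tenure_days >= 365  # e.g. the "10-year customer"

    if fraud_probability >= 0.90 and long_standing:
        # High score on a well-known customer: confident enough to reject.
        return "reject"
    if fraud_probability >= 0.70 and not long_standing:
        # A week-old customer gives the model little history to learn from,
        # so step up verification instead of rejecting outright.
        return "one_time_passcode"
    if fraud_probability >= 0.70:
        return "manual_review"
    return "approve"

# The 70%-score, one-week-old customer from the interview:
print(decide(0.70, customer_tenure_days=7))  # -> "one_time_passcode"
```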
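And since XGBoost comes up as the package that "wins all Kaggle competitions" on structured data, a minimal training-and-scoring example with its scikit-learn-style API looks like the sketch below; the synthetic data and every parameter value are illustrative.

```python
import numpy as np
from xgboost import XGBClassifier

# Synthetic structured data standing in for, say, transaction features.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy fraud label

model = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
model.fit(X[:800], y[:800])

# Scoring yields probabilities; a payment system would feed these into
# decision logic like the sketch above rather than act on them raw.
probabilities = model.predict_proba(X[800:])[:, 1]
print(probabilities[:5])
```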

Published Date: Feb 15 2017



Luis Ceze, OctoML | Amazon re:MARS 2022


 

(upbeat music) >> Welcome back, everyone, to theCUBE's coverage here live on the floor at AWS re:MARS 2022. I'm John Furrier, host for theCUBE. Great event: machine learning, automation, robotics, space, that's MARS. It's part of the re-series of events; re:Invent's the big event at the end of the year, re:Inforce, security, re:MARS, really the intersection of the future of space, industrial, automation, which is very heavily DevOps, machine learning, of course, machine learning, which is AI. We have Luis Ceze here, who's the CEO and co-founder of OctoML. Welcome to theCUBE. >> Thank you very much for having me on the show, John. >> So we've been following you guys. You guys are a growing startup funded by Madrona Venture Capital, one of your backers. You guys are here at the show. This is a, I would say, small show relative to what it's going to be, but a lot of robotics, a lot of space, a lot of industrial kind of edge, but machine learning is the centerpiece of this trend. You guys are in the middle of it. Tell us your story. >> Absolutely, yeah. So our mission is to make machine learning sustainable and accessible to everyone. I say sustainable because it means we're going to make it faster and more efficient, you know, use less human effort, and accessible to everyone, accessible to as many developers as possible, and also accessible on any device. So, we started from an open source project that began at the University of Washington, where I'm a professor. And several of the co-founders were PhD students there. We started with this open source project called Apache TVM that had actually contributions and collaborations from Amazon and a bunch of other big tech companies. And that allows you to get a machine learning model and run it on any hardware: run on CPUs, GPUs, various GPUs, accelerators, and so on. It was the kernel of our company, and the project's been around for about six years or so. The company is about three years old. And we grew from Apache TVM into a whole platform that essentially supports any model on any hardware, cloud and edge. >> So is the thesis that, when it first started, that you want to be agnostic on platform? >> Agnostic on hardware, that's right. >> Hardware, hardware. >> Yeah. >> What was it like back then? What kind of hardware were you talking about back then? Cause a lot's changed, certainly on the silicon side. >> Luis: Absolutely, yeah. >> So take me through the journey, 'cause I could see the progression. I'm connecting the dots here. >> So once upon a time, yeah, no... (both chuckling) >> I walked in the snow with my bare feet. >> You have to be careful because if you wake up the professor in me, then you're going to be here for two hours, you know. >> Fast forward. >> The abridged version here is that, clearly, machine learning has shown it can actually solve real, interesting, high-value problems. And where machine learning runs in the end, it becomes code that runs on different hardware, right? And when we started Apache TVM, which stands for tensor virtual machine, at that time people were just beginning to use GPUs for machine learning, and we already saw that, with a bunch of machine learning models popping up and CPUs and GPUs starting to be used for machine learning, it was clear that the opportunity was to run everywhere. >> And GPUs were coming fast. >> GPUs were coming, and a huge diversity of CPUs, GPUs, and accelerators now, and the ecosystem and the system software that maps models to hardware is still very fragmented today.
So hardware vendors have their own specific stacks. So Nvidia has its own software stack, and so does Intel, AMD. And honestly, I mean, I hope I'm not being, you know, too controversial here to say that it kind of looks like the mainframe era. We had tight coupling between hardware and software. You know, if you bought IBM hardware, you had to buy the IBM OS and IBM database, IBM applications, it was all tightly coupled. And if you wanted to use IBM software, you had to buy IBM hardware. So that's kind of like what machine learning systems look like today. If you buy a certain big-name GPU, you've got to use their software. Even if you use their software, which is pretty good, you have to buy their GPUs, right? So, but you know, we wanted to help peel away the model and the software infrastructure from the hardware to give people choice, the ability to run the models where they best suit them. Right? So that includes picking the best instance in the cloud, the one that's going to give you the right, you know, cost properties, performance properties, or you might want to run it on the edge. You might run it on an accelerator. >> What year was that roughly, when you were doing this? >> We started that project in 2015, 2016. >> Yeah. So that was pre-conventional wisdom. I think TensorFlow wasn't even around yet. >> Luis: No, it wasn't. >> It was, I'm thinking like 2017 or so. >> Luis: Right. >> So that was the beginning of, okay, this is opportunity. AWS, I don't think they had released some of the Nitro stuff that Hamilton was working on. So, they were already kind of going that way. It's kind of like converging. >> Luis: Yeah. >> The space was happening, exploding. >> Right. And the way that was dealt with, and to this day, you know, to a large extent as well, is by backing machine learning models with a bunch of hardware-specific libraries. And we were some of the first ones to say, like, you know what, let's take a compilation approach, take a model and compile it to very efficient code for that specific hardware. And what underpins all of that is using machine learning for machine learning code optimization. Right? But it was way back when. We can talk about where we are today. >> No, let's fast forward. >> That's the beginning of the open source project. >> But that was a fundamental belief, worldview there. I mean, you had a real worldview that was logical when you compare it to the mainframe, but not obvious to the machine learning community. Okay, good call, check. Now let's fast forward, okay. Evolution, we'll go through the speed of the years. More chips are coming, you got GPUs, and seeing what's going on in AWS. Wow! Now I got unlimited processors, I got silicon on chips, I got, everywhere. >> Yeah. And what's interesting is that the ecosystem got even more complex, in fact. Because now you have, there's a cross product between machine learning models, frameworks like TensorFlow, PyTorch, Keras, and the like, and so on, and then hardware targets. So how do you navigate that? What we want here, our vision, is to say: folks should focus on making the machine learning models do what they want to do, that solves a problem of high value to them. Right? So model deployment should be completely automatic. Today, it's very, very manual to a large extent.
So once you're serious about deploying a machine learning model, you've got a good understanding of where you're going to deploy it, how you're going to deploy it, and then, you know, pick out the right libraries and compilers, and we automated the whole thing in our platform. This is why you see the tagline, the booth is right there, like bringing DevOps agility for machine learning, because our mission is to make that fully transparent. >> Well, I think that, first of all, I use that line here, cause I'm looking at it here live on camera. People can't see, but it's like, I use it on a couple of my interviews because the word agility is very interesting, because that's kind of the test on any kind of approach these days. Agility could be, and I talked to the robotics guys, just having their product be more agile. I talked to Pepsi here just before you came on, they had this large-scale data environment because they built an architecture, but that fostered agility. So again, this is an architectural concept, it's a systems view of agility being the output, and removing dependencies, which I think is what you guys were trying to do. >> Only part of what we do. Right? So agility means a bunch of things. First, you know-- >> Yeah, explain. >> Today it takes a couple months to get a model from, when the model's ready, to production. Why not turn that into two hours? Agile, literally, physically agile, in terms of wall-clock time. Right? And then the other thing is giving you flexibility to choose where your model should run. So, in our deployment, between the demo and the platform expansion that we announced yesterday, you know, we give the ability of getting your model and, you know, getting it compiled, getting it optimized for any instance in the cloud, and automatically moving it around. Today, that's not the case. You have to pick one instance and that's what you do. And then you might auto-scale with that one instance. So we give the agility of actually running and scaling the model the way you want, and in the way that gives you the right SLAs. >> Yeah, I think Swami was mentioning that, not specifically that use case for you, but that use case generally, that scale being moving things around, making them faster, not having to do that integration work. >> Scale, and run the models where they need to run. Like some day you want to have a large-scale deployment in the cloud. You're going to have models on the edge for various reasons, because the speed of light is limited. We cannot make light faster. So, you know, you've got to have some, that's physics there, you cannot change it. There are privacy reasons. You want to keep data local, not send it around, to run the model locally. So anyways, and giving the flexibility. >> Let me jump in real quick. I want to ask this specific question because you made me think of something. So we're just having a data mesh conversation. And one of the comments that's come out of a few of these data-as-code conversations is data's the product now. So if you can move data to the edge, which everyone's talking about, you know, why move data if you don't have to, but I can move a machine learning algorithm to the edge. Cause it's costly to move data. I can move compute, everyone knows that. But now I can move machine learning to anywhere else and not worry about integrating on the fly. So the model is the code. >> It is the product. >> Yeah. And since you said the model is the code, okay, now we're talking even more here. So machine learning models today are not treated as code, by the way.
They do not have any of the typical properties of code: whenever you write a piece of code and run it, you don't even think about what CPU it runs on, we don't think about where it runs, what kind of CPU, what kind of instance. But with a machine learning model, you do. So what we have done is create this fully transparent, automated way of allowing you to treat your machine learning models as if they were a regular function that you call, and then that function could run anywhere. >> Yeah. >> Right. >> That's why-- >> That's better. >> Bringing DevOps agility-- >> That's better. >> Yeah. And you can use existing-- >> That's better, because I can run it on the Artemis too, in space. >> You could, yeah. >> If they have the hardware. (both laugh) >> And that allows you to run your existing, continue to use your existing DevOps infrastructure and your existing people. >> So I have to ask you, cause since you're a professor, this is like a masterclass on theCUBE. Thank you for coming on. Professor. (Luis laughing) I'm a hardware guy. I'm building hardware for Boston Dynamics, Spot, the dog, that's the diversity in hardware, it tends to be purpose-driven. I got a spaceship, I'm going to have hardware on there. >> Luis: Right. >> It's generally viewed in the community here, that everyone I talk to and other communities, open source is going to drive all software. That's a check. But the scale and integration is super important. And they're also recognizing that hardware is really about the software. And they even said on stage, here: hardware is not about the hardware, it's about the software. So if you believe that to be true, then your model checks all the boxes. Are people getting this? >> I think they're starting to. Here is why, right. A lot of companies that were hardware first, that thought about software too late, aren't making it. Right? There's a large number of hardware companies, AI chip companies, that aren't making it. Probably some of them won't make it, unfortunately, just because they started thinking about software too late. I'm so glad to see a lot of the early, I hope I'm not just tooting our own horn here, but Apache TVM, the infrastructure that we built to map models to different hardware, is very flexible. So we see a lot of emerging chip companies, like SiMa.ai's been doing fantastic work, and they use Apache TVM to map algorithms to their hardware. And there's a bunch of others that are also using Apache TVM. That's because you have, you know, an open infrastructure that keeps it up to date with all the machine learning frameworks and models and allows you to extend to the chips that you want. So these companies paying attention that early gives them a much higher fighting chance, I'd say. >> Well, first of all, not only are you backable by the VCs cause you have pedigree, you're a professor, you're smart, and you get good recruiting-- >> Luis: I don't know about the smart part. >> And you get good recruiting for PhDs out of the University of Washington, which is not too shabby a computer science department. But they want to make money. The VCs want to make money. >> Right. >> So you have to make money. So what's the pitch? What's the business model? >> Yeah. Absolutely. >> Share with us what you're thinking there. >> Yeah. The value of using our solution is shorter time to value for your model, from months to hours. Second, you shrink OpEx, because you don't need a specialized, expensive team.
Talk about expensive: expensive engineers who understand machine learning hardware and software engineering to deploy models. You don't need those teams if you use this automated solution, right? So you reduce that. And also, in the process of actually getting a model specialized to the hardware, making it hardware-aware, we're talking about a very significant performance improvement that leads to lower cost of deployment in the cloud. We're talking about a very significant reduction in costs in cloud deployment. And also enabling new applications on the edge that weren't possible before. It creates, you know, latent value opportunities. Right? So, that's the high-level value pitch. But how do we make money? Well, we charge for access to the platform. Right? >> Usage. Consumption. >> Yeah, and value based. Yeah, so it's consumption and value based. So it depends on the scale of the deployment. If you're going to deploy a machine learning model at a larger scale, chances are that it produces a lot of value. So then we'll capture some of that value in our pricing scale. >> So, you have a direct sales force then to work those deals. >> Exactly. >> Got it. How many customers do you have? Just curious. >> So we started, the SaaS platform just launched now. So we started onboarding customers. We've been building this for a while. We have a bunch of, you know, partners that we can talk about openly, like, you know, revenue-generating partners, that's fair to say. We work closely with Qualcomm to enable Snapdragon on TVM and hence our platform. We're close with AMD as well, enabling AMD hardware on the platform. We've been working closely with two hyperscaler cloud providers that-- >> I wonder who they are. >> I don't know who they are, right. >> Both start with the letter A. >> And they're both here, right. What is that? >> They both start with the letter A. >> Oh, that's right. >> I won't give it away. (laughing) >> Don't give it away. >> One has three, one has four. (both laugh) >> I'm guessing, by the way. >> Then we have customers, actually, early customers have been using the platform from the beginning, in the consumer electronics space, in Japan, you know, self-driving car technology as well. As well as some AI-first companies whose core value, the core business, comes from AI models.
>> We got a lot of transcript and video data. >> Right. Yeah. Right, exactly. So, you can use that. Then there's a very clear-- >> But I could use it. You're not blocking me from using it. Everyone's, it's pretty much democratized. >> You can try the demo, and then you can request access to the platform. >> But you get a lot of more serious, deeper customers. But you can serve anybody, is what you're saying. >> Luis: We can serve anybody, yeah. >> All right, so what's the vision going forward? Let me ask this. When did people start getting the epiphany of removing the machine learning from the hardware? Was it recently, a couple years ago? >> Well, on the research side, we helped start that trend a while ago. I don't need to repeat that. But I think the vision that's important here, that I want the audience to take away, is that there's a lot of progress being made in creating machine learning models. So, there are fantastic tools to deal with training data, and creating the models, and so on. And now there's a bunch of models that can solve real problems there. The question is, how do you very easily integrate that into your intelligent applications? Madrona Venture Group has been very vocal and investing heavily in intelligent applications, both end-user applications as well as enablers. So we're an enabler of that, because it's so easy to use our flow to get a model integrated into your application. Now, any regular software developer can integrate that. And that's just the beginning, right? Because, you know, now we have CI/CD integration to keep your models up to date, to continue to integrate, and then there's more downstream support for other features that you normally have in regular software development. >> I've been thinking about this for a long, long time. And I think this whole code, no one thinks about code. Like, I write code, I'm deploying it. I think this idea of machine learning as code, independent of other dependencies, is really amazing. It's so obvious now that you say it. What are the choices now? Let's just say that, I buy it, I love it, I'm using it. Now what do I got to do if I want to deploy it? Do I have to pick processors? Are there verified platforms that you support? Is there a short list? Is there every piece of hardware? >> We actually can help you. I hope we're not saying we can do everything in the world here, but we can help you with that. So, here's how. When you have the model in the platform, you can actually see how this model runs on any instance of any cloud, by the way. So we support all three major cloud providers. And then you can make decisions. For example, if you care about latency, your model has to run in, at most, 50 milliseconds, because you're going to have interactivity. And then, after that, you don't care if it's faster. All you care about is whether it's going to run cheaply enough. So we can help you navigate. And we also make it automatic. >> It's like tire kicking in the dealer showroom. >> Right. >> You can test everything out, you can see the simulation. Are they simulations, or are they real tests? >> Oh, no, we run everything on real hardware. So, we have, as I said, we support any instances of any of the major clouds. We actually run on the cloud. But we also support a select number of edge devices today, like ARMs and Nvidia Jetsons.
And we have the OctoML cloud, which is a bunch of racks with a bunch of Raspberry Pis and Nvidia Jetsons, and very soon, a bunch of mobile phones there too, that can actually run on the real hardware, and validate it, and test it out, so you can see that your model runs performantly and economically enough in the cloud. And it can run on the edge devices-- >> You're a machine learning as a service. Would that be accurate? >> That's part of it, because we're not doing the machine learning model itself. You come with a model and we make it deployable and make it ready to deploy. So, here's why it's important. Let me try. There's a large number of really interesting companies that do API models, as in API as a service. You have an NLP model, you have computer vision models, where you call an API endpoint in the cloud. You send an image and you get a description, for example. But it is using a third party. Now, if you want to have your model on your infrastructure but have the same convenience as an API, you can use our service. So, today, chances are that, if you have a model that does what you want, there might not be an API for it; we actually automatically create the API for you. >> Okay, so that's why I get the DevOps agility for machine learning, is a better description. Cause it's not, you're not providing the service. You're providing the service of deploying it, like DevOps infrastructure as code. You're now ML as code. >> It's your model, your API, your infrastructure, but all of the convenience of having it ready to go, fully automatic, hands off. >> Cause I think what's interesting about this is that it brings the craftsmanship back to machine learning. Cause it's a craft. I mean, let's face it. >> Yeah. I want human brains, which are very precious resources, to focus on building those models that are going to solve business problems. I don't want these very smart human brains figuring out how to get this to actually run the right way. This should be automatic. That's why we use machine learning, for machine learning, to solve that. >> Here's an idea for you. We should write a book called The Lean Machine Learning. Cause the lean startup was all about DevOps. >> Luis: We call it machine leaning. No, that's not going to work. (laughs) >> Remember when iteration was the big mantra. Oh, yeah, iterate. You know, that was from DevOps. >> Yeah, that's right. >> This code allowed for standing up stuff fast, double down, we all know the history, how it turned out. That was a good value for developers. >> I really agree. If you don't mind me building on that point. You know, something we see at OctoML, but we also see at Madrona as well. There's a trend towards best-in-breed for each one of the stages of getting a model deployed: from the data aspect of creating the data, to the model creation aspect, to the model deployment, and even model monitoring. Right? We develop integrations with all the major pieces of the ecosystem, such that you can integrate, say, with model monitoring, to go and monitor how a model is doing, just like you monitor how code is doing in deployment in the cloud. >> It's evolution. I think it's a great step. And again, I love the analogy to the mainframe. I lived during those days. I remember the monolithic proprietary stacks, and then, you know, the OSI model kind of blew that open. But that OSI stack never went full stack, it only stopped at TCP/IP. So, I think the same thing's going on here.
You see some scalability around it, to try to uncouple it, free it. >> Absolutely. And sustainability and accessibility, to make it run faster and make it run on any device that you want, by any developer. So, that's the tagline. >> Luis Ceze, thanks for coming on. Professor. >> Thank you. >> I didn't know you were a professor. That's great to have you on. It was a masterclass in DevOps agility for machine learning. Thanks for coming on. Appreciate it. >> Thank you very much. Thank you. >> Congratulations, again. All right. OctoML here on theCUBE. Really important. Uncoupling the machine learning from the hardware, specifically. That's only going to make space faster and safer, and more reliable. And that's what the whole theme of re:MARS is. Let's see how they fit in. I'm John for theCUBE. Thanks for watching. More coverage after this short break. >> Luis: Thank you. (gentle music)
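
Editor's note: for readers curious what the "compilation approach" Ceze describes looks like in code, below is a rough sketch of the open source Apache TVM flow that underpins OctoML: import a model, compile it for a concrete hardware target, then call it like an ordinary function. The model file, input name, and shape are placeholders, and the API shown is approximately that of TVM 0.8/0.9-era releases; treat it as illustrative, not as OctoML's product API.

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Placeholder model and input signature; substitute your own.
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# Translate the framework model into TVM's Relay intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a specific target: "llvm" for a generic CPU here,
# but it could equally be "cuda", an ARM board, and so on.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# The compiled artifact now runs like a regular function on that device.
device = tvm.cpu()
module = graph_executor.GraphModule(lib["default"](device))
module.set_input("input", np.zeros((1, 3, 224, 224), dtype="float32"))
module.run()
output = module.get_output(0).numpy()
print(output.shape)
```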
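The instance-selection rule he gives, meet the latency SLA first and then take the cheapest option, is equally compact. A hypothetical sketch, with made-up instance names and numbers:

```python
from dataclasses import dataclass

@dataclass
class Benchmark:
    instance: str          # illustrative instance names below
    p95_latency_ms: float  # measured on real hardware, per the interview
    cost_per_hour: float

def pick_instance(benchmarks: list, sla_ms: float = 50.0):
    """Return the cheapest instance that still meets the latency SLA."""
    viable = [b for b in benchmarks if b.p95_latency_ms <= sla_ms]
    return min(viable, key=lambda b: b.cost_per_hour) if viable else None

candidates = [
    Benchmark("gpu.large", 12.0, 2.40),
    Benchmark("cpu.xlarge", 38.0, 0.34),
    Benchmark("cpu.medium", 72.0, 0.17),
]
# cpu.medium is cheapest but misses the 50 ms SLA, so cpu.xlarge wins.
print(pick_instance(candidates))
```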

Published Date: Jun 24 2022



Linton Ward, IBM & Asad Mahmood, IBM - DataWorks Summit 2017


 

>> Narrator: Live from San Jose, in the heart of Silicon Valley, it's theCUBE! Covering DataWorks Summit 2017. Brought to you by Hortonworks. >> Welcome back to theCUBE. I'm Lisa Martin with my co-host George Gilbert. We are live on day one of the DataWorks Summit in San Jose, in the heart of Silicon Valley. Great buzz at the event, I'm sure you can see and hear behind us. We're very excited to be joined by a couple of fellows from IBM, a very longstanding Hortonworks partner that announced a phenomenal suite of four new levels of that partnership today. Please welcome Asad Mahmood, Analytics Cloud Solutions Specialist at IBM, and medical doctor, and Linton Ward, Distinguished Engineer, Power Systems OpenPOWER Solutions from IBM. Welcome guys, great to have you both on theCUBE for the first time. So, Linton, software has been changing; companies, enterprises all around are really looking for more open solutions, really moving away from proprietary. Talk to us about the OpenPOWER Foundation before we get into the announcements today, what was the genesis of that? >> Okay sure, we recognized the need for innovation beyond a single chip, to build out an ecosystem, an innovation collaboration with our system partners. So, ranging from Google to Mellanox for networking, to Hortonworks for software, we believe that system-level optimization and innovation is what's going to bring the price-performance advantage in the future. That traditional seamless scaling doesn't really bring us there by itself, but that partnership does. >> So, among today's announcements is that Hortonworks is adopting IBM's data science platforms, and really the theme of the keynote this morning was data science, right; it's the next leg in really transforming an enterprise to be very much data-driven and digitalized. We also saw the announcement about Atlas for data governance. What does that mean from your perspective on the engineering side? >> Very exciting, you know. In terms of building out solutions of hardware and software, the ability to really harden the Hortonworks Data Platform with servers, and storage, and networking I think is going to bring simplification to on-premises, like people are seeing with the cloud. I think the ability to create the analyst workbench, or the cognitive workbench, using the Data Science Experience to create a pipeline of data flow and analytic flow, is going to be very strong for innovation. Around that, most notable for me is the fact that they're all built on open technologies, leveraging communities that universities can pick up and contribute to; I think we're going to see the pace of innovation really pick up. >> And on that front, on pace of innovation, you talked about universities. One of the things I thought was really a great highlight in the customer panel this morning that Raj Verma hosted was you had health care, insurance companies, financial services, there was Duke Energy there, and they all talked about one of the great benefits of open source: that kids in universities have access to the software for free. So from a talent-attraction perspective, they're really kind of fostering that next generation who will be able to take this to the next level, which I think is a really important point as we look at data science being kind of the next big driver or transformer, and also, you know, there's not a lot of really skilled data scientists; how can that change over time?
And this is one, the open source community that Hortonworks has been very dedicated to since the beginning; it's really a great outcome of that. >> Definitely, I think the ability to take the risk out of a new analytical project is one benefit, and the other benefit is there's a tremendous, not just from young people, a tremendous amount of interest among programmers, developers of all types, to create data science skills, data engineering and data science skills. >> If we leave aside the skills for a moment and focus on the, sort of, the operationalization of the models once they're built, how should we think about a trained model? Or, I should break it into two pieces: how should we think about training the models, where the data comes from and who does it? And then, the orchestration and deployment of them: cloud, edge gateway, edge device, that sort of thing. >> I think it all comes down to exactly what your use case is. You have to identify what use case you're trying to tackle, whether that's applicable to clinical medicine, whether that's applicable to finance, to banking, to retail or transportation. First you have to have that use case in mind, then you can go about training that model, developing that model, and for that you need to have a good, potent, robust data set to allow you to carry out that analysis, and whether you want to do exploratory analysis or you want to do predictive analysis, that needs to be very well defined in your training stage. Once you have that model developed, then we have certain services, such as Watson Machine Learning within Data Science Experience, that will allow you to take that model that you just developed, just moments ago, and deploy it as a RESTful API that you can then embed into an application and into your solution, and that solution you can basically use across industries. >> Are there some use cases where you have almost like a tiering of models where, you know, there're some that are right at the edge, like, you know, a big device like a car, and then, you know, there's sort of the fog level which is, say, cell towers or other buildings nearby, and then there's something in the cloud that's sort of like a master model or an ensemble of models? I don't assume that's like, Evel Knievel would say, you know, "Don't try that at home," but sort of, is the tooling being built to enable that? >> So the tooling is already in existence right now. You can actually go ahead right now and build out prototypes, even full-level, full-range applications, right on the cloud, and you can do that thanks to Data Science Experience, you can do that thanks to IBM Bluemix. You can go ahead and do that type of analysis right there, and not only that, you can allow that analysis to actually guide you along the path from building a model to building a full-range application, and this is all happening at the cloud level. We can talk more about it happening at the on-premises level, but at the cloud level specifically, you can have those applications built on the fly, on the cloud, and have them deployed for web apps, for mobile apps, et cetera.
>> One of the things that you talked about is use cases in certain verticals. IBM has been very strong and vertically focused for a very long time, but you kind of almost answered the question that I'd like to maybe explore a little bit more about building these models, training the models, in say, health care or telco, and being able to deploy them. Where are the horizontal benefits there that IBM would be able to deliver faster to other industries? >> Definitely, I think the main thing is that IBM, first of all, gives you that opportunity, that platform to say that, hey, you have a data set, you have a use case, let's give you the tooling, let's give you the methodology to take you from data, to a model, to ultimately that full-range application. And specifically, I've built some applications specific to federal health care, specifically to address clinical medicine and behavioral medicine, and that's allowed me to actually use IBM tools and some open source technologies as well to actually go out and build these applications on the fly as a prototype to show not only the realm, the art of the possible when it comes to these technologies, but also to solve problems, because ultimately, that's what we're trying to accomplish here. We're trying to find real-world solutions to real-world problems. >> Linton, let me re-direct something towards you. A lot of people are talking about how Moore's law is slowing down or even ending, well, at least in terms of speed of processors, but if you look at not just the CPU but the FPGA, or ASIC, or the tensor processing unit, which, I assume, is an ASIC, and you have the high-speed interconnects; if we don't look at just, you know, what can you fit on one chip, but you look at, you know, in 3D, what's the density of transistors in a rack or in a data center, is that still growing as fast or faster, and what does it mean for the types of models that we can build? >> That's a great question. One of the key things that we did with the OpenPOWER Foundation is to open up the interfaces to the chip. So with NVIDIA we have NVLink, which gives us a substantial increase in bandwidth, and we have created something called OpenCAPI, which is a coherent protocol to get to other types of accelerators. So we believe in hybrid computing in that form; you saw NVIDIA on stage this morning, and we believe, especially for deep learning, the acceleration provided by GPUs is going to continue to drive substantial growth. It's a very exciting time. >> Would it be fair to say that we're on the same curve, if we look at it not from the point of view of, you know, what can we fit on a little square, but if we look at what can we fit in a data center, or the power available to model things? You know, Jeff Dean at Google said, "If Android users talk into their phones for two to three minutes a day, we need two to three times the data centers we have." Can we grow that price performance faster and enable sort of things that we did not expect? >> I think the innovation that you're describing will, in fact, put pressure on data centers. The ability to collect data from autonomous vehicles or other endpoints is really going up. So, we're okay for the near term, but at some point we will have to start looking at other technologies to continue that growth.
Right now we're in the throes of what I call fast data versus slow data: keeping the slow data cheaply and getting the fast data closer to the compute is a very big deal for us. So NAND flash and other non-volatile technologies for the fast data are where the innovation is happening right now, but you're right, over time we will continue to collect more and more data, and it will put pressure on the overall technologies. >> Last question as we get ready to wrap here. Asad, your background is fascinating to me, having a medical degree and working in federal healthcare for IBM. You talked about some of the clinical work that you're doing and the models that you're helping to build. What are some of the mission-critical needs that you're seeing in health care today that are really kind of driving not just health care organizations to do big data right, but to do data science right? >> Exactly, so I think one of the biggest questions that we get, and one of the biggest needs that we get from the healthcare arena, is patient-centric solutions. There are a lot of solutions that are hoping to address problems that are being faced by physicians on a day-to-day level, but there are not enough applications that are addressing the concerns, the pain points, that patients are facing on a daily basis. So the applications that I've started building out at IBM are all patient-centric applications that basically put their data, their symptoms, their diagnoses, in their hands alone and allow them to actually find out more or less what's going wrong with their body at any particular time during the day, and then find the right healthcare professional or the right doctor that is best suited to treating that condition, treating that diagnosis. So I think that's the big thing that we've seen from the healthcare market right now: the big need that we have, that we're currently addressing with our cloud analytics technology, which is just becoming more and more advanced and sophisticated, and is trending towards some of the other health trends or technology trends that we have currently right now on the market, including the blockchain, which is tending towards more of a decentralized focus for these applications. So it's actually putting more of the data in the hands of the consumer, in the hands of the patient, and even in the hands of the doctor. >> Wow, fantastic. Well you guys, thank you so much for joining us on theCUBE. Congratulations on your first time being on the show, Asad Mahmood and Linton Ward from IBM, we appreciate your time. >> Thank you very much. >> Thank you. >> And for my co-host George Gilbert, I'm Lisa Martin. You're watching theCUBE live on day one of the DataWorks Summit from Silicon Valley, but stick around, we've got great guests coming up, so we'll be right back.
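
Editor's note: the "deploy that as a RESTful API" step Mahmood describes, with Watson Machine Learning turning a trained model into a scoring endpoint, reduces to an HTTP call from the embedding application. A hedged sketch follows; the URL, auth scheme, field names, and payload shape are placeholders standing in for the deployment details WML provides, not the documented API.

```python
import requests

# Placeholders: the real scoring URL and token come from your
# Watson Machine Learning deployment.
SCORING_URL = "https://example.com/wml/deployments/<deployment-id>/score"
ACCESS_TOKEN = "<access-token>"

# Hypothetical patient features for a clinical-risk style model.
payload = {
    "fields": ["age", "systolic_bp", "heart_rate"],
    "values": [[54, 141, 88]],
}

response = requests.post(
    SCORING_URL,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # prediction, ready to embed in a patient-facing app
```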

Published Date : Jun 13 2017


Wrap Up - IBM Machine Learning Launch - #IBMML - #theCUBE


 

(jazzy intro music) [Narrator] Live from New York, it's theCUBE! Covering the IBM Machine Learning Launch Event, brought to you by IBM. Now, here are your hosts: Dave Vellante and Stu Miniman. >> Welcome back to New York City, everybody. This is theCUBE, the leader in live tech coverage. We've been covering, all morning, the IBM Machine Learning announcement. Essentially what IBM did is bring machine learning to the z platform. My co-host Stu Miniman and I have been talking to a number of guests, and we're going to do a quick wrap here. You know, Stu, my take is, when we first heard about this, and the world first heard about this, we were like, "Eh, okay, that's nice, that's interesting." But what it underscores is IBM's relentless effort to keep z relevant. We saw it with the early Linux stuff; we're now seeing it with all the open source and Spark tooling. You're seeing IBM make big positioning efforts to bring analytics and transactions together, and the simple point is, a lot of the world's really important data runs on mainframes. You were just quoting some stats, which were pretty interesting. >> Yeah, I mean, Dave, you know, one of the biggest challenges we know in IT is migrating. Moving from one thing to another is really tough. I love the comment from Barry Baker: well, if I need to change my platform, by the time I've moved it, that whole digital transformation window has passed. We know how long that takes: months, quarters. I was actually watching Twitter, and it looks like Chris Maddern is here. Chris was the architect of Venmo, which my younger sisters, all the millennials I know, everybody uses. He's here, and he was like, "Almost all the banks, airlines, and retailers still run on mainframes in 2017, and it's growing. Who knew?" You've got a guy here developing really cool apps who was finding this interesting, and that's an angle I've been looking at today, Dave: how do you make it easy for developers to leverage these platforms that are already there? The developers aren't going to need to care whether it's a mainframe or a cloud or x86 underneath. IBM is giving you the options, and as a number of our guests said, they're not looking to solve all the problems here. They're taking this really great new type of application, machine learning, and making it available on a platform that so many of their customers already use. >> Right, so we heard a little bit of roadmap here: the ML for z goes GA in Q1, and then, while we don't have specific timeframes, we're going to see the Power platform pick this up. We heard from Jean-Francois Puget that they'll have an x86 version, and then obviously a cloud version. It's unclear what that hybrid cloud will look like. It's a little fuzzy right now, but that's something we're watching. Obviously a lot of the model development and training is going to live in the cloud, but the scoring is going to be done locally; that's how the data scientists like to think about these things. So again, Stu, more mainframe relevance. We've got another cycle coming soon for the mainframe. We're two years into the z13. When IBM has mainframe cycles, it tends to give a little bump to earnings. Now, granted, a smaller and smaller portion of the company's business is mainframe, but mainframes drag a lot of other software with them, so the platform remains a strategic component. So one of the questions we get a lot is: what's IBM doing in so-called hardware?
Of course, IBM says it's all software, but we know they're still selling boxes, right? So do all the hardware guys, EMC, Dell, IBM, HPE, et cetera: a lot of software content, but it's still a hardware business. So there are really two platforms there: there's the z and there's the Power, and those are both strategic to IBM. It sold its x86 business because it didn't see it as strategic. They just put Bob Picciano in charge of the Power business, so there are obviously real commitments to those platforms. Will they make a dent in the market share numbers? Unclear. It looks like it's steady as she goes, not a dramatic increase in share. >> Yeah, and Dave, I didn't hear anybody come in here and say this offering is going to make them dump x86 and go buy a mainframe. That's not the target that I heard here. I would have loved to hear a little bit more as to where this fits into the broader IoT strategy. We talked a little bit in the intro, Dave: there are a lot of reasons why data is going to stick at the edge when we look at the numbers. For all the huge growth of public cloud, the amount of data in public cloud hasn't caught up to the equivalent of what it would be in data centers themselves. What I mean by that is, we usually spend, say, 30% on average for storage costs inside a data center. If we look at public cloud, it's more around 10%. So at AWS re:Invent, I talked to a number of the ecosystem partners who have started to see things like data lakes appearing in the cloud. This solution isn't in the data lake family; it's about the analytics and everything that's happening with streaming and machine learning. It's the large repositories of data and the huge volume of transactions happening on the mainframe, and just trying to squint through where all the data lives and the new waves of technologies coming in. We heard how this can tie into some of the mobile and streaming activities that aren't on the mainframe, so that it can pull them into the other decisions, but that's a broader picture that I'm sure IBM will be able to give in the future.
So a large portion of IBM is still a services company. Not surprising there, but as I've said many, many times, the challenge IBM has is to really drive that software business and simplify the deployment and management of that software for its customers, which is something I think it's working hard on. The other thing is you're seeing IBM leverage those analytics platforms into different hardware segments, or hardware/cloud segments, whether it's Bluemix, z, or Power, pushing it out through the organization. IBM still has a stack, like Oracle has a stack, so wherever it can push its own stack, it's going to do that, because the margins are better. At the same time, I think it understands very well that it's got to offer open source choice. >> Yeah, absolutely, and that's something we heard loud and clear here, Dave, which is what we expect from IBM: choice of language, choice of framework. When I hear the public cloud guys, it's like, "Oh, well, here's kind of the main focus we have, and maybe we'll have a little bit of choice there." Absolutely, the likes of Google and Amazon are working with open source, but at first blush, once IBM fleshes this out -- and as we've said, it's Spark to start, with others being added on -- IBM could have a broader offering than I expect to see from some of the public cloud guys. We'll see. As you know, Dave, Google's got their cloud event in a couple of weeks in San Francisco. We'll be covering that, and of course from Amazon you can expect their regular cadence of announcements. So, definitely a new front in the Cloud Wars, as it were, for machine learning. >> Excellent! Alright, Stu, we've got to wrap, because we're broadcasting the livestream; we've got to go set up for that. Thanks, I really appreciate you coming down here and co-hosting with me. Good event. >> Always happy to come down to the Big Apple, Dave. >> Alright, good. Alright, thanks for watching, everybody! Check out SiliconAngle.com, where you'll get all the news from this event and around the world. Check out SiliconAngle.tv for this and other CUBE activities and where we're going to be next. We've got a big end of winter and spring coming up this season. And check out WikiBon.com for all the research. Thanks guys, good job today, that's a wrap! We'll see you next time. This is theCUBE, we're out. (jazzy music)
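
One technical thread from this wrap is worth making concrete: the split Dave describes, where model development and training live in the cloud while scoring runs locally, next to the data. The sketch below illustrates only that division of labor, using plain Python with scikit-learn and made-up data; it is not the announced IBM product, where the scoring side would sit inside the z transaction path rather than in a script:

# Hedged sketch of "train in the cloud, score locally": the two halves
# would run in different places; the pickled file is the handoff artifact.
# Data, features, and file names are all made up for illustration.
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Training side (elastic cloud compute): fit on historical records ---
rng = np.random.default_rng(0)
X_train = rng.random((1000, 4))              # stand-in historical features
y_train = (X_train[:, 0] > 0.5).astype(int)  # stand-in fraud/no-fraud labels
model = LogisticRegression().fit(X_train, y_train)
with open("model_v1.pkl", "wb") as f:
    pickle.dump(model, f)                    # artifact shipped to scoring tier

# --- Scoring side (next to the data): load the artifact and score ---
with open("model_v1.pkl", "rb") as f:
    scorer = pickle.load(f)
transaction = np.array([[0.7, 0.1, 0.3, 0.9]])   # one incoming record
print(scorer.predict_proba(transaction))         # low-latency local score

The point is the one made above: training is periodic and compute-hungry, while scoring has to run with low latency wherever the transaction data already lives.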

Published Date : Feb 15 2017
