Skyla Loomis, IBM | AnsibleFest 2020
>> (upbeat music) [Narrator] From around the globe, it's theCUBE, with digital coverage of AnsibleFest 2020, brought to you by Red Hat.
>> Hello, welcome back to theCUBE's virtual coverage of AnsibleFest 2020 Virtual. We're not face to face this year. I'm John Furrier, your host. We're bringing it together remotely. We're in the Palo Alto studios with theCUBE and we're going remote for our guests this year, and I hope you can come together online and enjoy the content. Of course, go check out the event site, on demand and live; there's certainly a lot of great content. I've got a great guest: Skyla Loomis, Vice President for the Z Application Platform at IBM, also known as IBM Z, talking mainframe. Skyla, thanks for coming on theCUBE. Appreciate it.
>> Thank you for having me.
>> So, you know, I've had many conversations about the mainframe being relevant and valuable in the context of cloud and cloud native, because if it's got a workload, you've got containers and all this good stuff; you can still run anything on anything these days by integrating it in with this great glue layer, for lack of a better word, or to oversimplify it, you know, things going on. So it's really kind of cool. Plus, Walter Bentley in my previous interview was talking about the success of Ansible and IBM working together on a really killer implementation. So I want to get into that, but before that, let's get into IBM Z. How did you start working with IBM Z? What's your role there?
>> Yeah, so I actually just got started with Z about four years ago. I spent most of my career actually on the distributed platform, largely with data and analytics, the analytics area, databases, both on premises and public cloud. But I always considered myself a friend to Z. So in many of the areas that I'd worked on, I had offerings where we'd enabled them to work with z/OS or Linux on Z.
>> And then I had this opportunity come up where I was able to take on the role of leading some of our really core runtimes and databases on the Z platform, IMS and z/TPF, and then recently just expanded my scope to take on CICS and a number of our other offerings related to those, kind of in this whole application platform space. And I was really excited, just because of how important these runtimes and this platform are to the world, really. You know, we power two thirds of the Fortune 100 clients across banking and insurance. And it's, you know, some of the most powerful transaction platforms in the world, doing hundreds of billions of transactions a day. You know, it's just something that's really exciting to be a part of, and everything that it does for us.
>> It's funny how distributed systems and distributed computing really enable more longevity of everything. And now with cloud, you've got new capabilities. So it's super exciting. We're seeing a big theme at AnsibleFest, this idea of connecting, making things easier. You know, talk about distributed computing: the cloud is one big distributed computer. So everything's kind of playing together. You have a panel discussion at AnsibleFest Virtual. Could you talk about what your topic is and share some of what was in there? Content being, content as in your presentation? Not content. (laughs)
>> Absolutely. Yeah, so I had the opportunity to co-host a panel with a couple of our clients. So we had Phil Allison from Black Knight and Pat Lane from Allstate, and they were really joining us and talking about their experience now starting to use Ansible to manage z/OS. So we actually just launched some content collections, helping to enable and accelerate clients' use of Ansible to manage z/OS, back in March of this year. And we've just seen tremendous client uptake in this.
>> And these are a couple of clients who've been working with us and, you know, getting started on the journey of now using Ansible with Z. They're both, you know, already working with Ansible on other platforms in the enterprise. And, you know, we got to talk with them about how they're bringing it into Z, what use cases they're looking at, the type of culture change that it drives for their teams as they embark on this journey, and, you know, where they see it going for them in the future.
>> You know, this is one of the hot items this year. I know the event's virtual, so it has a lot of content flowing around in sessions, but collections is the top story. A lot of people talking collections, collections, collections, you know, integration and partnering. It hits so many things, but specifically, I like this use case because you're talking about real business value. And I want to ask you specifically about that use case with Ansible and Z. People are excited; it seems like it's working well. Can you talk about what problems it solves? I mean, what were some of the drivers behind it? What were some of the results? Could you give some insight into, you know, was it a pain point? Was it an enabler? Can you just share why people are getting excited about this?
>> Yeah, well, certainly automation on Z is not new. You know, there's decades' worth of automation on the platform, but it's often proprietary, you know, or bundled up; individual teams or individual people on teams have specific assets, right, that they've built, and it's not shared. And it's certainly not consistent with the rest of the enterprise. And, you know, more and more we're talking about hybrid cloud. You know, we're seeing that an application is not isolated to a single platform anymore, right? It really expands.
>> And so being able to leverage this common open platform to manage Z in the same way that you manage the entire rest of your enterprise, whether that's Linux or Windows or network or storage or anything, right, you know, you can now actually bring this all together into a common automation plane and control plane to be able to manage all of this. It's also really great from a skills perspective. So it enables us to really be able to leverage, you know, Python on the platform and that whole ecosystem of Ansible skills that are out there, and be able to now use that to work with Z.
>> So it's essentially a modern abstraction layer of agility and people to work on it. (laughs)
>> Yeah.
>> You know, it's not the joke, "Hey, where's that COBOL programmer?" I mean, these are serious skill gap issues, though. This is what we're talking about here. You don't have to replace, kill the old to bring in the new; this is an example of integration where it's a classic abstraction layer and evolution. Am I getting that right?
>> Absolutely. I mean, I think that Ansible's power as an orchestrator is part of why, you know, it's been so successful here, because it's not trying to rip and replace and tell you that you have to rewrite anything that you already have. You know, it is that glue, sort of like you used that term earlier, right? It's that glue that can span, you know, whether you've got REXX, whether you've got JCL, whether you're using z/OSMF, you know, or any other kind of custom automation on the platform. It works with everything, and it can start to provide that transparency into it as well, and move to that, like, infrastructure-as-code type of culture. So you can bring it into source control. You can have visibility into it as part of the Ansible Automation Platform and Tower and those capabilities. And so it really becomes a part of the whole enterprise and enables you to codify a lot of that knowledge.
That knowledge, you know, exists again in pockets or with individuals, and this makes it much more accessible to anybody new who's coming to the platform.
>> That's a great point, great insight. It's worth calling out. I'm going to make a note of that and make a highlight from that insight. That was awesome. I've got to ask about this notion of client uptake. You know, when z/OS and Ansible kind of come together, where are the clients on this? When do they get excited? When do they know what they've got to do? And what are some of the client reactions? Do they wake up one day and say, "Hey, yeah, I actually put Ansible and z/OS together"? You know, peanut butter and chocolate is (mumbles)
>> Honestly...
>> You know, it was just one of those things where it's not obvious, right? Or is it?
>> Actually, I have been surprised myself at how, like, resoundingly positive and immediate the reactions have been. You know, one of our general managers runs a general managers' advisory council with some of our top clients on the platform, and, you know, we meet with them regularly to talk about the future direction that we're going. And we first brought this idea of Ansible managing Z there. And literally, unanimously, everybody was like, yes, give it to us now. (laughs) It was pretty incredible, you know? And so, you know, we've really just seen amazing uptake. We've had over 5,000 downloads of our core collection on Galaxy. And again, that's just since mid to late March when we first launched. So we're really seeing tremendous excitement with it.
>> You know, I want to talk about some of the new announcements, but you brought that up and I wanted to tie into it. It is addictive. When you think modernization, people's success is addictive. This is another theme coming out of AnsibleFest this year: the sharing, the new content, you know, coders' content is the theme.
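For readers new to the collections being discussed: an Ansible module inside a collection is, at bottom, a small program, typically Python, that inspects state, changes it only if needed, and reports back JSON with a `changed` flag. The sketch below is a minimal, hypothetical illustration of that contract in plain Python, not an actual module from IBM's z/OS collections:

```python
import json

def ensure_setting(current_config, key, desired_value):
    """Ansible-style module contract: inspect state, change only if
    needed, and report a JSON-serializable result with a 'changed' flag."""
    if current_config.get(key) == desired_value:
        return {"changed": False, "msg": f"{key} already set"}
    current_config[key] = desired_value
    return {"changed": True, "msg": f"{key} updated"}

if __name__ == "__main__":
    config = {"region_size": "256M"}  # invented setting, for illustration
    print(json.dumps(ensure_setting(config, "region_size", "512M")))
    print(json.dumps(ensure_setting(config, "region_size", "512M")))
```

Running it twice shows the point: the first call reports a change, the second is a no-op, which is what lets playbooks be re-run safely.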
I've got to ask you, because you mentioned earlier the business value and how the clients are gravitating towards it. They want it. It is addictive, contagious. In the ivory towers, in the big, you know, front office, the business, it's like: we've got to make everything as a service, right? You know, you hear that, right? And they say, "Okay, okay, boss. Okay, Skyla, just go do it. It's so easy, you can just do it tomorrow." But to make everything as a service, you've got to have the automation, right? So, you know, to bridge that gap to everything as a service, whether it's mainframe... I mean, okay, mainframe's no problem. If you want to talk about observability and microservices and DevOps, eventually everything's going to be a service. You've got to have the automation. Could you share your commentary on how you view that? Because again, it's a business objective: everything as a service. Then you've got to make it technical, then you've got to make it work, and so on. So what's your thoughts on that?
>> Absolutely. I mean, agility is a huge theme that we've been focusing on. We've been delivering a lot of capabilities around a cloud native development experience for folks working on COBOL, right. Because absolutely, you know, there are a lot of languages coming to the platform. Java is incredibly powerful, and it actually runs better on Z than it runs on any other platform out there. And so, you know, we're seeing a lot of clients, you know, starting to modernize and continue to evolve their applications, because the platform itself is incredibly modern, right? I mean, we come out with new releases, we're leading the industry in a number of areas around resiliency, in our security, you know, pervasive encryption and a number of things that we come out with. But, you know, the applications themselves are what has not always kept pace with the rate of change in the industry.
And so, you know, we're really trying to help enable our clients to make that leap and continue to evolve their applications in an important way, and the automation and the tools that go around it become very important. So, you know, one of the things that we're enabling is the self-service provisioning experience, right. So clients can, you know, from OpenShift, be able to, you know, say, "Hey, give me an IMS and z/OS Connect stack, or a CICS and Db2 stack." And under the covers that is all going to be powered by Ansible automation. So that really, you know, you can get your system programmers and your talent out of having to do these manual tasks, right, and enable the development community. So they can use things like VS Code and Jenkins and GitLab, and you'll have this automated CI/CD pipeline. And again, Ansible under the covers can be there helping to provision those test environments, you know, move the data, you know, along with the application changes through the pipeline, and really just help to support that so that our clients can do what they need to do.
>> You guys have got the collections in the hub there, the Automation Hub. I've got to ask you, where do you see the future of automating within z/OS going forward?
>> Yeah, so I think, you know, one of the areas that we'd like to see this go is more towards this declarative state, so that you can, you know, have this declarative configuration defined for your Z environment and then have Ansible, really with the idempotency, right, be able to go out and ensure that the environment is always there and meeting those requirements. You know, that's partly a culture change as well, which goes along with it, but that's a key area. And then also, just, you know, along with that, becoming more proactive overall as part of, you know, AIOps, right. That's happening.
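The declarative, idempotent direction described here can be sketched as a reconciliation step: declare the desired configuration, compare it with actual state, and change only what differs, so re-running is safe. A toy illustration in plain Python follows; the configuration keys are invented, and this is not actual z/OS tooling:

```python
def reconcile(desired, actual):
    """Compare a declared configuration against actual state and apply
    only the differences. Returns the list of changes made, so running
    it twice in a row yields no changes the second time (idempotency)."""
    changes = []
    for key, value in desired.items():
        if actual.get(key) != value:
            actual[key] = value          # "apply" the change in place
            changes.append((key, value))
    return changes

if __name__ == "__main__":
    desired = {"cics_region": "up", "db2_subsystem": "up", "trace": "off"}
    actual = {"cics_region": "up", "trace": "on"}
    print(reconcile(desired, actual))  # brings trace and Db2 in line
    print(reconcile(desired, actual))  # second run: [] (nothing to do)
```

The culture change mentioned above is exactly this shift: instead of scripting the steps, you declare the end state and let the tooling converge on it.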
And I think Ansible and the automation that we support can become, you know, an integral piece of supporting that more intelligent and proactive operational direction that, you know, we're all going in.
>> Awesome, Skyla. Great to talk to you, and so insightful. Appreciate it. One final question. I want to ask you a personal question, because I've been doing a lot of interviews around skill gaps and cybersecurity, and there are a lot of job openings and a lot of people, with COVID, working at home. People are looking to get skilled up for new positions, new opportunities. Again, cybersecurity and spaces like the event we did; for us there are huge, huge openings. But for people watching who are, you know, resetting, getting through this COVID, and want to come out on the other side, there are a lot of online learning tools out there. What skill sets do you think matter? Because you brought up this point about modernization and bringing in new people, and people are a big part of this event, and the role of the community. What areas do you think people could really double down on? If I wanted to learn a skill, or an area of coding, business policy, integration services, solution architecture, there are a lot of different personas. But what skills can I learn? What's your advice to people out there?
>> Yeah, sure. I mean, on the Z platform overall, and skills related to Z: COBOL, right. There are, you know, like two billion lines of COBOL out there in the world. And it's certainly not going away, and there's a huge need for skills. And, you know, if you've got experience from other platforms, I think bringing that in, right, and really being able to kind of then bridge the two things together, right, for the folks that you're working for and the enterprise you're working with. You know, we actually have a bunch of education out there.
We've got the Master the Mainframe program, and even a competition that goes on, that's happening now, for folks who are interested in getting started at any stage, whether you're a student or later in your career. But, you know, if you learn a lot of those platforms, you're going to be able to have a career for life.
>> Yeah, and the scale, and the data, there's so much going on. It's super exciting. Thanks for sharing that. Appreciate it. Wanted to get that plug in there. And of course, IBM: if you learn COBOL, you'll have a job forever. I mean, the mainframe's not going away.
>> Absolutely.
>> Skyla, thank you so much for coming on theCUBE. Vice President for the Z Application Platform at IBM, thanks for coming on. Appreciate it.
>> Thanks for having me.
>> I'm John Furrier, your host of theCUBE, here for AnsibleFest 2020 Virtual. Thanks for watching. (upbeat music)
John Thomas, IBM | Change the Game: Winning With AI
(upbeat music)
>> Live from Times Square in New York City, it's theCUBE, covering IBM's Change the Game: Winning With AI, brought to you by IBM.
>> Hi everybody, welcome back to The Big Apple. My name is Dave Vellante. We're here in the Theater District at The Westin Hotel, covering a special Cube event. IBM's got a big event today and tonight; if we can pan here to this pop-up: Change the Game: Winning With AI. So IBM has got an event here at The Westin, The Tide at Terminal 5, which is right up the Westside Highway. Go to IBM.com/winwithAI. Register, you can watch it online, or if you're in the city, come down and see us, we'll be there. We have a bunch of customers who will be there. We had Rob Thomas on earlier; he's kind of the host of the event. IBM does these events periodically throughout the year. They gather customers, they put forth some thought leadership, talk about some hard news. So we're very excited to have John Thomas here. He's a distinguished engineer and Director of IBM Analytics, and a longtime Cube alum. Great to see you again, John.
>> Same here.
>> Thanks for coming on.
>> Great to have you.
>> So we just heard a great case study with Niagara Bottling around the Data Science Elite Team; that's something that you've been involved in, and we're going to get into that. But give us the update since we last talked. What have you been up to?
>> Sure, sure. So we're living and breathing data science these days. So the Data Science Elite Team, we are a team of practitioners. We actually work collaboratively with clients. And I stress the word collaboratively, because we're not there to just go do some work for a client. We actually sit down, expect the client to put their team to work with our team, and we build AI solutions together. Scope use cases, but sort of, you know, expose them to expertise, tools, techniques, and do this together, right. And we've been very busy, (laughs) I can tell you that. You know, it has been a lot of travel around the world.
A lot of interest in the program, and engagements that bring us very interesting use cases. You know, use cases that you would expect to see, and use cases that are, hmmm, I had not thought of a use case like that. You know, it's been an interesting journey these last six, eight months now.
>> And these are pretty small, agile teams. Sometimes people use tiger teams, and they're two- to three-pizza teams, right?
>> Yes.
>> And my understanding is you bring some number of resources, call it two, three data scientists,
>> Yes.
>> and the customer matches that resource, right?
>> Exactly. That's the prerequisite.
>> That is the prerequisite, because we're not there to just do the work for the client. We want to do this in a collaborative fashion, right. So the customer's data science team is learning from us; we are working with them hand in hand to build a solution out.
>> And that's got to resonate well with customers. I mean, so often with the services business, customers will say, well, I don't want to keep going back to a company to get these services.
>> Right, right.
>> I want you to teach me how to fish, and that's exactly...
>> That's exactly!
>> I was going to use that phrase.
>> That's exactly what we do, that's exactly it. So at the end of the two or three month period, when IBM leaves, my team leaves, you know, the client, the customer, knows what the tools are, what the techniques are, what to watch out for, what the success criteria are. They have a good handle on that.
>> So we heard about the Niagara Bottling use case, which was pretty narrow: how can we optimize the use of the plastic wrapping, save some money there, but at the same time maintain stability?
>> Yeah.
>> You know, quite narrow in this case.
>> Yes, yes.
>> What are some of the other use cases?
>> Yeah, that's, like you said, a narrow one. But there are some use cases that span industries, that cut across different domains.
I think I may have mentioned this in one of our previous discussions, Dave. You know, customer interactions. Trying to improve customer interactions is something that cuts across industries, right. Now, that can be across different channels. One of the most prominent channels is the call center. I think we have talked about this previously. You know, I hate calling into a call center (laughter) because I don't know what kind of support I'm going to get. But what if you could equip the call center agents to provide consistent service to the caller, and handle the calls in the most appropriate way? Reducing costs on the business side, because call handling is expensive. And eventually leading up to: can I even avoid the call, through insights on why the call is coming in in the first place? So this use case cuts across industries. Any enterprise that has got a call center is doing this. So we are looking at, can we apply machine learning techniques to understand dominant topics in the conversation? Once we understand the dominant topics in the conversation with these unsupervised techniques, can we drill into that and understand what the intents are, and does the intent change as the conversation progresses? So, you know, I'm calling someone: it starts off with pleasantries, it then goes into the weather, how are the kids doing, you know, complaints about life in general. But then you get to something of substance, why the person was calling in the first place. And then you may think that is the intent of the conversation, but you find that as the conversation progresses, the intent might actually change. And can you understand that in real time? Can you understand the reasons behind the call, so that you could take proactive steps to maybe avoid the call coming in in the first place? This use case, Dave, you know, we are seeing so much interest in this use case, because call centers are a big cost to most enterprises.
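A toy version of that first unsupervised step might look like the following. A real pipeline would run clustering or topic models (for example, LDA) over speech-to-text output; this sketch only surfaces the most frequent content words as a crude stand-in for "dominant topics":

```python
from collections import Counter

# Tiny stopword list for the sketch; real pipelines use proper NLP tooling.
STOPWORDS = {"i", "the", "to", "a", "my", "is", "and", "you", "of", "it", "in"}

def dominant_topics(transcripts, k=3):
    """Pool all call transcripts, drop stopwords, and return the k most
    frequent terms as a crude proxy for dominant conversation topics."""
    counts = Counter()
    for text in transcripts:
        counts.update(w for w in text.lower().split() if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

if __name__ == "__main__":
    calls = [
        "i want to contest a late payment charge",
        "my payment did not go through",
        "question about payment plan options",
    ]
    print(dominant_topics(calls))  # 'payment' should dominate this sample
```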
>> Let's double down on that, because I want to understand this. So every time you call a call center, the call may be recorded...
>> (laughter) Yeah. For quality of service.
>> Yeah. So you're recording the calls, maybe using NLP to transcribe those calls.
>> NLP is just the first step.
>> Right.
>> So you're absolutely right: when calls come in, there are already call recording systems in place. We're not getting into that space, right. So the call recording systems record the voice calls. Often, in offline batch mode, you can take these millions of calls and pass them through a speech-to-text mechanism, which produces a text equivalent of the voice recordings. Then what we do is apply unsupervised machine learning, clustering, and topic modeling techniques against that to understand what the dominant topics in the conversation are.
>> You do kind of an entity extraction of those topics.
>> Exactly, exactly, exactly. Then we find what is most relevant, what the relevant ones are, what the relevancy of topics in a particular conversation is. That's not enough; that is just step two, if you will. Then we build what is called an intent hierarchy. So the topmost level might be, let's say, payments: the call is about payments. But what about payments, right? Is the intent to make a late payment? Or is the intent to avoid the payment, or contest a payment? Or is the intent to structure a different payment mechanism? So can you get down to that level of detail? Then comes a further level of detail, which is the reason that is tied to this intent. What is the reason for a late payment? Is it a job loss or a job change? Is it because they're just not happy with the charges coming in? What is the reason? And the reason can be pretty complex, right? It may not be in the immediate vicinity of the snippet of conversation itself. So you've got to go find out what the reason is and see if you can match it to this particular intent.
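The topic-to-intent drill-down described above can be caricatured as a small hierarchy walk. The taxonomy and keywords below are invented for illustration; a production system would train supervised classifiers at each level rather than match keywords:

```python
# A toy intent hierarchy: topic -> intent -> trigger keywords.
# Entirely made up for illustration; not a real taxonomy.
HIERARCHY = {
    "payments": {
        "late_payment": ["late", "missed"],
        "contest_payment": ["contest", "dispute", "charge"],
        "restructure_payment": ["plan", "installment"],
    },
}

def classify(utterance):
    """Walk the hierarchy and return (topic, intent) for the first
    keyword match, or (None, None) if nothing matches."""
    words = set(utterance.lower().split())
    for topic, intents in HIERARCHY.items():
        for intent, keywords in intents.items():
            if words & set(keywords):
                return topic, intent
    return None, None

if __name__ == "__main__":
    print(classify("I want to dispute a charge on my bill"))
```

The "reason" layer John describes sits below this and is harder, because the evidence may be far from the matched snippet; that is where the supervised models and context windows come in.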
So there are multiple steps to the journey. We do all of this in an offline batch mode, and we are building a series of classifiers, sets of classifiers. But eventually we want to get this to real-time action. So think of this: if you have machine learning models, supervised models that can predict the intent, the reasons, et cetera, you can deploy them, operationalize them, so that when a call comes in in real time, you can screen it in real time, do the speech-to-text, pass it to the supervised models that have been deployed, and the model fires and comes back and says: this is the intent, take some action, or guide the agent to take some action, in real time.
>> Based on some automated discussion, so, "tell me what you're calling about," that kind of thing, right? Is that right?
>> It's probably even gone past "tell me what you're calling about." So it could be that the conversation has begun to get into, you know, "I'm going through a tough time, my spouse had a job change." You know, that is itself an indicator of some other reasons, and can that be used to prompt the CSR to take some action appropriate to the conversation?
>> So I'm not talking to a machine, at first.
>> No, no.
>> I'm talking to a human.
>> Still talking to a human.
>> And then real-time feedback to that human...
>> Exactly, exactly.
>> ...is a good example of human augmentation.
>> Exactly, exactly.
>> I wanted to go back and process a little bit, in terms of the model building. Are there humans involved in calibrating the model?
>> There has to be. Yeah, there has to be. So, you know, for all the hype in the industry, (laughter) you still need a human. (laughter) You know, what it is, is you need expertise to look at what these models produce, right. Because if you think about it, machine learning algorithms don't by themselves have an understanding of the domain.
They are, you know, either statistical or similar in nature, so somebody has to marry the statistical observations with the domain expertise. So humans are definitely involved in the building of these models and the training of these models.
>> Okay. So you've got math, you've got stats, you've got some coding involved, and you've got humans as the last mile to really bring that expertise.
>> Absolutely, absolutely.
>> And then in terms of operationalizing it, how does that actually get done? What's the tech behind that?
>> Ah, yeah. It's a very good question, Dave. You build models, and what good are they if they stay inside your laptop? You know, they don't go anywhere. What you need to do is, I use a phrase, weave these models into your business processes and your applications. So you need a way to deploy these models. The models should be consumable from your business processes. Now, it could be a REST API call to a model. In some cases a REST API call is not sufficient; the latency is too high. Maybe you've got to embed that model right into where your application is running. You know, you've got data on a mainframe. A credit card transaction comes in, and the authorization for the credit card is happening in a four-millisecond window on the mainframe, in, not all, but you know, CICS COBOL code. I don't have the time to make a REST API call outside. I've got to have the model execute in context with my CICS COBOL code, in that memory space.
>> Yeah, right.
>> You know, so operationalizing is deploying, consuming these models. And then beyond that, how do the models behave over time? Because you can have the best programmer, the best data scientist build the absolute best model, which has got great accuracy, great performance today. Two weeks from now, performance is going to go down.
>> Hmm.
>> How do I monitor that? How do I trigger alerts when it falls below a certain threshold?
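The monitoring question at the end here, watching accuracy decay and flagging a refresh below a threshold, can be sketched as a rolling check. The window and threshold values below are invented; production systems would also watch input-data drift, not just labeled accuracy:

```python
from collections import deque

class ModelMonitor:
    """Track recent prediction outcomes and flag when rolling accuracy
    drops below a threshold, signaling that retraining is due."""

    def __init__(self, window=100, threshold=0.9):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.threshold = threshold

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def accuracy(self):
        # With no observations yet, report perfect accuracy (no alert).
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_retraining(self):
        return self.accuracy() < self.threshold

if __name__ == "__main__":
    monitor = ModelMonitor(window=10, threshold=0.8)
    for correct in [1, 1, 1, 0, 0, 0, 1, 0, 0, 1]:  # accuracy decays
        monitor.record(correct)
    print(monitor.accuracy(), monitor.needs_retraining())
```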
And can I have a system in place that retrains this model with new data as it comes in?
>> So you've got to understand where the data lives.
>> Absolutely.
>> You've got to understand the physics, the latencies involved.
>> Yes.
>> You've got to understand the economics.
>> Yes.
>> And there are also, probably, in many industries, legal implications.
>> Oh yes. You know, the explainability of models. Can I prove that there is no bias here? Now, all of these are challenging but, you know, doable things.
>> What makes a successful engagement? Obviously you guys are outcome driven, but talk about how you guys measure success.
>> So, um, for our team right now it is not about revenue; it's purely about adoption. Does the client, does the customer, see the value of what IBM brings to the table? This is not just tools and technology, by the way. It's also expertise, right? So this notion of expertise as a service, which is coupled with tools and technology to build a successful engagement. The way we measure success is: one, have we built out the use case in a way that is useful for the business? Two, does the client see value in going further with that? So this is right now what we look at. It's not... you know, yes, of course everybody cares about revenue, but that is not our key metric. Now, in order to get there, though, what we have found, and it's a little bit of hard work, yes, is you need different constituents of the customer to come together. It's not just me sending a bunch of awesome Python programmers to the client.
>> Yeah, right.
>> Now, from the customer's side, we need involvement from their data science team. We talk about collaborating with them. We need involvement from their line of business, because if the line of business doesn't care about the models we've produced, you know, what good are they?
>> Hmm.
>> And third, and people don't usually think about it, we need IT to be part of the discussion.
Not just part of the discussion, part of being a stakeholder. >> Yes, so you've got, so IBM has the chops to actually bring these constituents together. >> Ya. I have actually a fair amount of experience in herding cats in large organizations. (laughter) And you know, the customer, they've got skin in the game with IBM. This is to me a big differentiator between IBM and certainly some of the other technology suppliers who don't have the depth of services, expertise, and domain expertise. But on the flip side of that, differentiation from many of the SIs who have that level of global expertise, but they don't have the tech piece. >> Right. >> Now they would argue well we do anybody's tech. >> Ya. But you know, if you've got tech. >> Ya. >> You just got to (laughter) Ya. >> Bring those two together. >> Exactly. And that really seems to me to be the big differentiator >> Yes, absolutely. for IBM. Well John, thanks so much for stopping by theCube and explaining sort of what you've been up to, the Data Science Elite Team, very exciting. Six to nine months in, >> Yes. are you declaring success yet? Still too early? >> Uh, well we're declaring success and we are growing, >> Ya. >> Growth is good. >> A lot of attention. >> Alright, great to see you again, John. >> Absolutely, thank you Dave. Thanks very much. Okay, keep it right there everybody. You're watching theCube. We're here at The Westin in midtown and we'll be right back after this short break. I'm Dave Vellante. (tech music)
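The monitoring loop John describes in this interview, track a deployed model's accuracy over a rolling window and flag it for retraining when it drops below a threshold, can be sketched in a few lines. This is a minimal illustration, not any IBM API; the class name, window size, and threshold are all invented for the example:

```python
from collections import deque

class ModelMonitor:
    """Track a model's rolling accuracy in production and flag when it
    degrades past a threshold, so the model can be retrained on fresh data."""

    def __init__(self, window=1000, threshold=0.90):
        # Each entry records whether a prediction matched the eventual outcome.
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, predicted, actual):
        self.window.append(predicted == actual)

    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else 1.0

    def needs_retraining(self):
        # Only judge once a full window of feedback has accumulated.
        return len(self.window) == self.window.maxlen and self.accuracy() < self.threshold

monitor = ModelMonitor(window=4, threshold=0.75)
for pred, actual in [(1, 1), (1, 0), (0, 0), (1, 0)]:
    monitor.record(pred, actual)
print(monitor.accuracy())          # 0.5
print(monitor.needs_retraining())  # True
```

In a real payment system the "actual" labels arrive late (chargebacks, disputes), which is exactly why the performance decay discussed above only shows up weeks after deployment.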
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Rob Thomas | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
John Thomas | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Six | QUANTITY | 0.99+ |
Time Square | LOCATION | 0.99+ |
tonight | DATE | 0.99+ |
first step | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
three month | QUANTITY | 0.99+ |
nine months | QUANTITY | 0.99+ |
third | QUANTITY | 0.98+ |
Two | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
New York City | LOCATION | 0.98+ |
today | DATE | 0.98+ |
Python | TITLE | 0.98+ |
IBM Analytics | ORGANIZATION | 0.97+ |
Terminal 5 | LOCATION | 0.97+ |
Data Science Elite Team | ORGANIZATION | 0.96+ |
Niagara | ORGANIZATION | 0.96+ |
one | QUANTITY | 0.96+ |
IBM.com/winwithAI | OTHER | 0.96+ |
first place | QUANTITY | 0.95+ |
eight months | QUANTITY | 0.94+ |
Change the Game: Winning With AI | TITLE | 0.89+ |
The Westin | ORGANIZATION | 0.89+ |
Niagara Bottling | PERSON | 0.89+ |
Theater District | LOCATION | 0.88+ |
four millisecond window | QUANTITY | 0.87+ |
step two | QUANTITY | 0.86+ |
Cube | PERSON | 0.85+ |
Westside Highway | LOCATION | 0.83+ |
first | QUANTITY | 0.83+ |
Two weeks | DATE | 0.82+ |
millions of calls | QUANTITY | 0.79+ |
two three data scientists | QUANTITY | 0.78+ |
CICS | TITLE | 0.77+ |
COBOL | OTHER | 0.69+ |
Rest API call | OTHER | 0.68+ |
The Tide | LOCATION | 0.68+ |
theCube | ORGANIZATION | 0.67+ |
The Westin | LOCATION | 0.66+ |
Rest API | OTHER | 0.66+ |
Apple | LOCATION | 0.63+ |
Big | ORGANIZATION | 0.62+ |
Westin | LOCATION | 0.51+ |
last six | DATE | 0.48+ |
Hotel | ORGANIZATION | 0.45+ |
theCube | TITLE | 0.33+ |
Bottling | COMMERCIAL_ITEM | 0.3+ |
John Thomas, IBM | IBM Data Science For All
(upbeat music) >> Narrator: Live from New York City, it's the Cube, covering IBM Data Science for All. Brought to you by IBM. >> Welcome back to Data Science for All. It's a whole new game here at IBM's event, two-day event going on, 6:00 tonight the big keynote presentation on IBM.com so be sure to join the festivities there. You can watch it live stream, all that's happening. Right now, we're live here on the Cube, along with Dave Vellante, I'm John Walls and we are joined by John Thomas who is a distinguished engineer and director at IBM. John, thank you for your time, good to see you. >> Same here, John. >> Yeah, pleasure, thanks for being with us here. >> John Thomas: Sure. >> I know, in fact, you just wrote this morning about machine learning, so that's obviously very near and dear to you. Let's talk first off about IBM, >> John Thomas: Sure. >> Not a new concept by any means, but what is new with regard to machine learning in your work? >> Yeah, well, that's a good question, John. Actually, I get that question a lot. Machine learning itself is not new, companies have been doing it for decades, so exactly what is new, right? I actually wrote this in a blog today, this morning. It's really three different things, I call them democratizing machine learning, operationalizing machine learning, and hybrid machine learning, right? And we can talk through each of these if you like. But I would say hybrid machine learning is probably closest to my heart. So let me explain what that is because it sounds fancy, right? (laughter) >> Right. Just what we need, another hybrid something, right? >> In reality, what it is is this: let data gravity decide where your data stays and let your performance requirements, your SLAs, dictate where your machine learning models go, right? So what do I mean by that? You might have sensitive data, customer data, which you want to keep on a certain platform, right?
Instead of moving data off that platform to do machine learning, bring machine learning to that platform, whether that be the mainframe or specialized appliances or hadoop clusters, you name it, right? Bring machine learning to where the data is. Do the training, building of the model, where that is, but then have complete flexibility in terms of where you deploy that model. As an example, you might choose to build and train your model on premises behind the firewall using very sensitive data, but the model that has been built, you may choose to deploy that into a Cloud environment because you have other applications that need to consume it. That flexibility is what I mean by hybrid. Another example is, especially when you get into some of the more complex machine learning, deep learning domains, you need acceleration and there is hardware that provides that acceleration, right? For example, GPUs provide acceleration. Well, you need to have the flexibility to train and build the models on hardware that provides that kind of acceleration, but then the model that has been built might go inside of a CICS mainframe transaction for sub-second scoring of a credit card transaction as to whether it's fraudulent or not, right? So there's flexibility off-prem, on-prem, different platforms, this is what I mean by hybrid. >> What is the technical enabler to allow that to happen? Is it just a modern software architecture, microservices, containers, blah, blah, blah? Explain that in more detail. >> Yeah, that's a good question and we're not, you know, it's a couple different things. One is bringing native machine learning to these platforms themselves. So you need native machine learning on the mainframe, in the Cloud, in a hadoop cluster environment, in an appliance, right? So you need the run times, the libraries, the frameworks running native on those platforms. And it is not easy to do that, you know? You've got machine learning running native on z/OS, not even Linux on Z.
It's native to z/OS on the mainframe. >> At the very primitive level you're talking about. >> Yeah. >> So you get the performance you need. >> You have the runtime environments there and then what you need is a seamless experience across all of these platforms. You need a way to export models, repositories into which you can save models, the same APIs to save models into a different repository and then consume from them there. So it's a bit of engineering that IBM is doing to enable this, right? Native capabilities on the platforms, the same APIs to talk to repositories and consume from the repositories. >> So the other piece of that architecture is there's a lot of tooling that's integrated and native. >> John Thomas: Yes. >> And the tooling, as you know, changes, I feel like daily. There's a new tool out there and everybody gloms onto it, so the architecture has to be able to absorb those. What is the enabler there? >> Yeah, so you actually bring up a very good point. There is a new language, a new framework every day, right? I mean, we all know that, in the world of machine learning, Python and R and Scala. Frameworks like Spark and TensorFlow, they're table stakes now, you know? You have to support all of these, scikit-learn, you name it, right? Obviously, you need a way to support all these frameworks on the platforms you want to enable, right? And then you need an environment which lets you work with the tools of your choice. So you need an environment like a workbench which can allow you to work in the language, the framework that you are the most comfortable with. And that's what we are doing with Data Science Experience. I don't know if you have thought of this, but Data Science Experience is an enterprise ML platform, right, runs in the Cloud, on prem, on x86 machines, you can have it on a (mumbles) box. The idea here is support for a variety of open languages, frameworks, enabled through a collaborative workbench kind of interface.
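The export-then-consume pattern John describes, train the model where the data lives, save it to a repository, then load and score it from a different environment through the same API, can be illustrated with a toy example. Here a JSON string stands in for the model repository, the "model" is just a learned threshold, and all the function names are invented for illustration, not IBM's actual APIs:

```python
import json

# "Training" environment: fit a trivial fraud-score threshold from labeled history.
# (A stand-in for a real algorithm; any serializable model artifact works the same way.)
def train(amounts, labels):
    fraud = [a for a, y in zip(amounts, labels) if y == 1]
    legit = [a for a, y in zip(amounts, labels) if y == 0]
    threshold = (min(fraud) + max(legit)) / 2.0
    return {"name": "amount_threshold", "version": 1, "threshold": threshold}

def export_model(model):
    # Save to the model repository (here: just a JSON string).
    return json.dumps(model)

# "Deployment" environment: consume the same artifact through the same API,
# whether that environment is a cloud service or an on-prem transaction system.
def load_model(blob):
    return json.loads(blob)

def score(model, amount):
    return 1 if amount > model["threshold"] else 0

artifact = export_model(train([20, 35, 900, 1200], [0, 0, 1, 1]))
model = load_model(artifact)
print(score(model, 1000))  # 1 (flagged)
print(score(model, 40))    # 0
```

The point of the pattern is that the training side and the scoring side only share the serialized artifact and the repository API, which is what lets the two run on different platforms.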
>> And the decision to move, whether it's on-prem or in the Cloud, it's a function of many things, but let's talk about those. I mean, data volume is one. You can't just move your business into the Cloud. It's not going to work that well. >> It's a journey, yeah. >> It's too expensive. But then there's others, there's governance edicts and security edicts, not that the security in the Cloud is any worse, it might just be different from what your organization requires, and the Cloud supplier might not support that. It's different Clouds, it's location, etc. When you talked about the data staying on-prem, maybe training a model, and then that model moving to the Cloud, so obviously, it's a lighter weight ... It's not as much-- >> Yeah, yeah, yeah, you're not moving the entire data. Right. >> But I have a concern. I wonder if clients ask you about this. Okay, well, it's my data, I'm going to keep it behind my firewall. But that data trained that model and I'm really worried that that model is now my IP that's going to seep out into the industry. What do you tell a client? >> Yeah, that's a fair point. Obviously, you still need your security mechanisms, your access control mechanisms, your governance control mechanisms. So you need governance whether you are on the Cloud or on prem. And your encryption mechanisms, your version control mechanisms, your governance mechanisms, all need to be in place, regardless of where you deploy, right? And to your question of how do you decide where the model should go, as I said earlier to John, you know, let data gravity, SLAs, performance, and security requirements dictate where the model should go. >> We're talking so much about concepts, right, and theories that you have. Let's roll up our sleeves and get to the nitty-gritty a little bit here and talk about what are people really doing out there? >> John Thomas: Oh yeah, use cases. >> Yeah, just give us an idea for some of the ...
Kind of the latest and greatest that you're seeing. >> Lots of very interesting use cases out there. So actually, I'm part of what IBM calls the Data Science Elite Team. We go out and engage with customers on very interesting use cases, right? And we see a lot of these hybrid discussions happen as well. On one end of the spectrum is understanding customers better. So I call this reading the customer's mind. So can you understand what is in the customer's mind and have an interaction with the client without asking a bunch of questions, right? Can you look at his historical data, his browsing behavior, his purchasing behavior, and have an offer that he will really love? Can you really understand him and give him a celebrity experience? That's one class of use cases, right? Another class of use cases is around improving operations, improving your own internal processes. One example is fraud detection, right? I mean, that is a hot topic these days. As the credit card is swiped, right, it's just a few milliseconds before that travels through a network, hits the mainframe, and a scoring is done as to whether this should be approved or not. Well, you need to have a prediction of how likely this is to be fraudulent or not in the span of the transaction. Here's another one. I don't know if you call help desks now. I sometimes call them "helpless desks." (laughter) >> Try not to. >> Dave: Hell desks. >> Try not to, helpless desks, but, you know, for pretty much every enterprise that I am talking to, there is a goal to optimize their help desk, their call centers. And call center optimization is good. So as the customer calls in, can you understand the intent of the customer? See, he may start off talking about something, but as the call progresses, the intent might change. Can you understand that? In fact, not just understand, but predict it and intercept with something that the client will love before the conversation takes a bad turn?
(laughter) >> You must be listening in on my calls. >> Your calls, must be your calls! >> I meander, I go every which way. >> I game the system and just go really mad and go, let me get you an operator. (laughter) Agent, okay. >> You two guys, your data is a special case. >> Dave: Yeah right, this guy's pissed. >> We are red-flagged right off the top. >> We're not even analyzing you. >> Day job, forget about, you know. What about things, you know, because they're moving so far out to the edge and now with mobile and that explosion there, and sensor data being what it is, and all this is tremendous growth. Tough to manage. >> Dave: It is, it really is. >> I guess, maybe tougher to make sense of it, so how are you helping people make sense of this so they can really filter through and find the data that matters? >> Yeah, there are a lot of things rolled up into that question, right? One is just managing those devices, those endpoints in multiple thousands, tens of thousands, millions of these devices. How would you manage them? Then, are you doing the processing of the data and applying ML and DL right at the edge, or are you bringing the data back behind the firewall or into the Cloud and then processing it there? If you are doing image recognition in a car, in a self-driving car, can you afford the latency of shipping an image of a pedestrian jumping in front across the Cloud for a deep-learning network to process it and give you an answer - oh, that's a pedestrian? You know, you may not have that latency. So you may want to do some processing on the edge, so that is another interesting discussion, right? And you need acceleration there as well. Another aspect now is, as you said, separating the signal from the noise, you know. It really comes down to the different industries that we go into, what are the signals that we understand now? Can we build on them and can we re-use them? That is an interesting discussion as well.
But, yeah, you're right. With the world of exploding data that we are in, with all these devices, it's very important to have a systematic approach to managing your data, cataloging it, understanding where to apply ML, where to apply acceleration, governance. All of these things become important. >> I want to ask you about, come back to the use cases for a moment. You talk about celebrity experiences, I put that in sort of a marketing category. Fraud detection's always been one of the favorite big data use cases, help desks, recommendation engines and so forth. Let's start with the fraud detection. First of all, fraud detection in the last six, seven years has been getting immensely better, no question. And it's great. However, the number of false positives, about a year ago, it was too many. We're a small company but we buy a lot of equipment and lights and cameras and stuff. The number of false positives that I personally got was overwhelming. >> Yeah. >> They've gone down dramatically. >> Yeah. >> In the last 12 months. Is that just a coincidence, happenstance, or is it getting better? >> No, it's not that the bad guys have gone down in number. It's not that at all, no. (laughter) >> Well, that, I know. >> No, I think there is a lot of sophistication in terms of the algorithms that are available now. In terms of ... If you have tens of thousands of features that you're looking at, how do you collapse that space and how do you do that efficiently, right? There are techniques that are evolving in terms of handling that kind of information. In terms of the actual algorithms, there are different types of innovations that are happening in that space. But I think, perhaps, the most important one is that things that used to take weeks or days to train and test now can be done in days or minutes, right?
The acceleration that comes from GPUs, for example, allows you to test out different algorithms, different models and say, okay, well, this performs well enough for me to roll it out and try this out, right? It gives you a very quick cycle of innovation. >> The time to value is really compressed. Okay, now let's take one that's not so good. Ad recommendations, the Google ads that pop up. One in a hundred are maybe relevant, if that, right? And they pop up on the screen and they're annoying. I worry that Siri's listening somehow. I talk to my wife about Israel and then next thing I know, I'm getting ads for going to Israel. Is that a coincidence or are they listening? What's happening there? >> I don't know about what Google's doing. I can't comment on that. (laughter) I don't want to comment on that. >> Maybe just from a technology perspective. >> From a technology perspective, this notion of understanding what is in the customer's mind and really getting to a customer segment of one, this is of top interest for many, many organizations. Regardless of which industry you are in, insurance or banking or retail, doesn't matter, right? And it all comes down to the fundamental principles about how efficiently you can do this. Now, can you identify the features that have the most predictive power? There is a level of sophistication in terms of the feature engineering, in terms of collapsing that space of features that I had talked about, and then, how do I actually do the data science of this? How do I do the exploratory analysis? How do I actually build and test my machine learning models quickly? Do the tools allow me to be very productive about this? Or do I spend weeks and weeks coding in lower-level formats? Or do I get help, do I get guided interfaces, which guide me through the process, right? And then, the topic of acceleration we talked about, right? These things come together and then couple that with cognitive APIs.
For example, speech to text, the word (mumbles) have gone down dramatically now. So as you talk on the phone, with a very high accuracy, we can understand what is being talked about. Image recognition, the accuracy has gone up dramatically. You can create custom classifiers for industry-specific topics that you want to identify in pictures. Natural language processing, natural language understanding, all of these have evolved in the last few years. And all these come together. So machine learning's not an island. All these things coming together is what makes these dramatic advancements possible. >> Well, John, if you've figured out anything over the past 20 minutes or so, it's that Dave and I want ads delivered that matter and we want our help desk questions answered right away. (laughter) So if you can help us with that, you're welcome back on the Cube anytime, okay? >> We will try, John. >> That's all we want, that's all we ask. >> You guys, your calls are still being screened. (laughter) >> John Thomas, thank you for joining us, we appreciate that. >> Thank you. >> Our panel discussion coming up at 4:00 Eastern time. Live here on the Cube, we're in New York City. Be back in a bit. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellente | PERSON | 0.99+ |
John | PERSON | 0.99+ |
John Thomas | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John Walls | PERSON | 0.99+ |
Israel | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
New York City | LOCATION | 0.99+ |
Siri | TITLE | 0.99+ |
ZOS | TITLE | 0.99+ |
today | DATE | 0.99+ |
Linux | TITLE | 0.99+ |
One example | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
thousands | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
Scala | TITLE | 0.99+ |
Spark | TITLE | 0.98+ |
tens of thousands | QUANTITY | 0.98+ |
this morning | DATE | 0.98+ |
each | QUANTITY | 0.98+ |
IMB | ORGANIZATION | 0.96+ |
one | QUANTITY | 0.96+ |
TensorFlow | TITLE | 0.95+ |
millions | QUANTITY | 0.95+ |
About a year ago | DATE | 0.95+ |
first | QUANTITY | 0.94+ |
one class | QUANTITY | 0.92+ |
Z. | TITLE | 0.91+ |
4:00 Eastern time | DATE | 0.9+ |
decades | QUANTITY | 0.9+ |
6:00 tonight | DATE | 0.9+ |
CICS | ORGANIZATION | 0.9+ |
about a year ago | DATE | 0.89+ |
second | QUANTITY | 0.88+ |
two-day event | QUANTITY | 0.86+ |
three different things | QUANTITY | 0.85+ |
last 12 months | DATE | 0.84+ |
IBM Data Science | ORGANIZATION | 0.82+ |
Cloud | TITLE | 0.8+ |
R | TITLE | 0.78+ |
past 20 minutes | DATE | 0.77+ |
Cube | COMMERCIAL_ITEM | 0.75+ |
a hundred | QUANTITY | 0.72+ |
one end | QUANTITY | 0.7+ |
seven years | QUANTITY | 0.69+ |
features | QUANTITY | 0.69+ |
couple | QUANTITY | 0.67+ |
last six | DATE | 0.66+ |
few milliseconds | QUANTITY | 0.63+ |
last few years | DATE | 0.59+ |
x86 | QUANTITY | 0.55+ |
IBM.com | ORGANIZATION | 0.53+ |
SLA | ORGANIZATION | 0.49+ |
Jean Francois Puget, IBM | IBM Machine Learning Launch 2017
>> Announcer: Live from New York, it's theCUBE, covering the IBM machine learning launch event. Brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. >> Alright, we're back. Jean Francois Puget is here, he's the distinguished engineer for machine learning and optimization at IBM Analytics, CUBE alum. Good to see you again. >> Yes. >> Thanks very much for coming on, big day for you guys. >> Jean Francois: Indeed. >> It's like giving birth every time you guys launch one of these products. We saw you a little bit in the analyst meeting, pretty well attended. Give us the highlights from your standpoint. What are the key things that we should be focused on in this announcement? >> For most people, machine learning equals machine learning algorithms. Algorithms, when you look at newspapers or blogs, social media, it's all about algorithms. Our view is that, sure, you need algorithms for machine learning, but you need steps before you run algorithms, and after. So before, you need to get data, to transform it, to make it usable for machine learning. And then, you run algorithms. These produce models, and then, you need to move your models into a production environment. For instance, you use an algorithm to learn from past credit card transaction fraud. You can learn models, patterns, that correspond to fraud. Then, you want to use those models, those patterns, in your payment system. And moving from where you run the algorithm to the operational system is a nightmare today, so our value is to automate what you do before you run algorithms, and then what you do after. That's our differentiator. >> I've had some folks in theCUBE in the past say, years ago actually, "You know what, algorithms are plentiful." I think he made the statement, I remember my friend Avi Mehta, "Algorithms are free. "It's what you do with them that matters." >> Exactly. I believe that open source won for machine learning algorithms.
Now the future is with open source, clearly. But it solves only a part of the problem you're facing if you want to put machine learning into action. So, exactly what you said. What you do with the results of the algorithm is key. And open source people don't care much about it, for good reasons. They are focusing on producing the best algorithms. We are focusing on creating value for our customers. It's different. >> In terms of, you mentioned open source a couple times, in terms of customer choice, what's your philosophy with regard to the various tooling and platforms for open source, how do you go about selecting which to support? >> Machine learning is fascinating. It's overhyped, maybe, but it's also moving very quickly. Every year there is new cool stuff. Five years ago, nobody spoke about deep learning. Now it's everywhere. Who knows what will happen next year? Our take is to support open source, to support the top open source packages. We don't know which one will win in the future. We don't even know if one will be enough for all needs. We believe one size does not fit all, so our take is to support a curated list of major open source packages. We start with Spark ML for many reasons, but we won't stop at Spark ML. >> Okay, I wonder if we can talk use cases. Two of my favorite, well, let's just start with fraud. Fraud has become much, much better over the past certainly 10 years, but still not perfect. I don't know if perfection is achievable, but there are a lot of false positives. How will machine learning affect that? Can we expect as consumers even better fraud detection in more real time? >> If we think of the full life cycle going from data to value, we will provide a better answer. We still use machine learning algorithms to create models, but a model does not tell you what to do. It will tell you, okay, for this credit card transaction coming in, it has a high probability to be fraud. Or this one has a lower priority, uh, probability.
But then it's up to the designer of the overall application to make decisions, so what we recommend is to use machine learning predictions, but not only that, and then use, maybe, (murmuring). For instance, if your machine learning model tells you this is a fraud with a high probability, say 90%, and this is a customer you know very well, it's a 10-year customer you know very well, then you can be confident that it's a fraud. Then the next one tells you this is a 70% probability, but it's a customer of only one week. In a week, we don't know the customer, so the confidence we can get in machine learning should be low, and there you will not reject the transaction immediately. Maybe you don't approve it automatically, maybe you will send a one-time passcode, or you enter a second verification system, but you don't reject it outright. Really, the idea is to use machine learning predictions as yet another input for making decisions. You're making decisions informed by what you could learn from your past. But it's not replacing human decision-making. Our approach with IBM, you don't see IBM speak much about artificial intelligence in general because we don't believe we're here to replace humans. We're here to assist humans, so we say augmented intelligence, or assistance. That's the role we see for machine learning. It will give you additional data so that you make better decisions. >> It's not the concept that you object to, it's the term artificial intelligence. It's really machine intelligence, it's not fake. >> I started my career as a PhD in artificial intelligence, I won't say when, but long enough ago. At that time, there were already promises that we would have Terminator in the next decade, and this and that. And the same happened in the '60s, or it was after the '60s. And then, there was an AI winter, and we have a risk here to have an AI winter because some people are just raising expectations that are not substantiated, I believe.
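Jean Francois's decisioning example above, a high-probability score for a long-tenured customer is rejected outright, while a similar score for a week-old customer triggers a step-up check such as a one-time passcode, amounts to wrapping the model's output in business rules. A minimal sketch; the thresholds and tenure cutoff are invented for illustration, not values from the product:

```python
def decide(fraud_probability, customer_tenure_days):
    """Combine a model's fraud probability with business knowledge
    (here, customer tenure) to choose an action."""
    # A long history means we trust the model's judgment of "unusual" more.
    confident = customer_tenure_days >= 365
    if fraud_probability >= 0.9 and confident:
        return "reject"
    if fraud_probability >= 0.7:
        return "step_up"   # e.g. send a one-time passcode rather than reject outright
    return "approve"

print(decide(0.90, customer_tenure_days=3650))  # reject
print(decide(0.70, customer_tenure_days=7))     # step_up
print(decide(0.10, customer_tenure_days=7))     # approve
```

This is the "prediction as yet another input" idea: the model score never maps directly to an action; the application designer decides what each score means in context.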
I don't think the technology's here that we can replace human decision-making altogether any time soon, but we can help. We can certainly make professionals more efficient, more productive with machine learning. >> Having said that, there are a lot of cognitive functions that are getting replaced, maybe not by so-called artificial intelligence, but certainly by machines and automation. >> Yes, so we're automating a number of things, and maybe we won't need to have people do quality checks and can just have an automated vision system detect defects. Sure, so we're automating more and more, but this is not new, it has been going on for centuries. >> Well, the list evolves. So, what can humans do that machines can't, and how would you expect that to change? >> We're moving away from IBM machine learning, but it is interesting. You know, each time there is a capability that a machine can automate, we basically redefine intelligence to exclude it, so you know. That's what I foresee. >> Yeah, well, robots a while ago, Stu, couldn't climb stairs, and now, look at that. >> Do we feel threatened because a robot can climb a stair faster than us? Not necessarily. >> No, it doesn't bother us, right. Okay, question? >> Yeah, so I guess, bringing it back down to the solution that we're talking about today, if I'm doing the analytics, the machine learning on the mainframe, how do we make sure that we don't overrun and blow out all our MIPS? >> We recommend, so we are not using the mainframe base compute system. We recommend using zIIPs, so additional processors, to not overload, so it's a very important point. We claim, okay, if you do everything on the mainframe, you can learn from operational data. You don't want to disturb, and "you don't want to disturb" takes a lot of different meanings. One that you just said, you don't want to slow down your operational processing because you're going to hurt your business. But you also want to be careful.
Say we have a payment system where there is a machine learning model predicting fraud probability as part of the system. You don't want a young, bright data scientist to decide that he had a great idea, a great model, and push his model into production without asking anyone. So you want to control that. That's why we insist we are providing governance, which includes a lot of things, like keeping track of how models were created and from which data sets, so, lineage. We also want to have access control and not allow just anyone to deploy a new model, because we make it easy to deploy. So we want role-based access, and only someone with the right role, well, it depends on the customer, but not everybody can update the production system, and we want to support that. And that's something that differentiates us from open source. Open source developers, they don't care about governance. It's not their problem, but it is our customers' problem, so this solution will come with all the governance and integrity constraints you can expect from us. >> Can you speak to, the first solution's going to be on z/OS, what does the roadmap look like, and what are some of the challenges of rolling this out to other private cloud solutions? >> We are going to ship IBM Machine Learning for Z this quarter. It starts with Spark ML as the base open source. This is interesting, but it's not all there is for machine learning, so that's how we start. We're going to add more in the future. Last week we announced we will ship Anaconda, which is a major distribution for the Python ecosystem, and it includes a number of machine learning open source packages. We announced it for next quarter. >> I believe in the press release it said down the road things like TensorFlow are coming, H2O. >> Anaconda was announced for next quarter, so we will leverage this when it's out.
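The role-based deployment control and lineage tracking described above can be sketched as a simple gate in front of the deploy step. This is a hypothetical illustration of the idea, not IBM's governance layer; the role names and audit-record fields are made up.

```python
# Hedged sketch: only certain roles may push a model to production, and every
# attempt is recorded with its training-data lineage. Names are illustrative.
ALLOWED_DEPLOY_ROLES = {"ml_admin", "release_manager"}

audit_log = []  # governance: who tried to deploy what, trained on which data

def deploy_model(user: str, role: str, model_id: str, training_dataset: str) -> bool:
    """Permit deployment only for allowed roles; always record the attempt."""
    allowed = role in ALLOWED_DEPLOY_ROLES
    audit_log.append({
        "user": user, "role": role, "model": model_id,
        "dataset": training_dataset, "deployed": allowed,
    })
    return allowed
```

The bright young data scientist with a great model gets an audit-log entry, not a production push.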
Then indeed, we have a roadmap to include the major open source packages, so the major ones are those from Anaconda, mostly, plus key deep learning, so TensorFlow and probably one or two additional frameworks, we're still discussing. One that I'm very keen on is called XGBoost, one word. People don't speak about it in newspapers, but this is what wins all Kaggle competitions. Kaggle is a machine learning competition site. When I say all, I mean all that are not image recognition competitions. >> Dave: And that was ex-- >> XGBoost, X-G-B-O-O-S-T. >> Dave: XGBoost, okay. >> XGBoost, and it's-- >> Dave: X-ray gamma, right? >> It's really a package. When I say we don't know which package will win, XGBoost was introduced a year ago, or maybe a bit more, but not so long ago, and now, if you have structured data, it is the best choice today. It's a really fast-moving field, but so, we will support the major deep learning packages and the major classical machine learning packages, like the ones from Anaconda, or XGBoost. The other thing is, we start with Z. We announced in the analyst session that we will have a Power version and a private cloud, meaning x86, version as well. I can't tell you when because it's not firm, but it will come.
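For readers unfamiliar with why XGBoost dominates on structured data: it is a gradient boosting library. The core idea, fitting each new small tree to the residual errors of the ensemble so far, can be shown in a self-contained sketch. This toy version (squared loss, depth-1 "stumps", 1-D input) is an assumption-laden illustration of the technique only; real XGBoost adds regularization, second-order gradients, and far more.

```python
# Toy gradient boosting: additively fit stumps to the current residuals.
def fit_stump(xs, residuals):
    """Best single-split predictor minimizing squared error on the residuals."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lmean if x <= split else rmean)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def boost(xs, ys, rounds=20, lr=0.5):
    """Each round, fit a stump to the residuals and add it to the ensemble."""
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)
```

Each round shrinks the remaining error, which is why boosted trees work so well on the tabular, structured data the speaker mentions.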
We share code, and then we ship on different platforms. >> I mean, you haven't, just now, used the word hybrid. Every now and then IBM does, but do you see that so-called hybrid use case as viable, or do you see it more as some workloads should run on prem, some should run in the cloud, and maybe they'll never come together? >> Machine learning, you basically have two phases: one is training and the other is scoring. I see people moving training to cloud quite easily, unless there is some regulation about data privacy. Training is a good fit for cloud because usually you need a large computing system, but only for a limited time, so elasticity is great. But then deployment: if you want to score a transaction inside a CICS transaction, the scoring has to run beside CICS, not in the cloud. If you want to score data on an IoT gateway, you want to score on the gateway, not in a data center. I would say that may not be what people think of first, but what will really drive the split between public cloud, private cloud, and on prem is where you want to apply your machine learning models, where you want to score. For instance, smart watches are becoming health measurement systems. You want to score your health data on the watch, not on the internet somewhere. >> Right, and in that CICS example that you gave, you'd essentially be bringing the model to the CICS data, is that right? >> Yes, that's what we do. That's the value of Machine Learning for Z: if you want to score transactions happening on Z, you need to be running on Z. So it's clear, mainframe people, they don't want to hear about public cloud, so they will be the last ones moving. They have their reasons, but they like the mainframe because it's really, really secure and private. >> Dave: Public cloud's a dirty word. >> Yes, yes, for Z users. At least that's what I was told, and I could check with many people. But we know that in general the move is toward public cloud, so we want to help people, depending on where they are in their journey to the cloud.
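The "train in the cloud, score beside the data" split described above hinges on scoring being cheap and dependency-free enough to run next to the transaction system. One common pattern is to export a trained model as plain parameters and score with a tiny function. This is a generic sketch of that pattern, not IBM's product; the weights and feature names are invented for illustration.

```python
# Hedged sketch: a model trained elsewhere, exported as plain numbers, scored
# locally. The parameters below are made up, not from any real fraud model.
import math

EXPORTED_MODEL = {
    "bias": -3.0,
    "weights": {"amount_zscore": 1.2, "new_device": 2.0},
}

def score(features: dict) -> float:
    """Logistic-regression scoring: cheap enough to run beside each transaction."""
    z = EXPORTED_MODEL["bias"] + sum(
        EXPORTED_MODEL["weights"][name] * value
        for name, value in features.items()
    )
    return 1.0 / (1.0 + math.exp(-z))
```

Because scoring is just arithmetic on exported parameters, the same model can run beside CICS, on an IoT gateway, or on a watch, which is exactly the locality argument the speaker makes.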
>> You've got one of those, too. Jean Francois, thanks very much for coming on theCUBE, it was really a pleasure having you back. >> Thank you. >> You're welcome. Alright, keep it right there, everybody. We'll be back with our next guest. This is theCUBE, we're live from the Waldorf Astoria. IBM's machine learning announcement, be right back. (electronic keyboard music)