Sreesha Rao, Niagara Bottling & Seth Dobrin, IBM | Change The Game: Winning With AI 2018
>> Live, from Times Square, in New York City, it's theCUBE covering IBM's Change the Game: Winning with AI. Brought to you by IBM. >> Welcome back to the Big Apple, everybody. I'm Dave Vellante, and you're watching theCUBE, the leader in live tech coverage, and we're here covering a special presentation of IBM's Change the Game: Winning with AI. IBM's got an analyst event going on here at the Westin today in the theater district. They've got 50-60 analysts here. They've got a partner summit going on, and then tonight, at Terminal 5 off the West Side Highway, they've got a customer event, a lot of customers there. We've talked earlier today about the hard news. Seth Dobrin is here. He's the Chief Data Officer of IBM Analytics, and he's joined by Shreesha Rao who is the Senior Manager of IT Applications at California-based Niagara Bottling. Gentlemen, welcome to theCUBE. Thanks so much for coming on. >> Thank you, Dave. >> Well, thanks Dave for having us. >> Yes, always a pleasure Seth. We've known each other for a while now. I think we met in the snowstorm in Boston, sparked something a couple years ago. >> Yep. When we were both trapped there. >> Yep, and at that time, we spent a lot of time talking about your internal role as the Chief Data Officer, working closely with Inderpal Bhandari, and what you guys are doing inside of IBM. I want to talk a little bit more about your other half which is working with clients and the Data Science Elite Team, and we'll get into what you're doing with Niagara Bottling, but let's start there, in terms of that side of your role, give us the update. >> Yeah, like you said, we spent a lot of time talking about how IBM is implementing the CDO role. While we were doing that internally, I spent quite a bit of time flying around the world, talking to our clients over the last 18 months since I joined IBM, and we found a consistent theme with all the clients, in that they needed help learning how to implement data science, AI, machine learning, whatever you want to call it, in their enterprise. There's a fundamental difference between doing these things at a university or as part of a Kaggle competition and doing them in an enterprise, so we felt really strongly that it was important for the future of IBM that all of our clients become successful at it, because what we don't want is, in two years, for them to go "Oh my God, this whole data science thing was a scam. We haven't made any money from it." And it's not because the data science thing is a scam. It's because the way they're doing it is not conducive to business, and so we set up this team we call the Data Science Elite Team, and what this team does is we sit with clients around a specific use case for 30, 60, 90 days, it's really about 3 or 4 sprints, depending on the material, the client, and how long it takes, and we help them learn through this use case how to use Python, R, Scala in our platform obviously, because we're here to make money too, to implement these projects in their enterprise. Now, because it's written completely in open source, if they're not happy with what the product looks like, they can take their toys and go home afterwards. It's on us to prove the value as part of this, but there's a key point here. My team is not measured on sales. They're measured on adoption of AI in the enterprise, and so it creates a different behavior for them. So they're really about "Make the enterprise successful," right, not "Sell this software." >> Yeah, compensation drives behavior. 
>> Yeah, yeah. >> So, at this point, I ask, "Well, do you have any examples?" so Shreesha, let's turn to you. (laughing softly) Niagara Bottling -- >> As a matter of fact, Dave, we do. (laughing) >> Yeah, so you're not a bank with a trillion dollars in assets under management. Tell us about Niagara Bottling and your role. >> Well, Niagara Bottling is the biggest private label bottled water manufacturing company in the U.S. We make bottled water for Costcos, Walmarts, major national grocery retailers. These are our customers whom we service, and as with all large customers, they're demanding, and we provide bottled water at relatively low cost and high quality. >> Yeah, so I used to have a CIO consultancy. We worked with every CIO up and down the East Coast. I really got into a lot of organizations, and I always observed that it was really the heads of application that drove AI because they were the glue between the business and IT, and that's really where you sit in the organization, right? >> Yes. My role is to support the business and business analytics, as well as some of the distribution technologies and planning technologies at Niagara Bottling. >> So take us through the project if you will. What were the drivers? What were the outcomes you envisioned? And we can kind of go through the case study. >> So the current project where we leveraged IBM's help was a stretch wrapper project. Each pallet that we produce--- we produce obviously cases of bottled water. These are stacked into pallets and then shrink wrapped or stretch wrapped with a stretch wrapper, and this project is to be able to save money by trying to optimize the amount of stretch wrap that goes around a pallet. We need to be able to maintain the structural stability of the pallet while it's transported from the manufacturing location to our customer's location, where it's unwrapped and then the cases are used. >> And over breakfast we were talking. You guys produce 2833 bottles of water per second. >> Wow. (everyone laughs) >> It's enormous. The manufacturing line is a high speed manufacturing line, and we have a lights-out policy where everything runs in an automated fashion, with raw materials coming in from one end and the finished goods, pallets of water, going out. It's called pellets to pallets. Pellets of plastic coming in through one end and pallets of water going out through the other end. >> Are you sitting on top of an aquifer? Or are you guys using sort of some other techniques? >> Yes, in fact, we do bore wells and extract water from the aquifer. >> Okay, so the goal was to minimize the amount of material that you used but maintain its stability? Is that right? >> Yes, during transportation, yes. So if we use too much plastic, we're not optimal, I mean, we're wasting material, and cost goes up. We produce almost 16 million pallets of water every single year, so that's a lot of shrink wrap that goes around those, so what we can save in terms of maybe 15-20% of shrink wrap costs will amount to quite a bit. >> So, how does machine learning fit into all of this? >> So, machine learning is a way to understand what kind of profile we're running: if we can measure what is happening as we wrap the pallets, whether we are wrapping too tight or stretching too far, that results in either a conservative way of wrapping the pallets or an aggressive way of wrapping the pallets. >> I.e. too much material, right? 
>> Too much material is conservative, and aggressive is too little material, and so we can achieve some savings if we were to alternate between the profiles. >> So, too little material means you lose product, right? >> Yes, and there's a risk of breakage, so essentially, while the pallet is being wrapped, if you are stretching it too much there's a breakage, and then it interrupts production, so we want to try and avoid that. We want continuous production, and at the same time, we want the pallet to be stable while saving material costs. >> Okay, so you're trying to find that ideal balance, and how much variability is in there? Is it a function of distance and how many touches it has? Maybe you can share that with us. >> Yes, so each pallet takes about 16-18 wraps of the stretch wrapper going around it, and that's how much material is laid out. About 250 grams of plastic goes on there. So we're trying to optimize the gram weight, which is the amount of plastic that goes around each pallet. >> So it's about predicting how much plastic is enough without having breakage and disrupting your line. So they had labeled data that was, "if we stretch it this much, it breaks. If we don't stretch it this much, it doesn't break," but then it was about predicting what's good enough, avoiding both of those extremes, right? >> Yes. >> So it's a truly predictive and iterative model that we've built with them. >> And, you're obviously injecting data in terms of the trip to the store as well, right? You're taking that into consideration in the model, right? >> Yeah, that's mainly to make sure that the pallets are stable during transportation. >> Right. >> And it's already determined how much containment force is required when you stretch and wrap each pallet. So that's one of the variables that is measured, but the inputs and outputs are-- the input is the amount of material that is being used in terms of gram weight. We are trying to minimize that. So that's what the whole machine learning exercise was. >> And the data comes from where? Is it observation, maybe instrumented? >> Yeah, the instruments. Our stretch-wrapper machines have an Ignition platform, which is a SCADA platform that allows us to measure all of these variables. We would be able to get machine variable information from those machines and then be able to hopefully, one day, automate that process, so the feedback loop that says "On this profile, we've not had any breaks. We can continue," or if there have been frequent breaks on a certain profile or machine setting, then we can change that dynamically as the product is moving through the manufacturing process. >> Yeah, so think of it as kind of a traditional manufacturing production line optimization and prediction problem, right? It's minimizing waste, right, while maximizing the output and then throughput of the production line. When you optimize a production line, the first step is to predict what's going to go wrong, and then the next step would be to include precision optimization to say "How do we maximize? Using the constraints that the predictive models give us, how do we maximize the output of the production line?" This is not a unique situation.
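To make that predict-then-optimize loop concrete, here is a minimal sketch in Python. It assumes a labeled history of wraps (gram weight, containment force, wrap count, and whether the pallet broke or failed in transit); the column names, candidate profiles, and the 2% risk threshold are illustrative assumptions, not Niagara's actual data or model.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Labeled history of wraps: one row per pallet, with the settings used
# and whether that pallet broke on the line or failed in transit (0/1).
history = pd.read_csv("wrap_history.csv")          # hypothetical file
features = ["gram_weight", "containment_force", "wrap_count"]

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["failed"], test_size=0.2, random_state=42)

# Step 1: predict what is going to go wrong.
risk_model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", risk_model.score(X_test, y_test))

# Step 2: precision optimization -- among candidate wrap profiles, pick the
# lightest gram weight whose predicted failure risk stays acceptable.
candidates = pd.DataFrame({
    "gram_weight":       [190, 210, 230, 250],
    "containment_force": [310, 320, 330, 340],   # assumed machine settings
    "wrap_count":        [16, 16, 17, 18],
})
candidates["risk"] = risk_model.predict_proba(candidates[features])[:, 1]

acceptable = candidates[candidates["risk"] < 0.02]  # 2% threshold is an assumption
if acceptable.empty:
    raise SystemExit("no candidate meets the risk threshold; relax it or add material")
best = acceptable.sort_values("gram_weight").iloc[0]
print("lightest acceptable profile:\n", best)
```

The same pattern extends to the feedback loop described later: as new break/no-break observations come back from the line, they are appended to the history and the risk model is refit.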
It's a unique material that we haven't really worked with, but they had some really good data on this material, how it behaves, and that's key, as you know, Dave, and probably most of the people watching this know, labeled data is the hardest part of doing machine learning, and building those features from that labeled data, and they had some great data for us to start with. >> Okay, so you're collecting data at the edge essentially, then you're using that to feed the models, which is running, I don't know, where's it running, your data center? Your cloud? >> Yeah, in our data center, there's an instance of DSX Local. >> Okay. >> That we stood up. Most of the data is running through that. We build the models there. And then our goal is to be able to deploy to the edge where we can complete the loop in terms of the feedback that happens. >> And iterate. (Shreesha nods) >> And DSX Local, is Data Science Experience Local? >> Yes. >> Slash Watson Studio, so they're the same thing. >> Okay now, what role did IBM and the Data Science Elite Team play? You could take us through that. >> So, as we discussed earlier, adopting data science is not that easy. It requires subject matter expertise. It requires understanding of data science itself, the tools and techniques, and IBM brought that as a part of the Data Science Elite Team. They brought both the tools and the expertise so that we could get on that journey towards AI. >> And it's not a "do the work for them." It's a "teach to fish," and so my team sat side by side with the Niagara Bottling team, and we walked them through the process, so it's not a consulting engagement in the traditional sense. It's how do we help them learn how to do it? So it's side by side with their team. Our team sat there and walked them through it. >> For how many weeks? >> We've had about two sprints already, and we're entering the third sprint. It's been about 30-45 days between sprints. >> And you have your own data science team. >> Yes. Our team is coming up to speed using this project. They've been trained but they needed help with people who have done this, been there, and have handled some of the challenges of modeling and data science. >> So it accelerates that time to --- >> Value. >> Outcome and value and is a knowledge transfer component -- >> Yes, absolutely. >> It's occurring now, and I guess it's ongoing, right? >> Yes. The engagement is unique in the sense that IBM's team came to our factory, understood what that process, the stretch-wrap process, looks like, so they had an understanding of the physical process and how it's modeled with the help of the variables, and understand the data science modeling piece as well. Once they know both sides of the equation, they can help put the physical problem and the digital equivalent together, and then be able to correlate why things are happening with the appropriate data that supports the behavior. >> Yeah, and then within the constraints of the one use case and up to 90 days, there's no charge for those two. Like I said, it's paramount that our clients like Niagara know how to do this successfully in their enterprise. >> It's a freebie? >> No, it's no charge. Free makes it sound too cheap. (everybody laughs) >> But it's part of obviously a broader arrangement with buying hardware and software, or whatever it is. 
>> Yeah, it's a strategy for us to help make sure our clients are successful, and I want to minimize the activation energy to do that, so there's no charge, and the only requirements from the client are that it's a real use case, they at least match the resources I put on the ground, and they sit with us and do things like this and act as a reference and talk about the team and our offerings and their experiences. >> So you've got to have skin in the game obviously, an IBM customer. There's got to be some commitment for some kind of business relationship. How big was the collective team for each, if you will? >> So IBM had 2-3 data scientists. (Dave takes notes) Niagara matched that, 2-3 analysts. There were some working with the machines who were familiar with the machines and others who were more familiar with the data acquisition and data modeling. >> So each of these engagements, they cost us about $250,000 all in, so they're quite an investment we're making in our clients. >> I bet. I mean, 2-3 weeks over many, many weeks of super-geek time. So you're bringing in hardcore data scientists, math wizzes, stat wizzes, data hackers, developer--- >> Data viz people, yeah, the whole stack. >> And the level of skills that Niagara has? >> We've got actual employees who are responsible for production, our manufacturing analysts who help aid in troubleshooting problems. If there are breakages, they go analyze why that's happening. Now they have data to tell them what to do about it, and that's the whole journey that we are in, in trying to quantify with the help of data, and be able to connect our systems with data, systems and models that help us analyze what happened and why it happened and what to do before it happens. >> Your team must love this because they're sort of elevating their skills. They're working with rock star data scientists. >> Yes. >> And we've talked about this before. A point that was made here is that it's really important in these projects to have people acting as product owners if you will, subject matter experts, that are on the front line, that do this every day, not just for the subject matter expertise. I'm sure there's executives that understand it, but when you're done with the model, bringing it to the floor, and talking to their peers about it, there's no better way to drive this cultural change of adopting these things than having one of your peers that you respect talk about it instead of some guy or lady sitting up in the ivory tower saying "thou shalt." >> Now you don't know the outcome yet. It's still early days, but you've got a model built that you've got confidence in, and then you can iterate that model. What's your expectation for the outcome? >> We're hoping that preliminary results help us get up the learning curve of data science and how to leverage data to be able to make decisions. So that's our idea. There are obviously optimal settings that we can use, but it's going to be a trial and error process. And through that, as we collect data, we can understand what settings are optimal and what we should be using in each of the plants. And if the plants decide, hey, they have a subjective preference for one profile versus another, with the data we are capturing we can measure when they deviated from what we specified. We have a lot of learning coming from the approach that we're taking. You can't control things if you don't measure them first. >> Well, your objectives are to transcend this one project and to do the same thing across. 
>> And to do the same thing across, yes. >> Essentially pay for it, with a quick return. That's the way to do things these days, right? >> Yes. >> You've got more narrow, small projects that'll give you a quick hit, and then leverage that expertise across the organization to drive more value. >> Yes. >> Love it. What a great story, guys. Thanks so much for coming to theCUBE and sharing. >> Thank you. >> Congratulations. You must be really excited. >> No, it's a fun project. I appreciate it. >> Thanks for having us, Dave. I appreciate it. >> Pleasure, Seth. Always great talking to you, and keep it right there everybody. You're watching theCUBE. We're live from New York City here at the Westin Hotel. #cubenyc. Check out ibm.com/winwithai for Change the Game: Winning with AI tonight. We'll be right back after a short break. (minimal upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Shreesha Rao | PERSON | 0.99+ |
Seth Dobern | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Walmarts | ORGANIZATION | 0.99+ |
Costcos | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
30 | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
New York City | LOCATION | 0.99+ |
California | LOCATION | 0.99+ |
Seth Dobrin | PERSON | 0.99+ |
60 | QUANTITY | 0.99+ |
Niagara | ORGANIZATION | 0.99+ |
Seth | PERSON | 0.99+ |
Shreesha | PERSON | 0.99+ |
U.S. | LOCATION | 0.99+ |
Sreesha Rao | PERSON | 0.99+ |
third sprint | QUANTITY | 0.99+ |
90 days | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
Inderpal Bhandari | PERSON | 0.99+ |
Niagara Bottling | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
tonight | DATE | 0.99+ |
ibm.com/winwithai | OTHER | 0.99+ |
one | QUANTITY | 0.99+ |
Terminal 5 | LOCATION | 0.99+ |
two years | QUANTITY | 0.99+ |
about $250,000 | QUANTITY | 0.98+ |
Times Square | LOCATION | 0.98+ |
Scala | TITLE | 0.98+ |
2018 | DATE | 0.98+ |
15-20% | QUANTITY | 0.98+ |
IBM Analytics | ORGANIZATION | 0.98+ |
each | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
each pallet | QUANTITY | 0.98+ |
Kaggle | ORGANIZATION | 0.98+ |
West Side Highway | LOCATION | 0.97+ |
Each pallet | QUANTITY | 0.97+ |
4 sprints | QUANTITY | 0.97+ |
About 250 grams | QUANTITY | 0.97+ |
both side | QUANTITY | 0.96+ |
Data Science Elite Team | ORGANIZATION | 0.96+ |
one day | QUANTITY | 0.95+ |
every single year | QUANTITY | 0.95+ |
Niagara Bottling | PERSON | 0.93+ |
about two sprints | QUANTITY | 0.93+ |
one end | QUANTITY | 0.93+ |
R | TITLE | 0.92+ |
2-3 weeks | QUANTITY | 0.91+ |
one profile | QUANTITY | 0.91+ |
50-60 analysts | QUANTITY | 0.91+ |
trillion dollars | QUANTITY | 0.9+ |
2-3 data scientists | QUANTITY | 0.9+ |
about 30-45 days | QUANTITY | 0.88+ |
almost 16 million pallets of water | QUANTITY | 0.88+ |
Big Apple | LOCATION | 0.87+ |
couple years ago | DATE | 0.87+ |
last 18 months | DATE | 0.87+ |
Westin Hotel | ORGANIZATION | 0.83+ |
pallet | QUANTITY | 0.83+ |
#cubenyc | LOCATION | 0.82+ |
2833 bottles of water per second | QUANTITY | 0.82+ |
the Game: Winning with AI | TITLE | 0.81+ |
John Thomas, IBM | Change the Game: Winning With AI
(upbeat music) >> Live from Time Square in New York City, it's The Cube. Covering IBM's change the game, winning with AI. Brought to you by IBM. >> Hi everybody, welcome back to The Big Apple. My name is Dave Vellante. We're here in the Theater District at The Westin Hotel covering a Special Cube event. IBM's got a big event today and tonight, if we can pan here to this pop-up. Change the game: winning with AI. So IBM has got an event here at The Westin, The Tide at Terminal 5 which is right up the Westside Highway. Go to IBM.com/winwithAI. Register, you can watch it online, or if you're in the city come down and see us, we'll be there. Uh, we have a bunch of customers will be there. We had Rob Thomas on earlier, he's kind of the host of the event. IBM does these events periodically throughout the year. They gather customers, they put forth some thought leadership, talk about some hard dues. So, we're very excited to have John Thomas here, he's a distinguished engineer and Director of IBM Analytics, long time Cube alum, great to see you again John >> Same here. Thanks for coming on. >> Great to have you. >> So we just heard a great case study with Niagara Bottling around the Data Science Elite Team, that's something that you've been involved in, and we're going to get into that. But give us the update since we last talked, what have you been up to?? >> Sure sure. So we're living and breathing data science these days. So the Data Science Elite Team, we are a team of practitioners. We actually work collaboratively with clients. And I stress on the word collaboratively because we're not there to just go do some work for a client. We actually sit down, expect the client to put their team to work with our team, and we build AI solutions together. Scope use cases, but sort of you know, expose them to expertise, tools, techniques, and do this together, right. And we've been very busy, (laughs) I can tell you that. You know it has been a lot of travel around the world. A lot of interest in the program. And engagements that bring us very interesting use cases. You know, use cases that you would expect to see, use cases that are hmmm, I had not thought of a use case like that. You know, but it's been an interesting journey in the last six, eight months now. >> And these are pretty small, agile teams. >> Sometimes people >> Yes. use tiger teams and they're two to three pizza teams, right? >> Yeah. And my understanding is you bring some number of resources that's called two three data scientists, >> Yes and the customer matches that resource, right? >> Exactly. That's the prerequisite. >> That is the prerequisite, because we're not there to just do the work for the client. We want to do this in a collaborative fashion, right. So, the customers Data Science Team is learning from us, we are working with them hand in hand to build a solution out. >> And that's got to resonate well with customers. >> Absolutely I mean so often the services business is like kind of, customers will say well I don't want to keep going back to a company to get these services >> Right, right. I want, teach me how to fish and that's exactly >> That's exactly! >> I was going to use that phrase. That's exactly what we do, that's exactly. So at the end of the two or three month period, when IBM leaves, my team leaves, you know, the client, the customer knows what the tools are, what the techniques are, what to watch out for, what are success criteria, they have a good handle of that. 
>> So we heard about the Niagara Bottling use case, which was a pretty narrow, >> Mm-hmm. How can we optimize the use of the plastic wrapping, save some money there, but at the same time maintain stability. >> Ya. You know, very, quite narrow in this case. >> Yes, yes. What are some of the other use cases? >> Yeah that's a very, like you said, a narrow one. But there are some use cases that span industries, that cut across different domains. I think I may have mentioned this on one of our previous discussions, Dave. You know customer interactions, trying to improve customer interactions is something that cuts across industry, right. Now that can be across different channels. One of the most prominent channels is a call center, I think we have talked about this previously. You know I hate calling into a call center (laughter) because I don't know >> Yeah, yeah. >> what kind of support I'm going to get. But, what if you could equip the call center agents to provide consistent service to the caller, and handle the calls in the best appropriate way. Reducing costs on the business side because call handling is expensive. And eventually lead up to, can I even avoid the call, through insights on why the call is coming in in the first place. So this use case cuts across industry. Any enterprise that has got a call center is doing this. So we are looking at can we apply machine-learning techniques to understand dominant topics in the conversation. Once we understand these with unsupervised techniques, once we understand dominant topics in the conversation, can we drill into that and understand what are the intents, and does the intent change as the conversation progresses? So you know I'm calling someone, it starts off with pleasantries, it then goes into weather, how are the kids doing? You know, complain about life in general. But then you get to something of substance, why the person was calling in the first place. And then you may think that is the intent of the conversation, but you find that as the conversation progresses, the intent might actually change. And can you understand that in real time? Can you understand the reasons behind the call, so that you could take proactive steps to maybe avoid the call coming in in the first place? This use case Dave, you know we are seeing so much interest in this use case. Because call centers are a big cost to most enterprises. >> Let's double down on that because I want to understand this. So you're basically doing... So every time you call a call center, this call may be recorded, >> (laughter) Yeah. For quality of service. >> Yeah. So you're recording the calls, maybe using NLP to transcribe those calls. >> NLP is just the first step, >> Right. so you're absolutely right, when calls come in there's already call recording systems in place. We're not getting into that space, right. So call recording systems record the voice calls. So often in offline batch mode you can take these millions of calls, pass them through a speech-to-text mechanism, which produces a text equivalent of the voice recordings. Then what we do is we apply unsupervised machine learning, and clustering, and topic-modeling techniques against it to understand what are the dominant topics in this conversation. >> You do kind of an entity extraction of those topics. >> Exactly, exactly, exactly. >> Then we find what is the most relevant, what are the relevant ones, what is the relevancy of topics in a particular conversation. That's not enough, that is just step two, if you will. 
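As a rough sketch of that offline, unsupervised step, speech-to-text output in, dominant topics and their per-call relevancy out, here is what it might look like with scikit-learn. The file name and the choice of eight topics are assumptions for illustration, not IBM's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Each line is the text of one call, produced earlier by speech-to-text.
with open("call_transcripts.txt", encoding="utf-8") as f:   # hypothetical file
    calls = [line.strip() for line in f if line.strip()]

# Step 1: vectorize the transcripts.
vectorizer = TfidfVectorizer(stop_words="english", max_features=20000)
tfidf = vectorizer.fit_transform(calls)

# Step 2: factor into a small number of dominant topics (8 is arbitrary here).
nmf = NMF(n_components=8, random_state=0)
call_topic_weights = nmf.fit_transform(tfidf)   # relevancy of each topic per call
terms = vectorizer.get_feature_names_out()

# Show the top terms that characterize each discovered topic.
for topic_idx, weights in enumerate(nmf.components_):
    top_terms = [terms[i] for i in weights.argsort()[-8:][::-1]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")

# The per-call topic weights are what you then drill into for intents.
print("dominant topic of first call:", call_topic_weights[0].argmax())
```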
Then you have to, we build what is called an intent hierarchy. So the topmost level will be, let's say, payments, the call is about payments. But what about payments, right? Is it an intent to make a late payment? Or is the intent to avoid the payment or contest a payment? Or is the intent to structure a different payment mechanism? So can you get down to that level of detail? Then comes a further level of detail, which is the reason that is tied to this intent. What is a reason for a late payment? Is it a job loss or job change? Is it because they are just not happy with the charges that I have coming? What is a reason? And the reason can be pretty complex, right? It may not be in the immediate vicinity of the snippet of conversation itself. So you got to go find out what the reason is and see if you can match it to this particular intent. So multiple steps of the journey, and eventually what we want to do is, so we do this first in an offline batch mode, and we are building a series of classifiers, sets of classifiers. But eventually we want to get this to real time action. So think of this, if you have machine learning models, supervised models that can predict the intent, the reasons, et cetera, you can have them deployed, operationalize them, so that when a call comes in real time, you can screen it in real time, do the speech to text, and you can pass it to the supervised models that have been deployed, and the model fires and comes back and says this is the intent, take some action or guide the agent to take some action real time. >> Based on some automated discussion, so tell me what you're calling about, that kind of thing, >> Right. Is that right? >> So it's probably even gone past tell me what you're calling about. So it could be the conversation has begun to get into, you know, I'm going through a tough time, my spouse had a job change. You know that is itself an indicator of some other reasons, and can that be used to prompt the CSR >> Ah, to take some action >> Okay. >> appropriate to the conversation. >> So I'm not talking to a machine, at first >> no no I'm talking to a human. >> Still talking to a human. >> And then real time feedback to that human >> Exactly, exactly. is a good example of >> Exactly. human augmentation. >> Exactly, exactly. I wanted to go back to the process a little bit in terms of the model building. Are there humans involved in calibrating the model? >> There has to be. Yeah, there has to be. So you know, for all the hype in the industry, (laughter) you still need a (laughter). You know what it is is you need expertise to look at what these models produce, right. Because if you think about it, machine learning algorithms don't by themselves have an understanding of the domain. They are you know either statistical or similar in nature, so somebody has to marry the statistical observations with the domain expertise. So humans are definitely involved in the building of these models and training of these models. >> Okay. >> (inaudible). So that's where you got math, you got stats, you got some coding involved, and you >> Absolutely got humans as the last mile >> Absolutely. to really bring that >> Absolutely. expertise. And then in terms of operationalizing it, how does that actually get done? What tech is behind that? >> Ah, yeah. >> It's a very good question, Dave. You build models, and what good are they if they stay inside your laptop, you know, they don't go anywhere. 
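For the supervised piece, once snippets have been labeled with intents from that hierarchy, a first model can be as small as the sketch below. The intent labels, file name, and pipeline choices are illustrative assumptions rather than IBM's implementation, and the deployment question the conversation turns to next is exactly what this leaves open.

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical labeled snippets: text plus an intent such as
# "late_payment", "contest_payment", "restructure_payment".
data = pd.read_csv("labeled_snippets.csv")
X_train, X_test, y_train, y_test = train_test_split(
    data["text"], data["intent"], test_size=0.2, random_state=0)

intent_model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    LogisticRegression(max_iter=1000),
).fit(X_train, y_train)
print("holdout accuracy:", intent_model.score(X_test, y_test))

# Scoring a live snippet as the conversation progresses.
snippet = "I lost my job last month and I can't make the full payment"
probs = intent_model.predict_proba([snippet])[0]
for intent, p in sorted(zip(intent_model.classes_, probs), key=lambda t: -t[1])[:3]:
    print(f"{intent}: {p:.2f}")
```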
What you need to do is, I use a phrase, weave these models into your business processes and your applications. So you need a way to deploy these models. The models should be consumable from your business processes. Now it could be a REST API call to the model. In some cases a REST API call is not sufficient, the latency is too high. Maybe you've got to embed that model right into where your application is running. You know you've got data on a mainframe. A credit card transaction comes in, and the authorization for the credit card is happening in a four millisecond window on the mainframe in, not all of it, but you know, CICS COBOL code. I don't have the time to make a REST API call outside. I got to have the model execute in context with my CICS COBOL code in that memory space. >> Yeah right. You know so the operationalizing is deploying, consuming these models, and then beyond that, how do the models behave over time? Because you can have the best programmer, the best data scientist build the absolute best model, which has got great accuracy, great performance today. Two weeks from now, performance is going to go down. >> Hmm. How do I monitor that? How do I trigger alerts when it falls below a certain threshold? And, can I have a system in place that retrains this model with new data as it comes in. >> So you got to understand where the data lives. >> Absolutely. You got to understand the physics, >> Yes. The latencies involved. >> Yes. You got to understand the economics. >> Yes. And there's also probably in many industries legal implications. >> Oh yes. >> Now, the explainability of models. You know, can I prove that there is no bias here. >> Right. Now all of these are challenging but, you know, doable things. >> What makes a successful engagement? Obviously you guys are outcome driven, >> Yeah. but talk about how you guys measure success. >> So um, for our team right now it is not about revenue, it's purely about adoption. Does the client, does the customer see the value of what IBM brings to the table. This is not just tools and technology, by the way. It's also expertise, right? >> Hmm. So this notion of expertise as a service, which is coupled with tools and technology to build a successful engagement. The way we measure success is has the client, have we built out the use case in a way that is useful for the business? Two, does a client see value in going further with that. So this is right now what we look at. It's not, you know, yes of course everybody is scared about revenue. But that is not our key metric. Now in order to get there though, what we have found, and it's a little bit of hard work, yes, is you need different constituents of the customer to come together. It's not just me sending a bunch of awesome Python programmers to the client. >> Yeah right. But now it is from the customer's side we need involvement from their Data Science Team. We talk about collaborating with them. We need involvement from their line of business. Because if the line of business doesn't care about the models we've produced, you know, what good are they? >> Hmm. And third, people don't usually think about it, we need IT to be part of the discussion. Not just part of the discussion, part of being the stakeholder. >> Yes, so you've got, so IBM has the chops to actually bring these constituents together. >> Ya. I have actually a fair amount of experience in herding cats in large organizations. (laughter) And you know, the customer, they've got skin in the IBM game. 
This is to me a big differentiator between IBM, certainly, and some of the other technology suppliers who don't have the depth of services, expertise, and domain expertise. But on the flip side of that, differentiation from many of the SIs who have that level of global expertise, but they don't have the tech piece. >> Right. >> Now they would argue, well, we do anybody's tech. >> Ya. But you know, if you've got tech. >> Ya. >> You just got to (laughter) Ya. >> Bring those two together. >> Exactly. And that really seems to me to be the big differentiator >> Yes, absolutely. for IBM. Well John, thanks so much for stopping by theCube and explaining sort of what you've been up to, the Data Science Elite Team, very exciting. Six to nine months in, >> Yes. are you declaring success yet? Still too early? >> Uh, well we're declaring success and we are growing, >> Ya. >> Growth is good. >> A lot of, lot of attention. >> Alright, great to see you again, John. >> Absolutely, thank you Dave. Thanks very much. Okay, keep it right there everybody. You're watching theCube. We're here at The Westin in midtown and we'll be right back after this short break. I'm Dave Vellante. (tech music)
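One practical footnote to the model-monitoring point John raised above: a first pass can be as simple as tracking a rolling quality metric on the deployed model and flagging retraining when it dips below a floor. The sketch below is a minimal illustration; the 0.85 threshold and 500-call window are assumptions, and a production system would also watch input drift, bias, and latency.

```python
from collections import deque
from sklearn.metrics import accuracy_score

class ModelMonitor:
    """Tracks recent prediction quality and flags when retraining is due."""

    def __init__(self, threshold=0.85, window=500):
        self.threshold = threshold          # assumed acceptable accuracy floor
        self.recent = deque(maxlen=window)  # (predicted, actual) pairs

    def record(self, predicted, actual):
        self.recent.append((predicted, actual))

    def needs_retraining(self):
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough evidence yet
        preds, actuals = zip(*self.recent)
        return accuracy_score(actuals, preds) < self.threshold

# Inside the scoring service, once the true outcome of a call becomes known:
monitor = ModelMonitor()
# monitor.record(model_prediction, observed_outcome)
# if monitor.needs_retraining():
#     kick off a retraining job with the newly accumulated labeled data
```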
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Rob Thomas | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
John Thomas | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Six | QUANTITY | 0.99+ |
Time Square | LOCATION | 0.99+ |
tonight | DATE | 0.99+ |
first step | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
three month | QUANTITY | 0.99+ |
nine months | QUANTITY | 0.99+ |
third | QUANTITY | 0.98+ |
Two | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
New York City | LOCATION | 0.98+ |
today | DATE | 0.98+ |
Python | TITLE | 0.98+ |
IBM Analytics | ORGANIZATION | 0.97+ |
Terminal 5 | LOCATION | 0.97+ |
Data Science Elite Team | ORGANIZATION | 0.96+ |
Niagara | ORGANIZATION | 0.96+ |
one | QUANTITY | 0.96+ |
IBM.com/winwithAI | OTHER | 0.96+ |
first place | QUANTITY | 0.95+ |
eight months | QUANTITY | 0.94+ |
Change the Game: Winning With AI | TITLE | 0.89+ |
The Westin | ORGANIZATION | 0.89+ |
Niagara Bottling | PERSON | 0.89+ |
Theater District | LOCATION | 0.88+ |
four millisecond window | QUANTITY | 0.87+ |
step two | QUANTITY | 0.86+ |
Cube | PERSON | 0.85+ |
Westside Highway | LOCATION | 0.83+ |
first | QUANTITY | 0.83+ |
Two weeks | DATE | 0.82+ |
millions of calls | QUANTITY | 0.79+ |
two three data scientists | QUANTITY | 0.78+ |
CICS | TITLE | 0.77+ |
COBOL | OTHER | 0.69+ |
Rest API call | OTHER | 0.68+ |
The Tide | LOCATION | 0.68+ |
theCube | ORGANIZATION | 0.67+ |
The Westin | LOCATION | 0.66+ |
Rest API | OTHER | 0.66+ |
Apple | LOCATION | 0.63+ |
Big | ORGANIZATION | 0.62+ |
Westin | LOCATION | 0.51+ |
last six | DATE | 0.48+ |
Hotel | ORGANIZATION | 0.45+ |
theCube | TITLE | 0.33+ |
Bottling | COMMERCIAL_ITEM | 0.3+ |
Daniel Hernandez, IBM | Change the Game: Winning With AI 2018
>> Live from Times Square in New York City, it's theCUBE, covering IBM's Change the Game, Winning with AI, brought to you by IBM. >> Hi everybody, welcome back to theCUBE's special presentation. We're here at the Westin Hotel in the theater district covering IBM's announcements. They've got an analyst meeting today, partner event. They've got a big event tonight. IBM.com/winwithAI, go to that website, if you're in town register. You can watch it online, or if you're in the city come down and see us, we'll be there. You'll see this very cool play of Vince Lombardi, one of his famous plays. It's kind of a power sweep right, which is a great way to talk about sort of winning and with X's and O's. So anyway, Daniel Hernandez is here, the vice president of IBM Analytics, long time Cube alum. It's great to see you again, thanks for coming on. >> My pleasure Dave. >> So we've talked a number of times. We talked earlier this year. Give us the update on momentum in your business. You guys are doing really well, we see this in the quadrants and the waves, but your perspective. >> Data science and AI, so when we last talked we were just introducing something called IBM Cloud Private for Data. The basic idea is anybody that wants to do data science, data engineering or building apps with data anywhere, we're going to give them a single integrated platform to get that done. It's going to be the most efficient, best way to do those jobs to be done. We introduced it, it's been a resounding success. Been rolling that out with clients, that's been a whole lot of fun. >> So we talked a little bit with Rob Thomas about some of the news that you guys have, but this is really your wheelhouse so I'm going to drill down into each of these. Let's say we had Rob Bearden on yesterday on our program and he talked a lot about the IBM, Red Hat and Hortonworks relationship. Certainly they talked about it on their earnings call and there seems to be clear momentum in the marketplace. But give us your perspective on that announcement. What exactly is it all about? I mean it started kind of back in the ODPI days and it's really evolved into something that now customers are taking advantage of.
>> So OpenShift is really interesting because OpenShift was kind of quiet for awhile. It was quiest if you will. And then containers come on the scene and OpenShift has just exploded. What are your perspectives on that and what's IBM's angle on OpenShift? >> Containers of Kubernetes basically allow you to get Cloud characteristics everywhere. It used to be locked in to kind of the public Cloud or SCP providers that were offering as a service whether PAS OR IAS and Docker and Kubernetes are making the same underline technology that enabled elasticity, pay as you go models available anywhere including your own data center. So I think it explains why OpenShift, why IBM Cloud Private, why IBM Club Private for data just got on there. >> I mean the Core OS move by Red Hat was genius. They picked that up for the song in our view anyway and it's really helped explode that. And in this world, everybody's talking about Kubernetes. I mean we're here at a big data conference all week. It used to be Hadoop world. Everybody's talking about containers, Kubernetes and Multi cloud. Those are kind of the hot trends. I presume you've seen the same thing. >> 100 percent. There's not a single client that I know, and I spend the majority of my time with clients that are running their workloads in a single stack. And so what do you do? If data is an imperative for you, you better run your data analytic stack wherever you need to and that means Multi cloud by definition. So you've got a choice. You can say, I can port that workload to every distinct programming model and data stack or you can have a data stack everywhere including Multi clouds and Open Shift in this case. >> So thinking about the three companies, so Hortonworks obviously had duped distro specialists, open source, brings that end to end sort of data management from you know Edge, or Clouds on Prim. Red Hat doing a lot of the sort of hardcore infrastructure layer. IBM bringing in the analytics and really empowering people to get insights out of data. Is that the right way to think about that triangle? >> 100 percent and you know with the Hortonworks and IBM data stacks, we've got our common services, particularly you're on open meta data which means wherever your data is, you're going to know about it and you're going to be able to control it. Privacy, security, data discovery reasons, that's a pretty big deal. >> Yeah and as the Cloud, well obviously the Cloud whether it's on Prim or in the public Cloud expands now to the Edge, you've also got this concept of data virtualization. We've talked about this in the past. You guys have made some announcements there. But let's put a double click on that a little bit. What's it all about? >> Data virtualization been going on for a long time. It's basic intent is to help you access data through whatever tools, no matter where the data is. Traditional approaches of data virtualization are pretty limiting. So they work relatively well when you've got small data sets but when you've got highly fragmented data, which is the case in virtually every enterprise that exists a lot of the undermined technology for data virtualization breaks down. Data coming through a single headnote. Ultimately that becomes the critical issue. So you can't take advantage of data virtualization technologies largely because of that when you've got wide scale deployments. 
We've been incubating technology under this project codename query plex, it was a code name that we used internally and that we were working with Beta clients on and testing it out, validating it technically and it was pretty clear that this is a game changing method for data virtualization that allows you to drive the benefits of accessing your data wherever it is, pushing down queries where the data is and getting benefits of that through highly fragmented data landscape. And so what we've done is take that extremely innovated next generation data virtualization technology include it in our data platform called IBM Club Private for Data, and made it a critical feature inside of that. >> I like that term, query plex, it reminds me of the global sisplex. I go back to the days when actually viewing sort of distributed global systems was very, very challenging and IBM sort of solved that problem. Okay, so what's the secret sauce though of query plex and data virtualization? How does it all work? What's the tech behind it? >> So technically, instead of data coming and getting funneled through one node. If you ever think of your data as kind of a graph of computational data nodes. What query plex does is take advantage of that computational mesh to do queries and analytics. So instead of bringing all the data and funneling it through one of the nodes, and depending on the computational horsepower of that node and all the data being able to get to it, this just federates it out. It distributes out that workload so it's some magic behind the scenes but relatively simple technique. Low computing aggregate, it's probably going to be higher than whatever you can put into that single node. >> And how do customers access these services? How long does it take? >> It would look like a standard query interface to them. So this is all magic behind the scenes. >> Okay and they get this capability as part of what? IBM's >> IBM's Club Private for Data. It's going to be a feature, so this project query plex, is introduced as next generation data virtualization technology which just becomes a part of IBM Club Private for Data. >> Okay and then the other announcement that we talked to Rob, I'd like to understand a little bit more behind it. Actually before we get there, can we talk about the business impact of query plex and data virtualization? Thinking about it, it dramatically simplifies the processes that I have to go through to get data. But more importantly, it helps me get a handle on my data so I can apply machine intelligence. It seems like the innovation sandwich if you will. Data plus AI and then Cloud models for scale and simplicity and that's what's going to drive innovation. So talk about the business impact that people are excited about with regard to query plex. >> Better economics, so in order for you to access your data, you don't have to do ETO in this particular case. So data at rest getting consumed because of this online technology. Two performance, so because of the way this works you're actually going to get faster response times. Three, you're going to be able to query more data simply because this technology allows you to access all your data in a fragmented way without having to consolidate it. >> Okay, so it eliminates steps, right, and gets you time to value and gives you a bigger corporate of data that you can the analyze and drive inside. >> 100 percent. >> Okay, let's talk about stack overflow. 
You know, Rob took us through a little bit about what that's, what's going on there but why stack overflow, you're targeting developers? Talk to me more about that. >> So stack overflow, 50 million active developers each month on that community. You're a developer and you want to know something, you have to go to stack overflow. You think about data science and AI as disciplines. The idea that that is only dermained to AI and data scientists is very limiting idea. In order for you to actually apply artificial intelligence for whatever your use case is instead of a business it's going to require multiple individuals working together to get that particular outcome done including developers. So instead of having a distinct community for AI that's focused on AI machine developers, why not bring the artificial intelligence community to where the developers already are, which is stack overflow. So, if you go to AI.stackexchange.com, it's going to be the place for you to go to get all your answers to any question around artificial intelligence and of course IBM is going to be there in the community helping out. >> So it's AI.stackexchange.com. You know, it's interesting Daniel that, I mean to talk about digital transformation talking about data. John Furrier said something awhile back about the dots. This is like five or six years ago. He said data is the new development kit and now you guys are essentially targeting developers around AI, obviously a data centric. People trying to put data at the core of the organization. You see that that's a winning strategy. What do you think about that? >> 100 percent, I mean we're the data company instead of IBM, so you're probably asking the wrong guy if you think >> You're biased. (laughing) >> Yeah possibly, but I'm acknowledged. The data over opinions. >> Alright, tell us about tonight what we can expect? I was referencing the Vince Lombardy play here. You know, what's behind that? What are we going to see tonight? >> We were joking a little bit about the old school power eye formation, but that obviously works for your, you're a New England fan aren't you? >> I am actually, if you saw the games this weekend Pat's were in the power eye for quite a bit of the game which I know upset a lot of people. But it works. >> Yeah, maybe we should of used it as a Dallas Cowboy team. But anyways, it's going to be an amazing night. So we're going to have a bunch of clients talking about what they're doing with AI. And so if you're interested in learning what's happening in the industry, kind of perfect event to get it. We're going to do some expert analysis. It will be a little bit of fun breaking down what those customers did to be successful and maybe some tips and tricks that will help you along your way. >> Great, it's right up the street on the west side highway, probably about a mile from the Javis Center people that are at Strata. We've been running programs all week. One of the themes that we talked about, we had an event Tuesday night. We had a bunch of people coming in. There was people from financial services, we had folks from New York State, the city of New York. It was a great meet up and we had a whole conversation got going and one of the things that we talked about and I'd love to get your thoughts and kind of know where you're headed here, but big data to do all that talk and people ask, is that, now at AI, the conversation has moved to AI, is it same wine, new bottle, or is there something substantive here? 
The consensus was, there's substantive innovation going on. Your thoughts about where that innovation is coming from and what the potential is for clients? >> So if you're going to implement AI for let's say customer care for instance, you're going to be three wrongs griefs. You need data, you need algorithms, you need compute. With a lot of different structure to relate down to capture data wasn't captured until the traditional data systems anchored by Hadoop and big data movement. We landed, we created a data and computational grid for that data today. With all the advancements going on in algorithms particularly in Open Source, you now have, you can build a neuro networks, you can do Cisco machine learning in any language that you want. And bringing those together are exactly the combination that you need to implement any AI system. You already have data and computational grids here. You've got algorithms bringing them together solving some problem that matters to a customer is like the natural next step. >> And despite the skills gap, the skill gaps that we talked about, you're seeing a lot of knowledge transfer from a lot of expertise getting out there into the wild when you follow people like Kirk Born on Twitter you'll see that he'll post like the 20 different models for deep learning and people are starting to share that information. And then that skills gap is closing. Maybe not as fast as some people like but it seems like the industry is paying attention to this and really driving hard to work toward it 'cause it's real. >> Yeah I agree. You're going to have Seth Dulpren, I think it's Niagara, one of our clients. What I like about them is the, in general there's two skill issues. There's one, where does data science and AI help us solve problems that matter in business? That's really a, trying to build a treasure map of potential problems you can solve with a stack. And Seth and Niagara are going to give you a really good basis for the kinds of problems that we can solve. I don't think there's enough of that going on. There's a lot of commentary communication actually work underway in the technical skill problem. You know, how do I actually build these models to do. But there's not enough in how do I, now that I solved that problem, how do we marry it to problems that matter? So the skills gap, you know, we're doing our part with our data science lead team which Seth opens which is telling a customer, pick a hard problem, give us some data, give us some domain experts. We're going to be in the AI and ML experts and we're going to see what happens. So the skill problem is very serious but I don't think it's most people are not having the right conversations about it necessarily. They understand intuitively there's a tech problem but that tech not linked to a business problem matters nothing. >> Yeah it's not insurmountable, I'm glad you mentioned that. We're going to be talking to Niagara Bottling and how they use the data science elite team as an accelerant, to kind of close that gap. And I'm really interested in the knowledge transfer that occurred and of course the one thing about IBM and companies like IBM is you get not only technical skills but you get deep industry expertise as well. Daniel, always great to see you. Love talking about the offerings and going deep. So good luck tonight. We'll see you there and thanks so much for coming on theCUBE. >> My pleasure. >> Alright, keep it right there everybody. This is Dave Vellanti. We'll be back right after this short break. 
You're watching theCUBE. (upbeat music)
SUMMARY :
IBM's Change the Game, Hotel and the theater district and the waves, but your perspective. It's going to be the most about some of the news that you guys have, and run times to where the It was quiest if you will. kind of the public Cloud Those are kind of the hot trends. and I spend the majority Is that the right way to and you're going to be able to control it. Yeah and as the Cloud, and getting benefits of that I go back to the days and all the data being able to get to it, query interface to them. It's going to be a feature, So talk about the business impact of the way this works that you can the analyze Talk to me more about that. it's going to be the place for you to go and now you guys are You're biased. The data over opinions. What are we going to see tonight? saw the games this weekend kind of perfect event to get it. One of the themes that we talked about, that you need to implement any AI system. that he'll post like the And Seth and Niagara are going to give you kind of close that gap. This is Dave Vellanti.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellanti | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Daniel Hernandez | PERSON | 0.99+ |
Rob | PERSON | 0.99+ |
Daniel | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Tuesday night | DATE | 0.99+ |
Hortonworks | ORGANIZATION | 0.99+ |
Rob Beerden | PERSON | 0.99+ |
AI.stackexchange.com | OTHER | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Three | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
New York City | LOCATION | 0.99+ |
New York State | LOCATION | 0.99+ |
Seth Dulpren | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Rob Thomas | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
tonight | DATE | 0.99+ |
Dallas Cowboy | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
three companies | QUANTITY | 0.99+ |
Open Shift | TITLE | 0.99+ |
New York | LOCATION | 0.99+ |
two elements | QUANTITY | 0.99+ |
IBM Red Hat | ORGANIZATION | 0.99+ |
100 percent | QUANTITY | 0.99+ |
June last year | DATE | 0.99+ |
20 different models | QUANTITY | 0.98+ |
Vince Lombardy | PERSON | 0.98+ |
five | DATE | 0.98+ |
Times Square | LOCATION | 0.98+ |
Red Hat | ORGANIZATION | 0.97+ |
each | QUANTITY | 0.97+ |
Pat | PERSON | 0.97+ |
OpenShift | TITLE | 0.97+ |
each month | QUANTITY | 0.97+ |
single client | QUANTITY | 0.96+ |
New England | LOCATION | 0.96+ |
single | QUANTITY | 0.96+ |
single stack | QUANTITY | 0.96+ |
Hadoop | TITLE | 0.96+ |
six years ago | DATE | 0.94+ |
three wrongs | QUANTITY | 0.94+ |
IBM.com/winwithAI | OTHER | 0.94+ |
today | DATE | 0.94+ |
earlier this year | DATE | 0.93+ |
Niagara | ORGANIZATION | 0.93+ |
One | QUANTITY | 0.92+ |
about a mile | QUANTITY | 0.92+ |
Kirk Born | PERSON | 0.91+ |
Seth | ORGANIZATION | 0.91+ |
IBM Club | ORGANIZATION | 0.89+ |
Change the Game: Winning With AI | TITLE | 0.88+ |
50 million active developers | QUANTITY | 0.88+ |