Seth Rao, FirstEigen | AWS re:Invent 2021


 

(upbeat music) >> Hey, welcome back to Las Vegas. theCUBE is live at AWS re:Invent 2021. I'm Lisa Martin. We are running one of the largest and most important hybrid tech events of the year with AWS and its massive ecosystem of partners: two live sets, two remote sets, and over a hundred guests on the program talking about the next generation of cloud innovation. I'm pleased to welcome a first-timer to theCUBE. Seth Rao, the CEO of FirstEigen, joins me. Seth, nice to have you on the program. >> Thank you, nice to be here. >> Talk to me about FirstEigen. Also explain to me the name. >> So FirstEigen is a startup company based out of Chicago. Eigen is a German word and a mathematical term: it comes from eigenvectors and eigenvalues, which are used in what's called principal component analysis, a technique for detecting anomalies, and that's related to what we do. We look for errors in data, and hence our name, FirstEigen. >> Got it. That's excellent. So talk to me. One of the things that has been a resounding theme of this year's re:Invent is that, especially in today's age, every company needs to be a data company. >> Yeah. >> It's one thing to say it; it's a whole other thing to put it into practice with reliable data, with trustworthy data. Talk to me about some of the challenges that you help customers solve, because part of the theme is not just being a data company; if you're not a data company, you're probably not going to be around much longer. >> Yeah, absolutely. What we have seen across the board, across all the verticals and customers we work with, is that data governance and data management teams are constantly firefighting to find errors in data and fix them. So what we have done is create software, DataBuck, that autonomously looks at every data set and discovers errors that are hidden to the human eye. They're hard to find, hard to detect.
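Seth's reference to eigenvectors, eigenvalues, and principal component analysis can be made concrete with a small sketch. To be clear, this is a generic illustration of PCA-based anomaly detection, not FirstEigen's DataBuck implementation; the data, function names, and thresholds here are invented for demonstration:

```python
import numpy as np

def fit_pca(X, n_components=2):
    """Learn the mean and top principal components (eigenvectors of the
    covariance matrix) from a reference set of 'known good' records."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    components = eigvecs[:, ::-1][:, :n_components]  # top eigenvectors first
    return mu, components

def anomaly_scores(X, mu, components):
    """Score each record by how badly the principal subspace reconstructs it.
    Records the learned components cannot explain are likely anomalies."""
    Xc = X - mu
    residual = Xc - Xc @ components @ components.T
    return np.linalg.norm(residual, axis=1)

# Reference data that lives (almost) on a 2-D plane inside 5-D space.
rng = np.random.default_rng(0)
plane = np.zeros((2, 5)); plane[0, 0] = plane[1, 1] = 1.0
clean = rng.normal(size=(300, 2)) @ plane + 0.01 * rng.normal(size=(300, 5))

mu, components = fit_pca(clean)
batch = np.vstack([clean[:5], [[0.0, 0.0, 5.0, 0.0, 0.0]]])  # last row is off-plane
scores = anomaly_scores(batch, mu, components)
print(scores.argmax())  # prints 5: the off-plane record stands out
```

The idea is the one Seth describes: learn what "normal" data looks like from good records, then flag incoming records that the learned structure cannot explain, with no hand-written rules.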
Our machine learning algorithms figure out those errors before they impact the business. The usual way things are done is very laborious, time-consuming and expensive. We have taken a process that takes man-months, or even man-years, and compressed it into a few hours. >> So dramatic time-savings there. >> Absolutely. >> So six years ago, when you guys were founded, you realized this gap in the market and thought it was taking way too long; we don't have that amount of time. Gosh, can you imagine if you guys weren't around the last 22 months, when time was certainly of the essence? >> Absolutely, yeah. Six years ago, when we founded the company, my co-founder, who's also the CTO, had extensive experience in validating data and data quality, and my own background and experience is in AI and ML. What we saw was that people were spending an enormous amount of time, and yet errors were still getting through to the business side. At that point it comes back, and people are still firefighting, so it was a waste of time, waste of money, waste of effort. >> Right. But there's also the potential for brand damage, for brand reputation. Whatever products and services you're producing, if your employees don't have the right data, if there are errors in what's going out to consumers, then you've got a big problem. >> Absolutely. Interesting you should mention that, because over the summer a very big-name Danish bank had to send apology letters to its customers because it had overcharged them on their mortgages. The data in the backend had some errors in it, and nobody realized; it was inadvertent. But somebody ultimately caught it and did the right thing. If the data is incorrect and you're doing analytics, doing reporting, or sending people a bill that they need to pay, it had better be very accurate. Otherwise it's serious brand damage.
It has real implications, and it has a whole bunch of other issues as well. >> It does, and those things can snowball very quickly. >> Yeah. >> So talk to me: one of the things we've seen in recent months and years is this explosion of data. And when the pandemic struck, we had this scattering of people and data sources; so much data. The edge is persistent. We've got this work-from-anywhere environment. What are some of the risks for organizations? They come to you saying, help us ensure that our data is trustworthy. Trust is key, but how do you help organizations that are in somewhat of a state of flux figure out how to solve that problem? >> Yeah, you're absolutely correct. There is an explosion of data, number one. Along with that, there is also an explosion of analytical tools to mine that data. As a consequence, there is exponential growth in microservices and in how people consume that data. Now, in the old world, when there were a few consumers of data, it was a lot easier to validate it. You had a few people who were the gatekeepers, the data stewards. But with an explosion of data consumers within a company, you have to take a completely different approach. You cannot have people manually inspecting data and creating rules to validate it. So there has to be a change in the process. As soon as the data comes into your system, you start validating whether it is reliable, at point zero. >> Okay. >> Then it goes downstream, and at every hop the data makes, there is a chance it can get corrupted. These are called systems risks: because there are multiple systems, and data comes from multiple systems onto the cloud, errors creep in. So you validate the data from the beginning all the way to the end, and the kinds of checks you do increase in complexity as the data goes downstream. You don't want to boil the ocean upfront. You want to do the essential checks.
Is my water drinkable at this point, right? I'm not trying to cook as soon as it comes out of the tap; is it drinkable? >> Right. >> Is it good enough quality? If not, we go back to the source and say, guys, send me better-quality data. So sequence the right process and check every step along the way. >> How much of a cultural shift is FirstEigen helping to facilitate within organizations? Because, like we talked about, if an error gets in, there are so many downstream effects that can happen. How do you help organizations shift their mindset? That's a hard thing to change. >> Fantastic point. In fact, what we see is that the mindset change is the biggest wall for companies to have good data. People have been living in the old world, where there is a team, a group far downstream, that is responsible for accurate data. But the volume and complexity of data have gone up so much that that team cannot handle it anymore. It's just beyond their scope; it's not fair for us to expect them to save the world. So the mind shift has to come from the organization's leadership, which says: guys, the data engineers up front, the ones getting the data into the organization and taking care of the data assets, have to start thinking about trustable data. Because if they start doing that, everything downstream becomes easy; otherwise it's much, much more complex for the downstream teams. And that's what we do. Our tool provides an autonomous solution to monitor the data. It comes out with a data trust score with zero human input. Our software validates the data and gives an objective trust score. Right now, it's a popularity contest: people vote, yeah, I think I like this, I like this, and I like that. That's okay, maybe it's acceptable, but the reason they do it is that there is no way to objectively say the data is trustable. If there is a small error somewhere, it's a needle in a haystack.
It's hard to find, but we can find it. With machine learning algorithms, our software can detect even the minutest errors and give an objective score from zero to a hundred: trust or no trust. So along with the mindset, they now have the tool to implement that mindset, and we can make it happen. >> Talk to me about some of the things you've seen from a data governance perspective, as we've seen the explosion, the edge, people working from anywhere, this hybrid environment that we're going to be in for quite some time. >> Yeah. >> From a data governance perspective, we're seeing so many more things pop up, you know, different regulations. How do you help facilitate data governance for organizations as the data volume just continues to proliferate? >> Absolutely correct. So, data governance: we are a key component of it, because data quality, trustworthiness, and reliability are key components of data governance. And one of the central pillars of data governance is the data catalog, just like a catalog in a library, cataloging every data asset. But right now the catalogs, which are the mainstay, are not as good as they can be. A key piece of information is missing: I know where my data is, but what I don't know is how good my data is. How usable is it? If I'm using it for accounts receivable or accounts payable, for example, the data had better be very, very accurate. So what our software does is help data governance by linking with any data governance tool and supplying an important missing component: an objective data quality, reliability, and trustability score for every data asset. Imagine I open the catalog. I see where my book is in the library, but I also know whether there are pages missing from the book. Is the book readable? It's not good enough to know that I have a book somewhere; what matters is how good it is. >> Right. >> So DataBuck will make that happen.
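The zero-to-a-hundred trust score Seth describes could, in the simplest case, be a weighted roll-up of individual validation checks. This is a hypothetical sketch only; the check names, weights, and aggregation are invented for illustration and are not DataBuck's actual scoring model:

```python
def trust_score(check_results, weights):
    """Collapse per-check pass rates (0.0-1.0) into a single 0-100 score.

    check_results maps check name -> fraction of rows passing that check;
    weights maps check name -> relative importance of the check.
    """
    total = sum(weights.values())
    weighted = sum(weights[name] * check_results[name] for name in weights)
    return round(100.0 * weighted / total, 1)

# Hypothetical per-dataset check results; a real tool would compute these
# automatically from the data rather than take them as inputs.
checks = {"completeness": 0.99, "uniqueness": 1.0,
          "range_conformance": 0.92, "recent_drift": 0.85}
weights = {"completeness": 3, "uniqueness": 2,
           "range_conformance": 3, "recent_drift": 2}
print(trust_score(checks, weights))  # 94.3
```

The point of such a score is exactly what the conversation highlights: it replaces a subjective "popularity contest" with one objective number that a catalog can attach to every data asset.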
>> So when customers come to you, how do you help them start? Because obviously the data volume is intimidating. >> Yeah. >> Where do they start? >> Great question. This is, interestingly enough, a challenge that every customer has. >> Right. >> Everybody is ambitious enough to say, I want to make the change. But as we said, making such a big change is an organizational change management problem. So the way we recommend customers start is with a small problem. Get some early victories. The software is very easy: just bring it in and automate a small part. You have your sales data, transactional data, or operational data; take a small portion of it and automate it. Get reliable data, get good analytics, get the results, and start expanding to other places. If you try to do everything at one time, there's just too much inertia: organizations don't move, you don't get anywhere, and data initiatives fail. >> Right. So you're helping customers identify where those quick wins are. >> Yes. >> And where are the landmines that we need to find so we can navigate around them? >> Yeah. We have enough experience, over 20 years of working with different customers, that if something can go wrong, we know where it will go wrong, and we can help steer them away from the landmines and toward areas where they'll get quick wins. Because we want the customer to win. We want them to go back and say: look, because of this, we were able to do better analytics, better reporting, and so on and so forth. We can help them navigate this area. >> Do you have a favorite example, a customer example, that you think really articulates that value? We can't boil the ocean, like you said; it doesn't make any sense. But a customer that you helped with small quick wins that really opened up the opportunity to unlock the value of trustable data? >> Absolutely.
We're working with a Fortune 50 manufacturing company in the US. Their CFO was a little concerned about whether the data she was reporting to Wall Street was acceptable: does it have any errors? Ultimately, she is the one signing off on it. She had a large team on the technology side supporting her, and they were doing their best, but in spite of that, she, a very sharp woman, was able to look and find errors, saying, "Something does not look right here, guys. Go back and check." Then it would go back to the IT team, and they'd go, "Oh yeah, actually, there was an error." Some errors had slipped through. So they brought us in, and we were able to automate the process. They could only do a few checks within that audit window; we were able to do an enormous number of additional checks, more detailed and more accurate. And we were able to reduce the number of errors slipping through by over 98%. >> Big number. >> Absolutely. Really fast, really good. Now that this has gone through, they feel a lot more comfortable, and the question is: okay, in addition to financial reporting, can I use it to iron out my supply chain data? Because they have thousands of vendors, hundreds of distributors, and products all over the globe. Now they want to validate all that data, because even if your data is off by one or two percent, when you're a hundred-plus-billion-dollar company, it has an enormous impact on your balance sheet and your income statement. >> Absolutely. Yeah. >> So we are slowly expanding as they allow us. They like us, and now they're taking it to other areas beyond finance. >> Well, it sounds like you have not only great technology, Seth, but a great plan for helping customers with those quick wins, then learning and expanding within, and really developing that trusted relationship between FirstEigen and your customers. Thank you so much for joining me on the program today.
Introducing the company and what you guys are doing: really cool stuff. Appreciate your time. >> Thank you very much. Pleasure to be here. >> All right. For Seth Rao, I'm Lisa Martin. You're watching theCUBE, the global leader in live tech coverage. (upbeat music)

Published Date : Dec 2 2021



Sreesha Rao, Niagara Bottling & Seth Dobrin, IBM | Change The Game: Winning With AI 2018


 

>> Live, from Times Square, in New York City, it's theCUBE covering IBM's Change the Game: Winning with AI. Brought to you by IBM. >> Welcome back to the Big Apple, everybody. I'm Dave Vellante, and you're watching theCUBE, the leader in live tech coverage, and we're here covering a special presentation of IBM's Change the Game: Winning with AI. IBM's got an analyst event going on here at the Westin today in the theater district. They've got 50-60 analysts here. They've got a partner summit going on, and then tonight, at Terminal 5 on the West Side Highway, they've got a customer event, a lot of customers there. We talked earlier today about the hard news. Seth Dobrin is here. He's the Chief Data Officer of IBM Analytics, and he's joined by Sreesha Rao, who is the Senior Manager of IT Applications at California-based Niagara Bottling. Gentlemen, welcome to theCUBE. Thanks so much for coming on. >> Thank you, Dave. >> Well, thanks Dave for having us. >> Yes, always a pleasure, Seth. We've known each other for a while now. I think we met in a snowstorm in Boston; that sparked something a couple years ago. >> Yep. When we were both trapped there. >> Yep, and at that time, we spent a lot of time talking about your internal role as the Chief Data Officer, working closely with Inderpal Bhandari, and what you guys are doing inside of IBM. I want to talk a little bit more about your other half, which is working with clients and the Data Science Elite Team, and we'll get into what you're doing with Niagara Bottling, but let's start there. In terms of that side of your role, give us the update. >> Yeah, like you said, we spent a lot of time talking about how IBM is implementing the CDO role.
While we were doing that internally, I spent quite a bit of time flying around the world, talking to our clients over the last 18 months since I joined IBM, and we found a consistent theme with all the clients: they needed help learning how to implement data science, AI, machine learning, whatever you want to call it, in their enterprise. There's a fundamental difference between doing these things at a university or as part of a Kaggle competition and doing them in an enterprise, so we felt really strongly that it was important for the future of IBM that all of our clients become successful at it, because what we don't want is, in two years, for them to go, "Oh my God, this whole data science thing was a scam. We haven't made any money from it." And it's not because the data science thing is a scam; it's because the way they're doing it is not conducive to business. So we set up this team we call the Data Science Elite Team, and what this team does is sit with clients around a specific use case for 30, 60, 90 days; it's really about three or four sprints, depending on the material, the client, and how long it takes, and we help them learn, through this use case, how to use Python, R, and Scala in our platform, obviously, because we're here to make money too, to implement these projects in their enterprise. Now, because it's built completely on open source, if they're not happy with what the product looks like, they can take their toys and go home afterwards. It's on us to prove the value as part of this, but there's a key point here: my team is not measured on sales. They're measured on adoption of AI in the enterprise, and that creates a different behavior for them. They're really about "make the enterprise successful," right, not "sell this software." >> Yeah, compensation drives behavior. >> Yeah, yeah. >> So, at this point, I ask, "Well, do you have any examples?" So Sreesha, let's turn to you.
(laughing softly) Niagara Bottling -- >> As a matter of fact, Dave, we do. (laughing) >> Yeah, so you're not a bank with a trillion dollars in assets under management. Tell us about Niagara Bottling and your role. >> Well, Niagara Bottling is the biggest private-label bottled water manufacturing company in the U.S. We make bottled water for Costco, Walmart, and major national grocery retailers. These are the customers we service, and as with all large customers, they're demanding, and we provide bottled water at relatively low cost and high quality. >> Yeah, so I used to have a CIO consultancy. We worked with every CIO up and down the East Coast and really got into a lot of organizations. I always observed that it was really the heads of Application that drove AI, because they were the glue between the business and IT, and that's really where you sit in the organization, right? >> Yes. My role is to support the business and business analytics, as well as some of the distribution and planning technologies at Niagara Bottling. >> So take us through the project, if you will. What were the drivers? What were the outcomes you envisioned? And we can kind of go through the case study. >> So the current project where we leveraged IBM's help was a stretch-wrapper project. We produce cases of bottled water. These are stacked into pallets and then shrink wrapped or stretch wrapped with a stretch wrapper, and this project is about saving money by optimizing the amount of stretch wrap that goes around a pallet. We need to maintain the structural stability of the pallet while it's transported from the manufacturing location to our customer's location, where it's unwrapped and the cases are used. >> And over breakfast we were talking: you guys produce 2833 bottles of water per second. >> Wow. (everyone laughs) >> It's enormous.
The manufacturing line is a high-speed manufacturing line, and we have a lights-out policy where everything runs in an automated fashion, with raw materials coming in one end and the finished goods, pallets of water, going out the other. It's called pellets to pallets: pellets of plastic coming in through one end and pallets of water going out through the other. >> Are you sitting on top of an aquifer? Or are you guys using some other techniques? >> Yes, in fact, we do bore wells and extract water from the aquifer. >> Okay, so the goal was to minimize the amount of material you used while maintaining stability? Is that right? >> Yes, during transportation, yes. If we use too much plastic, we're not optimal; we're wasting material, and cost goes up. We produce almost 16 million pallets of water every single year, so that's a lot of shrink wrap going around them, and saving maybe 15-20% of shrink wrap costs will amount to quite a bit. >> So, how does machine learning fit into all of this? >> Machine learning is a way to understand what kind of profile we are wrapping with. If we can measure what is happening as we wrap the pallets, whether we are wrapping too tight or stretching too much, that tells us whether we have a conservative way of wrapping the pallets or an aggressive way of wrapping the pallets. >> I.e. too much material, right? >> Too much material is conservative, and aggressive is too little material, and we can achieve some savings if we alternate between the profiles. >> So, too little material means you lose product, right? >> Yes, and there's a risk of breakage: essentially, while the pallet is being wrapped, if you stretch the film too much it breaks, and that interrupts production, so we want to avoid that. We want continuous production; at the same time, we want the pallet to be stable while saving material costs.
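The trade-off Sreesha describes, using the least stretch film that still keeps the pallet safe, can be sketched as a toy predict-then-choose loop. Everything here is invented for illustration (the risk curve, the risk budget, and the candidate gram weights are stand-ins for a trained model); this is not Niagara's or IBM's actual implementation:

```python
import math

def predicted_break_probability(gram_weight):
    """Stand-in for a trained breakage model: the risk of the wrap failing
    drops as more film (grams per pallet) is applied."""
    return 1.0 / (1.0 + math.exp(0.08 * (gram_weight - 180.0)))

def minimal_safe_gram_weight(candidates, risk_budget=0.05):
    """Lowest candidate gram weight whose predicted break risk fits budget;
    fall back to the heaviest (most conservative) wrap if none qualifies."""
    safe = [g for g in candidates if predicted_break_probability(g) <= risk_budget]
    return min(safe) if safe else max(candidates)

candidates = range(150, 255, 5)   # grams of film per pallet to consider
best = minimal_safe_gram_weight(candidates)
print(best)  # 220, i.e. noticeably less film than a ~250 g conservative wrap
```

In a real deployment the stand-in risk function would be replaced by a model trained on the labeled break/no-break data from the wrapper machines, and the chosen setting would be fed back into the line.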
>> Okay, so you're trying to find that ideal balance, and how much variability is in there? Is it a function of distance and how many touches it has? Maybe you can share that. >> Yes, so each pallet takes about 16-18 wraps of the stretch wrapper going around it, and that's how much material is laid out: about 250 grams of plastic per pallet. So we're trying to optimize the gram weight, which is the amount of plastic that goes around each pallet. >> So it's about predicting how much plastic is enough without having breakage and disrupting your line. They had labeled data that said, "if we stretch it this much, it breaks; if we don't stretch it this much, it doesn't break," but then it was about predicting what's good enough, avoiding both of those extremes, right? >> Yes. >> So it's a truly predictive and iterative model that we've built with them. >> And you're obviously injecting data about the trip to the store as well, right? You're taking that into consideration in the model? >> Yeah, that's mainly to make sure that the pallets are stable during transportation. >> Right. >> And that has already determined how much containment force is required when you stretch-wrap each pallet. So that's one of the variables that is measured, but the input is the amount of material being used, in terms of gram weight. We are trying to minimize that. That's what the whole machine learning exercise was. >> And the data comes from where? Is it observation, maybe instrumented? >> Yeah, the instruments. Our stretch-wrapper machines run the Ignition platform, which is a SCADA platform that allows us to measure all of these variables. We get machine variable information from those machines, and we hope, one day, to automate the process, so there is a feedback loop that says, "On this profile, we've not had any breaks.
We can continue," or, if there have been frequent breaks on a certain profile or machine setting, we can change that dynamically as the product moves through the manufacturing process. >> Yeah, so think of it as a traditional manufacturing production-line optimization and prediction problem, right? It's minimizing waste while maximizing the output and throughput of the production line. When you optimize a production line, the first step is to predict what's going to go wrong, and the next step is to add prescriptive optimization to say: using the constraints that the predictive models give us, how do we maximize the output of the production line? This is not a unique situation. It's a unique material that we haven't really worked with, but they had some really good data on this material and how it behaves, and that's key. As you know, Dave, and as probably most of the people watching this know, labeled data is the hardest part of doing machine learning, along with building features from that labeled data, and they had some great data for us to start with. >> Okay, so you're collecting data at the edge, essentially, then using that to feed the models, which are running, I don't know, where? Your data center? Your cloud? >> Yeah, in our data center, there's an instance of DSX Local. >> Okay. >> That we stood up. Most of the data is running through that. We build the models there. And then our goal is to be able to deploy to the edge, where we can complete the loop in terms of the feedback that happens. >> And iterate. (Sreesha nods) >> And DSX Local is Data Science Experience Local? >> Yes. >> Slash Watson Studio, so they're the same thing. >> Okay, now, what role did IBM and the Data Science Elite Team play? Take us through that. >> So, as we discussed earlier, adopting data science is not that easy. It requires subject matter expertise.
It requires understanding of data science itself, the tools and techniques, and IBM brought that as part of the Data Science Elite Team. They brought both the tools and the expertise so that we could get on that journey towards AI. >> And it's not a "do the work for them" engagement. It's "teach them to fish," and so my team sat side by side with the Niagara Bottling team and walked them through the process. It's not a consulting engagement in the traditional sense; it's about helping them learn how to do it, side by side with their team. >> For how many weeks? >> We've had about two sprints already, and we're entering the third sprint. It's been about 30-45 days between sprints. >> And you have your own data science team. >> Yes. Our team is coming up to speed through this project. They've been trained, but they needed help from people who have done this, been there, and handled some of the challenges of modeling and data science. >> So it accelerates that time to --- >> Value. >> Outcome and value, and there's a knowledge transfer component -- >> Yes, absolutely. >> It's occurring now, and I guess it's ongoing, right? >> Yes. The engagement is unique in the sense that IBM's team came to our factory and understood what the stretch-wrap process looks like, so they had an understanding of the physical process and how it's modeled with the help of the variables, and they understood the data science modeling piece as well. Once they knew both sides of the equation, they could help put the physical problem and its digital equivalent together, and then correlate why things are happening with the data that supports the behavior. >> Yeah, and within the constraints of the one use case and up to 90 days, there's no charge. Like I said, it's paramount that our clients like Niagara know how to do this successfully in their enterprise. >> It's a freebie? >> No, it's no charge.
Free makes it sound too cheap. (everybody laughs) >> But it's part of obviously a broader arrangement with buying hardware and software, or whatever it is. >> Yeah, it's a strategy for us to help make sure our clients are successful, and we want to minimize the activation energy to do that, so there's no charge. The only requirements from the client are that it's a real use case, that they at least match the resources I put on the ground, and that they sit with us, do things like this, act as a reference, and talk about the team, our offerings, and their experiences. >> So you've got to have skin in the game, obviously, as an IBM customer. There's got to be some commitment to some kind of business relationship. How big was the collective team for each, if you will? >> So IBM had 2-3 data scientists. (Dave takes notes) Niagara matched that with 2-3 analysts: some working with the machines, who were familiar with them, and others more familiar with the data acquisition and data modeling. >> So each of these engagements costs us about $250,000 all in, so it's quite an investment we're making in our clients. >> I bet. I mean, 2-3 people over many, many weeks of super-geek time. You're bringing in hardcore data scientists, math whizzes, stats whizzes, data hackers, developers -- >> Data viz people, yeah, the whole stack. >> And the level of skills that Niagara has? >> We've got actual employees who are responsible for production: our manufacturing analysts, who help aid in troubleshooting problems. If there are breakages, they go analyze why that's happening. Now they have data to tell them what to do about it, and that's the whole journey we are on: trying to quantify with the help of data, and to connect our systems with data, systems and models that help us analyze what happened, why it happened, and what to do before it happens. >> Your team must love this because they're sort of elevating their skills.
They're working with rock star data scientists. >> Yes. >> And we've talked about this before. A point that was made here is that it's really important in these projects to have people acting as product owners, if you will, subject matter experts who are on the front line, who do this every day, and not just for the subject matter expertise. I'm sure there are executives who understand it, but when you're done with the model, bringing it to the floor and having them talk to their peers about it, there's no better way to drive the cultural change of adopting these things than having a peer they respect talk about it, instead of some guy or lady sitting up in the ivory tower saying "thou shalt." >> Now, you don't know the outcome yet. It's still early days, but you've got a model built that you've got confidence in, and then you can iterate that model. What's your expectation for the outcome? >> We're hoping that the preliminary results help us get up the learning curve of data science and how to leverage data to make decisions. That's our idea. There are obviously optimal settings we can use, but it's going to be a trial-and-error process. Through that, as we collect data, we can understand which settings are optimal and what we should be using in each of the plants. And if the plants decide, hey, they have a subjective preference for one profile versus another, with the data we are capturing we can measure when they deviated from what we specified. We have a lot of learning coming from the approach we're taking. You can't control things if you don't measure them first. >> Well, your objectives are to transcend this one project and do the same thing across the organization. >> And to do the same thing across, yes. >> Essentially pay for it with a quick return. That's the way to do things these days, right? >> Yes.
>> You've got narrow, small projects that'll give you a quick hit, and then you leverage that expertise across the organization to drive more value. >> Yes. >> Love it. What a great story, guys. Thanks so much for coming to theCUBE and sharing. >> Thank you. >> Congratulations. You must be really excited. >> It's a fun project. I appreciate it. >> Thanks for having us, Dave. I appreciate it. >> Pleasure, Seth. Always great talking to you, and keep it right there, everybody. You're watching theCUBE. We're live from New York City here at the Westin Hotel, #cubenyc. Check out ibm.com/winwithai for Change the Game: Winning with AI tonight. We'll be right back after a short break. (minimal upbeat music)
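
The trial-and-error settings search Sreesha describes, collect labeled wrap trials, rule out settings that caused pallet breakage, then pick the cheapest remaining setting, can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the setting names, numbers, and helper functions are invented for this sketch and are not data or code from the actual Niagara project.

```python
# Illustrative sketch of a "pellets to pallets" settings search:
# collect labeled wrap trials, discard settings that ever broke a
# pallet, and pick the setting that uses the least film on average.
# All data here is made up for illustration.

# Each trial: (stretch_pct, wrap_layers, film_grams, pallet_broke)
trials = [
    (150, 2, 260, True),   # too little stretch -> pallet broke
    (200, 2, 250, False),
    (200, 3, 310, False),
    (250, 2, 240, False),
    (250, 3, 300, False),
    (300, 2, 235, True),   # over-stretched film tore
]

def safe_settings(trials):
    """Group trials by (stretch, layers); keep mean film use for
    settings that never produced a breakage."""
    by_setting = {}
    for stretch, layers, grams, broke in trials:
        key = (stretch, layers)
        grams_list, ever_broke = by_setting.get(key, ([], False))
        by_setting[key] = (grams_list + [grams], ever_broke or broke)
    return {k: sum(g) / len(g)
            for k, (g, broke) in by_setting.items() if not broke}

def best_setting(trials):
    """Cheapest setting (least film) among those that never broke."""
    safe = safe_settings(trials)
    return min(safe, key=safe.get)

print(best_setting(trials))  # -> (250, 2): least film with no breakage
```

As more trials come in from each plant, the same loop lets the team re-rank settings per plant, and comparing a plant's actual settings against the chosen ones is how deviation from the specified profile would be measured.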

Published Date : Sep 13 2018

SUMMARY :

Brought to you by IBM. Live from the Westin Hotel in New York City, Dave Vellante talks with Seth Dobrin of IBM Analytics and Sreesha Rao of Niagara Bottling about an IBM Data Science Elite Team engagement. Niagara Bottling, the biggest private-label bottled-water producer, worked with IBM on a "pellets to pallets" project: building predictive models to optimize the stretch-wrap film applied to each pallet of bottled water, saving material without letting wrapped pallets break in transit. IBM put 2-3 data scientists on the ground at no charge, an engagement it values at about $250,000, and Niagara matched them with its own analysts; in return, the client commits to a real use case and acts as a reference. Working in sprints, the teams built models from labeled wrap data, and Niagara aims to spread both the approach and the data science skills it builds across its plants.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Shreesha Rao | PERSON | 0.99+
Seth Dobern | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Walmarts | ORGANIZATION | 0.99+
Costcos | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
30 | QUANTITY | 0.99+
Boston | LOCATION | 0.99+
New York City | LOCATION | 0.99+
California | LOCATION | 0.99+
Seth Dobrin | PERSON | 0.99+
60 | QUANTITY | 0.99+
Niagara | ORGANIZATION | 0.99+
Seth | PERSON | 0.99+
Shreesha | PERSON | 0.99+
U.S. | LOCATION | 0.99+
Sreesha Rao | PERSON | 0.99+
third sprint | QUANTITY | 0.99+
90 days | QUANTITY | 0.99+
two | QUANTITY | 0.99+
first step | QUANTITY | 0.99+
Inderpal Bhandari | PERSON | 0.99+
Niagara Bottling | ORGANIZATION | 0.99+
Python | TITLE | 0.99+
both | QUANTITY | 0.99+
tonight | DATE | 0.99+
ibm.com/winwithai | OTHER | 0.99+
one | QUANTITY | 0.99+
Terminal 5 | LOCATION | 0.99+
two years | QUANTITY | 0.99+
about $250,000 | QUANTITY | 0.98+
Times Square | LOCATION | 0.98+
Scala | TITLE | 0.98+
2018 | DATE | 0.98+
15-20% | QUANTITY | 0.98+
IBM Analytics | ORGANIZATION | 0.98+
each | QUANTITY | 0.98+
today | DATE | 0.98+
each pallet | QUANTITY | 0.98+
Kaggle | ORGANIZATION | 0.98+
West Side Highway | LOCATION | 0.97+
Each pallet | QUANTITY | 0.97+
4 sprints | QUANTITY | 0.97+
About 250 grams | QUANTITY | 0.97+
both side | QUANTITY | 0.96+
Data Science Elite Team | ORGANIZATION | 0.96+
one day | QUANTITY | 0.95+
every single year | QUANTITY | 0.95+
Niagara Bottling | PERSON | 0.93+
about two sprints | QUANTITY | 0.93+
one end | QUANTITY | 0.93+
R | TITLE | 0.92+
2-3 weeks | QUANTITY | 0.91+
one profile | QUANTITY | 0.91+
50-60 analysts | QUANTITY | 0.91+
trillion dollars | QUANTITY | 0.9+
2-3 data scientists | QUANTITY | 0.9+
about 30-45 days | QUANTITY | 0.88+
almost 16 million pallets of water | QUANTITY | 0.88+
Big Apple | LOCATION | 0.87+
couple years ago | DATE | 0.87+
last 18 months | DATE | 0.87+
Westin Hotel | ORGANIZATION | 0.83+
pallet | QUANTITY | 0.83+
#cubenyc | LOCATION | 0.82+
2833 bottles of water per second | QUANTITY | 0.82+
the Game: Winning with AI | TITLE | 0.81+