
Shireesh Thota, SingleStore & Hemanth Manda, IBM | AWS re:Invent 2022


 

>>Good evening everyone and welcome back to Sparkly Sin City, Las Vegas, Nevada, where we are here with theCUBE covering AWS re:Invent for the 10th year in a row. John Furrier has been here for all 10. John, we are in our last session of day one. How does it compare? >>I just graduated high school 10 years ago. It's exciting to be here. It's been a long time. We've gotten a lot older. >>Your brain is complex. You've got a lot in there. So fast. >>Graduated eighth in high school. You know how it is. All good. This is what's going on: this next segment, wrapping up day one, which is like the kickoff. Monday's a great day. I mean, Tuesday's coming tomorrow, big day. The announcements are all around the kind of next gen, and you're starting to see partnering and integration as a huge part of this next wave, because with APIs and the cloud, next-gen cloud's gonna be deep engineering integration, and you're gonna start to see business relationships and business transformation scale horizontally, not only across applications but companies. This has been going on for a while; we've been covering it. This next segment is gonna be one of those things that we're gonna look at as something that's gonna happen more and more. >>Yeah, I think so. It's what we've been talking about all day. Without further ado, I would like to welcome our very exciting guests for this final segment: Shireesh Thota from SingleStore. Thank you for being here. And we also have Hemanth Manda from IBM Data and AI. Y'all are partners, been partners for about a year. I'm gonna go out on a limb, only because of their legacy, and suspect that a few more people might know what IBM does versus what SingleStore does. So why don't you just give us a little bit of background so everybody knows what's going on. >>Yeah, so SingleStore is a relational database. It's a foundational relational system, but the thing that we do best is what we call real-time analytics.
So we have these systems that are legacy, which do operations or analytics. And if you wanted to bring them together, like most of the applications want to, it's really a big hassle. You have to build an ETL pipeline, you have to duplicate the data. It's really faulty systems all over the place, and you won't get the insights really quickly. SingleStore is trying to solve that problem elegantly by having an architecture that brings both operational and analytics in one place. >>Brilliant. >>You guys had big funding, now expanding. MemSQL, now SingleStore. Databases, a $46 billion market. Again, databases. We've been saying this in theCUBE for 12 years: databases have been great, and recently, not one database will rule the world. We know that. Everyone knows that. Databases, data as code, cloud scale, this is the convergence now of all that coming together, where data, this re:Invent, is the theme. Everyone will be talking about end-to-end data, new kinds of specialized services, faster performance, new kinds of application development. This is the big part of why you guys are working together. Explain the relationship, how you guys are partnering and engineering together. >>Yeah, absolutely. So IBM, right? We are mainly into hybrid cloud and AI, and one of the things we are looking at is expanding our ecosystem, right? Because we have gaps, and as opposed to building everything organically, we want to partner with the likes of SingleStore, which have unique capabilities that complement what we have. Because at the end of the day, customers are looking for an end-to-end solution that solves their business problems. And they are very good at real-time data analytics and HTAP, right? Because we have transactional databases, analytical databases, data lakes, but HTAP is a gap that we currently have.
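The HTAP convergence being described here, one engine serving transactional point queries and analytical aggregates over the same live data with no ETL in between, can be sketched as a toy example. Python's built-in SQLite stands in for the converged database, and the schema and figures are invented for illustration:

```python
import sqlite3

# One engine, one table -- a toy stand-in for a converged HTAP database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

# Operational side: point writes and reads, as an application would issue them.
conn.executemany("INSERT INTO orders (region, amount) VALUES (?, ?)",
                 [("west", 120.0), ("east", 80.0), ("west", 50.0)])
one_order = conn.execute("SELECT amount FROM orders WHERE id = 2").fetchone()[0]

# Analytical side: an aggregate over the same live table -- no pipeline, no copy,
# and a fresh write is visible to the very next analytical query.
conn.execute("INSERT INTO orders (region, amount) VALUES ('west', 30.0)")
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region").fetchall())
print(one_order, totals)  # -> 80.0 {'east': 80.0, 'west': 200.0}
```

The point of the sketch is only the programming model: the insert is visible to the next aggregate with nothing in between. A production HTAP engine adds the parts SQLite lacks, such as columnar storage and scale-out.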
And by partnering with them, we can essentially address the needs of our customers, and what we also plan to do is integrate our products and solutions with theirs, so that we can deliver a solution to our customers. >>This is why I was saying earlier, I think this is a telltale sign of what's coming from a lot of use cases where people are partnering right now. You got the cloud, a bunch of building blocks. If you put it together yourself, you can build a durable system, very stable. If you want an out-of-the-box solution, you can get that pre-built, but you really can't optimize. It breaks, you gotta replace it. Engineering systems together at a high level is a little bit different, not just buying something out of the box. You guys are working together. This is kind of an end-to-end dynamic that we're gonna hear a lot more about at re:Invent from the CEO of AWS. But you guys are doing it across companies, not just with AWS. Can you guys share this new engineering business model use case? Do you agree with what I'm saying? Do you think John's crazy? I mean, in all discourse, you've got out-of-the-box or engineer-it-yourself, but now, when people do joint engineering projects, right, they're different. >>Yeah. No, I mean, you know, I think our partnership is a testament to what you just said, right? When you think about how to achieve real-time insights, the data comes into the system, and the customers and new applications want insights as soon as the data comes into the system. So what we have done is basically build an architecture that enables that. We have our own storage and query engine, indexing, et cetera. And so we've innovated in our indexing, in our database engine, but we wanna go further than that. We wanna be able to exploit the innovation that's happening at IBM. A very good example is, for instance, we have a native connector with Cognos, their BI dashboards, right? To reason over data very natively.
So we build a hyper-efficient system that moves the data very efficiently. Another very good example is embedded AI. >>So IBM of course has built an AI chip, and they have basically advanced quite a bit into the embedded AI, custom AI. So what we have done is a true marriage between the engineering teams here; we make sure that the data in SingleStore can natively exploit that kind of goodness. So we have taken their libraries. So if you have data in SingleStore, like let's imagine you have Twitter data and you wanna do sentiment analysis, you don't have to move the data out, train the model outside, et cetera. We just have the pre-built embedded AI libraries already. So it's a pure engineering marriage there that kind of opens up a lot more insights than just simple analytics. >>And cost, by the way too, moving data around. >>Another big theme. Yeah. >>And latency. And speed is everything about SingleStore, and you know, it couldn't have happened without this kind of a partnership. >>So you've been at IBM for almost two decades, don't look it, but at nearly 17 years in, how has, and maybe it hasn't, so feel free to educate us, how has IBM's approach to AI and ML evolved, as well as looking to involve partnerships in the ecosystem as a collaborative, raise-the-water-level-together force? >>Yeah, absolutely. So I think when we initially started AI, right, if you recollect, Watson was the forefront of AI. We started the whole journey. I think our focus was more on end solutions, both horizontal and vertical. Watson Health, which is more vertically focused. We were also looking at Watson Assistant and Watson Discovery, which were more horizontally focused. That whole strategy evolved over a period of time. Now we are trying to be more open. For example, this whole embeddable AI that Shireesh was talking about.
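The "bring the model to the data" pattern described here, scoring sentiment where the rows live instead of exporting them to a separate ML system, can be sketched with Python's built-in SQLite by registering a scoring function as a SQL function. Everything in this sketch is invented for illustration: the lexicon is a toy stand-in for a trained model, and the table and tweets are made up; a real embedded AI library would supply the model.

```python
import sqlite3

# Toy lexicon-based "sentiment model", standing in for an embedded AI library.
POSITIVE = {"love", "great", "excellent"}
NEGATIVE = {"hate", "awful", "broken"}

def sentiment(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tweets (id INTEGER, body TEXT)")
conn.executemany("INSERT INTO tweets VALUES (?, ?)",
                 [(1, "I love this great product"), (2, "this release is broken")])

# Register the model as a SQL function: scoring now runs where the data lives,
# inside a query, with no export step.
conn.create_function("sentiment", 1, sentiment)
rows = conn.execute("SELECT id, sentiment(body) FROM tweets ORDER BY id").fetchall()
print(rows)  # -> [(1, 2), (2, -1)]
```

The design point is the query shape: the insight arrives as a column in an ordinary SELECT, rather than as the output of a separate pipeline that first copied the data elsewhere.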
Yeah, it's essentially making the guts of our AI libraries, making them available for partners and ISVs to build their own applications and solutions. We've been using them historically within our own products the past few years, but now we are making them available. >>So how big of a shift is that? Do you think we're seeing a more open and collaborative ecosystem in the space in general? >>Absolutely. Because I mean, if you think about it, in my opinion, everybody is moving towards AI, and that's the future. And you have two options. Either you build it on your own, which is gonna require a significant amount of time, effort, investment, research, or you partner with the likes of IBM, which has been doing it for a while, right? And it has the ability to scale to the requirements of all the enterprises and partners. So you have that option, and some companies are picking to do it on their own, but I believe that there's a huge amount of opportunity where people are looking to partner and source what's already available, as opposed to investing from scratch. >>Classic buy-versus-build analysis for them to figure out, yeah, to get into the game. >>And why reinvent the wheel when we're all trying to do things at not just scale, but orders of magnitude faster and more efficiently than we were before? It makes sense to share, but it does feel like a bit of a shift, almost a paradigm shift, in the culture of competition versus how we're gonna creatively solve these problems. There's room for a lot of players here, I think. >>I wanted to ask, if you don't mind me jumping in on that. So, okay, I get that people buy or build: I'm gonna use existing or build my own. The decision point on that, to your point about the path of getting AI, is do I have the core competency? Skills gap's a big issue.
So, okay, theCUBE: if we had AI, we'd take it, because we don't have any AI engineers around yet to build out on all the linguistic data we have. So we might use your AI, but down the line we want to have a core competency. How do companies get that core competency going while using and partnering with AI? What do you guys see as a way for them to get going? Because I think some people probably want to have a core competency of AI. >>Yeah, so I think, again, I wanna distinguish between a solution, which requires core competency, you need expertise on the use case and you need expertise on your industry vertical and your customers, versus the foundational components of AI, which are agnostic to the core competency, right? Because you take the foundational piece and then you further train it and refine it for your specific use case. So we are not saying that we are experts in all the industry verticals. What we are good at is the foundational components, which is what we wanna provide. >>Got it. Yeah, that's the hard, deep, heavy lift. >>Yeah. And I can give color to that question from our perspective, right? When we think about what is our core competency, it's about databases, right? But there's a symbiotic relationship between data and AI, you know, they sort of really move each other, right? >>They kind of can't have one without the other. >>Right. And so the question is, how do we make sure that we expand that relationship, where our customers can operationalize their AI applications closer to the data, not move the data somewhere else and do the modeling and training somewhere else, dealing with multiple systems, et cetera. And this is where this kind of cross-engineering relationship helps. >>Awesome. Great. And then I think companies are gonna want to have that baseline foundation and then start hiring and learning.
It's like driving the car: you get the keys when you're ready to go. >>Yeah. >>I'll give you a simple example, right? >>I want that turnkey lifestyle. We all do. >>Yeah. Let me just give you a quick analogy, right? For example, you can basically make the engine and the car on your own, or you can source the engine and make the car. So it's basically an option that you can decide. The same thing with airplanes as well, right? Whether you wanna make the whole thing, or whether you wanna source from someone who is already good at doing that piece, right? >>Or even create a new alloy, for that matter. I mean, you can take it all the way down in that analogy. >>Right. Is there a structural change in how companies are laying out their architecture in this modern era, as we start to see this next-gen cloud emerge? Security teams becoming much more focused, data teams building into DevOps, into the developer pipeline, seeing that trend. What do you guys see in the modern data stack kind of evolution? Is there a data solutions architect coming? Do they exist yet? Is that what we're gonna see? Is it data as code, automation? How do you guys see this landscape of the evolving persona? >>I mean, if you look at the modern data stack as it is defined today, it is too detailed, it's too onerous, and there are way too many layers, right? There are at least five different layers. You gotta have a storage layer, you replicate to do real-time insights, and then there's a query layer, visualization, and then AI, right? So you have too many ETL pipelines in between, too many services, too many choke points, too many failures. >>Right? ETL, that's the dirty three-letter word. >>Say no to ETL. >>Adam Selipsky, that's his quote, not mine. We hear that. >>Yeah. I mean, there are different names for it. They don't call it ETL, we call it replication, whatnot. But the point is, hassle. >>Data is getting more hassle. >>More hassle.
Yeah. The data is ultimately getting replicated in the modern data stack, right? And that's kind of one of our theses at SingleStore, which is that you have to converge, not hyper-specialize. And convergence is possible in certain areas, right? When you think about operational and analytics as two different aspects of the data pipeline, it is possible to bring them together. And we have done it; we have a lot of proof points for it, our customer stories speak to it, and that is one area of convergence. We need to see more of it. The relationship with IBM is sort of another step of convergence, wherein, in the final phases, operations and analytics are coming together, and can we take analytics, visualization with reports and dashboards, and AI together? This is where Cognos and embedded AI come together, right? So we believe in SingleStore, which is really convergence. >>One single path. >>A shocking, a shocking tie-back there. So obviously, you know, one of the things we love to joke about in theCUBE, because we like to goof on the old enterprise, is they solve complexity by adding more complexity. That's old thinking. The new thinking is put it under the covers, abstract away the complexities, and make it easier. That's right. So how do you guys see that? Because this end-to-end story is not getting less complicated. It's actually, I believe, increasing in complexity. However, there's opportunities to do it faster, to put it under the covers or put it under the hood. What do you guys think about how this new complexity gets managed in this new data world we're gonna be coming into? >>Yeah, so I think you're absolutely right. The world is becoming more complex, technology is becoming more complex, and I think there is a real need, and it's not just coming from us, it's also coming from the customers, to simplify things.
So our approach around AI is exactly that, because we are essentially providing libraries. Just like you have Python libraries, now you have AI libraries that you can go infuse and embed deeply within applications and solutions. So it becomes integrated and simple from the customer's point of view. From a user point of view, it's very simple to consume, right? So that's what we are doing, and I think SingleStore is doing that with data, simplifying data, and we are trying to do that with the rest of the portfolio, specifically AI. >>It's no wonder there's a lot of synergy between the two companies. John, do you think they're ready for the Instagram challenge? >>Yes, they're ready. >>Uh-oh. I think they're ready. So we're doing a bit of a challenge, a little 30-second off-the-cuff. What's the most important takeaway? Think of it as your thought leadership sound bite for AWS 2023, on an Instagram reel. I'm scrolling, that's the Instagram. It's your moment to stand out. Yeah, exactly. Shireesh, you look like you're ready to rock. Let's go for it. You've got that smile; I'm gonna let you go. >>Oh, goodness. You know, there's this quote from astrophysics: space tells matter how to move, and matter tells space how to curve. They have that kind of a relationship. I see the same between AI and data, right? They need to move together. And so AI is possible only with the right data, and data is meaningless without good insights through AI. They really have that kind of relationship, and you would see a lot more of that happening in the future. The future of data and AI are combined, and that's gonna happen, accelerate, a lot faster. >>Shireesh, well done. Wow. Thank you. I am very impressed. It's a tough act to follow. You ready for it though? Let's go. >>Absolutely. Yeah. So just to add to what Shireesh said, right, there's a quote from Rob Thomas, one of our leaders at IBM: there's no AI without IA.
Essentially, there's no AI without information architecture, which is essentially data. But I wanna add one more thing. There's a lot of buzz around AI. I mean, we are talking about simplicity here. AI, in my opinion, is three things and three things only. Either you use AI to predict the future, for forecasting; or you use AI to automate things, it could be simple mundane tasks, it could be complex tasks, depending on how exactly you want to use it; and third is to optimize. So predict, automate, optimize. Anything else is buzz. >>Okay. >>Brilliantly said. Honestly, I think you both probably hit the 30-second time mark that we gave you there. And the enthusiasm, loved your hunger on that. You were born ready for that kind of pitch. I think they both nailed it. >>They nailed it. Nailed it. Well done. >>I think that about sums it up for us. One last closing note and opportunity for you: you have a v8.0 product coming out soon, December 13th if I'm not mistaken. You wanna give us a quick 15-second preview of that? >>Super excited about this. This is one of our major releases. So we are evolving the system on multiple dimensions, on enterprise and governance and programmability. So there are certain features that some of our customers are aware of. We have made huge performance gains in our JSON access. We made it easy for people to consume Wasm on on-prem and hybrid architectures. There are multiple other things that we're gonna put out on our site. So it's coming out on December 13th. It's a major next phase of our system. >>And real quick, Wasm is WebAssembly? >>Correct. We are pioneers in that; we embed Wasm inside the engine. So you could run complex modules that are written in, could be C, could be Rust, could be Python. Instead of writing SQL as a stored procedure, you could now run those modules inside. >>I wanted to get that out there because at KubeCon we covered that. >>Hot topic.
Like a blanket. We covered it like a blanket. >>Wow. >>On that glowing note, thank you both so much for being here with us on the show. We hope to have both SingleStore and IBM back on plenty more times in the future. Thank all of you for tuning in to our coverage here from Las Vegas, Nevada, at AWS re:Invent 2022 with John Furrier. My name is Savannah Peterson. You're watching theCUBE, the leader in high-tech coverage. We'll see you tomorrow.

Published Date : Nov 29 2022



Scott Castle, Sisense | AWS re:Invent 2022


 

>>Good morning fellow nerds, and welcome back to AWS re:Invent. We are live from the show floor here in Las Vegas, Nevada. My name is Savannah Peterson, joined by my fabulous co-host John Furrier. Day two keynotes are rolling. >>Yeah. What do you think of this? This is the day where everything comes, so the cork gets popped off the bottle; all the announcements start flowing out. Tomorrow you hear machine learning from Swami, a lot more in depth around AI probably. And then developers with Werner Vogels, the CTO who wrote the seminal paper in the early two-thousands around web services. So again, just another great year of next-level cloud. Big discussion of data in the keynote; the bulk of the time was talking about data and business intelligence, making business transformation easier. Is that what people want? They want the easy button, and we're gonna talk a lot about that in this segment. I'm really looking forward to this interview. >>Easy button. We all want the >>Easy, we want the easy button. >>I love that you brought up champagne. It really feels like a champagne moment for the AWS community as a whole. Being here on the floor feels a bit like the before times. I don't want to jinx it. Our next guest, Scott Castle, from Sisense. Thank you so much for joining us. How are you feeling? How's the show going for you so far? >>Oh, this is exciting. It's really great to see the changes that are coming in AWS. It's great to see the excitement and the activity around how we can do so much more with data, with compute, with visualization, with reporting. It's fun. >>It is very fun. I just got a note: I think you have the coolest last name of anyone we've had on the show so far, Castle. >>Oh, thank you. >>I'm here for it. I'm sure no one's ever said that before. So, just in case our audience isn't familiar, tell us about Sisense. >>Sisense is an embedded analytics platform.
So we're used to take the queries and the analysis that you can power off of Aurora and Redshift and everything else, and bring them to the end user in the applications they already know how to use. So it's all about embedding insights into tools. >>Embedded has been a real theme. I keep using the analogy of multiple tabs: nobody wants to have to leave where they are. They want it all to come in there. >>Yep. >>Now this space is older than I think everyone at this table; BI's been around since 1958. >>Yep. >>How do you see Sisense playing a role in the evolution there? We're in a different generation of analytics. >>Yeah, I mean, BI started, as you said, in '58, with Hans Peter Luhn's paper that he wrote for IBM. It kind of became popular in the late eighties and early nineties, and that was gen-one BI. That was Cognos and Business Objects and Lotus 1-2-3, think green-and-black-screen days. And the way things worked back then is, if you ran a business and you wanted to get insights about that business, you went to IT with a big check in your hand and said, hey, can I have a report? And they'd come back: here's a report. And it wasn't quite right. You'd go back and cycle, cycle, cycle, and eventually you'd get something. And it wasn't great, it wasn't all that accurate, but it's what we had. And then that whole thing changed in about 2004, when self-service BI became a thing. And the whole idea was, instead of going to IT with a big check in your hand, how about you make your own charts?
And as the tools got more complicated, the barrier to entry for everyday users got higher and higher and higher to the point where now you look, look at Gartner and Forester and IDC this year. They're all reporting in the same statistic. Between 10 and 20% of knowledge workers have learned business intelligence and everybody else is just waiting in line for a data analyst or a BI analyst to get a report for them. And that's why the focus on embedded is suddenly showing up so strong because little startups have been putting analytics into their products. People are seeing, oh my, this doesn't have to be hard. It can be easy, it can be intuitive, it can be native. Well why don't I have that for my whole business? So suddenly there's a lot of focus on how do we embed analytics seamlessly? How do we embed the investments people make in machine learning in data science? How do we bring those back to the users who can actually operationalize that? Yeah. And that's what Tysons does. Yeah. >>Yeah. It's interesting. Savannah, you know, data processing used to be what the IT department used to be called back in the day data processing. Now data processing is what everyone wants to do. There's a ton of data we got, we saw the keynote this morning at Adam Lesky. There was almost a standing of vision, big applause for his announcement around ML powered forecasting with Quick Site Cube. My point is people want automation. They want to have this embedded semantic layer in where they are not having all the process of ETL or all the muck that goes on with aligning the data. All this like a lot of stuff that goes on. How do you make it easier? >>Well, to be honest, I, I would argue that they don't want that. I think they, they think they want that, cuz that feels easier. But what users actually want is they want the insight, right? When they are about to make a decision. 
If you have an ML-powered forecast, and Sisense has had that built in for years, you don't need it two weeks before or a week after in a report somewhere. You need it when you're about to decide: do I hire more salespeople, or do I put a hundred grand into a marketing program? It's putting that insight at the point of decision that's important. And you don't wanna be waiting to dig through a lot of infrastructure to find it. You just want it when you need it. >>What's the alternative from a time standpoint? So, real-time insight, which is what you're saying. What's the alternative if they don't have that? >>The alternative is what we are currently seeing in the market. You hire a bunch of BI analysts and data analysts to do the work for you, and you hire enough that your business users can ask questions and get answers in a timely fashion. And by the way, if you're paying attention, there's not enough data analysts in the whole world to do that. Good luck. >>I really empathize with that. I used to work for a 3D printing startup, and I have, I mean, I would call them PTSD flashbacks, of standing behind our BI guy with my list of queries and things that I wanted to learn about our e-commerce platform and our marketplace and community. And it would take weeks. And I mean, this was only in 2012; we're not talking 1958 here. Well, a decade in startup years is a hundred years in the rest of the world. But I think it's really interesting. So talk to us a little bit about infused and composable analytics. >>Sure. >>And how does this relate to embedded? >>Yeah.
But going back to that statistic about how, hey, 10 to 20% of users know how to do something with that dashboard: well, how do you reach the rest of the users? When you think about breaking that up and making it more personalized, instead of getting a dashboard embedded in a tool, you get individual insights, you get data visualizations, you get controls. Maybe it's not even actually a visualization at all. Maybe it's just a query result that influences the ordering of a list. So if you're a CSM, you have a list of accounts in your book of business, and you wanna rank those by who's most likely to churn. >>Yeah. And how do you get that most likely to churn? You get it from your BI system. But then the question is, how do I insert that back into the application that CSM is using? So that's what we talk about when we talk about infusion. Sisense started the infusion term about two years ago, and now it's being used everywhere. We see it in marketing from Qlik and Tableau, and Looker just recently did a whole launch on infusion. The idea is you break this up into very small, digestible pieces. You put those pieces into user experiences where they're relevant and when you need them. And to do that, you need a set of APIs and SDKs to program it. But you also need a lot of very solid building blocks so that you're not building this from scratch; you're assembling it from big pieces. And so what we do at Sisense is we've got machine learning built in. We have NLQ built in. We have a whole bunch of AI-powered features, including a knowledge graph that helps users find what else they need to know. And we provide those to our customers as building blocks so that they can put those into their own products, make them look and feel native, and get that experience. In fact, one of the things that was most interesting these last couple of quarters is that we built a technology demo.
We integrated Sisense with Office 365, with Google Apps for business, with Slack and MS Teams. We literally just threw an NLQ box into Excel, and now users can go in and say, hey, which of my salespeople in the northwest region are on track to meet their quota? And they just get the table back in Excel. They can build charts of it in PowerPoint. And then when they go to do their QBR next week or the week after that, they just hit refresh to get live data. It makes it so much more digestible. And that's the whole point of infusion. It's bigger than just the iframe-based embedding or the JavaScript embedding we used to talk about four or five years ago. >>APIs are very key. You brought that up. That's gonna be more of the integration piece. How does embeddable and composable work as more people start getting on board? It's kind of like a flywheel. How do you guys see that progression? Cause everyone's copying you. We see that, but this means it's standard. People want this. What's that next flywheel benefit that you guys are coming out with? >>Composability, fundamentally. If you read the Gartner analysis, when they talk about composable, they're talking about building pre-built analytics pieces in different business units for different purposes, and being able to plug those together. Think of containers and services that can talk to each other. You have a composition platform that can pull it into a presentation layer. Well, the presentation layer is where I focus. So for us, composable means I'm gonna have formulas and queries and widgets and charts and everything else that my end users are gonna want, almost Minority Report style, if I'm not dating myself with that. I can put this card here, I can put that chart here, I can set these filters here, and I get my own personalized view.
But it's based on all the investments my organization's made in data and governance and quality, so that all that infrastructure is supporting me without me worrying much about it. >>Well, that's productivity on the user side. Talk about the software development angle. Is it low code, no code? Is there coding involved? APIs are certainly the connective tissue. What's the impact to the developer? >>Oh, so if you were working on a traditional legacy BI platform, it's virtually impossible, because this is an architectural thing that you have to be able to do. Every single tool that can make a chart has an API to embed that chart somewhere. But that's not the point. You need the lifecycle automation to create models, to modify models, to create new dashboards and charts and queries on the fly, and be able to manage the whole lifecycle of that. So that in your composable application, when you say, well, I want a chart and I want it to go here and I want it to do this and I want it to be filtered this way, you can interact with the underlying platform. And most importantly, when you want to use big pieces, like, hey, I wanna forecast revenue for the next six months, you don't want to be popping down into Python and writing that yourself. You wanna be able to say, okay, here's my forecasting algorithm, here are the inputs, here are the dimensions, and then go and just put it somewhere for me. And that's what you get with Sisense. There aren't any other analytics platforms that were built to do that. We were built that way because of our architecture. We're an API-first product. More importantly, most of the legacy BI tools are legacy. They're coming from that desktop, single-user, self-service BI environment, and it's a small use case for them to go embedding. So composable is kind of out of reach without a complete rebuild. But with Sisense, because our bread and butter has always been embedding, it's all architected to be API first.
It's integrated for software developers with Git, but it also has all those low-code and no-code capabilities for business users to do the Minority Report style thing and assemble endless components into a workable digital workspace application. >>Talk about the strategy with AWS. You're here in the ecosystem, you're leading product, and they have a strategy. We know their strategy, they have some stuff, but then the ecosystem goes faster and ends up making a better product in most cases. I know they'll take me to school on that, but that's pretty much what we report on. Mongo's doing a great job. They have databases. So you kind of see this balance. How are you guys playing in the ecosystem? What's the feedback? What's it like? What's going on? >>AWS is actually really our best partner. And the reason why is because AWS has been clear for many, many years: they build componentry, they build services, they build infrastructure, they build Redshift, they build all these different things, but they need vendors to pull it all together into something usable. And fundamentally, that's what Sisense does. I mean, we didn't invent SQL, right, or the other underlying analytics technologies, but we're taking the bricks out of the briefcase. We're assembling them into something that users can actually deploy for their use cases. And so for us, AWS is perfect because they focus on the hard bits, the underlying technologies. We assemble those, make them usable for customers, and we get the distribution. And of course AWS loves that, cause it drives more compute and more consumption. >>How much do they pay you to say that keynote? >>That was a wonderful pitch.
>>That's absolutely right. We always say, hey, they've got a lot of great goodness in the cloud, but they're not always the best at the solutions they're trying to bring out, and you guys are making these solutions for customers. That resonates with what they've got at Amazon. >>For example, last year we did a technology demo with Comprehend where we put Comprehend inside of a semantic model, and we would compile it and then send it back to Redshift. And it takes Comprehend, which is a very cool service, but you kind of gotta be a coder to use it. >>I've been hearing a lot of hype about the semantic layer. What is going on with that? >>The semantic layer is what connects the actual data, the tables in your database, with how they're connected and what they mean, so that a user like you or me who's saying "I want a bar chart with revenue over time" can just work with revenue and time. And the semantic layer translates between what we said and what the database knows about. >>So it speaks English and then it converts it to data language. >>Exactly right. >>Yeah, it's facilitating the exchange of information. And I love this. I like that you actually talked about it in the beginning, the knowledge map, helping people figure out what they might not know. I am not a BI analyst by trade, and I don't always know what's possible to know. And I think it's really great that you're doing that education piece. I'm sure, especially working with AWS companies, depending on their scale, that's gotta be a big part of it. How much does the community play a role in your product development? >>It's huge, because I'll tell you, one of the challenges in embedding is someone sees an amazing experience in Outreach or in Seismic and says, I want that. And I want it to be exactly the way my product is built, but I don't wanna learn a lot.
So what you want to do is have a community of people who have already built things who can help lead the way. And our community: we launched a new version of the Sisense community in early 2022, and we've seen 450% growth in that community. And we've gone from an average of one response... >>450%! I just wanna put a little exclamation point on that. That's awesome. >>We've tripled our organic activity. So now if you post in the Sisense community, it used to be you'd get one response, maybe from us, maybe from a customer. Now it's up to three, and it's continuing to trend up. >>It's amazing how much people are willing to help each other if you just get in the platform. >>It's great. I mean, business is so competitive. >>I think it's time. Instagram challenge, the reels, on John. So we have a new thing we're gonna run by you. Okay, we just call it the bumper sticker for re:Invent, instead of calling it the Instagram reels. If we're gonna do an Instagram reel for 30 seconds, what would be your take on what's going on this year at re:Invent? What you guys are doing? What's the most important story that you would share with folks on Instagram? >>You know, what's been interesting to me is the story with Redshift. Composable, sorry, no: Redshift Serverless. One of the things I've been seeing... >>We know you're thinking about composable a lot. Yes. It's in there, it's in your mouth. >>So the fact that Redshift Serverless is now kind of becoming the de facto standard changes something for my customers. Cuz one of the challenges with Redshift that I've seen in production is, as people use it more, you gotta get more boxes. You have to manage that.
The fact that serverless is now available, that it's the default, means people are just seeing Redshift as a very fast, very responsive repository. And that plays right into the story I'm telling, cuz I'm telling them it's not that hard to put some analysis on top of things. So for me, maybe it's a narrow Instagram reel, but it's an important one. >>Yeah. And that makes it better for you, because you get to embed that, and you get access to better data. Faster data. Higher quality, relevant, updated. >>Yep. And as it gets to that 80% of knowledge workers, they have a consumer-grade expectation of experience. They're expecting that five-millisecond response time. They're not waiting 2, 3, 4, 5, 10 seconds. They're not trained on the old expectations. And so it matters a lot. >>Final question for you. Five years out from now, if things progress the way they're going, with more innovation around data, this front end being very usable, the semantic layer kicking in, you've got Lambda and you've got serverless kind of coming in, helping out along the way: what's the experience gonna look like for a user? What's in your mind's eye? What's their experience? >>I think it shifts almost every role in a business towards being a quantitative one. Talking about, hey, this is what I saw, this is my hypothesis, and this is what came out of it, so here's what we should do next. I'm really excited to see that sort of scientific method move into more functions in the business. Cuz for decades it's been the domain of a few people like me doing strategy, but now I'm seeing it in CSMs, in support people and sales engineers and line engineers. That's gonna be a big shift. >>Awesome. Thank you, Scott. Thank you so much. This has been a fantastic session. We wish you the best at Sisense. John, always a pleasure to share the stage with you. Thank you to everybody who's tuning in; tell us your thoughts.
We're always eager to hear what features have got you most excited. And as you know, we will be live here from Las Vegas at re:Invent, from the show floor, 10 to six all week except for Friday. We'll give you Friday off. With John Furrier, my name's Savannah Peterson. We're theCUBE, the leader in high tech coverage.

Published Date : Nov 29 2022



Manu Parbhakar, AWS & Bob Breitel, IBM | AWS re:Invent 2021


 

>>Welcome back. You're watching theCUBE's coverage of AWS re:Invent 2021. We're here in the Venetian, formerly the Sands convention center, in Las Vegas. My name is Dave Vellante. Really excited to have Bob Breitel here, the director of SAP global alliances at IBM, and Manu... I'm going to try that again. Parbhakar, is that correct? The head of Linux and IBM alliances at AWS. Manu, I'm sorry for bashing your name, but at least I got it right, guys. Great to see you. Thanks for coming on. >>And I'm actually now on the AWS partnership. I had SAP before, so it's great. It's my first re:Invent. >>That's great. That's why I was asking you about Philly before. You don't have the accent though, Bob. >>I'm not a Philly native. >>Because you have the SAP connection there. IBM, AWS. It's like, whoa, what's going on here? >>Well, maybe I'll start and then have Manu make some comments. And I'll just start by saying we're real excited to be here. IBM's a diamond sponsor at re:Invent, and it's great to be in person. We really appreciate AWS being able to put this event on this week and get us back in person; it really makes a difference. And I know there's a lot of people joining virtually as well. IBM and AWS have worked together for a number of years. Maybe we could characterize it as more opportunistic prior, but in the last 12 to 18 months, I think there's been a lot of developments that have really made us come together strategically as partners. I know we'll talk a little bit about Red Hat during the course of the conversation. >>You say opportunistic, like, you mean in the field, versus the more strategic >>Relationship, or strategic. And with IBM's open hybrid cloud strategy, and with so many of our clients preferring AWS as their cloud, we are working together now to meet clients where they're at, to help them get the value of the cloud.
We were talking a little bit about coming out of the pandemic before this. And one of the things that we're seeing with our clients at IBM is that a lot of that low-hanging fruit in the cloud was achieved, maybe the lift and shift or doing some SaaS-based applications, but now it's even more important to rapidly adopt hybrid cloud and cloud technologies to provide your business with flexibility, innovation, transformation, all of those things. So that's why it has been important for us to partner with AWS strategically. Our clients are telling us that when they do move those heavier workloads to the cloud and do it in a hybrid model, they see about two and a half times the value. So with that, our partnership is multi-dimensional. We're doing a lot with IBM Consulting, and Manu will talk a little bit about IBM software and Red Hat. Just one example, Dave: with IBM Consulting, we're now up to almost 10,000 certifications and 10-plus AWS competencies. So that competency chart shows we're knocking them all out on the checkerboard there with IBM Consulting competencies, and we just had the energy one announced this week. So IBM Consulting is one area; software's big too, and Manu's been helping us with that part of the partnership. >>Well, you know, to your point, pick whatever cliche you want: you can't fight fashion, the trend is your friend. A lot of people want to be on AWS. So rather than fighting it, saying, oh, we have our own cloud... no, you've got to meet customers where they are, right? >>Dave, this is where this takes us. You know, the analogy we use between Bob and I... Bob spoke about IBM Consulting, which we know has been a strategic partnership for the last two, two and a half years. I think I'm going to share the best-kept secret in the cloud space right now: IBM software and AWS are now working together. The analogy we use is that IBM software and AWS are like peanut butter and jelly, better together.
And over the last 12 months, the two companies have accelerated working together around three key dimensions: number one, around product; number two, around making sure our joint customers are successful; and number three, around building a robust ecosystem of partners. One thing that we have realized is that just helping customers modernize and migrate is challenging. On the product side, we now have about 15 products on AWS Marketplace. Think about Trusteer or Verify in security, Cloud Pak for Data, Cognos, DataStage. Over the next 12 months, we plan to land all of the Cloud Paks, the containerized versions of IBM software, on AWS and the Marketplace. In addition, many of our customers are now using the managed Red Hat OpenShift service. We launched it earlier in April this year, and we are seeing tremendous customer feedback and tremendous growth there, which is also telling us that customers really like the OpenShift managed service model with one-click deployment. And so our goal over the next 12 months is to launch many more IBM software products as managed service offerings. So that's what we're doing on the product side. On customer success, a great example is how we're helping big oil and gas customers manage the energy transition that we're working through. Schlumberger software around simulation runs on OpenShift on Amazon in a hybrid environment, which is especially critical since a lot of oil and gas data may need to sit on premises because of data residency requirements. I think the third piece is around building an ecosystem of partners for our Red Hat OpenShift services, which we launched in April. We already have 30 partners that are helping customers not only to modernize, but to migrate on AWS. We know modernization is challenging and moving to containers is difficult, so we need this robust ecosystem of partners, and Bob and I, and the IBM and AWS teams, are investing heavily.
We have cash credits for financial incentives, plus technical content, so that our partners can help customers be successful. >>So the Cloud Paks are cool. That makes a lot of sense. And now the acquisition of Red Hat makes it easier. It's a catalyst that gets IBM much more closely aligned to developers, and it makes it easier for things like Cloud Paks to be migrated to the cloud and run cloud native. How did that acquisition affect things from your standpoint, Manu? And Bob, I'd love your thoughts on the relationship. >>The Red Hat acquisition by IBM is a net positive. Red Hat and AWS have been working together for 14 years now, and we have tens of thousands of customers that are running mission-critical workloads such as SAP and Oracle databases. There's a lot of trust that is engendered by working in the field for 14 years, supporting mission-critical customers and mission-critical workloads. And so that relationship has provided a lot of tailwinds to our partnership with IBM software. A lot of the progress we've made in the last six to eight to 12 months is in large part a function of the trust that we have engendered working together with Red Hat. >>Yeah, I'll add, Dave, that I agree with Manu's comments on Red Hat. Red Hat really is the epitome of openness, of open source software, and with the history that Manu described with AWS, there has been excellent adoption of Red Hat on AWS: Red Hat Enterprise Linux, and then most recently Red Hat OpenShift on AWS. And just to give another example on the ecosystem point, just this morning Red Hat and IBM, with a major ISV named Celonis, announced that Celonis will be running one of their key applications and releasing it on ROSA on AWS. What all this means for our clients is faster adoption and acceleration, and being able to innovate in a hybrid way.
So that's really the value that Red Hat is helping to bring to the table, and our Cloud Paks are available on OpenShift, with ROSA as an option as well. So we're excited about the Red Hat partnership. It's really essential to our partnership and to our hybrid cloud strategy. >>You mentioned up front you're happy that AWS decided to have this show. Of course, a lot of people are watching online, and you can get massive scale online, but there's nothing like the live event, and when you make announcements at a live event, there's a little buzz going on and you get feedback. So are you making any hard news here? What announcements can you share? >>Yeah, well, the one on Celonis earlier, with Red Hat and ROSA on top of Red Hat, was one. And just in advance of re:Invent, we announced something in the data and AI space. That's another big area of our partnership: data and AI. We announced that in the oil industry we are partnering together with AWS to be able to get insights on data, so that we can get clean and reusable energy solutions out there. And there's so much untapped data. We know data is such an important resource, so that's an area we're going to partner on with our Cloud Pak for Data on AWS. And of course, underlying everything is OpenShift. So that's one big announcement. We're also doing a lot in security for IBM, and Manu's been working closely with this. Manu, I know you're close to the integrations we're doing with AWS, so I'll let you comment on some of the things in security. >>I mean, everybody's a security company these days, right?
And as he you'll see a lot of the same investments happening over 2022 as we grow the >>Partnership. So what like a QRadar or something like that >>Are, for example, integrating with security hub. That would be great example. >>I mean, it's the, it's the number one topic for CEO's that has been for a while and still will be okay. So give us a little roadmap, you know, maybe Bob, you could start, where do you want to see this relationship go? Um, what can we expect in the, in the coming 12 months? Yeah, well, >>Again, we're super excited about our partnership with AWS. I think we're just scratching the surface of how we're going to add value to our clients on this, on this hybrid cloud journey that they're all going through. And IBM, and this has been in our financial reports and in our earnings and everything, we're investing over a billion dollars in the ecosystem. And so partners like AWS are critical to provide that platform of growth for our clients and innovation for our clients. So all of the things that I talked about in money talked about today, whether it be our IBM consulting capabilities or our IBM software, our red hat, we're going to continue to invest. We talked about the red hat acquisition. IBM has made a few other acquisitions that help drive this partnership and drive value to our clients for adoption, from Instana to Turbonomic X, to some really innovative cloud consulting companies like Knorr cloud in towels. So we're going to continue to make investments. And I think we're just on the tip of the iceberg and we invite everybody at re-invent, either in person, which is exciting or virtually to learn more about our partnership and how we can help you and my new, any additional comments to that. >>Thanks, Bob V have a golden child hair with red hat OpenShift on Amazon. That'd be launched in April. We are seeing tremendous customer adoption. So we suspect that in next year, we'll continue to see solid adoption around red hat OpenShift. 
That momentum is also informing us that customers want a more native experience for IBM software on AWS, and so we are targeting to launch many more IBM software products in a native format on AWS. That would be the big theme for next year. In addition, again, a call to action to our partner community: there's a huge opportunity to help our joint customers modernize and migrate on AWS. Both IBM and AWS are leaning in. We have cash credits to give financial incentives to partners to help our customers migrate and modernize, and we are also creating a lot of technical content, now freely available, so that our partners can start this IBM-focused AWS practice. >>Guys, thanks so much for coming on theCUBE. Congratulations. And look, you know, I often say the next 10 years is not going to be like the last 10 years. The cloud expanding is a really good example. So thank you for your time. Appreciate it. All right, you're watching theCUBE, the leader in high tech coverage, at AWS re:Invent 2021.

Published Date : Dec 1 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Bob | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Dave Volante | PERSON | 0.99+
Dave | PERSON | 0.99+
April | DATE | 0.99+
30 partners | QUANTITY | 0.99+
two companies | QUANTITY | 0.99+
Manu Parbhakar | PERSON | 0.99+
Bob Breitel | PERSON | 0.99+
14 years | QUANTITY | 0.99+
next year | DATE | 0.99+
Las Vegas | LOCATION | 0.99+
Philly | LOCATION | 0.99+
This year | DATE | 0.99+
third piece | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
Manu | PERSON | 0.99+
Instana | ORGANIZATION | 0.99+
Venetian | LOCATION | 0.99+
10 plus | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Rebecca Head | PERSON | 0.99+
2021 | DATE | 0.98+

Manu Parbhakar, AWS & Mike Evans, Red Hat | AWS re:Invent 2021


 

(upbeat music) >> Hey, welcome back everyone to theCube's coverage of AWS re:Invent 2021. I'm John Furrier, host of theCube, wall-to-wall coverage in-person and hybrid. Two great guests here, Manu Parbhakar, Worldwide Leader, Linux and IBM Software Partnership at AWS, and Mike Evans, Vice President of Technical Business Development at Red Hat. Gentlemen, thanks for coming on theCube. Love this conversation, bringing Red Hat and AWS together. Two great companies, great technologies. It really is about software in the cloud, Cloud-Scale. Thanks for coming on. >> Thanks John. >> So get us into the partnership. Okay. This is super important. Red Hat, well known in open source, and as cloud needs become clear, doing amazing work. Amazon, Cloud-Scale, data is a big part of it. Modern software. Tell us about the partnership. >> Thanks John. Super excited to share about our partnership, as we have been partnering for almost 14 years together. We started in the very early days of AWS, and now we have tens of thousands of customers that are running RHEL on EC2. If you look over the last three years, the pace of innovation for our joint partnership has only increased. It has manifested in three key formats. The first one is the pace at which RHEL supports new EC2 instances, like Arm, Graviton, and a lot of features like Nitro. The second is just the portfolio of new RHEL offerings that we have launched over the last three years. We started with RHEL for SQL Server, RHEL High Availability, RHEL for SAP, and then only last month we launched knowledge base support for RHEL customers. Mike, you want to talk about what you're doing with OpenShift and Ansible as well?
And it's been an interesting time, but I feel the next 14 years are going to be exciting in a different way. We now have a very large customer base from almost every industry in the world built on RHEL, and running on AWS. And our goal now is to continue to add additional elements to our offerings, to build upon that and extend it. The largest addition, which we're going to be talking a lot about here at the re:Invent show, was the partnership in April this year, when we launched the Red Hat OpenShift service on AWS as a managed version of OpenShift for container-based workloads. And we're seeing a lot of the customers that have standardized on RHEL on EC2, or ones that are using OpenShift on-premise deployments, as the early adopters of ROSA, but we're also seeing a huge number of new customers who never purchased anything from Red Hat. So, in addition to the customers, we're getting great feedback from systems integrators and ISV partners who are looking to have a software application run both on-premise and in AWS, and with OpenShift being one of the pioneers in enabling both containers and harnessing Kubernetes, ROSA is just a really exciting area for us to track and continue to advance together with AWS. >> It's very interesting. Before I get to ROSA, I want to just get the update on Red Hat and IBM, obviously the acquisition by IBM. How is that impacting the partnership? You can just quickly touch on that. >> Sure, I'll start off. I mean, Red Hat went from a company that was about 15,000 employees competing with a lot of really large technology companies, and we added more than 100,000 field-oriented people when IBM acquired Red Hat to help magnify the Red Hat solutions, and the global scale and coverage of IBM is incredible. I like to give two simple examples of people. One is, I remember our salesforce in EMEA telling me they got a $4 million order from a country in Africa they didn't even know existed.
And IBM had 100 people in it. Or AT&T: AT&T is one of Red Hat's largest accounts, and I think at one point we had seven full-time people on it, while AT&T is one of IBM's largest accounts and they had two seven-storey buildings full of people working with AT&T. So relative to AWS, we now also see IBM embracing AWS more with both software and services, and the magnification of Red Hat based solutions, combined with that embrace, should create some great growth. And I think IBM is pretty excited about being able to sell Red Hat software as well. >> Yeah, go ahead. >> And Manu, I think you have, yeah. >> Yeah, it is definitely very positive, John. You know, just the joint work that Red Hat and AWS have done for the last 14 years, working in the trenches supporting our end customers, is now also providing a lot of tailwinds for the IBM software partnership. We have done some incredible work over the last 12 months around three broad categories: the first one is around product, then what we're doing around customer success, and then what we're doing around sales and marketing. So on the product side, we have listed about 15 products on Marketplace over the course of the last 12 to 15 months. And our goal is to launch all of the IBM Cloud Paks, these are containerized versions of IBM software, on Marketplace by the first half of next year. The other feedback that we are getting from our customers is that, hey, we love IBM software running at Amazon, but we would like to have a cloud-native SaaS version of the software. So there's a lot of work going on right now to make sure that many of these offerings are available in a cloud-native manner, and now we're talking Db2, Cognos, Maximo, (indistinct), on EC2. The second thing that we're doing is making sure that many of these large enterprise customers that are running IBM software are successful.
So our technical teams are attached at the hip, working on the ground floor, making customers like Delta successful in running IBM software. I think the third piece, around sales and marketing, is just building up a vibrant ecosystem: how do we modernize and migrate this IBM software onto Cloud Paks on AWS? So there's a huge push going on here. So (indistinct), you know, the Red Hat partnership is providing a lot of tailwinds to accelerate our partnership with IBM software. >> You know, I've been saying all this year, at Red Hat Summit as well as AnsibleFest, that distributed computing is coming to large scale. And that's really what's happening. I mean, look at what you guys are doing, cause it's amazing. ROSA, Red Hat OpenShift on AWS, very notable to use the term 'on AWS', which actually means something in the partnership, as we learned over the years. How is that going, Mike? Because you launched ROSA on theCube in April, and it had great traction going in. It's in the Marketplace. You've got some integration. It's really a hand-in-glove situation with Cloud-Scale. Take us through, what's the update? >> Yeah, let me let Manu speak first to his AWS view, and then I'll add the Red Hat picture. >> Thanks Mike. John, ROSA is part of an entire container portfolio. So if you look at it, we have ECS, and EKS, the managed Kubernetes service. We have the serverless containers with Fargate. We launched ECS Anywhere. And ROSA is part of that entire portfolio of container services. As you know, two thirds of all container workloads run on AWS, and a big function of that is because we (indistinct) from our customers and understand what the requirements are. There are two sets of key customers that are driving the demand and the early adoption of ROSA. The first set is customers that have standardized on OpenShift on-premises. They love everything that comes out of the box, and they would love to use it on AWS.
So that's the first (indistinct). The second set of customers are, you know, the large RHEL users on EC2, the tens of thousands of customers that we've talked about, that want to move from VMs to containers and want to do DevOps. So it's this set of two customers that are informing our roadmap, as well as our investments around ROSA. We are seeing solid adoption, both in terms of adoption by customers and in how our partners are helping our customers modernize from VMs to containers. So it's a huge priority for our container service, and over the next few years we will continue to increase our investment in the product road map here. >> Yeah, from my perspective, first off, at the high level, one of the most interesting parts of ROSA is being integrated in the AWS console, and not just for where it shows up on the screen, but also all the work behind what that took to get there, and why we did it. And we did it because customers were asking both of us. They were saying, look, OpenShift is a platform; we're going to be building and deploying serious applications at incredible scale on it, and it's really got to have joint high-quality support, joint high-quality engineering. It's got to be rock solid. And so we came to agreement with AWS that the best way to do that was to build it into the console, integrated into the core of an AWS engineering team with Red Hat engineers, arm in arm. So that's a very unique service, and it's not like a high-level SaaS application that runs above everything; it's down in the bowels, and it really needs to be rock solid. So we're seeing great interest, both from end users, as I mentioned, existing customers, new customers, and the partner base, you know, how the systems integrators are coming on board.
There's lots of business and money to be made in modernizing applications as well as building new cloud-native applications. Between Red Hat and AWS, we've got some models around supporting POCs and customer migrations. We've got some joint investments. It's a really ripe area. >> Yeah, that's good stuff. Real quick, what do you think of ROSA versus EKS and ECS? How should people think about that, Mike? (indistinct) >> You've got to go for it, Manu. Your job is to position all these (indistinct). (indistinct) >> John, ROSA is part of our container portfolio services, along with EKS, ECS, Fargate, and the (indistinct) services that we just launched earlier this year. There are sets of customers, both those running OpenShift on-premises that have standardized on ROSA, and then the large set of RHEL customers running RHEL on EC2 that want to use the ROSA service. So, you know, both AWS and Red Hat are continuing to invest in accelerating the roadmap of the service on our platform. We are working on improving the console experience. Also, one of the things we just launched recently is the Amazon controller for Kubernetes, or, you know, the service operators for S3. So over the next few years you will see significant investment from both Red Hat and AWS in this joint service, and this is an integral part of our overall container portfolio. >> And great stuff to get in the console. That's great integration. That's the future. I've got to ask about the Graviton instances. It's been one of the biggest success stories, we believe, in Amazon history; the acquisition of Annapurna has really created great differentiation. And anyone who's in software knows if you have good chips powering apps, they go faster. And if the chips are good, they're less expensive. And that's the innovation. We saw that RHEL now supports Graviton instances.
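The S3 "service operator" Manu mentions corresponds to the AWS Controllers for Kubernetes (ACK) project, which lets a ROSA or OpenShift workload declare AWS resources as Kubernetes objects. As a rough illustration only, a minimal bucket manifest might look like the following; the apiVersion, kind, and field names are assumptions based on the ACK S3 controller, not details from the interview:

```yaml
# Hypothetical ACK manifest: declares an S3 bucket as a Kubernetes
# resource, so cluster workloads can manage AWS services through
# kubectl/oc instead of the AWS console.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: rosa-app-assets
spec:
  name: rosa-app-assets-example-bucket
```

Applied with `kubectl apply -f bucket.yaml`, the controller running in the cluster would reconcile the declared bucket against the real S3 API.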
Tell us more about the Red Hat strategy with Graviton and Arm specifically. How has that impacted your (indistinct) development, and what does it mean for customers? >> Sure. Yeah, it's a pretty fascinating area for me. As I said, I've been at Red Hat for 21 years, and my job is actually looking at new markets and new technologies for Red Hat, and working with our largest partners. So I've been tracking the Arm dynamics for a while, and we've been working with AWS for over two years supporting Graviton. And I'm seeing more enthusiasm now in terms of developers, especially for very horizontal, large-scale applications. And we're excited to be working with AWS directly on it. I think it's going to be a fascinating next two years on Arm, personally. >> Many of the specialized processors, the Trainium and Inferentia instances, all that stuff, can be applied to web services and automation, like cloud-native services, right? It sounds like a good direction. Take us through that. >> John, on our partnership with Red Hat, we are continuing to iterate. As Mike mentioned, the stuff that we've done around Graviton over the last two years is pretty incredible, and the pace at which we are innovating is improving. Around the (indistinct) and the Inferentia instances, we are continuing to work with Red Hat, and, you know, the support for RHEL should come very soon. >> Well, my prediction is that the Graviton success is going to be applied to every single category. You can get that kind of innovation on the software side; that's the proven form of software, right? We've been there. Good software powered by some great performance. Manu, Mike, thank you for coming on and sharing the news and the partnership update. Congratulations on the partnership. Really good. Thank you. >> Excellent, John. Incredible (indistinct).
>> Yeah, this is the future of software as we see it, it's all coming together. Here on theCube, we're bringing you all the action, software being powered by chips. This is theCube's coverage of AWS re:Invent 2021. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Nov 30 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
Manu Parbhakar | PERSON | 0.99+
Mike | PERSON | 0.99+
Mike Evans | PERSON | 0.99+
2008 | DATE | 0.99+
AT&T | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
two customers | QUANTITY | 0.99+
21 years | QUANTITY | 0.99+
AT&T. | ORGANIZATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
Red Hat | TITLE | 0.99+
Amazon | ORGANIZATION | 0.99+
Africa | LOCATION | 0.99+
Manu | PERSON | 0.99+
April | DATE | 0.99+
RHEL | TITLE | 0.99+
$4 million | QUANTITY | 0.99+
April this year | DATE | 0.99+
two sets | QUANTITY | 0.99+
one | QUANTITY | 0.99+
both | QUANTITY | 0.99+
100 people | QUANTITY | 0.99+
Red Hat | TITLE | 0.99+
second set | QUANTITY | 0.99+
Delta | ORGANIZATION | 0.99+
third piece | QUANTITY | 0.99+
first set | QUANTITY | 0.99+
two | QUANTITY | 0.99+
first | QUANTITY | 0.99+
over two years | QUANTITY | 0.99+
One | QUANTITY | 0.99+
first one | QUANTITY | 0.99+
more than 100,000 field | QUANTITY | 0.99+
EC2 | TITLE | 0.99+

Steven Lueck, Associated Bank | IBM DataOps in Action


 

>> From theCube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >> Hi everybody, welcome back. This is Dave Volante, and welcome to this special presentation made possible by IBM. We're talking about DataOps, DataOps in Action. Steve Lueck is here; he's the Senior Vice President and Director of Data Management at Associated Bank. Steve, great to see you. How are things going in Wisconsin? All safe? >> We're doing well. We're staying safe, staying healthy. Thanks for having me, Dave. >> Yeah, you're very welcome. So Associated Bank, a regional bank, Midwest. You cover a lot of territory, not just Wisconsin but a number of other states around there: retail, commercial lending, real estate. I think it's the largest bank in Wisconsin. But tell us a little bit about your business and your specific role. >> Sure, yeah, it's a good intro. We're definitely the largest bank based in Wisconsin, and then we have branches in the Upper Midwest area, so Minnesota, Illinois, and Wisconsin are our primary locations. My role at Associated, I'm Director of Data Management. I've been with the bank a couple of years now, really focused on defining our data strategy overall, everything from data ingestion through consumption of data and analytics, and then also the data governance components, keeping the controls and the rails in place around all of our data and its usage. >> So financial services is obviously one of the more cutting-edge industries in terms of its use of technology. Not only are you good negotiators, but you often are early adopters. You guys were on the big data bandwagon early; a lot of financial services firms were kind of early on in Hadoop. But I wonder if you could tell us a little bit about the business drivers and the pressure points that are informing your digital strategy, your data and DataOps strategy. >> Sure, yeah. I think that one of the key areas for us is that we're trying to
shift from more of a reactive mode into more of a predictive, prescriptive mode from a data and analytics perspective, using our data to infuse and drive more business decisions, but also to infuse it in actual applications and customer experience, etc. So we have a wealth of data at our fingertips. We're really focused on starting to build out that data lake style strategy, making sure that we're ahead of the curve as far as trying to predict what our end users are going to need, and some of the advanced use cases we're going to have before we even know that they actually exist, right? So it's really trying to prepare us for the future and what's next, and then enabling and empowering the business to be able to pivot when we need to, without having everything perfectly prescribed and ready. >> What if we could talk a little bit about the data journey? I know it's kind of a buzzword, but in my career as an independent observer and analyst, I've kind of watched the promise of, whether it was decision support systems or the enterprise data warehouse, you know, giving that 360-degree view of the business, the real-time nature, the customer intimacy, all that. And up until sort of the recent digital meme, I feel as though the industry hasn't lived up to that promise. So I wonder if you could take us through the journey and tell us sort of where you came from and where you are today, and I really want to understand some of the successes you've had. >> Sure, no, that's a great point. I feel like as an industry, we're at a point now where the people, process, and technology have sort of all caught up to each other, right? That real-time streaming analytics, the data-as-a-service mentality, just leveraging web services and APIs more throughout our organization and our industry as a whole, I feel like that's really starting to take shape right now, and all the pieces of that puzzle have come together. So, kind of where we started
from a journey perspective, it was very much your legacy reporting, data warehouse mindset: tell me the data elements you think you're going to need, we'll figure out how to map those in and transform them, we'll figure out how to get those prepared for you. And that whole lifecycle, that waterfall mentality, of how do we get this through the funnel and get it to users. Quality was usually there, the enablement was still there, but it was missing that rapid turnaround. It was also missing the what's-next, right, the things you haven't thought of, almost to the point of discouraging people from asking for too many things, because it got too expensive, it got too hard to maintain. There was some difficulty in that space. So some of the things we're trying to do now is build that enablement mentality of encouraging people to ask for everything. When we bring on new systems to the bank, it's no longer an option how much data they're going to send us, right? We're getting all of the data. We're going to bring that all together for people, and then really start to figure out how this data can now be used, and we almost have to push that out and infuse it within our organization, as opposed to waiting for it to be asked for. So I think bringing that people and process, and now the tools and capabilities, together has really started to make a move for us and in the industry. >> I mean, it's really not an uncommon story, right? You had a traditional data warehouse system, you had some experts that you had to go through to get the data. The business kind of felt like it didn't own the data; it felt like it was imposing every time it made a request, or maybe it was frustrated because it took so long, and then by the time they got the data, perhaps the market had shifted. So it created a lot of frustration. And then, to your point, it became very useful as a
reporting tool, and that was kind of the sweet spot. So how did you overcome that and get to where you are today, and kind of where are you today? >> I was going to say, I think we're still overcoming that; we'll see how this all goes, right? There's a couple of things we've started to enable. First off is just having that concept of scale and an enablement mentality in everything that we do. So when we bring systems on, we bring on everything. We're starting to have those components and pieces in place, and we're starting to build more framework-based, reusable processes and procedures, so that every ask is not brand new, it's not reinventing the wheel and re-solving all that work. I think that's helped expedite our time to market and really get some of the buy-in and support from around the organization. And it's really just finding the right use cases and the different business partners to work with, so that you help them through their journey as well, since they're on a similar roadmap and journey for their own life cycles, in their product element or whatever business line they're in. >> So from a process standpoint, you kind of had to jettison the, you mentioned waterfall before, and move to more of an agile approach. Did it require different skill sets? Talk about the process and the people side of it. >> Yeah, it's been a shift. We've tried to shift more towards, I wouldn't call us formal agile, I would say we're a little bit more lean, from an iterative, backlog type of approach, right? So putting that work together in queues, and having the queue be reprioritized, working with the business owners to help through those things, has been a key success criterion for us in how we start to manage that work, as opposed to opening formal project requests and having all that work funnel through some of the old channels that, like you
mentioned earlier, kind of detracted a little bit from the way things had been done in the past, and added some layers that people felt wouldn't be necessary for what was a small ask in their eyes. You know, I think it also led to some of the data silos and components that we have in place today in the industry, and I don't think our company is alone in having data silos and components of data in different locations. But those are there for a reason; those were there because they're filling a need that had been missing, a gap in the solution. So what we're trying to do is really take that to heart and evaluate what we can do to enable those mindsets and mentalities, and find out what the gap was, and why they had to go get a siloed solution or work around operations and technology and the channels that had been in place. >> What would you say were your biggest challenges in getting from point A to point B, point B being where you are today? >> There were challenges on each of the pillars, right: people, process, technology. People are hard to change, right? Behavioral-type change has been difficult; there are components of that that definitely have been in place. Same with the process side, right? Changing into that backlog-style mentality, working with the users, and having more of that be sort of maintenance-type support work, is a different culture for our organization than traditional project management. And then the tool sets, right? We had to look in and evaluate what tools we need to enable this behavior and this mentality: how do we enable more self-service and exploration, how do we get people the data they need when they need it, and empower them to use it? >> So maybe you could share with us some of the outcomes, and I know we're never done in this business, but thinking about the investments that you've made in tech, people,
in process, you know, the time it takes to get leadership involved, what has been, so far anyway, the business outcome? Can you share any metrics, or is it sort of subjective guidance? >> Yeah, I think from a subjective perspective, some of the biggest things for us have just been our ability to truly start to have that 360-degree view of the customer, which we're probably never going to fully get there officially, right? Everyone's striving for that. But the ability to have all of that data available kind of at our fingertips, have that all consolidated now into one location, one platform, and start to be that hub that redistributes that data to our applications, has been a key component for us. I think some of the other big differentiators for us, and value that we can show from an organizational perspective: we're in an M&A mode, right? We're always looking from a merger and acquisition perspective, and the model that we've built out from a data strategy perspective has proven itself useful over and over in that M&A mentality of how do you rapidly ingest new data sets, get them understood, get them distributed to the right consumers. It's fit our model exactly, and it hasn't been an exception; it's been just part of our overall framework for how we get that data, and it wasn't anything new that we had to do differently because it was M&A. The timelines were probably just a little bit more expedited. The other thing that's been interesting, in the world we're in now from a COVID perspective, is having to pivot and start to change some of the way we do business, with some of the PPP loans, and our business models sort of had to change overnight. Our ability to work with our different lines of business and get them the data they need to help drive those decisions was another scenario where, had we not had the foundational components there in the platform to do some of
this, we would have spun a little bit longer. >> So your DataOps approach, I'm going to use that term, helped you in this COVID situation. I mean, you had the PPP, you had a slew of businesses looking to get access to that money, you had uncertainty with regard to what the rules of the game were, which you as the bank had to adjudicate, but it was really kind of opaque in terms of what you had to do. The volume of loans had to go through the roof, and the time frame was like within days or weeks that you had to provide these. So I wonder if we could talk about that a little bit, and how your approach to the data helped you be prepared for that. >> Yeah, it was a race. I mean, the bottom line was it felt like a race from an industry perspective, as far as how could we get this out there soon enough, fast enough, and provide the most value to our customers. Our applications teams did a phenomenal job of enabling the applications to help streamline some of the application process for the loans themselves. But from a data and reporting perspective behind the scenes, we were there, and we had the tools, capabilities, and readiness to say: we have the data now in our lake, and we can start to make some business-driven decisions around all the different components of what's being processed on a daily basis from an application perspective versus what's been funded, and how those funnel all the way through, doing data quality checks and operational reporting checks to make sure that data moved properly and got booked in the proper ways, because of the rapid nature of how that was all being done. There were other COVID-type use cases as well. We had some different scenarios around Fed reporting and other capabilities that the business wasn't necessarily prepared for. We wouldn't have planned to have some of these types of things and reporting in place, but we were able to deliver because we had access to all the
data, because of these frameworks we had put into place, so that we could pretty rapidly start to turn around some of those data points and analytics for us to make better decisions. >> So given the propensity and the pace of M&A, there has to be a challenge fundamentally in terms of data quality, consistency, governance. Give us the before and after, before being before the DataOps mindset, and after being kind of where you are today. >> I think that's still a journey; we're always trying to get better on that as well. But the DataOps mindset for us really has shifted us to start to think about automation, right? Pipelines, that enablement, constant improvement, and how do we deploy faster, deploy more consistently, and have the right capabilities in place when we need them. Where some of that has come into play from an M&A perspective is really around building scale into everything that we do. The real-time nature, the scalability, the rapid deployment models that we have in place are really where that starts to join forces and become powerful: having the ability to rapidly ingest new data sources, whether we know about them or not, and then having the tools and platforms expose that data to our users and enable our business lines. Whether it's COVID, whether it's M&A, the use cases keep coming up, right? We keep running into the same concept, which is how to rapidly get people the data they need when they need it, but still provide the rails and controls, and make sure that it's governed and controllable along the way as well. >> Let's talk about the tech, though. I wonder if we could spend some time on that. Can you paint a picture for us of what we're looking at here? You've got some traditional EDWs involved, I'm sure, you've got lots of data sources, you may be one of the zookeepers from the Hadoop days with a
lot of you know experimentation there may be some machine intelligence and they are painting a pic before us but sure no so we're kind of evolving some of the tool sets and capabilities as well we have some some generic kind of custom in-house build ingestion frameworks that we've started to build out for how to rapidly ingest and kind of script out the nature of of how we bring those data sources into play what we're what we've now started as well as is a journey down IBM compact product which is really gonna it's providing us that ability to govern and control all of our data sources and then start to enable some of that real-time ad hoc analytics and data preparation data shaping so some of the components that we're doing in there is just around that data discovery pointing that data sources rapidly running data profiles exposing that data to our users obviously very handy in the emanating space and and anytime you get new data sources in but then the concept of publishing that and leveraging some of the AI capabilities of assigning business terms in the data glossary and those components is another key component for us on the on the consumption side of the house for for data we have a couple of tools in place where Cognos shop we do a tableau from a data visualization perspective as well that what that were we're leveraging but that's where cloud pack is now starting to come into play as well from a data refinement perspective and giving the ability for users to actually go start to shape and prep their data sets all within that governed concept and then we've actually now started down the enablement path from an AI perspective with Python and R and we're using compact to be our orchestration tool to keep all that governed and controlled as well enable some some new AI models and some new technologies in that space we're actually starting to convert all of our custom-built frameworks into python now as well so we start to have some of that embedded within cloud 
pack and we can start to use some of the rails of those frameworks with it within them okay so you've got the ingest and ingestion side you've done a lot of automation it sounds like called the data profiling that's maybe what classification and automating that piece and then you've got the data quality piece the governance you got visualization with with tableau and and this kind of all fits together in a in an open quote unquote open framework is that right yeah I exactly I mean the the framework itself from our perspective where we're trying to keep the tools as as consistent as we can we really want to enable our users to have the tools that they need in the toolbox and and keep all that open what we're trying to focus on is making sure that they get the same data the same experience through whatever tool and mechanism that they're consuming from so that's where that platform mentality comes into place having compact in the middle to help govern all that and and reprovision some of those data sources out for us has it has been a key component for us well see if it sounds like you're you know making a lot of progress or you know so the days of the data temple or the high priest of data or the sort of keepers of that data really to more of a data culture where the businesses kind of feel ownership for their own data you believe self-service I think you've got confidence much more confident than the in the compliance and governance piece but bring us home just in terms of that notion of data culture and where you are and where you're headed no definitely I think that's that's been a key for us too as as part of our strategy is really helping we put in a strategy that helps define and dictate some of those structures and ownership and make that more clear some of the of the failures of the past if you will from an overall my monster data warehouse was around nobody ever owned it there was there wasn't you always ran that that risk of either the loudest consumer 
actually owned it or no one actually owned it what we've started to do with this is that Lake mentality and and having all that data ingested into our our frameworks the data owners are clear-cut it's who sends that data in what is the book record system for that source data we don't want a ability we don't touch it we don't transform it as we load it it sits there and available you own it we're doing the same mentality on the consumer side so we have we have a series of structures from a consumption perspective that all of our users are consuming our data if it's represented exactly how they want to consume it so again that ownership we're trying to take out a lot of that gray area and I'm enabling them to say yeah I own this I understand what I'm what I'm going after and and I can put the the ownership and the rule and rules and the stewardship around that as opposed to having that gray model in the middle that that that we never we never get but I guess to kind of close it out really the the concept for us is enabling people and end-users right giving them the data that they need when they need it and it's it's really about providing the framework and then the rails around around doing that and it's not about building out a formal bill warehouse model or a formal lessor like you mentioned before some of the you know the ivory tower type concepts right it's really about purpose-built data sets getting the giving our users empowered with the data they need when they need it all the way through and fusing that into our applications so that the applications and provide the best user experiences and and use the data to our advantage all about enabling the business I got a shove all I have you how's that IBM doing you know as a as a partner what do you like what could they be doing better to make your life easier sure I think I think they've been a great partner for us as far as that that enablement mentality the cloud pack platform has been a key for us we wouldn't 
be where we are without that tool said I our journey originally when we started looking at tools and modernization of our staff was around data quality data governance type components and tools we now because of the platform have released our first Python I models into the environment we have our studio capabilities natively because of the way that that's all container is now within cloud back so we've been able to enable new use cases and really advance us where we would have a time or a lot a lot more technologies and capabilities and then integrate those ourselves so the ability to have that all done has or and be able to leverage that platform has been a key to helping us get some of these roles out of this as quickly as we have as far as a partnership perspective they've been great as far as listening to what what the next steps are for us where we're headed what can we what do we need more of what can they do to help us get there so it's it's really been an encouraging encouraging environment I think they as far as what can they do better I think it's just keep keep delivering write it delivery is ping so keep keep releasing the new functionality and features and keeping the quality of the product intact well see it was great having you on the cube we always love to get the practitioner angle sounds like you've made a lot of progress and as I said when we're never finished in this industry so best of luck to you stay safe then and thanks so much for for sharing appreciate it thank you all right and thank you for watching everybody this is Dave Volante for the cube data ops in action we got the crowd chat a little bit later get right there but right back right of this short break [Music] [Music]
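A rough illustration of the kind of custom Python ingestion framework described above (the names and structure here are hypothetical, not the speaker's actual code): land the source data in the lake untouched, record which book-of-record system owns it, and capture a basic profile on the way in.

```python
import csv
import io
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IngestedAsset:
    """Raw source data plus the metadata captured at load time."""
    source_system: str                      # book-of-record system, i.e. the owner
    rows: list = field(default_factory=list)
    loaded_at: str = ""
    profile: dict = field(default_factory=dict)

def ingest(source_system: str, raw_csv: str) -> IngestedAsset:
    """Load the source as-is (no transformation on load) and attach metadata."""
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    columns = list(rows[0].keys()) if rows else []
    profile = {
        "row_count": len(rows),
        "columns": columns,
        # simple per-column completeness: share of non-empty values
        "completeness": {
            col: sum(1 for r in rows if r[col]) / len(rows) for col in columns
        },
    }
    return IngestedAsset(
        source_system=source_system,
        rows=rows,
        loaded_at=datetime.now(timezone.utc).isoformat(),
        profile=profile,
    )

loans = ingest("loan_origination", "loan_id,amount\nL1,50000\nL2,\n")
print(loans.profile["row_count"])               # 2
print(loans.profile["completeness"]["amount"])  # 0.5
```

Profiling at ingest means quality metadata exists before anyone consumes the data, which matches the "we don't transform it as we load it, but we know what we have" mentality described above.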

Published Date : May 28 2020



Julie Lockner, IBM | IBM DataOps 2020


 

>>from the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube conversation. >>Hi, everybody, this is Dave Volante with theCUBE. Welcome to this special digital presentation. We're really digging into how IBM is operationalizing and automating the AI and data pipeline, not only for its clients, but also for itself. And with me is Julie Lockner, who looks after offering management in the IBM Data and AI portfolio. Really great to see you again. >>Great, great to be here. Thank you. >>Talk a little bit about the role you have here at IBM. >>Sure. So my responsibility in offering management in the Data and AI organization is really twofold. One is I lead a team that implements all of the back-end processes, really the operations behind any time we deliver a product from the Data and AI team to the market. So think about all of the release cycle management, our product management discipline, etcetera. The other role that I play is really making sure that we are working with our customers and making sure they have the best customer experience, and a big part of that is developing the data ops methodology. It's something that I needed internally for my own line-of-business execution, but it's now something that our customers are looking to implement in their shops as well. >>Well, good. I really want to get into that. So let's start with data ops. I mean, I think, you know, a lot of people are familiar with Dev Ops; maybe not everybody's familiar with data ops. What do we need to know about data ops? >>Well, you bring up the point that everyone knows Dev Ops. And in fact, what data ops really does is bring a lot of the benefits that Dev Ops did for application development to the data management organizations. It is a data management set of principles that helps organizations bring business-ready data to their consumers quickly. It borrows from Dev Ops, similarly, where you have a data pipeline associated with a business value requirement: I have this business initiative, it's going to drive this much revenue or this much cost savings, and this is the data that I need to be able to deliver it. How do I develop that pipeline and map to the data sources, know what data it is, know that I can trust it (so ensuring that it has the right quality, and that I'm actually using the data for what it was meant for), and then put it to use? Historically, most data management practices deployed a waterfall-like implementation methodology, and what that meant is all the data pipeline projects were implemented serially, based on potentially a first-in-first-out program management office. With a Dev Ops mental model, and the idea of being able to slice through all of the different silos that are required to collect the data, to organize it, to integrate it, to validate its quality, to create those data integration pipelines, and then present it to the dashboard (whether it's a Cognos dashboard, an operational process, or even a data science team), that whole end-to-end process gets streamlined through what we're calling data ops methodology. >>So, as you well know, we've been following this market since the early days of Hadoop. People struggle with their data pipelines. It's complicated for them: there's a raft of tools, and they spend most of their time wrangling data, preparing data, moving data, data quality, different roles within the organization. So it sounds like, you know, to borrow from Dev Ops, data ops is all about streamlining that data pipeline, helping people really understand and communicate across it, end to end, as you're saying. But what's the ultimate business outcome that you're trying to drive?
>>So when you think about projects that require data to, again, cut costs, automate a business process, or drive new revenue initiatives: how long does it take to get from having access to the data to making it available? Every time delay that is spent trying to connect to data sources, or trying to find subject matter experts who understand what the data means and can verify its quality, all of those steps across different teams and different disciplines introduce delay in delivering high-quality data. So the business value of data ops is always associated with something that the business is trying to achieve, but with a time element. If, for every day we don't have this data to make a decision, we're either making money or losing money, that's the value proposition of data ops. So it's about taking things that people are already doing today and figuring out the quickest way to do them, through automation or workflows, and just cutting through all the political barriers that often come up when this data crosses different organizational boundaries. >>Yes, so speed, time to insight, is critical. But, you know, with Dev Ops you're really bringing together the skill sets into, sort of, you know, one super dev or one super ops. It sounds like with data ops it's really more about everybody understanding their role and having communication and line of sight across the entire organization. It's not trying to make everybody a superhuman data person. It's the whole, it's the group, it's the team effort, really. It's really a team game here, isn't it? >>Well, that's a big part of it. So, just like any type of practice, there are people aspects, process aspects, and technology, right? So people, process, technology. And while you were describing it, like having that super team that knows everything about the data: the only way that's possible is if you have a common foundation of metadata.
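The collect, organize, integrate, validate, present flow that data ops streamlines can be pictured as a chain of small stages. This is a hedged sketch with invented stage names, not IBM's implementation:

```python
from typing import Callable

# Each stage takes the working data set and returns the next version of it.
Stage = Callable[[list], list]

def run_pipeline(data: list, stages: list) -> list:
    """Run the data through each stage in order, like a tiny data ops pipeline."""
    for stage in stages:
        data = stage(data)
    return data

# Hypothetical stages for a revenue-initiative data set.
def integrate(rows):
    # join in a business attribute (hard-coded here for brevity)
    return [{**r, "region": "northeast"} for r in rows]

def validate(rows):
    # drop rows that fail a basic quality rule
    return [r for r in rows if r["revenue"] >= 0]

def present(rows):
    # shape the output for the dashboard consumer
    return [{"customer": r["customer"], "revenue": r["revenue"]} for r in rows]

raw = [{"customer": "acme", "revenue": 120}, {"customer": "dud", "revenue": -1}]
result = run_pipeline(raw, [integrate, validate, present])
print(result)  # [{'customer': 'acme', 'revenue': 120}]
```

The design point is that each silo's step becomes an explicit, automatable stage, so the end-to-end path from source to dashboard is one repeatable run rather than a serial, waterfall hand-off.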
So we've seen a resurgence in the data catalog market in the last, you know, six or seven years, and the innovation in the data catalog market has actually enabled us to drive more data ops pipelines. Meaning, as you identify data assets, you capture the metadata, you capture its meaning, you capture information that can be shared with stakeholders. It really then becomes more of a central repository where people can really quickly know what data they have, really quickly understand what it means and its quality, and very quickly, with the right proper authority, privacy rules included, put it to use for models, dashboards, operational processes. >>Okay. And we're gonna talk about some examples, and one of them, of course, is IBM's own internal example. But help us understand where you advise clients to start. I want to get into it: where do I get started? >>Yeah. So, traditionally, what we've seen with these large data management and data governance programs is that sometimes our customers feel like this is a big pill to swallow. And what we've said is: look, there's an opportunity here to quickly define a small project, align it to a high-value business initiative, target something where you can quickly gain access to the data, map out these pipelines, and create a squad of skills. So it includes a person with Dev Ops-type programming skills to automate and instrument a lot of the technology, a subject matter expert who understands the data sources and their meaning, and the line-of-business executive who translates that information into the business project and associates it with business value. So when we say "How do you get started?", we've developed, I would call it, a pretty basic maturity model to help organizations figure out where they are in terms of the technology, and where they are organizationally, in knowing who the right people should be involved in these projects.
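As a minimal sketch of that metadata foundation (a toy catalog, not the Watson Knowledge Catalog API): register each asset once with its business term, owner, and quality score, and anyone can search it.

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    name: str
    business_term: str    # the meaning assigned to the asset
    owner: str            # subject matter expert / book-of-record system
    quality_score: float  # 0.0 to 1.0, from the last data quality scan

class DataCatalog:
    """Toy enterprise catalog: register once, then anyone can search."""
    def __init__(self):
        self._entries = {}

    def register(self, entry: CatalogEntry) -> None:
        self._entries[entry.name] = entry

    def search(self, term: str) -> list:
        # find assets whose assigned business term matches the query
        return [e for e in self._entries.values() if term in e.business_term]

catalog = DataCatalog()
catalog.register(CatalogEntry("crm.contacts", "customer master", "CRM team", 0.97))
catalog.register(CatalogEntry("erp.orders", "sales order", "ERP team", 0.92))

hits = catalog.search("customer")
print([h.name for h in hits])  # ['crm.contacts']
```

The value is exactly what the conversation describes: a consumer can find the asset, see who owns it and how trustworthy it is, before ever touching the data itself.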
And then, from a process perspective, we've developed some pretty prescriptive project plans that help you nail down: what are the data elements that are critical for this business initiative? And then we have, for each role, what their jobs are: to consolidate the data sets, map them together, and present them to the consumer. We find that six-week projects, typically three sprints, are the perfect timeline to create one of these very short, quick-win projects. Take that as an opportunity to figure out where the bottlenecks are in your own organization, where your skill shortages are, and then use the outcome of that six-week sprint to focus on filling the gaps, kick off the next project, and iterate. Celebrate the success and promote the success, because it's typically tied to a business value, to help create momentum for the next one. >>That's awesome. I want to get into some examples. I mean, we're both Massachusetts-based; normally you'd be in our studio and we'd be sitting here face to face, but obviously, with COVID-19, in this crisis world, sheltering in place, you're up somewhere in New England and I happen to be in my studio, but I'm the only one here. So relate this to COVID. How has data ops, or maybe you have a concrete example, helped inform, or actually anticipate and keep up to date with, what's happening? >>Yeah, well, I mean, we're all experiencing it. I don't think there's a person on the planet who hasn't been impacted by what's been going on with this COVID pandemic crisis. So we started down this data ops journey a year ago. I mean, this isn't something that we just decided to implement a few weeks ago. We've been working on developing the methodology and getting our own organization in place, so that we could respond the next time we needed to act upon a data-driven decision.
So part of step one of our journey has really been working with our global chief data officer, Inderpal, who I believe you have had an opportunity to meet and interview. So part of this journey has been working with our corporate organization. I'm in a line-of-business organization, where we've established the roles and responsibilities, and we've established the technology stack based on our Cloud Pak for Data and Watson Knowledge Catalog. So I'll use that as the context. Now we're faced with a pandemic crisis, and I'm being asked in my business unit to respond very quickly: how can we prioritize the offerings that are going to help those in critical need, so that we can get those products out to market and offer a 90-day free use for governments and hospital agencies? So in order for me to do that, as the operations lead for our team, I needed to have access to our financial data, I needed to have access to our product portfolio information, and I needed to understand our cloud capacity. So in order for me to respond with the offers that we recently announced (you can take a look at some of the examples with our Watson Citizen Assistant program), I was able to provide the financial information required for us to make those products available to governments, hospitals, state agencies, etcetera. >>That's a perfect example. Now, to set the stage: back at the corporate global chief data office organization, they implemented some technology that allowed us to ingest data, automatically classify it, automatically assign metadata, and automatically associate data quality, so that when my team started using that data, we knew what the status of that information was when we started to build our own predictive models.
>>And so that's a great example of how we've partnered with a corporate central organization and taken advantage of the automated set of capabilities, without having to invest in any additional resources or headcount, and been able to release products within a matter of a couple of weeks. >>And that automation is a function of machine intelligence, is that right? And obviously some experience. But you and I, when we were consultants doing this by hand, we couldn't have done this; we couldn't have done it at scale, anyway. Is it machine intelligence and AI that allows us to do this? >>That's exactly right. And you know, our organization is Data and AI, so we happen to have the research and innovation teams that are building a lot of this technology, so we have somewhat of an advantage there. But you're right: the alternative to what I've described is manual spreadsheets. It's querying databases. It's sending emails to subject matter experts asking them what this data means, and if they're out sick or on vacation, you have to wait for them to come back; all of this was a manual process. In the last five years, we've seen the data catalog market really become this augmented data catalog, and the augmentation means it's automation through AI. So with years of experience and natural language understanding, we can comb through a lot of the metadata that's available electronically, we can comb through unstructured data and categorize it, and if you have a set of business terms that have industry-standard definitions, through machine learning we can automate what you and I did as consultants manually, in a matter of seconds. That's the impact that AI is having in our organization, and now we're bringing this to the market. It's a big part of where I'm investing my time, both internally and externally: bringing these types of concepts and ideas to the market. >>So I'm hearing.
First of all, one of the things that strikes me is you've got multiple data sources, and data that lives everywhere. You might have your supply chain data in your ERP; maybe that sits on-prem. You might have some sales data that's sitting in a SaaS in a cloud somewhere. You might have, you know, weather data that you want to bring in. In theory, anyway, the more data you have, the better insights you can gather, assuming you've got the right data quality. But so let me start with, like, where the data is, right? It's anywhere; you don't know where it's going to be, but you know you need it. So that's part of this, right? Being able to get to the data quickly. >>Yeah, it's funny you bring it up that way. I actually look at it a little differently. When you start these projects, the data was in one place, and then by the time you get through the end of a project, you find out that it's moved to the cloud. So the data location actually changes while we're in the middle of projects. Or even during this pandemic crisis, we have many organizations that are using this as an opportunity to move to SaaS. So what was on-prem is now cloud. But that shouldn't change the definition of the data. It shouldn't change its meaning. It might change how you connect to it. It might also change your security policies or privacy laws: now, all of a sudden, you have to worry about where that data is physically located, and am I allowed to share it across national boundaries, where before we knew physically where it was. So when you think about data ops: data ops is a process that sits on top of where the data physically resides, and because we're mapping metadata and we're looking at these data pipelines and automated workflows, part of the design principle is to set it up so that it's independent of where the data resides.
However, you have to have placeholders in your metadata and in your tool chain, where we're automating these workflows, so that you can accommodate when the data decides to move because a corporate policy changed from on-prem to cloud. And that's a big part of what data ops offers. It's the same thing, by the way, for Dev Ops: they've had to accommodate building in, you know, platform-as-a-service versus on-prem development environments. It's the same for data ops. >>And you know, the other part that strikes me in listening to you is scale, and it's not just about, you know, scale with the cloud operating model. It's also about what you were talking about: the auto-classification, the automated metadata. You can't do that manually. You've got to be able to do that with automation in order to scale. That's another key part of data ops, is it not? >>Well, it's a big part of the value proposition and a big part of the business case. Right. When you and I started in this business, you know, big data became the thing. People just moved all sorts of data sets to these Hadoop clusters without capturing the metadata, and so, as a result, in the last 10 years, this information is out there, but nobody knows what it means anymore. So you can't go back with an army of people and have them catalog these data sets, because a lot of the context was lost. But you can use automated technology; you can use automated machine learning with natural language understanding to do a lot of the heavy lifting for you. And a big part of data ops workflows, and building these pipelines, is to do what we call management by exception. So if your algorithm says it's 80% confident that this is a phone number, and your organization has a low risk tolerance, that probably will go to an exception.
But if you have a, you know, a match algorithm that comes back and says it's 99% sure this is an email address, and you have a threshold that's 98%, it will automate much of the work that we used to have to do manually. So that's an example of how you can automate away manual work, and have some human interaction based on your risk threshold. >>That's awesome. I mean, you're right: no schema on write, just throw it into a data lake, and the data lake becomes a data swamp. We all know that joke. Okay, I want to understand a little bit, and maybe you have some other examples of some of the use cases here, but some of the maturity of where customers are. It seems like you've got to start by just understanding what data you have, cataloging it, getting your metadata act in order. But then you've got a data quality component before you can actually implement and get to insight. So, you know, where are customers on the maturity model? Do you have any other examples that you can share? >>Yeah. So when we look at our data ops maturity model, we tried to simplify it (I mentioned this earlier) so that really anybody can get started. They don't have to have a full governance framework implemented to take advantage of the benefits data ops delivers. So what we did is we said you can categorize your data ops programs into really three things. One is: how well do you know your data? Do you even know what data you have? The second one is: can you trust it? Like, can you trust its quality? Can you trust its meaning? And the third one is: can you put it to use? So if you really think about it, when you begin with "what data do you know?", the first step is: how are you determining what data you know? If you are using spreadsheets, replace them with a data catalog.
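The management-by-exception routing just described, auto-accepting a classification only when the model's confidence clears the organization's risk threshold, could be sketched as follows. The 80%, 99%, and 98% figures come from the conversation; the function and field names are hypothetical:

```python
def route(match: dict, threshold: float) -> str:
    """Auto-accept a metadata classification, or queue it for a human reviewer."""
    return "auto-accept" if match["confidence"] >= threshold else "exception-queue"

# The two cases from the conversation; the threshold reflects risk tolerance.
phone = {"column": "contact_no", "label": "phone number", "confidence": 0.80}
email = {"column": "contact",    "label": "email address", "confidence": 0.99}

RISK_THRESHOLD = 0.98  # a low-risk-tolerance organization sets this high

print(route(phone, RISK_THRESHOLD))  # exception-queue (0.80 < 0.98)
print(route(email, RISK_THRESHOLD))  # auto-accept (0.99 >= 0.98)
```

Tuning the single threshold is how an organization trades manual review effort against the risk of a wrong automatic classification.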
If you have a departmental, line-of-business catalog and you need to start sharing information across departments, then start expanding to an enterprise-level data catalog. Now, you mentioned data quality. So the first step is: do you even have a data quality program, right? Have you even established what your criteria are for high-quality data? Have you considered what your data quality score is comprised of? Have you mapped out what your critical data elements are to run your business? Most companies have done that for their governed processes. But for these new initiatives (and when you identify, as in my example with the COVID crisis, which products we're gonna help bring to market quickly), I need to be able to find out what the critical data elements are, and can I trust them? Have I even done a quality scan, and have teams commented on their trustworthiness to be used in this case? If you haven't done anything like that in your organization, that might be the first place to start: pick the critical data elements for this initiative, assess their quality, and then start to implement the workflows to remediate. And then, when you get to putting it to use, there are several methods for making data available. One is simply making data available to a small set of users. That's what most people do: well, first they make a spreadsheet of the data available, but then, if they need to have multiple people access it, that's when, like, a data mart might make sense. Technology like data virtualization eliminates the need for you to move data while you're in this prototyping phase, and that's a great way to get started. It doesn't cost a lot of money to get a virtual query set up to see if this is the right join or the right combination of fields required for this use case. Eventually, you'll get to the need for a high-performance ETL tool for data integration.
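One hedged way to picture a data quality score: roll per-element completeness and validity checks over the critical data elements up into a single pass rate. The rules and field names here are invented for illustration:

```python
def quality_score(rows: list, rules: dict) -> float:
    """Share of (row, critical element) checks that pass: present AND valid."""
    checks = passed = 0
    for row in rows:
        for element, is_valid in rules.items():
            checks += 1
            value = row.get(element)
            # a check passes only if the value is present and satisfies its rule
            if value not in (None, "") and is_valid(value):
                passed += 1
    return passed / checks if checks else 0.0

# Hypothetical critical data elements for a loan data set.
rules = {
    "loan_id": lambda v: v.startswith("L"),
    "amount": lambda v: float(v) > 0,
}
rows = [
    {"loan_id": "L1", "amount": "50000"},
    {"loan_id": "L2", "amount": ""},  # fails the completeness check on amount
]
print(quality_score(rows, rules))  # 0.75, i.e. 3 of 4 checks pass
```

A real program would weight elements by criticality and track the score over time, but the shape is the same: explicit criteria per critical data element, scanned automatically.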
But nirvana is when you really get to that self-service data prep, where users can query a catalog and say, "These are the data sets I need." It presents you a list of data assets that are available, I can point and click at the columns I want as part of my data pipeline, I hit go, and it automatically generates that output for a data science use case or a BI dashboard, right? That's the most mature model: being able to iterate on that so quickly that as soon as you get feedback that a data element is wrong, or you need to add something, you can do it push-button. And that's where data ops maturity should bring organizations to. >>Well, Julie, I think there's no question that this COVID crisis has accentuated the importance of digital. You know, we talk about digital transformation a lot, and it's certainly real, although I would say a lot of the people we talk to will say, "Well, you know, not on my watch," or "I'll be retired before that all happens." Well, this crisis is accelerating that transformation, and data is at the heart of it. You know, digital means data, and if you don't have your data story together and your act together, then you're not gonna be able to compete. And data ops really is a key aspect of that. So give us a parting word. >>Yeah, I think this is a great opportunity for us to really assess how well we're leveraging data to make strategic decisions. And if there hasn't been a more pressing time to do it, it's now, when our entire engagement becomes virtual. This interview is virtual, right? Everything now creates a digital footprint that we can leverage to understand where our customers are having problems and where they're having successes. You know, let's use the data that's available, and use data ops to make sure that we can generate access to that data, know it, trust it, and put it to use, so that we can respond to those in need when they need it. >>Julie Lockner, you're an incredible practitioner, really hands-on. Really appreciate you coming on the Cube and sharing your knowledge with us. Thank you. >>Thank you very much. It was a pleasure to be here. >>All right, and thank you for watching, everybody. This is Dave Volante for the Cube, and we will see you next time.

Published Date : May 28 2020



UNLISTED FOR REVIEW Julie Lockner, IBM | DataOps In Action


 

>> Narrator: From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >> Hi everybody, this is Dave Volante with the Cube, and welcome to this special digital presentation. We're really digging into how IBM is operationalizing and automating the AI and data pipeline, not only for its clients but also for itself. And with me is Julie Lockner, who looks after offering management in IBM's Data and AI portfolio. Julie, great to see you again. >> Great to be here, thank you. >> Talk a little bit about the role you have here at IBM. >> Sure. My responsibility in offering management in the Data and AI organization is really twofold. One is I lead a team that implements all of the back-end processes, really the operations behind any time we deliver a product from the Data and AI team to the market: think about all of the release cycle management, pricing, product management discipline, etc. The other role that I play is making sure that we are working with our customers and that they have the best customer experience, and a big part of that is developing the data ops methodology. It's something that I needed internally for my own line of business execution, but it's now something that our customers are looking to implement in their shops as well. >> Well, good. I really want to get into that, so let's start with data ops. I think a lot of people are familiar with DevOps; maybe not everybody's familiar with data ops. What do we need to know about data ops? >> Well, you bring up the point that everyone knows DevOps, and in fact what data ops really does is bring a lot of the benefits that DevOps brought to application development to the data management organizations. So what is data ops? It's a set of data management principles that helps organizations bring business-ready data to their consumers quickly. It borrows from DevOps similarly, where you have a data pipeline that's associated with a business value requirement: I have this business initiative, it's going to drive this much revenue or this much cost savings, and this is the data that I need to be able to deliver it. How do I develop that pipeline, map to the data sources, know what data it is, know that I can trust it, ensuring that it has the right quality and that I'm actually using the data for what it was meant for, and then put it to use? Historically, most data management practices deployed a waterfall-like implementation methodology, which meant all the data pipeline projects were implemented serially, prioritized on a potentially first-in, first-out basis by a program management office. With a DevOps mental model, the idea is being able to slice through all of the different silos required to collect the data, organize it, integrate it, validate its quality, create those data integration pipelines, and then present it to the dashboard, like a Cognos dashboard for an operational process, or even to a data science team. That whole end-to-end process gets streamlined through what we're calling the data ops methodology. >> As you well know, we've been following this market since the early days of Hadoop, and people struggle with their data pipelines. It's complicated for them: there's a raft of tools, and they spend most of their time wrangling data, preparing data, and improving data quality, across different roles within the organization. So it sounds like, to borrow from DevOps, data ops is all about streamlining that data pipeline, helping people really understand and communicate across end to end, as you're saying. But what's the ultimate business outcome that you're trying to drive? >> So when you think about projects that require data to, again, cut cost, automate a business process, or drive new revenue initiatives: how long does it take to get from having access to the data to making it available? That
duration, every time delay that is spent wasted trying to connect to data sources, trying to find subject matter experts who understand what the data means and can verify its quality, all of those steps across different teams and different disciplines introduce delay in delivering high-quality data fast. So the business value of data ops is always associated with something that the business is trying to achieve, but with a time element. If for every day we don't have this data to make a decision we're either making money or losing money, that's the value proposition of data ops. It's about taking things that people are already doing today and figuring out the quickest way to do them, through automation, through workflows, and just cutting through all of the political barriers that often come up when these data sets cross different organizational boundaries. >> Yeah, so speed, time to insight, is critical. But with DevOps you're really bringing together the skill sets into sort of one super dev or one super ops. With data ops it sounds like it's really more about everybody understanding their role and having communication and line of sight across the entire organization. It's not trying to make everybody a superhuman data person; it's the group, it's the team effort. It's really a team game here, isn't it? >> Well, that's a big part of it. Just like any type of practice, there are people aspects, process aspects, and technology, right? People, process, technology. And while you're describing it as having that super team that knows everything about the data, the only way that's possible is if you have a common foundation of metadata. We've seen a resurgence in the data catalog market in the last six, seven years, and the innovation in the data catalog market has actually enabled us to drive more data ops pipelines. Meaning, as you identify data assets, you've captured the metadata, you capture its meaning, you capture information that can be shared with stakeholders. It really then becomes an essential repository for people to really quickly know what data they have, really quickly understand what it means and its quality, and very quickly, with the right proper authority, privacy rules included, put it to use for models, dashboards, and operational processes. >> Okay, and we're going to talk about some examples, and one of them of course is IBM's own internal example. But help us understand where you advise clients to start. I want to get into it: where do I get started? >> Yeah, so traditionally what we've seen with these large data management and data governance programs is that sometimes our customers feel like this is a big pill to swallow, and what we've said is: look, there's an opportunity here to quickly define a small project, align it to a high-value business initiative, target something where you can quickly gain access to the data, map out these pipelines, and create a squad of skills. That includes a person with DevOps-type programming skills to automate and instrument a lot of the technology, a subject matter expert who understands the data sources and their meaning, and a line of business executive who can translate that information to the business project and associate it with business value. So when we say, "how do you get started," we've developed what I would call a pretty basic maturity model to help organizations figure out where they are in terms of the technology, and where they are organizationally in knowing who the right people are to involve in these projects. And from a process perspective, we've developed some pretty prescriptive project plans that help you nail down which data elements are critical for this business initiative, and then we have, for each role, what their jobs are to consolidate the data sets, map them together, and present them to the consumer. We find that
six-week projects, typically three sprints, are the perfect timeline to create one of these very short quick-win projects. Take that as an opportunity to figure out where the bottlenecks are in your own organization, where your skill shortages are, and then use the outcome of that six-week sprint to focus on filling in the gaps, kick off the next project, and iterate. Celebrate the success and promote the success, because it's typically tied to a business value, to help create momentum for the next one. >> All right, that's awesome. I want to now get into some examples. We're both Massachusetts-based; normally you'd be in our studio and we'd be sitting here face to face. Obviously, with COVID-19 and this crisis, we're all sheltering in place. You're up somewhere in New England; I happen to be in my studio, but I'm the only one here. So relate this to COVID. How has data ops helped inform, or maybe you have a concrete example, or actually helped anticipate and keep up to date with what's happening? >> Yeah, well, we're all experiencing it. I don't think there's a person on the planet who hasn't been impacted by what's been going on with this COVID pandemic crisis. We started down this data ops journey a year ago; this isn't something that we just decided to implement a few weeks ago. We've been working on developing the methodology and getting our own organization in place so that we could respond the next time we needed to act upon a data-driven decision. Part of step one of our journey has really been working with our Global Chief Data Officer, Inderpal, who I believe you have had an opportunity to meet and interview. So part of this journey has been working with our corporate organization, and I'm in the line of business organization, where we've established the roles and responsibilities, and we've established the technology stack based on our Cloud Pak for Data and Watson Knowledge Catalog. So I use that as the context. Now we're faced with a pandemic crisis, and I'm being asked in my business unit to respond very quickly: how can we prioritize the offerings that are going to help those in critical need, so that we can get those products out to market and offer 90-day free use for governments and hospital agencies? In order for me to do that, as the operations lead for our team, I needed to have access to our financial data, our product portfolio information, and our cloud capacity. So in order for me to respond with the offers that we recently announced, and you can take a look at some of the examples with our Watson Citizen Assistant program, I was able to provide the financial information required for us to make those products available to governments, hospitals, state agencies, etc. That's a perfect example. Now, to set the stage, back in the corporate Global Chief Data Office organization, they implemented technology that allowed us to ingest data, automatically classify it, automatically assign metadata, and automatically associate data quality, so that when my team started using that data, we knew the status of that information as we started to build our own predictive models. So that's a great example of how we partnered with a corporate central organization, took advantage of an automated set of capabilities without having to invest in any additional resources or headcount, and were able to release products within a matter of a couple of weeks. >> And that automation is a function of machine intelligence, is that right? And obviously some experience. But you and I, when we were consultants doing this by hand, we couldn't have done this, certainly not at scale. Is it machine intelligence, AI, that allows us to do this? >> That's exactly right, and as you know, our organization is Data and AI, so
we happen to have the research and innovation teams that are building a lot of this technology, so we have somewhat of an advantage there. But you're right: the alternative to what I've described is manual spreadsheets, querying databases, and sending emails to subject matter experts asking them what the data means; if they're out sick or on vacation, you have to wait for them to come back. All of this was a manual process, and in the last five years we've seen the data catalog market really become the augmented data catalog, and that augmentation means automation through AI. With years of experience and natural language understanding, we can comb through a lot of the metadata that's available electronically, we can comb through unstructured data, and we can categorize it. And if you have a set of business terms with industry-standard definitions, through machine learning we can automate, in a matter of seconds, what you and I did manually as consultants. That's the impact AI has had in our organization, and now we're bringing this to the market. It's a big part of where I'm investing my time, both internally and externally, bringing these types of concepts and ideas to the market. >> So one of the things that strikes me is that you've got multiple data sources, and data lives everywhere. You might have your supply chain data in your ERP, and maybe that sits on prem. You might have some sales data that's sitting in a SaaS store in a cloud somewhere. You might have weather data that you want to bring in. In theory, anyway, the more data you have, the better insights you can gather, assuming you've got the right data quality. But let me start with where the data is: it sits anywhere, you don't know where it's going to be, but you know you need it. That's part of this, right, being able to reach it quickly? >> Yeah, it's funny you bring it up that way; I actually look at it a little differently. When you start these projects, the data was in one place, and by the time you get through the end of a project, you find out it's in the cloud. So the data location actually changes while we're in the middle of projects. Even during this pandemic crisis, we have many organizations that are using this as an opportunity to move to SaaS, so what was on prem is now cloud. But that shouldn't change the definition of the data, and it shouldn't change its meaning. It might change how you connect to it. It might also change your security policies or privacy laws: now, all of a sudden, you have to worry about where that data is physically located and whether you're allowed to share it across national boundaries, whereas before, we knew physically where it was. So when you think about data ops, data ops is a process that sits on top of where the data physically resides, and because we're mapping metadata and looking at these data pipelines and automated workflows, part of the design principle is to set it up so that it's independent of where the data resides. However, you have to have placeholders in your metadata and in your tool chain, where we're automating these workflows, so that you can accommodate when the data decides to move because of a corporate policy change from on prem to cloud. That's a big part of what data ops offers. It's the same thing, by the way, for DevOps: they've had to accommodate building on platforms as a service versus on-prem development environments. It's the same for data ops. >> And the other part that strikes me, listening to you, is scale, and it's not just about scale with the cloud operating model. It's also about what you're talking about: the auto-classification, the automated metadata. You can't do that manually; you've got to be able to do that in order to scale with automation. That's another key part of data ops, is it not? >> Well, it's a big part of the value proposition and a big part of the business case,
right? When you and I started in this business, Big Data became the thing, and people just moved all sorts of data sets to these Hadoop clusters without capturing the metadata. As a result, in the last 10 years this information is out there, but nobody knows what it means anymore. You can't go back with an army of people and have them query these data sets, because a lot of the context was lost. But you can use automated technology, automated machine learning with natural language understanding, to do a lot of the heavy lifting for you. A big part of data ops workflows and building these pipelines is to do what we call management by exception. If your algorithm says it's 80% confident that this is a phone number, and your organization has a low risk tolerance, that will probably go to an exception. But if you have a match algorithm that comes back and says it's 99 percent sure this is an email address, and you have a threshold of 98%, it will automate much of the work that we used to have to do manually. That's an example of how you can automate away manual work and have some human interaction based on your risk threshold. >> Now that's awesome. You're right: the "no schema on write," just throw it into a data lake, and the data lake becomes the data swamp. We all know that joke. Okay, I want to understand a little bit, and maybe you have some other examples of use cases here, about the maturity of where customers are. It seems like you've got to start by just understanding what data you have, cataloging it, and getting your metadata act in order, but then you've got a data quality component before you can actually implement and get to insight. So where are customers on the maturity model? Do you have any other examples you can share? >> Yeah, so when we look at our data ops maturity model, we tried to simplify it, as I mentioned earlier, so that really anybody can get started. They don't have to have a full governance framework implemented to take advantage of the benefits data ops delivers. What we did is say you can categorize your data ops programs into really three things. One is how well do you know your data: do you even know what data you have? The second one is can you trust it: can you trust its quality, can you trust its meaning? And the third one is can you put it to use? If you really think about it, when you begin with "what data do you know," the first step is: how are you determining what data you know? If you are using spreadsheets, replace them with a data catalog. If you have a department or line-of-business catalog and you need to start sharing information across departments, then start expanding to an enterprise-level data catalog. Now, you mentioned data quality. The first step there is: do you even have a data quality program? Have you established what your criteria are for high-quality data? Have you considered what your data quality score is comprised of? Have you mapped out what your critical data elements are to run your business? Most companies have done that for their governed processes, but for these new initiatives, in my example with the COVID crisis, deciding which products we're going to bring to market quickly, I need to be able to find out what the critical data elements are and whether I can trust them. Have I even done a quality scan, and have teams commented on the data's trustworthiness to be used in this case? If you haven't done anything like that in your organization, that might be the first place to start: pick the critical data elements for this initiative, assess their quality, and then start to implement the workflows to remediate. Then, when you get to putting it to use, there are several methods for making data available. One is simply making a data mart available to a small set of users;
that's what most people do. Well, first they make a spreadsheet of the data available, but if they then need multiple people to access it, that's when a data mart might make sense. Technology like data virtualization eliminates the need for you to move data while you're in this prototyping phase, and that's a great way to get started. It doesn't cost a lot of money to set up a virtual query to see if this is the right join, or the right combination of fields, for this use case. Eventually you'll get to the need for a high-performance ETL tool for data integration. But Nirvana is when you really get to that self-service data prep, where users can query a catalog and say, "these are the data sets I need." It presents you a list of data assets that are available, I can point and click at the columns I want as part of my data pipeline, I hit go, and it automatically generates that output for data science use cases or a Cognos dashboard, right? That's the most mature model: being able to iterate on that so quickly that as soon as you get feedback that data elements are wrong, or you need to add something, you can do it push-button. That's where data ops should bring organizations to. >> Well, Julie, I think there's no question that this COVID crisis has accentuated the importance of digital. We talk about digital transformation a lot, and it's certainly real, although I would say a lot of the people that we talk to will say, "well, not on my watch," or "I'll be retired before that all happens." Well, this crisis is accelerating that transformation, and data is at the heart of it. Digital means data, and if you don't have your data story together and your act together, then you're not going to be able to compete. And data ops really is a key aspect of that. So give us a parting word. >> All right. I think this is a great opportunity for us to really assess how well we're leveraging data to make strategic decisions. And if there hasn't been a more pressing time to do it, it's now, when our entire engagement becomes virtual, like this interview is virtual, right? Everything now creates a digital footprint that we can leverage to understand where our customers are having problems and where they're having successes. Let's use the data that's available, and use data ops to make sure that we can iterate: access that data, know it, trust it, put it to use, so that we can respond to those in need when they need it. >> Julie Lockner, you're an incredible practitioner, really hands-on. Really appreciate you coming on the Cube and sharing your knowledge with us. Thank you. >> Okay, thank you very much. It was a pleasure to be here. >> All right, and thank you for watching, everybody. This is Dave Volante for the Cube, and we will see you next time. [Music]
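The "management by exception" idea from this interview — auto-accept classifications at or above your organization's risk threshold, and queue everything else for a human — can be sketched as follows. The regex classifiers, the fixed confidence value, and the 98% threshold are illustrative stand-ins for the ML-based classification described in the conversation, not any IBM implementation.

```python
# Toy sketch of management by exception: confident classifications are
# auto-accepted, low-confidence ones go to a human review queue.
import re

CLASSIFIERS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "phone": re.compile(r"^\+?[\d\-\s()]{7,15}$"),
}

def classify(value):
    """Return (label, confidence) for a field value. A real system would use
    ML with calibrated confidence; here a regex match stands in for it."""
    for label, pattern in CLASSIFIERS.items():
        if pattern.match(value):
            return label, 0.99   # toy fixed confidence for a pattern match
    return "unknown", 0.50

def route(values, threshold=0.98):
    """Auto-accept at or above the risk threshold; queue the rest."""
    accepted, exceptions = [], []
    for v in values:
        label, conf = classify(v)
        (accepted if conf >= threshold else exceptions).append((v, label))
    return accepted, exceptions

accepted, exceptions = route(["jane@example.com", "not-a-field"])
```

Lowering `threshold` trades manual review effort for risk, which is exactly the "human interaction based on your risk threshold" trade-off described above.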

Published Date : Apr 9 2020



UNLIST TILL 4/2 - Keep Data Private


 

>> Paige: Hello everybody and thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled Keep Data Private Prepare and Analyze Without Unencrypting With Voltage SecureData for Vertica. I'm Paige Roberts, Open Source Relations Manager at Vertica, and I'll be your host for this session. Joining me is Rich Gaston, Global Solutions Architect, Security, Risk, and Government at Voltage. And before we begin, I encourage you to submit your questions or comments during the virtual session, you don't have to wait till the end. Just type your question as it occurs to you, or comment, in the question box below the slide and then click Submit. There'll be a Q&A session at the end of the presentation where we'll try to answer as many of your questions as we're able to get to during the time. Any questions that we don't address we'll do our best to answer offline. Now, if you want, you can visit the Vertica Forum to post your questions there after the session. Now, that's going to take the place of the Developer Lounge, and our engineering team is planning to join the Forum, to keep the conversation going. So as a reminder, you can also maximize your screen by clicking the double arrow button, in the lower-right corner of the slides. That'll allow you to see the slides better. And before you ask, yes, this virtual session is being recorded and it will be available to view on-demand this week. We'll send you a notification as soon as it's ready. All right, let's get started. Over to you, Rich. >> Rich: Hey, thank you very much, Paige, and appreciate the opportunity to discuss this topic with the audience. 
My name is Rich Gaston, and I'm a Global Solutions Architect within the Micro Focus team. I work on global Data privacy and protection efforts for many different organizations looking to take that journey toward breach defense and regulatory compliance, on platforms ranging from mobile to mainframe and everything in between: cloud, you name it, we're there in terms of our solution sets. Vertica is one of our major partners in this space, and I'm very excited to talk with you today about our solutions on the Vertica platform. First, let's talk a little bit about what you're not going to learn today, and that is, on screen you'll see just part of the mathematics that goes into the format-preserving encryption algorithm. We are the originators, authors, and patent holders of that algorithm. It came out of research from Stanford University, back in the '90s, and we are very proud to have taken that out into the market through the NIST standards process, and to license it to others. So we are the originators and maintainers of both standards, and a leader in the industry. We try to make this easy, and you don't have to learn any of this tough math. Behind this there are also many other layers of technology that are part of the security of the platform, such as stateless key management. That's a really complex area, and we make it very simple for you. We have very mature and powerful products in that space that really make your job quite easy when you want to implement our technology within Vertica. So today, our goal is to make Data protection easy for you: to be able to understand the basics of Voltage Secure Data, to learn how the Vertica UDx can help you get started quickly, and to see some examples of how Vertica plus Voltage Secure Data are working together in our customer cases out in the field. First, let's take you through a quick introduction to Voltage Secure Data: the business drivers, and what this is all about.
First of all, we started off with Breach Defense. We see that despite continued investments in perimeter and platform security, Data breaches continue to occur. Voltage Secure Data plus Vertica provides defense in depth for sensitive Data, and that's a key concept that we're going to be referring to. In the security field, defense in depth is a standard approach to be able to provide more layers of protection around sensitive assets, such as your Data, and that's exactly what Secure Data is designed to do. Now that we've come through many of these breach examples and big ticket items getting the news around breaches and their impact, the business regulators have stepped up, and regulatory compliance is now a hot topic in Data privacy. Regulations such as GDPR came online in 2018 for the EU. CCPA came online just this year, a couple months ago for California, and is the de-facto standard for the United States now, as organizations are trying to look at the best practices for providing regulatory compliance around Data privacy and protection. These give massive new rights to consumers, but also obligations to organizations to protect that personal Data. Secure Data plus Vertica provides fine-grained authorization around sensitive Data, and we're going to show you exactly how that works within the Vertica platform. At the bottom, you'll see some of the snippets there of the news articles that just keep racking up, and our goal is to keep you off the news, to keep your company safe, so that you can have the assurance that even if there is an unintentional or intentional breach of Data out of the corporation, if it is protected by Voltage Secure Data, it will be of no value to those hackers, and then you have no impact in terms of risk to the organization. What do we mean by defense in depth?
Let's take a look first at the encryption types and the benefits that they provide, and we see our customers implementing all kinds of different protection mechanisms within the organization. You could be looking at disk level protection, file system protection, protection on the files themselves. You could protect the entire Database, you could protect your transmissions as they go from the client to the server via TLS or other protected tunnels. And then we look at Field-level Encryption, and that's what we're talking about today. That's all the above protections at the perimeter level and at the platform level, plus we're giving you granular access control to your sensitive Data. Our main message is, keep the Data protected from the earliest possible point, and only access it when you have a valid business need to do so. That's a really critical aspect as we see Vertica customers loading terabytes, petabytes of Data into clusters of Vertica, the Vertica Database being able to give access to that Data out to a wide variety of end users. We started off with organizations having four people in an office doing Data science, or analytics, or Data warehousing, or whatever it's called within an organization, and that's now ballooned out to a new customer coming in and telling us, we're going to have 1000 people accessing it, plus service accounts accessing Vertica. We need to be able to provide fine level access control, and be able to understand what folks are doing with that sensitive Data, and how can we Secure it with the best practices possible? In a very simple statement, Voltage protects Data at rest and in motion. The encryption of Data facilitates compliance, and it reduces your risk of breach. So if you take a look at what we mean by field level, we could take a name, and that name might not just be in US ASCII. Here we have a sort of Latin-1 extended example of Harold Potter, and we could take a look at the example protected Data.
Notice that we're taking a character set approach, to protecting it, meaning, I've got an alphanumeric option here for the format, that I'm applying to that name. That gives me a mix of alpha and numeric, and plus, I've got some of that Latin one extended alphabet in there as well, and that's really controllable by the end customer. They can have this be just US ASCII, they can have it be numbers for numbers, you can have a wide variety, of different protection mechanisms, including ignoring some characters in the alphabet, in case you want to maintain formatting. We've got all the bells and whistles, that you would ever want, to put on top of format preserving encryption, and we continue to add more to that platform, as we go forward. Taking a look at tax ID, there's an example of numbers for numbers, pretty basic, but it gives us the sort of idea, that we can very quickly and easily keep the Data protected, while maintaining the format. No schema changes are going to be required, when you want to protect that Data. If you look at credit card number, really popular example, and the same concept can be applied to tax ID, often the last four digits will be used in a tax ID, to verify someone's identity. That could be on an automated telephone system, it could be a customer service representative, just trying to validate the security of the customer, and we can keep that Data in the clear for that purpose, while protecting the entire string from breach. Dates are another critical area of concern, for a lot of medical use cases. But we're seeing Date of Birth, being included in a lot of Data privacy conversations, and we can protect dates with dates, they're going to be a valid date, and we have some really nifty tools, to maintain offsets between dates. 
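The format-driven behavior described above can be sketched with the Vertica UDx calls covered later in this session. The format names below are purely illustrative; actual format names are whatever your team defines on the Secure Data appliance.

```sql
-- Format names ('ssn', 'cc-last4', 'date-offset') are hypothetical examples;
-- real formats are configured on the Secure Data appliance.

-- Numbers for numbers: a tax ID keeps its nine-digit shape.
SELECT VoltageSecureProtect('123-45-6789' USING PARAMETERS format='ssn');

-- Credit card with the last four digits left in the clear for verification.
SELECT VoltageSecureProtect('4111-1111-1111-1111' USING PARAMETERS format='cc-last4');

-- Dates encrypt to valid dates, so date columns keep working.
SELECT VoltageSecureProtect('1980-07-31' USING PARAMETERS format='date-offset');
```

In each case the ciphertext has the same shape as the plaintext, which is why no schema changes are required.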
So again, we've got the real depth of capability within our encryption. It's not just saying, here's a one size fits all approach. GPS location, customer ID, IP address, all of those kinds of Data strings can be protected by Voltage Secure Data within Vertica. Let's take a look at the UDx basics. So what are we doing when we add Voltage to Vertica? Vertica stays as is in the center. In fact, if you get the Vertica distribution, you're getting the Secure Data UDx onboard, you just need to enable it, and have the Secure Data virtual appliance, that's the box there on the middle right. That's what we come in and add to the mix, as we start to be able to add those capabilities to Vertica. On the left hand side, you'll see that your users, your service accounts, your analytics, are still typically doing Select, Update, Insert, Delete type of functionality within Vertica. And they're going to come into Vertica's access control layer, they're going to also access those services via SQL, and we simply extend SQL for Vertica. So when you add the UDx, you get additional syntax that we can provide, and we're going to show you examples of that. You can also integrate that with concepts like Views within Vertica. So that we can say, let's give a view of Data that gives the Data in the clear, using the UDx to decrypt that Data, and let's give everybody else access to the raw Data, which is protected. Third parties could be brought in, folks like contractors or folks that aren't vetted as closely as a security team might do for internal sensitive Data access, could be given access to the Vertica cluster, without risk of them breaching and going into some area they're not supposed to take a look at. Vertica has excellent control for access, down even to the column level, which is phenomenal, and really provides you with world class security around the Vertica solution itself.
Secure Data adds another layer of protection, like we're mentioning, so that we can have Data protected in use, Data protected at rest, and then we can have the ability to share that protected Data throughout the organization. And that's really where Secure Data shines: the ability to protect that Data on mainframe, on mobile, and open systems, in the cloud, everywhere you want to have that Data move to and from Vertica, then you can have Secure Data integrated with those endpoints as well. That's an additional solution on top of the Secure Data plus Vertica solution that is bundled together today for a sales purpose. But we can also have that conversation with you about those wider Secure Data use cases, and we'd be happy to talk to you about that. The Secure Data virtual appliance is a lightweight appliance. It sits on something like eight cores, 16 gigs of RAM, 100 gig of disk or 200 gig of disk, really a lightweight appliance, and you can have one or many. Most customers have four in production, just for redundancy; they don't need them for scale. But we have some customers with 16 or more in production, because they're running such high volumes of transaction load. They're running a lot of web service transactions, and they're running Vertica as well. So we're going to have those virtual appliances co-located around the globe, hooked up to all kinds of systems, like Syslog, LDAP, load balancers. We've got a lot of capability within the appliance to fit into your enterprise IP landscape. So let me get you directly into the meat of what the UDx does. If you're technical and you know SQL, this is probably going to be pretty straightforward to you. You'll see the copy command used widely in Vertica to get Data into Vertica. So let's try to protect that Data when we're ingesting it. Let's grab it from maybe a CSV file and put it straight into Vertica, but protected on the way, and that's what the UDx does.
We have Voltage Secure Protect, an added syntax, like I mentioned, to the Vertica SQL. And that allows us to say, we're going to protect the customer first name using the parameters of hyper alphanumeric. That's our internal lingo of a format within Secure Data; this is part of our API, and the API requires very few inputs. The format is the one that you as a developer will be supplying, and you'll have different ones for maybe SSN, you'll have different formats for street address, but you can reuse a lot of your formats across a lot of your PII, PHI Data types. Protecting after ingest is also common. So I've got some Data that's already been put into a staging area, perhaps I've got a landing zone, a sandbox of some sort, now I want to be able to move that into a different zone in Vertica, a different area of the schema, and I want to have that Data protected. We can do that with the update command, and simply again, you'll notice Voltage Secure Protect, nothing too wild there, basically the same syntax. We're going to query using unprotected Data. How do we search once I've encrypted all my Data? Well, actually, there's a pretty nifty trick to do so. If you want to be able to query with an unprotected search string, like a phone number there in this example, simply call Voltage Secure Protect on that. Now you'll have the cipher text, and you'll be able to search the stored cipher text. Again, we're just format preserving encrypting the Data, and it's just a string, and we can always compare those strings using standard syntax in SQL. Using views to decrypt Data is again a powerful concept in terms of how to make this work within the Vertica landscape, when you have a lot of different groups of users. Views are very powerful to be able to point a BI tool at, for instance; business intelligence tools, Cognos, Tableau, etc, might be accessing Data from Vertica with simple queries.
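As a rough sketch of the three patterns just described (protect on ingest with COPY, protect after ingest with UPDATE, and search by protecting the search string), with hypothetical table, column, and format names:

```sql
-- Protect on ingest: read the raw value into a FILLER column and store
-- only the protected form (format name is an example from your appliance).
COPY customers (
    first_name_raw FILLER VARCHAR(100),
    first_name AS VoltageSecureProtect(first_name_raw
                  USING PARAMETERS format='AlphaNumeric')
) FROM '/data/customers.csv' DELIMITER ',';

-- Protect after ingest, e.g. moving data out of a landing zone.
UPDATE staging_customers
SET ssn = VoltageSecureProtect(ssn USING PARAMETERS format='ssn');

-- Query protected data: protect the clear-text search string first,
-- then compare ciphertext to ciphertext.
SELECT *
FROM customers
WHERE phone = VoltageSecureProtect('555-012-3456'
              USING PARAMETERS format='phone');
```

Because the encryption is format preserving and deterministic for a given format, the equality comparison in the last query works on plain strings, with no decryption involved.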
Well, let's point them to a view that does the hard work, and uses the Vertica nodes and their horsepower of CPU and RAM to actually run that UDx, and do the decryption of the Data in use, temporarily in memory, and then throw that away so that it can't be breached. That's a nice way to keep your users active and working and going forward with their Data access and Data analytics, while also keeping the Data Secure in the process. And then we might want to export some Data, and push it out to someone in a clear text manner. We've got a third party that needs to take the tax ID along with some Data to do some processing. All we need to do is call Voltage Secure Access, again, very similar to the protect call, and you're passing the parameter again, and boom, we have decrypted the Data and used, again, the Vertica resources of RAM and CPU and horsepower to do the work. All we're doing with the Voltage Secure Data Appliance is a real simple little key fetch across a protected tunnel. That's a tiny atomic transaction, gets done very quick, and you're good to go. This is it in terms of the UDx: you have a couple of calls, and one parameter to pass. Everything else is config driven, and really, you're up and running very quickly. We can even do demos and samples of this Vertica UDx using hosted appliances that we put up for pre sales purposes. So folks want to get up and get a demo going. We could take that UDx, configure it to point to our appliance sitting on the internet, and within a couple of minutes, we're up and running with some simple use cases. Of course, for on-prem deployment, or deployment in the cloud, you'll want your own appliance in your own crypto district, you have your own security, but it just shows that we can easily connect to any appliance and get this working in a matter of minutes. Let's take a look deeper at the Voltage plus Vertica solution, and we'll describe some of the use cases and path to success.
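A minimal sketch of the view and export patterns above, again with hypothetical object and format names; VoltageSecureAccess is the decryption counterpart to the protect call:

```sql
-- Privileged view that decrypts in memory on the Vertica nodes;
-- everyone else queries the protected base table directly.
CREATE VIEW customers_clear AS
SELECT VoltageSecureAccess(ssn USING PARAMETERS format='ssn') AS ssn,
       VoltageSecureAccess(first_name
           USING PARAMETERS format='AlphaNumeric') AS first_name
FROM customers;

GRANT SELECT ON customers_clear TO auditors;  -- vetted users only

-- Export in the clear for an authorized third-party process.
SELECT VoltageSecureAccess(tax_id USING PARAMETERS format='ssn') AS tax_id,
       order_total
FROM orders;
```

The decrypted values exist only for the duration of the query; the stored Data stays protected.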
First of all, your steps to implementing Data-centric security in Vertica. I want to note there on the left hand side, identify sensitive Data. How do we do this? I have one customer where they look at me and say, Rich, we know exactly what our sensitive Data is, we develop the schema, it's our own App, we have a customer table, we don't need any help in this. We've got other customers that say, Rich, we have a very complex Database environment, with multiple Databases, multiple schemas, thousands of tables, hundreds of thousands of columns, it's really, really complex, help, and we don't know what people have been doing exactly with some of that Data. We've got various teams that share this resource. There, we do have additional tools. I wanted to give a shout out to another Micro Focus product, which is called Structured Data Manager. It's a great tool that helps you identify sensitive Data, with some really amazing technology under the hood, that can go into a Vertica repository, scan those tables, take a sample of rows or a full table scan, and give you back some really good reports on, we think this is sensitive, let's go confirm it, and move forward with Data protection. So if you need help on that, we've got the tools to do it. Once you identify that sensitive Data, you're going to want to understand your Data flows and your use cases. Take a look at what analytics you're doing today. What analytics do you want to do on sensitive Data in the future? Let's start designing our analytics to work with sensitive Data, and there's some tips and tricks that we can provide to help you mitigate any kind of concerns around performance, or any kind of concerns around rewriting your SQL. As you've noted, you can just simply insert our SQL additions into your code and you're off and running. You want to install and configure the UDx and the Secure Data software appliance. Well, the UDx is pretty darn simple.
The documentation on Vertica is publicly available, you could see how that works and what you need to configure it, one file here, and you're ready to go. So that's a pretty straightforward process. Then grant some access to the UDx, and that's really up to the customer, because there are many different ways to handle access control in Vertica. We're going to be flexible to fit within your model of access control when adding the UDx to your mix. Each customer is a little different there, so you might want to talk with us a little bit about the best practices for your use cases. But in general, that's going to be up and running in just a minute. The Secure Data software appliance is a hardened Linux appliance today, and it sits on-prem or in the cloud, and you can deploy that. I've seen it done in 15 minutes, but that's when the right technical staff had access to do everything at once: setting the firewall and all the DNS entries, the basic blocking and tackling of standing up a software appliance. Corporations can take care of that in just a couple of weeks, usually because they're waiting on other teams, but the software appliances themselves are really fast to get stood up, and they're very simple to administer with our web based GUI. Then finally, you're going to implement your UDx use cases. Once the software appliance is up and running, we can set authentication methods, we can set up the format that you're going to use in Vertica, and then those two start talking together. And it should be going in dev and test in about half a day, and then you're running toward production in just a matter of days, in most cases. We've got other customers that say, hey, this is going to be a bigger migration project for us. We might want to split this up into chunks.
Let's do the real sensitive and scary Data, like tax ID first, as our sort of toe in the water approach, and then we'll come back and protect other Data elements. That's one way to slice and dice, and implement your solution in a planned manner. Another way is schema based. Let's take a look at this section of the schema, and implement protection on these Data elements. Now let's take a look at a different schema, and we'll repeat the process, so you can iteratively move forward with your deployment. So what's the added value when you add full Vertica plus Voltage? I want to highlight this distinction because Vertica contains world class security controls around their Database. I'm an old time DBA from a different product, competing against Vertica in the past, and I'm really aware of the granular access controls that are provided within various platforms. Vertica would rank at the very top of the list in terms of being able to give me very tight control, and a lot of different access methods, being able to protect the Data in a lot of different use cases. So Vertica can handle a lot of your Data protection needs right out of the box. Voltage Secure Data, as we keep mentioning, adds that defense in depth, and it's going to enable those enterprise wide use cases as well. So first off, I mentioned this, the standard of FF1, that is format preserving encryption. We're the authors of it, we continue to maintain that, and we want to emphasize that customers really ought to be very, very careful in terms of choosing a NIST standard when implementing any kind of encryption within the organization. So AES was one of the first, and a hallmark, benchmark encryption algorithm, and in 2016, we were added to that mix as FF1. If you search NIST and Voltage Security, you'll see us right there as the author of the standard, and all the processes that went along with that approval.
We have centralized policy for key management, authentication, audit and compliance. We can now see that Vertica selected or fetched the key to be able to protect some Data at this date and time. We can track that and be able to give you audit and compliance reporting against that Data. You can move protected Data into and out of Vertica. So we can ingest via Kafka, or via NiFi, or ingest on StreamSets. There are a variety of different ingestion methods and streaming methods that can get Data into Vertica. We can integrate Secure Data with all of those components. We're very well suited to integrate with any Hadoop technology or any big Data technology, as we have APIs in a variety of languages, bitnesses and platforms. So we've got that all out of the box, ready to go for you, if you need it. When you're moving Data out of Vertica, you might move it into an open systems platform, you might move it to the cloud. We can also operate and do the decryption there. You're going to get the same plaintext back, and if you protect Data over in the cloud and move it into Vertica, you're going to be able to decrypt it in Vertica. That's our cross platform promise. We've been delivering on that for many, many years, and we now have many, many endpoints that do that in production for the world's largest organizations. We're going to preserve your Data format and referential integrity. So if I protect my social security number today, I can protect another batch of Data tomorrow, and that same ciphertext will be generated. When I put that into Vertica, I can have absolute referential integrity on that Data, to be able to allow for analytics to occur without even decrypting Data in many cases. And we have decrypt access for authorized users only, with the ability to add LDAP authentication and authorization for UDx users.
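Because the same plaintext always protects to the same ciphertext under a given format, joins and aggregations can run directly on protected columns without any decryption. A hedged sketch with made-up table names:

```sql
-- Both columns hold ciphertext produced with the same format,
-- so referential integrity is preserved and the join just works.
SELECT c.region, COUNT(*) AS order_count
FROM orders o
JOIN customers c ON o.customer_ssn = c.ssn  -- ciphertext = ciphertext
GROUP BY c.region;
```

This is the referential-integrity point in practice: the analytics never touch the plaintext, so no key fetch or decrypt call is needed for the join.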
So you can really have a number of different approaches and flavors of how you implement Voltage within Vertica, but what you're getting is the additional ability to have that confidence that we've got the Data protected at rest, even if I have a DBA that's not vetted, or someone new, or I don't know where this person is from, a third party being provided access at a DBA level privilege. They could select star from all day long, and they're going to get ciphertext. They're going to have nothing of any value, and if they want to use the UDx to decrypt it, they're going to be tracked and traced as to their utilization of that. So it allows us to have that control and additional layer of security on your sensitive Data. This may be required by regulatory agencies, and we're seeing compliance audits get more and more strict every year. GDPR was kind of funny, because they said in 2016, hey, this is coming, they said in 2018, it's here, and now they're saying in 2020, hey, we're serious about this, and the fines are mounting. And let's give you some examples to kind of help you understand that these regulations are real, the fines are real, and your reputational damage can be significant if you were to be in breach of regulatory compliance requirements. We're finding so many different use cases now popping up around regional protection of Data. I need to protect this Data so that it cannot go offshore. I need to protect this Data so that people from another region cannot see it. That's all the kind of capability that we have within Secure Data that we can add to Vertica. We have that broad platform support, and I mentioned NiFi and Kafka, those would be on the left hand side, as we start to ingest Data from applications into Vertica. We can have landing zone approaches, where we provide some automated scripting at an OS level, to be able to protect ETL batch transactions coming in.
We could protect within the Vertica UDx, as I mentioned, with the copy command, directly using Vertica. Everything inside that dot dash line is the Vertica plus Voltage Secure Data combo that's sold together as a single package. Additionally, we'd love to talk with you about the stuff that's outside the dashed box, because we have dozens and dozens of endpoints that could protect and access Data on many different platforms. And this is where you really start to leverage some of the extensive power of Secure Data, to go across platform to handle your web based apps, to handle apps in the cloud, and to handle all of this at scale, with hundreds of thousands of transactions per second of format preserving encryption. That may not sound like much, but when you take a look at the algorithm, what we're doing on the mathematics side, when you look at everything that goes into that transaction, to me, that's an amazing accomplishment, that we're trying to reach those kinds of levels of scale, and with Vertica, it scales horizontally. So the more nodes you add, the more power you get, the more throughput you're going to get from Voltage Secure Data. I want to highlight the next steps on how we can continue to move forward. Our Secure Data team is available to you to talk about the landscape, your use cases, your Data. We really love the concept that we've got so many different organizations out there using Secure Data in so many different and unique ways. We have vehicle manufacturers who are protecting not just the VIN, not just their customer Data, but in fact they're protecting sensor Data from the vehicles, which is sent over the network down to the home base every 15 minutes, for every vehicle that's on the road, and every vehicle of this customer of ours since 2017 has included that capability. So now we're talking about an additional millions and millions of units coming online, as those cars are sold and distributed and used by customers.
That sensor Data is critical to the customer, and they cannot let that be exfiltrated in the clear. So they protect that Data with Secure Data, and we have a great track record of being able to meet a variety of different unique requirements, whether it's IoT, whether it's web based Apps, E-commerce, healthcare, all kinds of different industries. We would love to help move the conversations forward, and we do find that it's really a three party discussion: the customer, Secure Data experts in some cases, and the Vertica team. We have great enablement within the Vertica team, to be able to explain and present our Secure Data solution to you. But we also have that other ability to add other experts in, to keep that conversation going into a broader perspective of how can I protect my Data across all my platforms, not just in Vertica. I want to give a shout out to our friends at Vertica Academy. They're building out great demo and training facilities, to be able to help you learn more about these UDx's, and how they're implemented. The Academy is a terrific reference and resource for your teams, to be able to learn more about the solution in a self guided way, and then we'd love to have your feedback on that. How can we help you more? What are the topics you'd like to learn more about? How can we look to the future in protecting unstructured Data? How can we look to the future of being able to protect Data at scale? What are the requirements that we need to be meeting? Help us through the learning processes, and through feedback to the team, get better, and then we'll help you deliver more solutions out to those endpoints and protect that Data, so that we're not having Data breaches, we're not having regulatory compliance concerns. And then lastly, learn more about the UDx. I mentioned that all of our content there is online and available to the public. So at vertica.com/secureData, you're going to be able to walk through the basics of the UDx.
You're going to see how simple it is to set up, what the UDx syntax looks like, how to grant access to it, and then you'll start to be able to figure out, hey, how can I start to put this into a POC in my own environment? Like I mentioned before, we have a publicly available hosted appliance, for demo purposes, that we can make available to you, if you want to POC this. Reach out to us. Let's get a conversation going, and we'll get you the address and get you some instructions, and we can have a quick enablement session. We really want to make this accessible to you, and help demystify the concept of encryption, because when you see it as a developer, and you start to get your hands on it and put it to use, you can very quickly see, huh, I could use this in a variety of different cases, and I could use this to protect my Data without impacting my analytics. Those are some of the really big concerns that folks have, and once we start to get through that learning process, and playing around with it in a POC way, we can start to really put it into practice in production, to say, with confidence, we're going to move forward toward Data encryption, and have a very good result at the end of the day. This is one of the things I find with customers that's really interesting. Their biggest stress is not around the timeframe or the resources, it's really around, this is my Data. I have been working on collecting this Data, and making it available in a very high quality way, for many years. This is my job and I'm responsible for this Data, and now you're telling me you're going to encrypt that Data? It makes me nervous, and that's common, everybody feels that. So we want to have that conversation, and that sort of trial and error process to say, hey, let's get your feet wet with it, and see how you like it in a sandbox environment.
Let's now take that into analytics, and take a look at how we can make this go for a quick 1.0 release, and let's then take a look at future expansions to that, where we start adding Kafka on the ingest side. We start sending Data off into other machine learning and analytics platforms that we might want to utilize outside of Vertica, for certain purposes, in certain industries. Let's take a look at those use cases together, and through that journey, we can really chart a path toward the future, where we can really help you protect that Data at rest and in use, and keep you safe from both the hackers and the regulators, and that, I think, at the end of the day, is really what it's all about in terms of protecting our Data within Vertica. We're going to have a couple of minutes for Q&A, and we would encourage you to ask any questions here, and we'd love to follow up with you more about any questions you might have about Vertica plus Voltage Secure Data. Thank you very much for your time today.

Published Date : Mar 30 2020

Tony Higham, IBM | IBM Data and AI Forum


 

>> Live from Miami, Florida, it's theCUBE, covering the IBM Data and AI Forum, brought to you by IBM. >> We're back in Miami and you're watching theCUBE's coverage of the IBM Data and AI Forum. Tony Higham is here; he is a Distinguished Engineer for Digital and Cloud Business Analytics at IBM. Tony, first of all, congratulations on being a Distinguished Engineer. That doesn't happen often. Thank you for coming on theCUBE. Thank you. So your area of focus is on the BI and the Enterprise Performance Management space. >> Um, and if I understand it correctly, a big mission of yours is to try to modernize those, make them self-service, make them cloud ready. How's that going? >> It's going really well. I mean, you know, we use things like BI and enterprise performance management. When you really boil it down, there's analysis of data, and what do we do with the data that's useful, that makes a difference in the world, and then there's planning and forecasting and budgeting, which everyone has to do, whether you are, you know, a single household or whether you're an Amazon or Boeing, which are also some of our clients. So it's interesting that we're going from really enterprise use cases, democratizing it all the way down to a single user on the cloud, credit card swipe, 70 bucks a month. >> So, you used to work for Lotus. But Cognos is one of IBM's largest acquisitions in the software space ever. Steve Mills and his team architected a complete transformation of IBM's business and really got heavily into it. I think it was a $5 billion acquisition.
Don't hold me to that, but it was massive at the time, and it's really paid dividends. Now, when the 2010s came along, people said, oh, Hadoop is going to kill all the traditional BI and traditional data warehousing. That didn't happen. Those traditional platforms remained a fundamental component of people's data strategies, and that created the imperative to modernize, to make sure there could be things like self-service and cloud readiness, didn't it? >>Yeah, that's absolutely true. I mean, the workloads that we run are really sticky workloads. When you're doing your reporting, your consolidation, or the planning of your yearly cycle, your budget cycle, on these technologies, you don't rip them out so easily. So yes, of course there's competitive disruption in the space, and of course cloud creates an opportunity for workloads to be run cheaper, without your own IT people. And of course the era of digital software, where I find it myself, I try it myself, I buy it without ever talking to a salesperson, creates a democratization process for these really powerful tools that's never existed before in that space. >>Now, when I started in this business a long, long time ago, it was called DSS, decision support systems, and at the time they promised a 360-degree view of the business. That never really happened. You saw a whole new raft of players come in, and then BI and the enterprise data warehouse were going to deliver on that promise. That didn't quite happen either. Sarbanes-Oxley brought a big wave of imperative around these systems because compliance became huge, so that was a real tailwind for them. Then Hadoop was going to solve all these problems, and that really didn't happen. And now you've got AI, and it feels like the combination of those systems of record, those data warehouse systems, the traditional business intelligence systems, and all this new emerging tech together is actually going to be a game changer.
I wonder if you could comment. >>Well, they can be a game changer, but you're touching on a couple of subjects here that are connected. Number one is obviously the mass of data, because data has accelerated at a phenomenal pace. And then you're talking about how do I visualize or use that data in a useful manner, and that really drives the use case for AI, because AI, or augmented intelligence as we talk about it, is almost only useful when it's invisible to the user. The user needs to feel like it's doing something for them that's super intuitive. It's a bit like the transition between the electric car and the normal car: that only really happens when the electric car can do what the normal car can do. So with things like, imagine you bring a Hadoop cluster into a BI solution and you're looking at that data: well, if I can correlate, for example, time, profit, and cost, then I can create KPIs automatically. I can create visualizations. I know which ones you like to see, or I can give you related ones. I can even automatically create dashboards. I've got the intelligence about the data and the knowledge to know what you might want to visualize, versus you having to manually construct everything. >>And when you bring these disparate data sets together, isn't AI also going to give you an indication of the confidence level in those various data sets? So, for example, your BI data set might be part of the general ledger, the income statement, corporate facts with a very high confidence level, while some of the unstructured data might carry a lower confidence level. How are customers dealing with that and applying it? First of all, is that an accurate premise? And how is it manifesting itself in terms of business? >>Oh, yeah.
So it is an accurate premise, because in the world of data there are the known knowns and the unknown knowns. The known knowns are what you know about your data. What's interesting about really good BI solutions and planning solutions, especially when they're brought together, because planning and analysis naturally go hand in hand, from the one user at 70 bucks a month to the enterprise client, is things like key drivers. You know what drives your profit. But when you've got massive amounts of data and you've got AI around it, especially AI with an ontology around your particular industry, it can start telling you about drivers that you don't know about. And that's really the next step: tell me the drivers around things I don't know. So when I'm exploring the data, I'd like to see a key driver that I never even knew existed. >>So when I talk to customers, and I've been doing this for a while, one of the criticisms they had of the traditional systems was that the process was just too hard: there are only a few guys I can go to, I've got to get in line, submit a request, and by the time I get it back, I'm on to something else. I want self-service beyond just reporting. How are AI and IBM changing that dynamic? Can you put these tools in the hands of users? >>Right. So this is about democratizing the cleverness. If you're a big, broad organization, you can afford to hire a bunch of people to do that stuff. But if you're a startup or an SMB, and that's where the big market opportunity is for us, you need capabilities like this, and we're building them into the software already today. Take spreadsheets: anyone can take a rows-and-columns spreadsheet and turn it into a set of data, because it looks like a database.
But when you've got different tabs with different sets of data that may or may not be obviously relatable to each other, that AI ability to introspect a spreadsheet and turn it, from a planning point of view, into cubes, dimensions, and rules, which turn your spreadsheet into a three-dimensional in-memory cube or a planning application, that's what lets you go way, way further than you ever could with a planning process run over thousands of people. It's all possible now because we've taken all the hard work, all the heavy lifting, out. >>So, a three-dimensional in-memory cube. I like the sound of that. So there's a performance implication. Absolutely. And what else? Accessibility, more apps, more users? Is that it?
>>You talk about Cloud and analytics, how they've they've come together, what specifically IBM has done to modernize that platform. And I'm interested in what customers are saying. What's the adoption like? >>So So I manage the Global Cloud team. We have night on 1000 clients that are using cloud the cloud implementations of our software growing actually so actually Maur on two and 1/2 1000. If you include the multi tenant version, there's two steps in this process, right when you've got an enterprise software solution, your clients have a certain expectation that your software runs on cloud just the way as it does on premise, which means in practical terms, you have to build a single tenant will manage cloud instance. And that's just the first step, right? Because getting clients to see the value of running the workload on cloud where they don't need people to install it, configure it, update it, troubleshoot it on all that other sort of I t. Stuff that subtracts you from doing running your business value. We duel that for you. But the future really is in multi tenant on how we can get vast, vast scale and also greatly lower costs. But the adoptions been great. Clients love >>it. Can you share any kind of indication? Or is that all confidential or what kind of metrics do you look at it? >>So obviously we look, we look a growth. We look a user adoption, and we look at how busy the service. I mean, let me give you the best way I can give you is a is a number of servers, volume numbers, right. So we have 8000 virtual machines running on soft layer or IBM cloud for our clients business Analytics is actually the largest client for IBM Cloud running those workloads for our clients. So it's, you know, that the adoption has been really super hard on the growth continues. Interestingly enough, I'll give you another factoid. So we just launched last October. Cognos Alex. Multi tenant. So it is truly multi infrastructure. 
You try, you buy: you give your credit card and away you go. And you would think, because we don't have software sellers out there selling it per se, that it might not get adopted as fast as software that people are out there selling. Well, in one year it's growing 10% month on month, steadily 10% month on month, and we're nearly at 1,400 users now, without huge amounts of effort on our part. So clearly the market is interested in running software this way. And they're not onesie-twosies either: some have six people planning on it, some have 150 people planning on a multi-tenant software. So I believe dedicated is the first step, to grow confidence that my on-premise investments will lift and shift to the cloud, but multi-tenant will take us a lot further. >>So that's a proof point: an existing customer saying, okay, I want to modernize, I'm buying in, take a half step with dedicated, and then obviously multi-tenant for scale, and it's just way more cost efficient. Yes, very much. All right, last question. Show us a little leg. What can you tell us about the roadmap? What gets you excited about the future? >>So, historically, Planning Analytics and Cognos Analytics have been separate products, and since they came together under the BI logo about a year ago, we've been spending a lot of our time bringing them together, because you can fight in the BI space and you can fight in the planning space, and there are a lot of competitors there. But when you bring the two things together, the connected value chain is where we're really going to win. And it's not only the connected value chain. I'm the former Lotus guy who believes in the democratization of technology, and the market is showing us that when we create a piece of software that starts at 15 bucks for a single user
For the same power, mind you, with a little less of the capability, and at 70 bucks for a single user, for all of it, people buy it. So I'm in. >>Tony, thanks so much for coming on. It was great to have you. >>Brilliant. Thank you. >>Keep it right there, everybody. We'll be back with our next guest. You're watching theCUBE, live from the IBM Data and AI Forum in Miami. We'll be right back.
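The what-if capability Higham describes, rippling a discount assumption through projected sales in real time, boils down to recomputing a driver-based formula under a changed assumption. A minimal sketch follows; the figures and the demand-lift assumption are invented for illustration, not taken from the interview.

```python
# Toy what-if scenario of the kind Higham describes: apply a 5% discount and
# project revenue, assuming (hypothetically) that each 1% of discount lifts
# unit sales by 2%. All numbers here are illustrative.
def project_revenue(base_units, base_price, discount_pct, lift_per_pct=2.0):
    price = base_price * (1 - discount_pct / 100)
    units = base_units * (1 + discount_pct * lift_per_pct / 100)
    return price * units

baseline = project_revenue(1000, 70.0, 0)   # no discount
scenario = project_revenue(1000, 70.0, 5)   # 5% discount scenario
print(baseline, scenario)  # prints: 70000.0 73150.0
```

An OLAP engine of the kind he mentions performs this same recomputation across millions of cube cells at once; only the shape of the calculation is shown here.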

Published Date : Oct 23 2019

Seth Dobrin, IBM | IBM Data and AI Forum


 

>>Live from Miami, Florida, it's theCUBE, covering the IBM Data and AI Forum. Brought to you by IBM. >>Welcome back to the port of Miami, everybody. We're here at the Intercontinental Hotel. You're watching theCUBE, the leader in live tech coverage. Seth Dobrin is here. He's the vice president of data and AI, and the chief data officer of Cloud and Cognitive Software at IBM. Seth, good to see you again. >>Good to see you, Dave. Thanks for having me here. >>The Data and AI Forum, hashtag DataAIForum. It's amazing here: 1,700 people, everybody with a hands-on appetite for learning. What do you see out in the marketplace? What's new since we last talked? >>Well, I think if you look at some of the things that are really needed in the marketplace, it's really been around filling the skill shortage, and how do you operationalize and industrialize your AI. And so there's been a real need for ways to get more productivity out of your data scientists. Not necessarily replace them, but how do you get more productivity? And a few months ago we released something called AutoAI, which is probably the only tool out there that automates the end-to-end pipeline, automates 80% of the work on the end-to-end pipeline, but isn't a black box. It actually kicks out code, so your data scientists can take it, optimize it further, understand it, and really feel more comfortable about it. >>It's AI for AI. >>Exactly. It is AI for AI. >>So how does that work? You're applying machine intelligence to data to make AI more productive: pick algorithms, best fit? >>Yeah. Basically, you feed it your data, and it identifies the features that are important. It does feature engineering for you. It does model selection for you. It does hyperparameter tuning and optimization, and it does deployment, and it also monitors for bias. >>So what does the data scientist do?
>>The data scientist takes the code out of the back end. And really, there are some tweaks: things the AutoAI maybe didn't get perfect, and really customizing the model for the business and the needs of the business, things that the AutoAI doesn't understand. >>So the data scientist can apply it in a way that is unique to their business, and that essentially becomes their IP. It's not generic AI for everybody; it's customized. And that's where data scientists complain: I don't have the time to do all this wrangling of data. >>Exactly. And it was built as a combination of IBM Research, some great assets at IBM Research, plus some Kaggle masters who work here at IBM, who really designed and optimized the algorithm selection and things like that. And at the keynote today, Wunderman Thompson was up there talking, and this is probably one of the most impactful use cases of AutoAI to date. My former team, the data science elite team, was engaged, but Wunderman Thompson had this problem where they had 17,000 features in their data sets, and what they wanted was a custom solution for each of their customers. So every time they got a customer, they had to have a data scientist sit down and figure out what the right features were and how to engineer them for that customer. It was an intractable problem for them. The person from Wunderman Thompson who presented today said he'd been trying to solve this problem for eight years. AutoAI plus the data science elite team solved it for them in two months, and after that two months it went right into production. So in this case AutoAI isn't doing the whole pipeline; it's helping them identify the features and engineer the features that are important, and giving them a head start on the model.
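The automation Dobrin describes, feature selection, model selection across candidates, and hyperparameter tuning, with an inspectable result handed back to the data scientist, can be approximated with off-the-shelf tools. This is a rough scikit-learn analogue, not IBM's AutoAI; the candidate models and the tiny search grid are illustrative assumptions.

```python
# Sketch of an AutoAI-style loop: shared feature selection, per-candidate
# hyperparameter tuning, and the winner kept as an ordinary, inspectable
# pipeline rather than a black box. Candidates and grids are invented.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"model__C": [0.1, 1.0, 10.0]}),
    "tree": (DecisionTreeClassifier(random_state=0), {"model__max_depth": [3, 5, None]}),
}

best_name, best_search = None, None
for name, (model, grid) in candidates.items():
    # each candidate gets the same feature-selection step, then its own tuning
    pipe = Pipeline([("select", SelectKBest(f_classif, k=10)), ("model", model)])
    search = GridSearchCV(pipe, grid, cv=3)
    search.fit(X_train, y_train)
    if best_search is None or search.best_score_ > best_search.best_score_:
        best_name, best_search = name, search

# the "kicked out" artifact is a plain pipeline a data scientist can tweak
print(best_name, best_search.best_params_)
```

A data scientist would then pick up `best_search.best_estimator_` and customize it for the business, which is the hand-off described above.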
What's the acquisition model for AutoAI? Is it a licensed software product? Is it SaaS? >>It's part of Cloud Pak for Data, and it's available on IBM Cloud. On IBM Cloud you can use it pay-per-use, so you get a license as part of Watson Studio on IBM Cloud. If you invest in Cloud Pak for Data, it can be a perpetual license or a committed-term license, which is essentially SaaS. >>So it's essentially a feature add-on of Cloud Pak for Data. >>It's part of Cloud Pak for Data. And you're >>saying it can be usage based. That's key. >>Consumption based. Cloud Pak for Data is all consumption based. >>So people want to use AI for competitive advantage. I said in my open that we're not marching to the cadence of Moore's Law in this industry anymore; it's a combination of data and then cloud for scale. So people want competitive advantage, and you've talked about some things folks are doing to gain it. But at the same time, we heard from Rob Thomas that there's only about 4 to 10% penetration for AI. What are the key blockers that you see, and how are you knocking them down? >>Well, I think there are a number of key blockers. One is access to data. Companies have tons of data, but being able to even know what data is there, being able to pull it all together, and being able to do it in a way that is compliant with regulation, because you can't do AI in a vacuum. You have to do it in the context of ever-increasing regulation, like GDPR and CCPA and all these other privacy regulations that are popping up. So that's really two: access to data, and regulation can be a blocker. The second, or the third, is really access to appropriate skills, which we talked a little bit about. How do you retrain or upskill the talent you have, and then how do you actually bring in new talent that can execute what you want?
Sometimes, in some companies, it's a lack of strategy with appropriate measurement. What is your AI strategy, and how are you going to measure success? You and I have talked about this on theCUBE before: you've got to measure your success in dollars and cents, cost savings, net new revenue. That's really all your CFO cares about. That's how you have to measure and monitor your success. >>Yes, and that last one is probably where most organizations start. Let's prioritize the use cases that give us the best bang for the buck, and the business guys probably get really excited and say, okay, let's go. But to truly operationalize that, you've got to worry about these other things, the compliance issues, and you've got to have the skill sets. Yeah, it's about scale. >>And sometimes the first thing you said is actually a mistake. Focusing on the use case with the most bang for the buck is not necessarily the best place to start, for a couple of reasons. One, you may not have the right data. It may not be available, and it may not be governed properly. Two, the business you're building it for may not be ready to consume it. They may not be bought in, or the processes may need to change so much that it's not going to get used. And you can build the best AI in the world; if it doesn't get used, it creates zero value. So for the first couple of projects, you really want to focus on the ones that can deliver the best value, not necessarily the most value, in the shortest amount of time, and ensure they get into production, because especially when you're starting off, if you don't show adoption, people are going to lose interest. >>What are you seeing in terms of experimentation now in the customer base? When you talk to buyers, and you look at the IT spending surveys,
people are concerned about tariffs, the trade war, the 2020 election; they're being a little bit cautious. But in the last two or three years there's been a lot of experimentation going on, and a big part of that is AI and machine learning. What are you seeing in terms of that experimentation turning into actual production projects that we can learn from, and maybe do some new experiments? >>Yeah, and I think it depends on how you're doing the experiments. There's kind of academic experimentation, where you have data science teams that work on cool stuff that may or may not have business value and may or may not be implemented. The business isn't really involved; the teams latch on, they do projects, and I think that's actually bad experimentation if you let it run your program. The good experimentation is when you have a strategy, you identify the use cases you want to go after, and you experiment by delivering with agile methodologies. You deliver value in two-week sprints, and you can start delivering value quickly. In the case of Wunderman Thompson again: eight weeks, four sprints, and they got value. That was an experiment, right? But because it was done with agile methodologies, using good coding practices and good design-up-front practices, they were able to take it and put it right into production. If you're doing experimentation where you have to rewrite your code at the end, it's a waste of time. >>So, to your earlier point, the moonshots can often be too risky, and if you blow it on a moonshot, it can set you back years. So you've got to be careful: pick your spots, pick ones that are maybe representative but lower risk.
Apply agile methodologies, get a quick return, learn, develop those skills, and then build up to the moonshot. >>Or you break that moonshot down into consumable pieces. The moonshot may take you two years to get to, but maybe there are subcomponents of it that you can deliver in three or four months, and you start delivering those, and you work up to the moonshot. >>I always like to ask about dogfooding, or as I like to call it, sipping your own champagne. What have you done internally? When we first met, on a snowy day in Boston at the Spark Summit years ago, you had just made a big career switch, and it's obviously working out for you. You were brought in, in part, to help IBM become data driven internally. How has that gone? What have you learned, and how are you taking that to customers? >>Yeah, so I was hired three years ago now, I can't believe it's been that long, to lead our internal transformation. Over the last couple of years I got, I don't want to say distracted, but there were really important business things I needed to focus on, like GDPR and helping our customers get up and running with data science, so I built the data science elite team. As of a couple of months ago, I'm back to being almost entirely focused on our internal transformation. And it's really about making sure that we use data and AI to make appropriate decisions. Now we have an app on our phones that leverages Cognos Analytics, where at any point Ginni Rometty or Rob Thomas or Arvind Krishna can pull up what we call EPM, enterprise performance management, and understand where the business is. What did we do in the third quarter, which just wrapped up? What's the pipeline for the fourth quarter? It's at your fingertips. We're also working on revamping our planning cycle.
So today, planning has been done in Excel. We're leveraging Planning Analytics, which is a great planning and scenario-planning tool that, with the click of a button, really lets you understand how your business can perform in the future and what you need to do to get it to perform. We're also looking across all of Cloud and Cognitive Software, which Data and AI sits in. Within each business unit of Cloud and Cognitive Software, the sales teams do a great job of cross-sell and upsell, but there's a huge opportunity in how we cross-sell and upsell across the five different businesses that live inside Cloud and Cognitive Software: Data and AI, Hybrid Cloud Integration, IBM Cloud, Cognitive Applications, and IBM Security. There's a lot of potential interplay that our customers have across those, and we can provide AI that helps the salespeople understand where they can create more value, excuse me, for our customers. >>It's interesting. This is the tenth year of doing theCUBE, and when we first started, it was sort of the beginning of the big data craze, and a lot of people said, okay, here's the disruption: crossing the chasm, innovator's dilemma, all the old stuff going away, all the new stuff coming in. But you mentioned Cognos on mobile, and this is the thing we learned: the key ingredients of data strategies comprise the existing systems. You don't throw those out. Those are the systems of record, the single version of the truth, if you will, that people trusted, going back to trust, with all this other stuff built up around them, which kind of created dissonance. And so it sounds like one of the initiatives you've been working on at IBM is really bringing in the new pieces and modernizing the existing ones, so that you've got consistent data sets that people can work with.
And one of the >>capabilities that really has enabled this transformation in the last six months, for us internally and for our clients, inside Cloud Pak for Data, is a capability called IBM data virtualization. We have all these independent sources of truth, so to speak, and then we have all these other data sources that may or may not be as trusted, and we can bring them together literally with the click of a button. You drop your data sources in, and the AI within data virtualization actually identifies keys across the different sources so you can link your data. You look at it, you check it, and it really enables you to do this at scale. All you need to do is point it at the data, here's the IP address of where the data lives, and it will bring it in and help you connect it. >>So you mentioned variances in data quality, and the consumer of the data has to have trust in that data. Can you use machine intelligence and AI to give you a data confidence meter, if you will? >>Yeah. So there are two things
>>So it's a wisdom of the crowd type of. >>It's a crowd sourcing combined with the I >>as that, in your experience at all, changed the dynamics of politics within organizations. In other words, I'm sure we've all been a lot of meetings where somebody puts foursome data. And if the most senior person in the room doesn't like the data, it doesn't like the implication he or she will attack the data source, and then the meeting's over and it might not necessarily be the best decision for the organization. So So I think it's maybe >>not the up, voting down voting that does that, but it's things like the E PM tool that I said we have here. You know there is a single source of truth for our finance data. It's on everyone's phone. Who needs access to it? Right? When you have a conversation about how the company or the division or the business unit is performing financially, it comes from E. P M. Whether it's in the Cognos app or whether it's in a dashboard, a separate dashboard and Cognos or is being fed into an aye aye, that we're building. This is the source of truth. Similarly, for product data, our individual products before me it comes from here's so the conversation at the senior senior meetings are no longer your data is different from my data. I don't believe it. You've eliminated that conversation. This is the data. This is the only data. Now you can have a conversation about what's really important >>in adult conversation. Okay, Now what are we going to do? It? It's >>not a bickering about my data versus your data. >>So what's next for you on? You know, you're you've been pulled in a lot of different places again. You started at IBM as an internal transformation change agent. You got pulled into a lot of customer situations because yeah, you know, you're doing so. Sales guys want to drag you along and help facilitate activity with clients. What's new? What's what's next for you. 
>>So really, you know, I've only been refocused on the internal transformation for a couple of months now. So it's really extending IBM's, our cloud and cognitive software, data and AI strategy and starting to quickly implement some of these products as projects. Like I just said, we're starting projects without even knowing what the prioritized list is. Intuitively, this one's important, so the team's going to start working on it. One of them is an AI project around the cross-sell, upsell across the portfolio that I mentioned, and the other is what we just got done talking about: in the senior leadership meeting for cloud and cognitive software, how do we all work from a Cognos dashboard instead of data that's been exported and put into Excel? The challenge with that is not that people don't trust the data; it's that if there's a question, you can't drill down. So if there's a question about an Excel document or a PowerPoint that's up there, you'll get an answer back next meeting in a month, or in two weeks we'll have an email conversation about it. If it's presented in a real live dashboard, you can drill down and actually answer questions in real time. The value of that is immense, because now you as a leadership team can make a decision at that point and decide what direction you're going to go, based on data. >>I said that was the last one, but I have one more question. You're a CDO, but you're a polymath. So my question is, what should people look for in a chief data officer? What are the characteristics and the attributes, given your experience? >>That's kind of a loaded question, because there is no single good job description for a chief data officer. I think there is a good, solid set of skill sets defined for a chief data officer, and actually, as part of the Chief Data Officer Summits that you guys attend.
We were having sessions with the chief data officers, kind of defining a curriculum for chief data officers with our clients so that we can help build the chief data officer of the future. But if you look at qualities, a chief data officer is also a chief disruption officer. So it needs to be someone who is really good at driving change, really good at disrupting processes, and good at getting people excited about it. Change is hard; people don't like change. So you need someone who can get people excited about change. That's one thing. And depending on what industry you're in, if you're in financial services or a heavily regulated industry, you want someone that understands governance. That's kind of what Gartner and other analysts call a defensive CDO, very governance focused. And then you also have some CDOs, and I fit into this bucket, which is more of an offensive CDO: how do you create value from data? How do you save money? How do you create net new revenue? How do you create new business models leveraging data and AI? And now there's kind of a third type of CDO emerging, which is the CDO not as a cost center but the CDO as a P&L: how do you generate revenue for the business directly from your CDO office? >>I like that framework. >>I can't take credit for it. That's Gartner's. >>It's governance, what they call defensive, and offensive. And the first time I met Inderpal, he said, "Look, you start with how data affects the monetization of my organization," and that means making money or saving money. Seth, thanks so much for coming on. theCUBE is great to see you again. >>Thanks for having me. >>All right, keep it right there, everybody. We'll be back at the IBM Data and AI Forum from Miami. You're watching theCUBE.

Published Date : Oct 22 2019

SUMMARY :

At the IBM Data and AI Forum in Miami, brought to you by IBM, theCUBE talks with IBM chief data officer Seth Dobrin. The conversation covers operationalizing and industrializing AI, tooling in Watson Studio on IBM Cloud that surfaces the important features in your data, data virtualization and crowd-sourced data quality scoring in Cloud Pak for Data, establishing a single source of truth for finance and product data, and the defensive, offensive, and P&L styles of the chief data officer role.
The Cube is great to see you Thanks for having me We'll be back at the IBM data in a I form from Miami.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Seth | PERSON | 0.99+
Arvin Krishna | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Daubert | PERSON | 0.99+
Boston | LOCATION | 0.99+
Rob Thomas | PERSON | 0.99+
Dave | PERSON | 0.99+
Ginny Rometty | PERSON | 0.99+
Seth Dobrin | PERSON | 0.99+
IBM Research | ORGANIZATION | 0.99+
two years | QUANTITY | 0.99+
Miami | LOCATION | 0.99+
Excel | TITLE | 0.99+
eight years | QUANTITY | 0.99+
88 weeks | QUANTITY | 0.99+
Rob Thomas | PERSON | 0.99+
Gardner | PERSON | 0.99+
Sarah | PERSON | 0.99+
Miami, Florida | LOCATION | 0.99+
34 months | QUANTITY | 0.99+
17,000 features | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
10th year | QUANTITY | 0.99+
two weeks | QUANTITY | 0.99+
1700 people | QUANTITY | 0.99+
Gartner | ORGANIZATION | 0.99+
Cognos | TITLE | 0.99+
three years ago | DATE | 0.99+
two months | QUANTITY | 0.99+
first time | QUANTITY | 0.98+
one | QUANTITY | 0.98+
today | DATE | 0.98+
each business | QUANTITY | 0.97+
first couple | QUANTITY | 0.97+
Interpol | ORGANIZATION | 0.96+
about 4 | QUANTITY | 0.96+
Thompson | PERSON | 0.96+
third quarter | DATE | 0.96+
five different businesses | QUANTITY | 0.95+
Two data | QUANTITY | 0.95+
Intercontinental Hotel | ORGANIZATION | 0.94+
IBM Data | ORGANIZATION | 0.94+
first | QUANTITY | 0.93+
single job | QUANTITY | 0.93+
first thing | QUANTITY | 0.92+
Cognos | ORGANIZATION | 0.91+
last couple of years | DATE | 0.91+
single source | QUANTITY | 0.89+
few months ago | DATE | 0.89+
one more questions | QUANTITY | 0.89+
couple months ago | DATE | 0.88+
Cloudpack | TITLE | 0.87+
single version | QUANTITY | 0.87+
Cube | COMMERCIAL_ITEM | 0.86+
80% of | QUANTITY | 0.85+
last six months | DATE | 0.84+
Claude Incognito | ORGANIZATION | 0.84+
agile | TITLE | 0.84+
10% | QUANTITY | 0.84+
years | DATE | 0.84+
Moore | ORGANIZATION | 0.82+
zero | QUANTITY | 0.81+
three years | QUANTITY | 0.8+
2020 election | EVENT | 0.8+
E PM | TITLE | 0.79+
four sprints | QUANTITY | 0.79+
Watson | ORGANIZATION | 0.77+
2nd 1 | QUANTITY | 0.75+

Breaking Analysis: Q4 Spending Outlook - 10/18/19


 

>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. (dramatic music) >> Hi, everyone, welcome to this week's Breaking Analysis. It's Friday, October 18th, and this is theCUBE Insights, powered by ETR. Today, ETR had its conference call, its webcast. It was in a quiet period, and it dropped this tome. I have spent the last several hours going through this dataset. It's just unbelievable. It's the fresh data from the October survey, and I'm going to share just some highlights with you. I wish I had a couple hours to go through all this stuff, but I'm going to just pull out some of the key points. Spending is flattening. We've talked about this in previous discussions with you. But, things are still healthy. We're just reverting back to pre 2018 levels and, obviously, keeping a very close eye on the spending data and the sectors. There is some uncertainty heading into Q four. It's not only tariffs, you know. 2020's an election year, so that causes some uncertainty and some concerns for people. But, the big theme from ETR is there's less experimentation going on. The last several years have been ones where we're pushing out digital initiatives, and there was a lot of experimentation, a lot of redundancy. So, I'm going to talk more about that. I'm going to focus on a couple of sectors. I'm going to share with you there's the overall sector analysis. Then, I'm going to focus in on Microsoft and AWS and talk a little bit about the cloud. Then, I'm going to give some other highlights and, particularly, around enterprise software. The other thing I'll say is that the folks from ETR are going to be in the Bay Area on October 28th through the 30th, and I would encourage you to spend some time with them. If you want to meet them, just, you know, contact me @dvellante on Twitter or David.Vellante@siliconangle.com. I have no dog in this fight. I get no money from these guys. 
We're just partners and friends, but I love their data. And, they've given me access to it, and it's great because I can share it with you, our community. So, let's get right into it. Alex, if you just bring up the first slide, what I want to show is the ETR pulse check survey demographics, so every quarter, ETR does these surveys. They've got a dataset comprising 4500 members, panelists if you will, that they survey each quarter. In this survey, 1336 responded, representing 457 billion in spending power, and you can see from this slide, you know, it's got a nice mix of large companies. Very heavily weighted toward North America, but you're talking about, you know, 12% AMIA out of 1300. Certainly substantial and statistically significant to get some trends overseas. You can see across all industries. And then, job titles, a lot of C level executives, VPs, architects, people who know what the spending climate looks like, so I really like the mix of data. Let me make some overall comments, and, Alex, the next slide sort of gives some snapshot here. The big theme is that there's a compression in tech spending, as they say. It's very tough to compare to compare to 2018, which was just a phenomenal year. I mentioned the tariffs. It was an election year. Election years bring uncertainty. Uncertainty brings conservatism, so that's something, obviously, that's weighing, I think, on buyers' minds. And, I'll give you some anecdotal comments in a moment that will underscore that. There's less redundancy in spending. This has been a theme of ETR's for quite some time now. The last few years have been a try everything type of mode. Digital initiatives were launched, let's say, starting in 2016. ETR called this, I love this, Tom DelVecchio, the CEO of ETR, called it a giant IT bake off where you were looking at, okay, cloud versus on prem or SaaS versus conventional models, new databases versus legacy databases, legacy storage versus sort of modern storage stacks. 
So, you had this big bake off going on. And, what's happening now is you're seeing less experimentation so less adoption of new technologies, and replacements are on the rise. So, people are making their bets. They're saying, "Okay, these technologies "are the ones we're going to bet on, "these emerging disruptive technologies." So, they're narrowing their scope of emerging technologies, and they're saying, "Okay, now, "we're going to replace the legacy stuff." So, you're seeing these new stacks emerging. I mentioned some others before, but things like cloud native versus legacy waterfall approaches. And, these new stacks are hitting both legacy and disruptive companies for the reasons that I mentioned before because we're replacing legacy, but at the same time, we're narrowing the scope of the new stuff. This is not necessarily good for the disruptors. Downturns, sometimes, are good for legacy because they're perceived as a safer bet. So, what I want to do, right now, is share with you some of the anecdotals from the survey, and I'll just, you know, call out some things. By the way, the first thing I would note is, you know, ETR did sort of an analysis of frequency of terms. Cloud, cost, replacing, change, moving, consolidation, migration, and contract were the big ones that stood out. But, let me just call a couple of the anecdotals. When they do these surveys, they'll ask open ended questions, and so these kind of give you a good idea as to how people are thinking. "We're projecting a hold based on impacts from tariffs. "Situation could change if tariff relief is reached. "We're really concerned about EU." Another one, "Shift to SaaS is accelerating "and driving TCO down. "Investing in 2019, we're implementing "and retiring old technologies in 2020. "There's an active effort to consolidate "the number of security vendor solutions. "We're doing more Microsoft." 
Let's see, "We have moved "to a completely outsourced infrastructure model, "so no longer purchasing storage," interesting. "In general, we're trying to reduce spending "based on current market conditions." So, people, again, are concerned. Storage, as a category, is way down. "We're moving from Teradata to AWS and a data lake." I'll make some comments, as well, later on about EDW and Snowflake in particular, who, you know, remains very healthy. "We're moving our data to G Suite and AWS. "We're migrating our SaaS offering to elastic. "We're sunsetting Cognos," which, of course, is owned by IBM. "Talend, we decided to drop after evaluating. "Tableau, we've decided to not integrate anymore," even though Tableau is, actually, looking very strong subsequent to the sales force acquisition. So, there's some comments there that people, again, are replacing and they're narrowing some of their focus on spending. All right, Alex, bring up the next slide. I want to share with you the sector momentum. So, we've talked about this methodology of net score. Every time ETR does one of these pulse surveys, they ask, "Are you spending more or are you spending less? "Or, are you spending the same?" And then, essentially, they subtract the spending less from the spending more, and the spending more included new adoptions. The spending less includes replacements. And, that comes out with a net score, and that net score is an indicator of momentum. And, what you can see here is, the momentum I've highlighted in red, is container orchestration, the container platforms, machine learning, AI, automation, big theme. We were just at the UiPath conference, huge theme on automation. And, of course, robotic process automation, RPA. Cloud computing remains very strong. This dotted red line that I put in there, that's at the, you know, 30%, 35% level. You kind of want to be above that line to really show momentum. 
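The net score methodology described above is simple arithmetic over the survey responses: the percentage of respondents adopting or spending more, minus the percentage replacing or spending less. A minimal sketch, with the response labels as my paraphrase of ETR's buckets rather than their exact survey wording:

```python
from collections import Counter

def net_score(responses):
    """Net score = %(adopting + spending more) - %(replacing + spending less).

    `responses` is a list of strings, one per survey respondent; respondents
    answering "flat" (spending the same) contribute to neither side.
    """
    POSITIVE = {"adopting", "spending more"}
    NEGATIVE = {"replacing", "spending less"}
    counts = Counter(responses)
    n = sum(counts.values())
    pos = sum(counts[r] for r in POSITIVE)
    neg = sum(counts[r] for r in NEGATIVE)
    return 100.0 * (pos - neg) / n

# 100 hypothetical respondents: 45% positive, 40% flat, 15% negative
sample = (["adopting"] * 10 + ["spending more"] * 35
          + ["flat"] * 40 + ["spending less"] * 10 + ["replacing"] * 5)
print(round(net_score(sample), 1))  # (45 - 15) -> 30.0
```

On this scale the 30-35% dotted line in the sector chart marks where the share of spenders meaningfully outweighs the share of cutters; scores in the teens and low 20s are the "red zone" discussed next.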
Anything below that line is either holding serve, holding steady, but well below that line, when you start getting into the low 20s and the teens, is a red zone. That's a danger zone. You could see data warehouse software is kind of on that cusp, and I'm not, you know, a huge fan of the sector in general, but I love Snowflake and what they're doing and the share gains that are going on there. So, when you're below that red line, it's a game of share gain. Storage, same thing we've talked about. The overall storage sector is down. It's being pressured by cloud, as that anecdotal suggested. It's also being pressured by the fact that so much flash has been injected into the data center over the last couple of years. That's given headroom for buyers. They don't need as much storage, so overall, the sector is soft. But then, you see companies, like Pure, continuing to gain share, so they're actually quite strong in this quarter's survey. So, you could see some various sectors here. IT consulting and outsourced IT not looking strong, data center consolidation. By the way, you saw, in IBM's recent earnings, Jim Kavanaugh pointed to their outsourcing business as a real drag, you know. Some of these other sectors, you could see, actually, PC laptop, this is obviously a big impact for Dell and HP, you know, kind of holding steady. Actually, better than storage, so, you know, for that large of a segment, not necessarily such a bad thing. Okay, now, what I want to do, I want to shift focus and make some comments on Microsoft, specifically, and AWS. So, here's just some high level points on this slide on Microsoft. The N out of that total was 1200, so a very large proportion of the survey is weighted toward Microsoft. So, a good observation space for Microsoft. Extremely positive spending outlook for this company. There's a lot of ways to get to Microsoft. You want cloud, there's Azure, you know. Visualization, you got Power BI. Collaboration, there's Teams.
Of course, email and calendaring is Office 365. You need hiring data? Well, we just bought LinkedIn. CRM, ERP, there's Microsoft Dynamics. So, Microsoft is a lot of roads, to spend with Microsoft. Windows is not the future of Microsoft. Satya Nadella and company have done a great job of sort of getting out of that dogma and really expanding their TAM. You're seeing acceleration from Microsoft across all key sectors, cloud, apps, containers, MI, or machine intelligence, AI and ML, analytics, infrastructure software, data warehousing, servers, GitHub is strong, collaboration, as I mentioned. So, really, across the board, this portfolio of offerings powered by the scale of Azure is very strong. Microsoft has great velocity in the cloud, and it's a key bellwether. Now, the next slide, what it does is compares the cloud computing big three in the US, Azure, AWS, and GCP, Google Cloud Platform. This is, again, net score. This is infrastructure as a service, and so you can see here the yellow is Microsoft, that darker line is AWS, and GCP is that blue line down below. All three are actually showing great strength in the spending data. Azure has more momentum than AWS, so it's growing faster. We've seen this for a while, but I want to make a point here that didn't come up on the ETR call. But, AWS is probably two and a half to three times larger in infrastructure as a service than is Microsoft Azure, so remember, AWS has a $35 billion at least run rate business in infrastructure as a service. And, as I say, it's two and a half to three times, at least, larger than Microsoft, which is probably a run rate of, let's call it, 10 to 12 billion, okay. So, it's quite amazing that AWS is holding at that 66 to now dropping to 63% net score given that it's so large. And, of course, way behind is GCP, much smaller share. In fact, I think, probably, Alibaba has surpassed GCP in terms of overall market share. So, at any rate, you could see all three, strong momentum. 
The cloud continues its march. I'll make some comments on that a little bit later. But, Azure has really strong momentum. Let's talk, next slide if you will, Alex, about AWS. Smaller sample size, 731 out of the total, which is not surprising, right. Microsoft's been around a lot longer and plays in a lot more sectors. ETR has a positive to neutral outlook on AWS. Now, you have to be careful here because, remember, what ETR is doing is they're looking at the spending momentum and comparing that to consensus estimates, okay. So, ETR's business is helping, largely, Wall Street, you know, buy side analysts make bets, and so it's not only about how much money they make or what kind of momentum they have in aggregate. It's about how they're doing relative to expectation, something that I explained on the last Breaking Analysis. Spending on AWS continues to be very robust. They've got that flywheel effect. Make no mistake that this positive to neutral outlook is relative to expectations. Relative to overall market, AWS is, you know, kicking butt. Cloud, analytics, big data, data warehousing, containers, machine intelligence, even virtualization. AWS is growing and gaining share. My view, AWS will continue to outperform the marketplace for quite some time now, and it's gaining share from legacy players. Who's it hurting? You're seeing the companies within AWS's sort of sphere that are getting impacted by AWS. Oracle, IBM, SAP, you know, cloud Arrow, which we mentioned last time is at all time lows, Teradata. These accounts, inside of AWS respondents, are losing share. Now, who's gaining share? Snowflake is on a tear. Mongo is very strong. Microsoft, interestingly, remains strong in AWS. In fact, AWS runs a lot of Microsoft workloads. That's, you know, fairly well known. But, again, Snowflake, very strong inside of AWS accounts. There's no indication that, despite AWS's emphasis on database and, of course, data warehouse, that Snowflake's being impacted by that. 
The reverse, Snowflake is taking advantage of cloud momentum. The only real negative you can say about AWS is that Microsoft is accelerating faster than AWS, so that might upset Andy Jassy. But, he'll point out, I guess, what I pointed out before, that they're much larger. Take a look at AWS on this next slide. The net score across all AWS sectors, the ones I mentioned. And, this is the growth in Fortune 500, so you can see, very steady in the large accounts. That's that blue line, you know, dipped in the October 18 survey, but look at how strong it is, holding 67% in Fortune 500 accounts. And then, you can see, the yellow line is the market share. AWS continues to gain share in those large accounts when you weight that out in terms of spending. That's why I say AWS is going to continue to do very well in this overall market. So, just some, you know, comments on cloud. As I said, it continues to march, it continues to really be the watchword, the fundamental operating model. Microsoft, very strong, expanding its TAM everywhere, I mean, affecting, potentially, Slack, Box, Dropbox, New Relic, Splunk, IBM, and Security, Elastic. So, Microsoft, very strong here. AWS continues to grow, not as strong as '18, but much stronger than its peers, very well positioned in database and artificial intelligence. And so, not a lot of softness in AWS. I mentioned on one of the previous Breaking Analysis, Kubernetes', actually, container's a little soft, so we always keep an eye on that one. And, Google, again, struggling to make gains in cloud. One of the comments I made before is that the long term surveys for Google looked positive, but that's not showing up yet in the near term market shares. All right, Alex, if you want to bring up the next slide, I want to make some quick comments before I close, on enterprise software. There was a big workday scare this week. 
They kind of guided that their core HR business was not going to be as robust as it had been previously, so this pulled back all the SaaS vendors. And, you know, the stock got crushed, Salesforce got hit, ServiceNow got hit, Splunk got hit. But, I tell you, you look at the data in this massive dataset, ServiceNow remains strong, Salesforce looks, very slight deceleration, but very sound, especially in the Fortune 100 in that GPP, the giant public and private companies that I talked about on an earlier call. That's one of the best indicators of strength. Tableau, actually, very strong, especially in large accounts, so Salesforce seems to be doing a good job of integrating there. Splunk, (mumbles) coming up shortly, I think this month. Securities, the category is very strong, lifting all ships. Splunk looks really good. Despite some of the possible competition from Microsoft, there's no indication that Splunk is slowing. There's some anecdotal issues about pricing that I talked about before, but I think Splunk is really dealing with those. UiPath's another company. We were just out there this past week at the UiPath Forward conference. UiPath, in this dataset, when you take out some of the smaller respondents, smaller number of respondents, UiPath has one of the highest net scores in the entire sample. UiPath is on a tear. I talked to dozens of customers this week. Very strong momentum, and then moving into, got new areas, and I'll be focusing on the RPA sector a little later on. But, automation, in general, really has some tailwinds in the marketplace. And, you know, the other comment I'll make about RPA is a downturn actually could help RPA vendors, who, by the way, all the RPA vendors look strong. Automation Anywhere, UiPath, I mentioned, Blue Prism, you know, even some of the legacy companies like Pega look, actually, very strong. 
A downturn in the economy could help some of the RPA vendors because companies would be looking to do more with less, and automation, you know, could be something that they're looking toward. Snowflake, I mentioned, again, they continue their tear. Very strong share expansion. Slightly lower than previous quarters in terms of the spending momentum, but the previous quarters were off the charts. So, also very strong in large companies. All right, so let me wrap. So, buyers are planning for a slowdown. I mean, there's no doubt about that. It's something that we have to pay very close attention to, and I think the market expects that. And, I think, you know, it's okay. There's less throwing spaghetti against the wall trying everything, and that's having a moderating effect on spending, as is the reduced redundancy. People were running systems in parallel. As they say, they're placing bets now on both disruptive tech and on legacy tech, so they're replacing both in some cases. Or, they're not investing in some of the disruptive stuff because they're narrowing their investments in disruptive technologies, and they're also replacing some legacy. We're clearly seeing new adoptions down, according to ETR, and replacements up, and that's going to affect both legacy and disruptive vendors. So, caution is the watchword, but, overall, the market remains healthy. Okay, so thanks for watching. This is Dave Vellante for CUBE Insights, powered by ETR. Thanks for watching this Breaking Analysis. We'll see you next time. (dramatic music)

Published Date : Oct 18 2019

SUMMARY :

From the SiliconANGLE Media office in Boston, Dave Vellante breaks down ETR's October survey data: tech spending is flattening as buyers experiment less and replace legacy technology, Microsoft and AWS continue to show strong cloud momentum, and automation and RPA vendors have tailwinds heading into Q4.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Tom DelVecchio | PERSON | 0.99+
10 | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Andy Jassy | PERSON | 0.99+
Jim Kavanaugh | PERSON | 0.99+
HP | ORGANIZATION | 0.99+
2016 | DATE | 0.99+
2019 | DATE | 0.99+
October 28th | DATE | 0.99+
Dell | ORGANIZATION | 0.99+
2020 | DATE | 0.99+
Google | ORGANIZATION | 0.99+
October | DATE | 0.99+
$35 billion | QUANTITY | 0.99+
Oracle | ORGANIZATION | 0.99+
UiPath | ORGANIZATION | 0.99+
63% | QUANTITY | 0.99+
ETR | ORGANIZATION | 0.99+
Alibaba | ORGANIZATION | 0.99+
4500 members | QUANTITY | 0.99+
October 18 | DATE | 0.99+
2018 | DATE | 0.99+
Pega | ORGANIZATION | 0.99+
New Relic | ORGANIZATION | 0.99+
Alex | PERSON | 0.99+
10/18/19 | DATE | 0.99+
Dropbox | ORGANIZATION | 0.99+
457 billion | QUANTITY | 0.99+
30% | QUANTITY | 0.99+
SAP | ORGANIZATION | 0.99+
LinkedIn | ORGANIZATION | 0.99+
67% | QUANTITY | 0.99+
35% | QUANTITY | 0.99+
1200 | QUANTITY | 0.99+
Satya Nadella | PERSON | 0.99+
66 | QUANTITY | 0.99+
Splunk | ORGANIZATION | 0.99+

Survey Data Shows Momentum for IBM Red Hat But Questions Remain


 

>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE! (upbeat electronic music) Now, here's your host, Dave Vellante. >> Hi, everybody, this is Dave Vellante, and I want to share with you some recent survey data that talks to the IBM acquisition of Red Hat, which closed today. It's always really valuable to go out, talk to practitioners, see what they're doing, and it's a hard thing to do. It's very expensive to get this type of survey data. A lot of times, it's very much out of date. You might remember. Some of you might remember a company called the InfoPro. Its founder and CEO was Ken Male, and he raised some money from Gideon Gartner, and he had this awesome survey panel. Well, somehow it failed. Well, friends of mine at ETR, Enterprise Technology Research, have basically created a modern version of the InfoPro. It's the InfoPro on steroids with a modern interface and data science behind it. They've now been at this for 10 years. They built a panel of 4,500 users, practitioners that they can go to, a lot of C level folks, a lot of VP level and then some doers down at the engineering level, and they go out and periodically survey these folks, and one of the surveys they did back in October was what do you think of the IBM-Red Hat acquisition? And then they've periodically gone out and talked to customers of both Red Hat and IBM or both to get a sense of the sentiment. So given that the acquisition closed today, we wanted to share some of that data with you, and our friends at ETR shared with us some of their drill down data with us, and we're going to share it with you. So first of all, I want to summarize something that they said. Back in October, they said, "We view this acquisition as less of an attempt "by IBM to climb into the cloud game, cloud relevance, "but rather a strategic opportunity "to reboot IBM's early 1990s IT services business strategy." I couldn't agree with that more. 
I've said all along this is a services play connecting OpenShift from Red Hat into what Ginni Rometty talks about as the 80% of the install base that is still on prem with the workloads at the backend of mission critical systems that need to be modernized. That's IBM's opportunity. That's why this is a front end loaded cashflow deal 'cause IBM can immediately start doing business through its services organization and generate cash. They went on to say, ETR said, "Here, IBM could position itself as the de facto IT services partner for Fortune 100 to Global 2000 organizations and their digital transformations. Therefore, in theory, this could reinvigorate the global services business for IBM, and their overlapping customer bases could allow IBM to recapture and accelerate a great deal of service revenues that they have lost over the past few years." Again, I couldn't agree more. It's less about a cloud play. It is definitely about a multi-cloud play, which is how IBM's positioning this, but services de-risks this entire acquisition in my opinion even though it's very large, 34 billion. Okay, let me show you some data. So pull up this slide. So what ETR does is they'll go out. So this is a survey, taken right after the acquisition, of about 132 Global 2000 practitioners across a bunch of different industries, energy, utilities, financial services, government, healthcare, IT, telco, retail/consumer, so a nice cross section of industries, largely in North America but with a healthy cross section of EMEA and APAC. And again, these are large enterprises. So what this slide shows is conditioned responses, which I love, conditioned responses. It sort of forces people to answer which of the following best describes. But this says, "Given IBM's intent to acquire Red Hat, do you believe your organization will be more likely to use this new combination or less likely in your digital transformation?"
You can see here on the left hand side, the green, 23% positive, on the right hand side, 13% negative. So, the data doesn't necessarily support ETR's original conclusions and my belief that this all about services momentum because most IT people are going to wait and see. So you can see the fat middle there is 64%. Basically you're saying, "Yeah, we're going to wait and see. "This really doesn't change anything." But nonetheless, you see a meaningfully more positive sentiment than negative sentiment. The bottom half of this slide shows, the question is, "Do you believe that this acquisition "makes or will make IBM a legitimate competitor "in the cloud wars between AWS and Microsoft Azure?" You can see on the left hand side, it says 45% positive. Very few say, all the way on the left hand side, a very legitimate player in the cloud on par with AWS and Azure. I don't believe that's the case. But a majority said, "IBM is surely better off "with Red Hat than without Red Hat in the context of cloud." Again, I would agree with that. While I think this is largely a services play, it's also, as Stu Miniman pointed out in an earlier video with me, a cloud play. And you can see it's still 38% is negative on the right hand side. 15% absolutely not, IBM is far behind AWS and Azure in cloud. I would tend to agree with that, but IBM is different. They're trying to bring together its entire software portfolio so it has a competitive approach. It's not trying to take Azure and AWS head on. So you see 38% negative, 45% positive. Now, what the survey didn't do is really didn't talk to multi-cloud. This, to me, puts IBM at the forefront of multi-cloud, right in there with VMware. You got IBM-Red Hat, Google with Anthos, Cisco coming at it from a network perspective and, of course, Microsoft leveraging its large estate of software. So, maybe next time we can poke at the multi-cloud. Now, that survey was done of about over 150, about 157 in the Global 2000. Sorry, I apologize. 
That was 137. The next chart that I'm going to show you is a sentiment chart that took a pulse periodically of 157 IT practitioners, C level executives, VPs and IT practitioners. And what this chart shows essentially is the spending intentions for Red Hat over time. Now, the green bars are really about the adoption rates, and you can see they fluctuate, and it's kind of the percentage on the left hand side and time is on the horizontal axis. The red is the replacement. We're going to replace. We're not going to buy. We're going to replace. In the middle is that fat middle, we're going to stay flat. So the yellow line is essentially what ETR calls market share. It's really an indication of mind share in my opinion. And then the blue line is spending intentions net score. So what does that mean? What that means is they basically set aside the gray, which is staying the same, subtract out the red, which is we're doing less, and net that against the green, the we're going to do more. So what does this data show? Let's focus on the blue line. So you can see, you know, slightly declining, and then pretty significantly declining last summer, maybe that's 'cause people spend less in the summer, and then really dropping coming into the announcement of the acquisition in October of 2018, when IBM announced the $34 billion acquisition of Red Hat. Look at the spike post announcement. The sentiment went way up. You have a meaningful jump. Now, you see a little dip in the April survey, and again, that might've been just an attenuation of the enthusiasm. Now, July is going on right now, so that's why it's phased out, but we'll come back and check that data later. So, and then you can see this sort of similar trend with what they call market share, which, to me, is, again, really mind share and kind of sentiment. You can see the significant uptick in momentum coming out of the announcement. So people are generally pretty enthusiastic.
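As a quick illustration of that net-score arithmetic, here is a minimal sketch in Python. The percentages are made up for illustration; this is just the calculation as described above, not ETR's actual data or full methodology.

```python
def net_score(more_pct, flat_pct, less_pct):
    # Net score as described above: the "spending more" share minus the
    # "spending less" share; the flat middle is set aside and nets out.
    return more_pct - less_pct

# Hypothetical survey split: 45% spending more, 40% flat, 15% spending less.
print(net_score(45, 40, 15))  # prints 30
```

So a survey that splits 45/40/15 yields a net score of 30, and the fat middle can grow or shrink without moving the score at all.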
Again, remember, these are customers of IBM, customers of Red Hat and customers of both. Now, let's see what the practitioners said. Let's go to some of the open endeds. What I love about ETR is they don't just do the hardcore data, they actually ask people open ended questions. So let's put this slide up and share with you some of the drill down statements that I thought were quite relevant. The first one is right on. "Assuming IBM does not try to increase subscription costs "for RHEL," Red Hat Enterprise Linux, "then its organizational issues over sales "and support should go away. "This should fix an issue where enterprises "were moving away from RHEL to lower cost alternatives "with significant movement to other vendors. "This plus IBM's purchase of SoftLayer and deployment "of CloudFoundry will make it harder "for Fortune 1000 companies to move away from IBM." So a lot of implied things in there. The first thing I want to mention is IBM has a nasty habit, when it buys companies, particularly software companies, of raising prices. You certainly saw this with SPSS. You saw this with other smaller acquisitions like Ustream. Cognos customers complained about that. IBM buys software companies with large install bases. It's got a lock-in aspect. It'll raise prices. It works because financially it's clearly worked for IBM, but it sometimes ticks off customers. So IBM has said it's going to keep Red Hat separate. Let's see what it does from a pricing standpoint. The next comment here is kind of interesting. "IBM has been trying hard to "transition to cloud-service model. "However, its transition has not been successful "even in the private-cloud domain." So basically these guys are saying something that I've just said, which is that IBM's cloud strategy essentially failed to meet its expectations. That's why it has to go out and spend $34 billion on Red Hat.
While it's certainly transformed IBM in some respects, IBM's still largely a services company, not as competitive in cloud as it would've liked. So this guy says, "let alone in this fiercely competitive "public cloud domain." They're not number one. "One of the reasons, probably the most important one, "is IBM itself does not have a cloudOS product. "So, acquiring Red Hat will give IBM "some competitive advantage going forward." Interesting comments. Let's take a look at some of the other ones here. I think this is right on, too. "I don't think IBM's goal is to challenge AWS "or Azure directly." 100% agree. That's why they got rid of the low end intel business, because they're not trying to be in the commodity businesses. They cannot compete with AWS and Azure in terms of the cost structure of cloud infrastructure. No way. "It's more to go after hybrid multi-cloud." Ginni Rometty said today at the announcement, "We're the only hybrid multi-cloud, opensource vendor out there." Now, the third piece of that, open source, I think is less important than competing in hybrid and multi-cloud. Clearly Red Hat gives IBM a better position to do this with CoreOS, CentOS. And so is it worth 34 billion? This individual thinks it is. It's a vice president of a financial insurance organization, again, IBM's strong house. So you can hear some of the other comments here. "For customers doing significant business "with IBM Global Services teams." Again, outsourcing, it's a 10-plus billion dollar opportunity for IBM to monetize over the next five years, in my opinion. "This acquisition could help IBM "drive some of those customers "toward a multi-cloud strategy "that also includes IBM's cloud."
Yes, it's very much a play that will integrate services, Red Hat, Linux, OpenShift, and of course, IBM's cloud. Sprinkle in a little Watson, throw in some hardware, IBM has a captive channel, so the storage guys and the server guys can sell their hardware in there if the customer doesn't care. So it's a big integrated services play. "Positioning Red Hat, and empowering them "across legacy IBM silos, will determine if this works." Again, couldn't agree more. These are very insightful comments. This is largely a services and an integration play. Hybrid cloud, multi-cloud is complex. IBM loves complexity. IBM's services organization is number one in the industry. Red Hat gives it an ingredient that it didn't have before other than as a partner. IBM now owns that intellectual property and can really go hard and lean into that services opportunity. Okay, so thanks to our friends at Enterprise Technology Research for sharing that data, and thank you for watching theCUBE. This is Dave Vellante signing off for now. Talk to you soon. (upbeat electronic music)

Published Date : Jul 9 2019



Breaking Analysis: IBM Completes $34B Red Hat Acquisition


 

>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante.

>> Hi everybody, Dave Vellante here with Stu Miniman. We have some breaking analysis. We're going to break down the acquisition of Red Hat by IBM. It was announced today that it closed. Stu, it was originally announced in October, a $34 billion acquisition, so not a surprise. Surprise that it closed a little bit earlier than people thought. People were thinking, you know, well into the second half. It closed in July. They got through all the issues in Europe. What does this mean, in your view, to the industry?

>> Yeah, so Dave, we did a lot of analysis when the deal was announced. Absolutely, the cloud, and the ripples of change that are happening because of cloud, are the impetus for this. And, you know, the question we've been having for years, Dave, is how many companies can stay kind of independent in, you know, their swim lane, doing what they're doing, or are we going to see more massive consolidations? We're not that far off of the $67 billion acquisition of Dell buying EMC to go heavily into the enterprise market, and of course there were cloud implications in what happened there. And, you know, we're watching the growth of cloud, what's happening in the developer world. You know, we've watched Red Hat for a long time, and, you know, Red Hat has a nice position in the world, and it carved itself out a nice role in what has been emerging as hybrid and multi-cloud, and in my opinion that's, you know, the number one reason why Arvind and the IBM team, you know, went to take that 20-year partnership and turn it into, you know, now just part of the IBM portfolio. Arvind Krishna, executive at IBM, a longtime player there.

>> So the deal is, so you talked about Dell's acquisition, we've talked a lot about the VMware model, keeping the company separate. And of course Red Hat is not going to be a separately traded public company. It is going to be a distinct unit inside of IBM's cloud and
cognitive software group, as I understand it. Is that right? So the question is, will it be reported separately, or is it going to be, oh, we're going to throw everything into our cloud number?

>> Yeah, so Dave, this is where all of us that have watched and known IBM, you know, for our entire careers, because they've been around over a hundred years, ask what's going to happen. So from a reporting structure, Jim Whitehurst reports to Ginni. From a Wall Street standpoint, it sounds like it's going to be just thrown into the cloud piece. You know, Dave, isn't it the standard practice today that you throw lots of stuff in there so we can't figure out what your cloud business really is? I mean, let's look at Oracle, or even Microsoft and what they had. You know, Amazon's probably the only one that clearly differentiates, you know, this is revenue that we all understand is cloud, and you can, you know, touch and feel it. So sure, IBM, you know, you've got all of the piece that used to be SoftLayer, it's now the IBM cloud piece, there are lots of software pieces in that mix. The cloud and cognitive is a big umbrella, and, you know, Red Hat adds a few billion dollars' worth of revenue into that stream.

>> So IBM's assumptions here. Ginni talks a lot about chapter two. Chapter one was a lot of front-end systems; that's sort of where the growth was, everybody thought everything was going into the cloud. That's really not the way it is. 80% of the workloads are still on prem. And chapter two is all about, you know, connecting those to any cloud, multi-cloud, her words, the IBM cloud or the Amazon, Google, or Microsoft cloud, etc., etc. She made the statement that we are the only hybrid multi-cloud open source company. Okay, I guess that's true. Does it matter that they're the only hybrid multi-cloud open source company? And are they?

>> Yeah, so I mean, Dave, anytime a vendor tries to paint themselves as the number one, or, you know, leader in the space, it's, you know, that's how they're defining it. That's not how customers think of it.
Customers, you know, don't think as much about whether it's multi-cloud or hybrid cloud. They're doing cloud, and they're working with, you know, more than one supplier. It is very rare that you find somebody that's all-in, and then you dig in: oh yeah, wait, I'm using Office 365 and Salesforce, and oh wait, there was that cool new thing that Google announced that somebody off on the side is doing. So we understand that today it's a multi-cloud world, and tomorrow it will be a multi-cloud world. Absolutely, open source is growing, you know, at great leaps and bounds. Red Hat is, you know, the best example we've had of that trend, something I've been watching for the last 20 years, and, you know, it is impressive to see it. But, you know, even when you talk to customers, you know, most customers are not, you know, flag-waving, I must do everything open source. You know, they have a little bit more nuanced view of it. Sure, lots of companies are participating in and contributing to open source, but, you know, I've yet to talk to too many companies that were like, well, when I'm making this decision, you know, this is absolutely what it is. Am I concerned about my overall costs? Am I concerned about transparency? Am I concerned about, you know, security and how fast I can get things resolved? And by the way, open source can help with a lot of those things. That's what they need to think about. But look, IBM, you know, had a long-time partnership with Red Hat. Red Hat has a strong position in the marketplace, but they're not the only ones there. You know, you mentioned VMware, Dave. VMware has a strong play across multi-cloud environments. You know, we see Red Hat at all of the cloud shows, you see IBM at many of the cloud shows, but you've got Cisco out there with their play. It is still, you know, this chapter two, if you agree with Ginni's terminology; we are relatively early in that. But, you know, IBM, I believe, is strengthened in their positioning. I don't think it radically changes the landscape, just because, you know,
Red Hat is still going to stay, you know, working with the Amazons and Microsofts and Googles and other players out there. So it doesn't dramatically change the landscape; it just consolidates two players that already worked closely.

>> Let me ask a question. So IBM was clearly positioning this as a cloud play, you know, generally, and, you know, as a multi-cloud play specifically. Is this a cloud play?

>> Okay, um, so I'll say yes, but Dave, so absolutely, for the future, and where the growth for Red Hat is, and where IBM, and for this $34 billion to be successful, the tip of the spear is OpenShift, and therefore, you know, how does that new cloud native, multi-cloud environment, you know, where do they play? But at its core, you know, Red Hat is still Linux. Red Hat Enterprise Linux, you know, that is still the primary driver of revenue, and Linux isn't going away. As a matter of fact, Linux is growing. Microsoft, you know, just revealed that there are more Linux workloads sitting in Azure than there are Windows. We already knew that there was, you know, strong Linux out there, and Microsoft has embraced Linux. We saw Satya Nadella at Red Hat Summit, and, you know, we've seen that proliferation of Linux out there. So Linux is still, you know, growing in where it's being used out there, and in the cloud, you know, Linux is what most people are using.

>> So the reason why I think this acquisition is interesting: Jim Whitehurst today said publicly that it was a great deal that IBM was getting, but then he couched it, he said, of course it's a great deal for our shareholders too. And Ginni chimed in and said, yes, it was a fair deal. Okay, fine, $34 billion, you know, we'll see. The reason why I think IBM likes this deal, and IBM, you know, generally has been good over its history with acquisitions, you know, clearly some mega acquisitions like PwC, which was transformative, maybe we'll have time to talk about that, Cognos and some of the other software acquisitions have done quite well, not a hundred percent, but the reason
why I think IBM likes this deal is because it's a good cash flow deal. So I think in many ways, and they don't talk about this because it's not sexy marketing, but IBM is a services company. Over 60% of the company's revenue comes from professional services. IBM loves complexity, because they can bring in services, throw the big blue blanket around you, and do a lot of integration work. And the reason I think this is an interesting acquisition from a financial standpoint, and Ginni says this all the time, this is not about cost synergies, this is about revenue opportunities. When you try to put everything in the cloud, you always run into the back-end systems, and her point is that those back-end systems need to be modernized. How do you modernize those back-end systems? OpenShift. It's not trivial to do that. You need services. And so IBM has a large install base, probably, by my estimate, you know, certainly tens of billions of dollars of opportunity there, to modernize back-end systems using Red Hat technology. And that means that it's a front-loaded deal from a cash flow standpoint, that they will find, almost automatically, revenue synergies to plug into IBM's captive install base. What are your thoughts?

>> Yeah, Dave, I think that your analysis is spot-on. So Red Hat has been one of the most consistent, you know, revenue companies out there. You watched them steadily, when they went from a billion dollars to, now they're right around three billion dollars, they have the march to five billion dollars. They had a couple of minor blips in their quarterly earnings, but if you plug in that IBM services organization, you really have the opportunity to supercharge this. The opportunity is to have that huge IBM services organization really help, you know, grow those engagements, do more OpenShift, you know, get more Linux, help Ansible, you know, really become the standard for, you know, automation in the modern workplace. The challenge is that too many IBM people get involved, because the thing that everybody's a
little worried about is, IBM's done well with a lot of those acquisitions, but they don't leave them standalone. Even, you know, VMware for many years was a standalone company; today, VMware and Dell are one company. They're in lockstep from a management standpoint and they're working closely together. What differentiates Red Hat is, you know, IBM has groups that are much larger than Red Hat that do some of the same things, but Red Hat, with their open source mission, and where they're driving things, and the innovation they drive, moves a little bit faster than IBM traditionally does. So will the Red Hat brand, the Red Hat people, and Red Hat itself stay independent enough so that they can, you know, hop on that next wave? You know, they jumped early into Kubernetes, and that was the wave that really helped drive what they're doing with OpenShift. You know, even, Dave, you know, Red Hat had bought CoreOS, which was a smaller company moving even faster than Red Hat, and while they've done a really good job of integrating those people, absolutely, from what I've heard, it slowed things down a little bit, just because Red Hat, compared to CoreOS, was a much bigger company, and of course IBM is a behemoth compared to Red Hat. So will they throw these groups together, and, you know, who will be making the decisions, and can they, you know, maintain that culture and that growth mindset?

>> Well, the point is structure. We bring up VMware a lot as the model, and of course, when EMC bought VMware for a paltry $635 million, it folded it in and then spun it back out, which was the right move. It certainly allowed the ecosystem to blossom. I don't think IBM is going to take that same approach. Blue wash is the term; they'll probably blue wash that now.

>> 'Cause no, Dave, IBM has said they will not blue-wash it. There's no purple; Red Hat stays separate. Absolutely, there are concerns, you know. So to get those revenue synergies, you're going to have to plug
into IBM systems, and that requires some work, and IBM's generally good at that. So we'll see; we'll keep our eyes on that. But I would predict that IBM is not going to do a VMware-like model; it's going to be some kind of hybrid.

>> Dave, one of the other things is, you talked about, so Jim Whitehurst, you know, executive, we've had him on theCUBE a lot. He's reporting to Ginni. You know, the question is, is this Ginni's last big move, and who replaces her?

>> Yeah, let's talk about succession planning. So a lot of rumors that Whitehurst is next. He's 52 years old. I've said I don't think they would do that, but let's talk about it. First of all, you know, Jim Whitehurst, I've interviewed him a number of times, and, you know, I know him quite well. Do you think he even wants the job?

>> So, you know, I talked with Jim a little bit at Red Hat Summit. You know, he kind of makes light of it. He said, you know, knowing IBM the way we all know IBM, IBM has always taken somebody from inside to do that. He feels that he has a strong mission still to drive Red Hat. He is super passionate about Red Hat. He wrote a book about the open source culture and is still driving that. So I think, from everything I see from him, that's still the job that he loves and wants to do, and, you know, it's a very different challenge to run IBM. I'm not saying he would turn it down if that was the direction that it went, but I did not see him angling and positioning like that would be where he wants to go.

>> Well, and of course, you know, Jim is from North Carolina. He's got that kind of southern, folksy demeanor, you know, comes across as the nicest guy in the room. He's also the smartest guy in the room. But we'll see, we'll see what happens there. I've said that I think Martin Schroeter is going to be the next CEO of IBM. Martin Schroeter did three years of combat duty as the CFO in what was a tough time for IBM to be a CFO. They were going through those big transitions, talking
about, you know, they had to do the SoftLayer acquisition, they had to put together those strategic initiatives. And so he has CFO chops, so he understands finance deeply. He ran, you know, IBM's big services business. He's now responsible for IBM's revenue generation. He's a spokesperson, you know, in many ways, for the company. He's like the prototypical choice. It would not be surprising at all to see IBM plug him right in. A little bit of history, as you know, Stu, being a bit of a historian of the industry, having been around for a while: John Akers, back in the early 1990s, when IBM's mainframe business was tanking and the whole company was tanking, and it was at the risk of, actually, believe it or not, running out of money, they were going to split up the company, because the industry was breaking apart: Intel in microprocessors, Microsoft in software, Seagate in disk drives, you know, Oracle in databases. And to be more competitive from a product standpoint, they were going to split the company up into pieces. Gerstner came in and said, no way. Gerstner, who was, you know, CEO of American Express, said no, that's not how customers want to buy. He bought PwC for a song compared to what Carly Fiorina at HP wanted to pay. I think she wanted to pay $15 billion for it. I want to say IBM paid $5 billion, or maybe even less, for PwC. It completely transformed the company. It transformed IBM into a services company, and that's what IBM is today. They don't like when you say that, but that's where the revenue was coming from. Now, they also started to buy software companies. IBM was restricted from getting into applications for years and years because of the DOJ, because they owned the mainframe; they had a monopoly. Well, Microsoft and Intel changed all that. IBM started to buy software companies, and bought lots of them. So they became a services company with a collection of software assets, and the mainframe, and, you know, the Power systems. They have a storage
business, and, you know, finance, the IBM Global Financing business, etc., etc. So my point is, I'm not sure Jim Whitehurst would want to run that. You know, it's kind of messy. Now, what you need to run that is somebody who really understands finance, knows how to turn the knobs, and that's why I think, you know, Martin Schroeter is actually an excellent pick for that: to keep the cash flow going, to keep the dividend going, to keep the stock buybacks going. It's still, in my view, not a growth play. I think there's certainly near-term growth that can be had by modernizing applications, but I don't look at IBM as a growth company. I look at IBM as a portfolio company that throws off a lot of cash, and if and when the market stops rewarding growth, and profitless growth, a company like IBM will become more favorable to investors.

>> Yeah, and the question at the end of the day is, after spending $34 billion for Red Hat, does IBM help weather the storm of what is happening with the phenomenal growth of AWS and the changes happening in Microsoft, build more of a relationship than they've already had with Google, and help position themselves for this next wave of IT? IBM helped create a lot of the waves that, you know, happened in IT.

>> Well, the pure play cloud players are in it for the long game. You know, Amazon's philosophy is give tools to builders and allow them to disrupt the, you know, traditional old guard, whether it's old guard technology companies or old guard industry players. And you've seen the stat of how many Fortune 1000 companies, you know, have gone out of business in the last 20 or 30 years, or whatever it is. That's going to continue, and Amazon, and certainly Google and Microsoft, want to support that disruption by providing cloud tooling and putting the data in the hands of people, which allows them to create new business models. Now, that doesn't mean everybody's going to throw out their mainframes. It's not going to happen, certainly not going to happen overnight, and
probably will never happen, but I just don't see how IBM becomes a growth company in that scenario. The growth is going to continue to be with the cloud.

>> Well, but Dave, we have seen IBM, I'd say, struggle a little bit when it comes to the developers these days, and the Red Hat acquisition is definitely going to be a boon to them in this space, because Red Hat is all about the developers. That's what, you know, their customers are. So, you know, that's such a huge community that they've already tapped into.

>> So Ginni has said this hybrid multi-cloud is a chapter two with a trillion dollar opportunity. So who else is going after that trillion dollar opportunity? Let's lay it out there. Who are the multi-cloud players? VMware, obviously. IBM-Red Hat with OpenShift is in there. Google with Anthos. Cisco is coming at it from a network perspective, so they're coming at it from their position of strength, even though, you know, they're relatively new entrants.

>> Well, everybody wants to be the new management layer in this multi-cloud environment. What VMware had done is, you know, vCenter became, you know, the console for everyone as they were consolidating all of their silos. And when I go to a multi-cloud environment, where do I live? You know, Microsoft has a strong play there. That's the other one.

>> So VMware, IBM-Red Hat, Google Anthos, Cisco, and Microsoft.

>> Yeah, and of course the one that, while they won't say that they are multi-cloud, you can't talk about multi-cloud without talking about Amazon, because Amazon is a piece of everyone's cloud environment. We're seeing what they're doing with Outposts there. So they are the kind of specter looming over this entire multi-cloud discussion.

>> Yeah, right on. I think you've got to put Amazon into that mix. They will be an entrant into this multi-cloud play, and it's not going to be a winner-take-all deal. As I said, Cisco is coming at it from a position of networking strength. Microsoft has its software estate, and it's going to do
very well there. IBM-Red Hat, coming at it from a standpoint of modernizing applications, and there's a services play and a services component there. And VMware, of course, coming at it from the infrastructure operating system. I don't see Oracle as interested in that market. There may be some smaller players, like Turbonomic, you know, who probably get gobbled up by one of these guys that we just mentioned, but that really is the landscape. And this is, you know, five, six companies, a trillion dollars; there's plenty to go around. All right, Stu, final thoughts on the Red Hat news, the IBM news, that they've finalized the Red Hat acquisition?

>> Yes, so, you know, what you want to look for is, you know, first of all, what's happening organizationally. You know, if OpenShift is the primary, you know, the tip of the spear, what we're talking about here, for this, you know, cloud native, multi-cloud world, you know, what does the IBM Cloud messaging look like? They're going to have an analyst event here in a couple of weeks that, you know, they've invited all the analysts to, going into what that cloud portfolio looks like. How do they sort through all of the Kubernetes options that they've had today? Do they try to elevate IBM Cloud to be a stronger player, or will they let Red Hat continue to play across all of the cloud environments that they have? So, you know, organization and product positioning are the two things that I'm looking at the most.

>> Tom Siebel said publicly yesterday that IBM is a great company, a national, international treasure, but they missed cloud and they missed AI. I wouldn't agree totally. They didn't miss cloud; they were late to cloud. They had to buy software. They're in cloud, just like Oracle's in cloud, not as competitive as the AWS cloud, but they've got a cloud. Yeah, HP doesn't have a cloud, Dell doesn't have a cloud. These two companies that I just mentioned do AI. Yeah, they're not doing generalized AI like what Google and Amazon and
Facebook and Microsoft are doing. IBM's trying to solve, you know, big, chewy problems. IBM is a services company, as they said. So, you know, Watson, you see a lot of negative stories about Watson, but Watson requires a lot of services to make it work, and it's, as they say, solving different problems. So they're a player in AI. Multi-cloud is new, and this move, the acquisition of Red Hat, yes, $34 billion, expensive. It's not going to be pretty on the balance sheet, but they get good cash flow, so they'll deal with that over time. It puts them right in the mix as a leader in multi-cloud. So thanks, Stu, for breaking down the acquisition, and thank you for watching. This is Dave Vellante, with Stu Miniman, and we'll see you next time.

Published Date : Jul 9 2019


Charlie Kwon, IBM | Actifio Data Driven 2019


 

>> From Boston, Massachusetts, it's theCUBE, covering Actifio 2019: Data Driven. Brought to you by Actifio.

>> Welcome back to Boston, everybody. You're watching theCUBE, the leader in on-the-ground tech coverage. My name is Dave Vellante. Stu Miniman is here. John Furrier is also in the house. We're covering the Actifio Data Driven '19 event, the second year for this conference. It's all about data. It's all about being data driven. Charlie Kwon is here. He's the director of data and AI offering management at IBM. Charlie, thanks for coming on theCUBE.

>> Happy to be here. Thank you.

>> So Actifio has had a long history with IBM. In fact, the company got started at a time when it took a virtualization product to the marketplace, which allowed them to be first, really, and then get heavily into data virtualization. They've since evolved that. You guys are doing a lot of partnerships together. We're going to get into that. But talk about your role within IBM, and, you know, what is this data and AI offering management thing?

>> Yeah, absolutely. So Data and AI is our business unit within the overall IBM corporation. Our focus and our mission is really about helping our customers drive better business outcomes through data, leveraging data in the context and the pursuit of analytics and artificial intelligence, or augmented intelligence.

>> So...

>> A portion of the business that I'm part of is unified governance and integration, and if you think about Data and AI as a whole, you can think about it in the context of the Ladder to AI. Oftentimes, when we talk about data and AI, we talk about the foundational principles and capabilities that are required to help companies, our customers, progress on their journey to AI, and it really is about the information architecture that we help them build. That information architecture is essentially a foundational prerequisite around that journey to AI and analytics. And those layers of the Ladder to AI are:
Collecting the data and making sure you have it easily accessible to the individuals who need it. Organizing the data, which is where the unified governance and integration portfolio comes into play: building trusted, business-ready, high-quality data with governance around it, making sure it's available to be used. The analyze layer, leveraging the data for analytics and AI. And then infusing it across the organization, leveraging those models across the organization. So within that context of Data and AI, we partnered with Actifio at the end of 2018. >> So before we get into that, your boss, I believe, is Rob Thomas, and I want to double-click on what you just said. Rob Thomas is famous for saying there is no AI without IA: no artificial intelligence without information architecture. Sounds good. You talk about governance; that's obviously part of it. But what does that mean, no AI without IA? >> It is really about the fundamental prerequisites: having the underlying infrastructure around the data assets that you have. A fundamental tenet is that data is one of your tremendous assets. In any enterprise, a lot of time and effort has been spent, a lot of man-hours invested, in collecting the data and making sure it's available. But at the same time, it hasn't been freed up to be deployed and used for downstream purposes, whether operational use cases or analytical use cases. The information architecture is really about how you frame your data strategy so that you have that data available to use, to drive business outcomes later. Those business outcomes may be the result of insights driven out of the data, but the data could also be part of the pipeline that feeds things like application development or test data management. And that's one of the areas where we're working with Actifio.
>> So the information architecture is a framework that you guys essentially publish and communicate to your clients. It doesn't require that you have IBM products plugged in, but of course you can certainly plug IBM products in. If you're smart enough to develop the information architecture, presumably you've got to show where your products fit, and you're going to sell more stuff. But it's not a prerequisite; I could use other tooling if I wanted to. The framework is a good... >> It's not a prerequisite; the products themselves help, of course. But the framework is a good foundational construct around how you can think about it, so that you can progress along that journey. >> Right. You started talking about Actifio and your relationship there. You created the InfoSphere Virtual Data Pipeline, right? Why did you develop that product? We'll get into it. >> Sure. It's all part of our overall unified governance and integration portfolio. Like I said, that's the organize layer of the ladder to AI that I was referring to. And it's all about making sure you have clear visibility, knowing what data assets you have. So we always talk about it in terms of know, trust and use. Know the data assets you have: make sure you understand the data quality and the classification around that data. Trust the data: understand the lineage, understand how it's been touched and transformed, and build a catalog around that data. And then use: make sure it's usable by downstream applications and downstream individuals. The Virtual Data Pipeline offering really helps us in that last category, around using and making use of the data assets that you have, putting them directly into the hands of the users of that data, whether they be data scientists and data engineers, or application developers and testers.
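The provisioning model Charlie describes, a point-in-time snapshot handed to developers through self-service, rests on copy-data virtualization: a snapshot shares unchanged data with production instead of duplicating it. A toy sketch of that idea in Python (purely illustrative; this is not Actifio's or IBM's actual implementation, and all names here are made up):

```python
# Toy copy-on-write snapshot store: a "snapshot" records a point-in-time
# view of the data set without duplicating the underlying values.
class CowStore:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # live production data
        self.snapshots = {}          # name -> frozen point-in-time view

    def snapshot(self, name):
        # A snapshot is just a reference map; values are shared with
        # production until production overwrites them.
        self.snapshots[name] = dict(self.blocks)

    def write(self, key, value):
        self.blocks[key] = value     # snapshots keep the old value

    def provision(self, name):
        # Hand a developer a private, mutable copy of the snapshot.
        return dict(self.snapshots[name])

store = CowStore({"row1": "alice", "row2": "bob"})
store.snapshot("nightly")
store.write("row1", "alice-updated")   # production moves on
dev_copy = store.provision("nightly")  # tester sees point-in-time data
print(dev_copy["row1"])                # -> alice
```

Because the snapshot only records references until something changes, handing each tester a "copy" of last night's data is nearly free, which is what shrinks the weeks-long wait for test data that Charlie mentions below.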
So the Virtual Data Pipeline, and the capabilities based on the Actifio Sky virtual appliance, really help build a snapshot of the data and provide a self-service user interface to get it into the hands of application developers and testers, or data engineers and data scientists. >> And why is that important? Is it because they're actually using the same, or substantially similar, data sets across their work streams? Maybe you could explain why it's important. >> It's important because the speed at which applications are being built and insights are being driven requires a lot more agility, and the ability to self-serve the data that you need. The traditional challenge we see: if you think about preparing to build an application or an AI model, building it, deploying it and managing it, the majority of the time, 80% of the time, is spent up front preparing the data. Trying to figure out what data you need, asking for it and waiting two weeks to two months to try to get access to it, getting it and then realizing, oh, I got the wrong data, I need to supplement it, I need to do another iteration of the model, going back to try to get more data. That's the area that application developers and data scientists don't necessarily want to be spending their time on. And so we're trying to shrink that timeframe. How do we shrink it? By providing business users, our line-of-business users, data scientists, application developers, the individuals actually using the data, with their own access to it. To be able to get that snapshot, that point-in-time access to production data, and infuse it into their development process, their testing process or the analytic development process. >> Where does traditional tooling fit in this sort of new world? Because remember when Hadoop came out.
It was like, oh, the enterprise data warehouse is dead. And then you'd ask customers, what's one of the most important things you're doing in your big data play? And they'd say, oh yeah, we need the EDW, so I can now collect more data at lower cost and keep it longer. So the traditional EDW was still critical, but so was what you were just describing, you know, building a cube. You guys own Cognos, obviously; that's one of the biggest acquisitions IBM has made, and it's a critical component. You talk about data quality, integration, those things. The whole puzzle fits together in this larger mosaic. Help us understand that. >> Sure. One of the fundamental things to understand is that you have to know what you have, and the data catalog is a critical component of that data strategy: understanding where your enterprise assets sit. They could be structured information, or unstructured information sitting in file repositories or emails, for example. But it's understanding what you have, understanding how it's been touched, how it's been used, understanding the requirements and limitations around that data, understanding who the owners of that data are. So building that catalog view of your overall enterprise assets is the fundamental starting point from a governance standpoint. And then from there, you can allow access to the individuals who are interested in understanding and leveraging the data assets that you may have. One of the challenges here is that data exists everywhere across the enterprise, right? Silos that may have arisen in one particular department then get merged in with another department, and then you have two organizations that may not even know what the other has. So the challenge is to break down those silos and get clear visibility into those assets, so that individuals can then leverage that data for whatever uses they may have, whether it be development or testing or analytics.
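The know, trust, use framing maps naturally onto what a catalog record has to carry. A minimal sketch (the field names are hypothetical, chosen for illustration, and are not IBM's actual catalog schema):

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    # "Know": what the asset is, who owns it, how sensitive it is.
    name: str
    owner: str
    classification: str            # e.g. "PII", "public"
    # "Trust": how it has been touched/transformed, and measured quality.
    lineage: list = field(default_factory=list)
    quality_score: float = 0.0
    # "Use": downstream consumers entitled to the asset.
    consumers: list = field(default_factory=list)

entry = CatalogEntry("customer_orders", "sales-ops", "PII")
entry.lineage.append("loaded from CRM, 2019-05-01")
entry.consumers.append("test-data-pipeline")
print(entry.classification)  # -> PII
```

The point of the structure is that governance questions ("who owns this?", "has it been transformed?", "who may use it?") become simple lookups instead of departmental archaeology.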
>> So if I could generalize the problem: too much data, not enough value. And I'll talk about value in terms of things that you guys do that I'm inferring: risk reduction, speed to insights, and ultimately lowering costs or increasing revenue. That's kind of what it's all about. >> That's the way to talk about business outcomes: in terms of increased revenue, decreased costs or reduced risk. In terms of governance, those are the three things that you want to unlock for your customers. You don't often think about governance as creating new revenue streams, and we generally don't think about it in terms of reducing costs, but you do think about it, oftentimes, in terms of reducing your risk profile and compliance. But the ability to actually know your data, build trust, and then use that data really does open up opportunities to build new applications, new systems of engagement, new systems of record, new applications around analytics and AI, which unlock different ways that we can market to customers, sell to customers, and engage our own employees. >> Yes. So the initial entry into the organization, the budget if you will, is around that risk reduction, right? I've got all this data and I need to make sure I'm managing it according to the edicts of my organization. But, and I'll play skeptic here, are you really seeing value beyond that risk reduction? That's been the nirvana in the compliance and governance world: not just compliance and governance, you know, avoiding fines and getting slapped on the wrist or even something worse, but actually driving other value through data quality initiatives and integration, et cetera. Are you actually seeing that? >> Yes, we are. Particularly last year, with the whole onslaught of GDPR in the European Union, and the implications of GDPR here in the U.S. and other parts of the world.
It really was a pervasive topic, and a lot of what we were talking about was specifically compliance: make sure you stay on the right side of the regulation. But at the same time, investing in that information architecture, investing in the governance program, actually allowed our customers to understand the different components that touch the individual. Because it's all about individual rights and individual privacy: understanding what they're buying, understanding what information we're collecting on them, understanding what permissions and consent we have to leverage their information. It really allowed our customers to leverage that information for a different purpose, outside of the whole compliance mindset. Compliance is a difficult nut to crack; there are hard requirements around it, but there are also best-effort requirements. So the driver for us is not necessarily just about compliance, it's about what more you can do with the governed data you already have. Because you have to meet those compliance requirements anyway, you're able to flip the script and talk about business value, business impact, revenue. >> Now, you're only about six months into this partnership, right? So it's early days, but how's it going, and what can we expect going forward? >> It's going great. We have a terrific partnership with Actifio. VDP, the IBM Virtual Data Pipeline offering, is part of our broader portfolio within unified governance, and it fits nicely to build out some of the test data management capability that we've already had. The Optim portfolio, part of our capability set, has really been focused on test data management: building synthetic data and orchestrating test data management as well. And the Virtual Data Pipeline offering is a nice complement to that, building out a more robust portfolio.
>> All right, Charlie. Hey, thanks very much for coming on. How's the event been? >> It's been terrific. It's amazing to be surrounded by so many people that are excited about data. We don't get that everywhere. >> We're always excited about data, right, Charlie? Thanks so much. >> Thank you. >> All right. Keep it right there, everybody; we're back with our next guest shortly. Dave Vellante, John Furrier and Stu Miniman in the house. You're watching theCUBE at Actifio Data Driven 2019. Right back.

Published Date : Jun 19 2019



Arista Thurman III, Argonne | Veritas Vision Solution Day 2018


 

>> Narrator: From Chicago, it's The Cube. Covering Veritas Vision Solution Day 2018. Brought to you by Veritas. >> Welcome back to the Windy City, everybody. You're watching The Cube, the leader in live tech coverage. We go out to the events and we extract the signal from the noise. We're here at the Veritas Vision Solution Days in Chicago. Just a few weeks ago we were at the iconic Tavern on the Green in New York City. We're here at the Palmer House Hotel, a beautiful hotel right in downtown Chicago near the lake. It's just an awesome venue; it's great to be here. Arista Thurman III is here. He's the principal computer engineer at Argonne National Labs. Great to see you, thanks for coming on The Cube. >> Yeah, good to be here, thanks. >> So tell the audience about Argonne National Labs. What are you guys all about? >> Science. We're all about the advancement of science. We do a lot of different experiments, from battery technology to chemistry. The project we're working on is the Advanced Photon Source, which is a light source that's used to collect data in experiments. >> OK, so you're an IT practitioner. >> Arista Thurman: That is correct. >> Serving scientists. >> Arista Thurman: Yes. >> What's that like? Is that like an IT guy serving doctors? Are they kind of particular? >> Arista Thurman: A little bit. >> There are some challenges there, but yeah, it's great. So basically you have a unique customer base, and they have additional requirements. It's not like a normal customer base. They're very smart people. They have a lot of demands and needs, and we do our best to provide all the services they require. >> Yeah, so given that they're technical people, they may not be IT people, but they have an affinity for technology. First of all, it must be hard to BS them, right? (laughter) >> Arista Thurman: No doubt, no doubt. >> They'd cut through that, so you've got to be straight with them.
And they're probably pretty demanding, right? I mean, they have limited resources, limited time, and limited budgets, and they're probably pounding you pretty hard. Is that the case, or are they more forgiving? >> They're great people to work with, but there can be some challenges. It's unique in that they work on multiple platforms, from Unix to Linux to Mac. Multiple computers in their offices, multiple data requirements. And a lot of things happen without a lot of process and planning; some things are ad hoc. So it puts a little bit of strain on you sometimes, to try to make everything happen in the amount of time they have. There are challenges in getting things done in a timely fashion when you don't know what's going to happen with some of these experiments. >> I mean, I imagine, right? They can probably deal with a lot of uncertain processes, because that's kind of their lives, right? You must have to cobble things together for them to get them a solution sometimes. Is that the case? >> We do sometimes. I think it's all about getting enough funding and enough resources to take care of all the different experiments. >> Dave Vellante: A balancing act. >> Yeah. >> Dave Vellante: So you look after compute and storage. >> Arista Thurman: Yes. >> Right, so talk about what's happening generally there, and then specifically data protection. >> So in general, my primary focus is Linux: Linux administration, Red Hat Linux. We've seen a lot of data growth over the last five years, and we've got projections for more growth as we plan for an upgrade. We're going to upgrade our beamline and make it more efficient, have a better light source, and that's all planned over the next two to three years. And so there are a lot of extra projects on top of our normal workload. We have a lot of equipment that probably needs to be refreshed.
There are resource constraints, and with IT and any kind of data management, things change. Whatever we're doing today, in the next three years we'll be doing something different, because things change with regard to CPU speeds, I/O and networking performance, and storage requirements. All those things are continually growing exponentially. And when scientists want to do more experiments and they get new resources in, it's going to require more resources for us to maintain and keep them operational at the speeds and performance they want. >> Yeah, we do hundreds of events with The Cube, about 130 events this year, and a lot of them are so-called "big data" oriented. And when you go to those data-oriented events, you hear a lot of the roots of that, or at least similarities to the scientific and technical computing area, and how it's sort of evolved into big data. A lot of the disciplines are similar. So, you're talking about a lot of data here. Sometimes it's really fast data, and there's a lot of variety, presumably, in that data. So how much data are we talking about? Is it huge volumes? Maybe you could describe your data environment. >> Primarily we have things broken up into different areas. We have some block storage, and that provides the back-end for our virtualization environments, which is either Microsoft or Red Hat RHV. I would estimate that's somewhere in the petabyte range. Then we also have our NAS file systems, which are spread across multiple environments, providing NFS versions three and four, and also CIFS to Windows clients; some of the Mac clients utilize that as well. That's at a little less than a petabyte. We also have high-performance computing, and that's a couple of petabytes, at least. All those numbers are just estimates, because we're constantly growing. >> At any given time it's changing. But you're talking about multiple petabytes. So how do you back up, how do you protect, multiple petabytes?
>> Well, it's all about a balancing act, because it's hard to back up everything in the same time window. So we have multiple backup environments providing resources for individual platforms. For Windows we'd do something a little different than we'd do for Linux. And we have different retention policies: some environments need to be retained for three years, some six months, some three months. So you have to have a system of migrating your storage to faster disks and then tiering off to tape for long-term retention. It's a challenge that we're constantly fighting with. >> How do you use Veritas? You're a customer, obviously? >> Yeah, we've been a Veritas customer for many years, and we utilize Veritas in our virtualization environments. They kind of help us out as a central platform. We've actually explored other things, but the most cost-effective option for us at this point has been Veritas. We utilize them to back up primarily our NAS and our block file systems, the ones that provide most of the virtualization. >> Why Veritas? What is it about them that you have an affinity for? There's a zillion other backup software vendors out there. Why Veritas? >> I think we have invested a lot in Veritas over the years, predating my time at Argonne. In my previous career, at Sun Microsystems, we also had a relationship with Veritas. So it's easy, and like I mentioned earlier, we explored other things, but it wasn't cost effective to make that kind of change. And it's been a reliable product. It does require work, but it has been a reliable product. >> So you mentioned you're on Red Hat Linux. >> Arista Thurman: Yes. >> You saw that IBM announced it's going to buy Red Hat for 34 billion dollars. What were your thoughts when you heard that news? >> I was like, "Wow, what is going to happen now? How is that going to impact us?" Is it going to change our licensing model?
Or is it going to be a good thing, or a bad thing? Right now we just don't really know; we're just kind of waiting and seeing. >> It's a big deal. It's the biggest deal, certainly, from IBM. Their biggest previous deal was, I think, Cognos at five billion, so this dwarfs that. The deal of course doesn't close until probably the second half of 2019, so it's going to take a while. But look, when IBM buys software companies, we saw this with SPSS and you've seen it with other companies it buys, it oftentimes will change the pricing model. How do you license Red Hat? Do you have an enterprise license agreement? Do you know offhand? >> We do have an agreement with them. >> Dave Vellante: Lock that in. Lock in that long-term agreement now, before the deal goes down. >> One of my counterparts is in charge of that part of it, so I'm sure we'll be having that conversation shortly. >> Yeah, interesting. Well listen, Arista, thanks very much for coming on The Cube. Really appreciate your insight. >> Thank you. >> It's great to meet you. All right, you're welcome. Thanks for watching, everybody. It's a wrap from Chicago. This has been The Cube at Veritas Vision Days. Check out SiliconAngle.com for all the news. TheCube.net is where you'll find these videos and a lot of others; you'll see where The Cube is next. Wikibon.com for all the research. Thanks to the team here, appreciate your help on the ground. We're out from Chicago, this is Dave Vellante. We'll see ya next time.
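The backup balancing act Arista described, with different retention windows per environment, fast disk in front and tape behind, comes down to policy-driven placement. A hypothetical sketch (the policy names, retention windows and disk window below are invented for illustration, not Argonne's or Veritas's actual configuration):

```python
from datetime import date, timedelta

# Hypothetical retention policies, in days: how long a backup is kept at
# all for a given environment before it expires.
POLICIES = {"finance": 3 * 365, "linux": 180, "scratch": 90}
DISK_WINDOW = 30  # backups newer than this stay on fast disk

def place(env, backup_date, today):
    """Decide where a backup belongs: 'disk', 'tape', or 'expire'."""
    age = (today - backup_date).days
    if age > POLICIES[env]:
        return "expire"                       # past retention: delete it
    return "disk" if age <= DISK_WINDOW else "tape"

today = date(2018, 11, 10)
print(place("linux", today - timedelta(days=10), today))    # -> disk
print(place("linux", today - timedelta(days=60), today))    # -> tape
print(place("scratch", today - timedelta(days=120), today)) # -> expire
```

Running a pass like this nightly over the backup catalog is what keeps the fast-disk tier small enough to fit the backup window, while long-retention environments age out to tape instead of consuming it.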

Published Date : Nov 10 2018



IBM $34B Red Hat Acquisition: Pivot To Growth But Questions Remain


 

>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here are your hosts, Dave Vellante and Stu Miniman. >> Hi everybody, Dave Vellante here with Stu Miniman. We're here to unpack the recently announced acquisition of Red Hat by IBM: a $34 billion acquisition financed with cash and debt. And Stu, let me get us started. Why would IBM spend $34 billion on Red Hat? Its largest acquisition of a software company to date had been Cognos, at $5 billion. This is a massive move. IBM's Ginni Rometty called this a game changer, and essentially, my take is that they're pivoting. Their public cloud strategy was not living up to expectations, so they're pivoting to hybrid cloud. Their hybrid cloud strategy was limited because they didn't really have strong developer mojo; their Bluemix PaaS layer had really failed. So they needed to make a big move here, and this is a big move. IBM's intent, and Ginni Rometty laid out the strategy, is to become number one in hybrid cloud, the undisputed leader. We'll talk about that. But Stu, from Red Hat's perspective, it's a company you're very close to and have observed for a number of years. Red Hat was on a path, touting a $5 billion revenue plan. What happened? Why would they capitulate? >> Yeah Dave, on the face of it, Red Hat says that IBM will help it further its mission. We just listened to Arvind Krishna from IBM talking with Paul Cormier at Red Hat, and they talked about how they're going to keep the Red Hat brand alive. IBM has a long history with open source. As you mentioned, I've been working with Red Hat, gosh, almost 20 years now, and we all think back to two decades ago, when IBM put a billion dollars into Linux and really pushed on open source. So these are not strangers; they know each other really well. Part of me looks at this with some cynicism. Somebody on Twitter said that Red Hat is selling at the peak of Kubernetes hype.
And therefore, they're going to get maximum valuation for where the stock is. Red Hat has positioned itself rather well in the hybrid cloud world, really the multicloud world: when you go to AWS, when you go to the Microsoft Azure environment, when you talk to Google, open source fits into those environments, and Red Hat products specifically tie into them. Remember last year, in Boston, there's a video of Andy Jassy talking about a partnership with Red Hat. This year, up on stage, it was Microsoft, with Azure partnering deeply with Red Hat. So Red Hat has done a nice job of moving beyond Linux. But Linux is still at its core, and there is definitely concern that the operating system is less important today than it was in the past. It was actually Red Hat's acquisition of CoreOS, for about $250 million earlier this year, that really put a fine point on it: CoreOS was launched to be just enough Linux to live in this container and Kubernetes world. And Red Hat did what we've often seen: the company that is saying "we're going to kill you", well, you go and you buy them. So Red Hat wasn't looking to kill IBM, but we've definitely seen this trend of software eating the world, and open source eating software. So IBM, hopefully, is embracing that open source ethos. I have to say, Dave, for myself, I'm a little sad to see the news. Red Hat has been the paragon of open source, the one we always point to for winning in this space. So we hope they will be able to keep their culture. We've had the chance, many times, to interview Jim Whitehurst, a really respected CEO, one we think should stay deeply involved in IBM after this. If they can keep and grow the culture, then it's a win for Red Hat. But I'm still sorting through everything, and it feels like a bit of a capitulation that Red Hat decided to sell rather than keep its mission of getting to five billion and beyond, and being the leading company in the space.
>> Well, I think it is a bit of a capitulation. Because look, Red Hat is roughly a $3 billion company, growing at 20% a year, with that vision of five billion. Its stock, in June, had hit $175. So while IBM's paying a 60% premium off its current price, it's really only about 8 or 9% higher than where Red Hat was just a few months ago. And there's an old saying on Wall Street: the first disappointment is never the last. So I think Red Hat was looking at a long slog. They reduced expectations, they guided lower, and they were looking at the 90-day shot clock. This probably wasn't going to be a good couple of years for Red Hat. And they're selling at the peak of the market, or roughly the peak. They probably figured, hey, the window to do this deal is potentially closing. Maybe not such a bad time to get out, as opposed to trying to slog it out. Your thoughts? >> Yeah, Dave, I think you're absolutely right. When you look at where Red Hat is winning, they've done great in OpenStack, but there's not a lot of excitement around OpenStack. Kubernetes was talked about lots in the announcement, in the briefings, and everything like that. I was actually surprised you didn't hear as much about just the core business. You would think you'd hear about all the companies using Red Hat Enterprise Linux around the world, that ratable model that gives Red Hat a really nice base. Instead it was talking more about the future: where Kubernetes, and cloud-native, and all of that development will go. IBM has done middling okay with developers. They have a strong history in middleware, which is where a lot of the Red Hat development activity has been heading. It was interesting to hear on the call, well, what about the customers that are using IBM's existing tools? "Oh well, if customers want that, we'll still do it." What about IBM with Cloud Foundry?
Well absolutely, if customers wanna still be doing it, they'll do that. So you don't hear the typical, "Oh well, we're going to take Red Hat technology and push it through all of IBM's channel." This is in the IBM cloud group, and that's really their focus, as it is. I feel like they're almost limiting the potential for growth for Red Hat. >> Well so IBM's gonna pay for this, as I said, it's an all-cash deal. IBM's got about 14 and a half billion dollars on the balance sheet. And so they gotta take out some debt. S&P downgraded IBM's rating from an A+ to an A. And so the ratings agency is going to be watching IBM's growth. IBM said this will add 200 basis points of revenue growth over the five-year CAGR. But that means we're really not gonna see that for six, seven years. And Ginni Rometty stressed this is not a back-end-loaded thing. We're gonna find revenue opportunities through cross-selling and go-to-market. But we have a lot of questions on this deal, Stu. And I wanna sorta get into that. So first of all, again, I think it's the right move for IBM. It's a big move for IBM. Rumors were that Cisco might have been interested. I'm not sure if Microsoft was in the mix. So IBM went for it and, as I said, didn't pay a huge premium over where their stock was back in June. Now of course, back in June, the market was kind of inflated. But nonetheless, the strategy now is to go multi-cloud. The number one in the multi-cloud world. What is that multi-cloud leadership? How are we gonna measure multi-cloud? Is IBM, now, the steward of open source for the industry? To your point earlier, you're sad, Stu, I know. >> You bring up a great point. So I think back to three years ago, when at Wikibon we put together our true private cloud forecast. And when we built that, we said, "Okay, here's the hardware, and software, and services in private cloud." And we said, "Well let's try to measure hybrid cloud." And we spent like, six months looking at this.
And it's like, well what is hybrid cloud? I've got my public cloud pieces, and I've got my private cloud pieces. Well there's some management layers and things that go in between. Do I count things like PaaS? So do you count players like Pivotal and Red Hat's OpenShift? Are those hybrid cloud? Well but they live either here or there. They're not usually necessarily helping with the migration and moving around. I can live in multiple environments. So Linux and containers live in the public, they live in the private, they don't just fly around in the ether. So measuring hybrid cloud, I think is really tough. Does IBM plus Red Hat make them a top leader in this hybrid multi-cloud world? Absolutely, they should be mentioned a lot more. When I go to the cloud shows, the public cloud shows, IBM isn't one of the first companies you think about. Red Hat absolutely is in the conversation. It actually should raise the profile of Red Hat because, while Red Hat plays in a lot of the conversations, they're also not the first company that comes to mind when you talk about cloud. Microsoft, middle of hybrid cloud. Oracle, positioning their applications in this multi-cloud world. Of course you can't talk about cloud, any cloud, without talking about Amazon's position in the marketplace. And SaaS is the real place that it plays. So IBM, one of their biggest strengths is that they have applications. Dave, you know the space really well. What does this mean vis-à-vis Oracle? >> Well let's see, so Oracle, I think, is looking at this, saying, alright. I would say IBM is Oracle's number one competitor in the enterprise. You got SAP, and Amazon obviously in cloud, et cetera, et cetera. But let me put it this way, I think Oracle is IBM's number one competitor. Whether Oracle sees it that way or not. But they're clearly similar companies, in terms of their vertical integration. I think Oracle's looking at this, saying, hey. There's no way Oracle was gonna spend $34 billion on Red Hat.
And I don't think they were interested in really spending any money on the alternatives. But does this put Canonical and SUSE in play? I think Oracle's gonna look at this and sort of message to its customers, "We're already number one in our world in hybrid cloud." But I wanna come back to the deal. I'm actually optimistic on the deal, from the standpoint of, I think IBM had to make a big move like this. Because it was largely just bumping along. But I'm not buying the narrative from Jim Whitehurst that, "Well we had to do this to scale." Why couldn't they scale with partners? I just don't understand that. They're open. This is largely, to me, a services deal. This is a big boon for IBM's Services business. In fact, Jim Whitehurst, and Ginni even said that today on the financial analyst call, Jim said, "Our big constraint was services scale and the industry expertise there." So what was that constraint? Why couldn't they partner with Accenture, and Ernst & Young, and PwC, and the likes of Deloitte, to scale and preserve greater independence? And I think that the reason is, IBM sees an opportunity and they're going hard after it. So how will, or will, IBM change its posture relative to some of those big services plays? >> Yeah, Dave, I think you're absolutely right there. Because Red Hat should've been able to scale there. I wonder if it's just that all of those big service system integrators, they're working really closely with the public cloud providers. And while Red Hat was a piece of it, it wasn't the big piece of it. And therefore, I'm worried about the application migration. I'm worried about the adoption of infrastructure as a service. And Red Hat might be a piece in the puzzle, but it wasn't the driver for that change, and the move, and the modernization activities that were going on. That being said, OpenShift was a great opportunity. It plays in a lot of these environments. It'll be really interesting to see.
And a huge opportunity for IBM to take and accelerate that business. From a services standpoint, do you think it'll change their position with regard to the SIs? >> I don't. I think IBM's gonna try to preserve Red Hat as an independent company. I would love to see IBM do what EMC did years ago with VMware, and float some portion of the company, and truly have it at least be quasi-independent. With an independent operating structure, and reporting structure from the standpoint of a public company. That would really signal to the partners that IBM's serious about maintaining independence. >> Yeah now, look Dave, IBM has said they will keep the brand, they will keep the products. Of all the companies that would buy Red Hat, I'm not super worried about kinda polluting open source. It was kinda nice that Jim Whitehurst would say, if it's a Red Hat thing, it is 100% open source. And IBM plays in a lot of these environments. A friend of mine on Twitter was like, "Oh hey, IBM's coming back to OpenDaylight or things like that." Because they'd been part of Cloud Foundry, they'd been part of OpenDaylight. There's certain ones that they are part of and then they step back. So IBM is credible in the open source space, if they can let Red Hat people still do their thing. But the concern is that lots of other companies are gonna be calling up project leads, and contributors in the open source community that might've felt that Red Hat was the ideal place to live, and now they might go get their paycheck somewhere else. >> There's rumors that Jim Whitehurst eventually will take over IBM. I don't see it, I just don't think Jim Whitehurst wants to run Z mainframes and Services. That doesn't make any sense to me. Ginni's getting to the age where IBM CEOs typically retire, within the next couple of years. And so I think that it's more likely they'll bring in somebody internally. Whether it's Arvind or, more likely, Jim Kavanaugh 'cause he's got the relationship with Wall Street.
Let's talk about winners and losers. It's just, again, a huge strategic move for IBM. Frankly, I see the big winners as IBM and Red Hat. Because as we described before, IBM was struggling with its execution, and Red Hat was just basically, finally hitting a wall after 60-plus quarters of growth. And so the question is, will its customers win? The big concern I have for the customers is, IBM has this nasty habit of raising prices when it does acquisitions. We've seen it a number of times. And so you keep an eye on it, if I were a Red Hat customer, I'd be locking in some attractive pricing, long-term. And I would also be calling Mark Shuttleworth, and get his take, and get that Amdahl coffee cup on my desk, as it were. Other winners and losers, your thoughts on some of the partners, and the ecosystem. >> Yeah, when I look at this and say, compare it to Microsoft buying GitHub. We're all wondering, is this a real game changer for IBM? And if they embrace the direction. It's not like Red Hat culture is going to just take over IBM. In the Q&A with IBM, they said, "Will there be influence? Absolutely. Is this a marriage of equals? No. We're buying Red Hat and we will be communicating and working together on this." But you can see how this can help IBM, as to the direction. Open source and the multi-cloud world is a huge, important piece. Cisco, I think, could've made a move like this. I would've been a little bit more worried about maintaining open source purity, if it was somebody like Cisco. There's other acquisitions, you mentioned Canonical and SUSE are out there. If somebody wanted to do this, the role of the operating system is much less important today than it used to be. You wouldn't have seen Microsoft up on stage at Red Hat Summit this year if Windows was the driver for Microsoft going forward. The cloud companies out there, to be honest, it really cements their presence out there. I don't think AWS is sitting there saying, "Oh jeez, we need to worry."
They're saying, "Well IBM's capitulated." Realizing that, "Sure they have their own cloud, and their environment, but they're going to be successful only when they live in, and around, and amongst our platform of Amazon." And Azure's gonna feel the same way, and same about Google. So there's that dynamic there. >> What about VMware? >> So I think VMware absolutely is a loser here. Going back to what I said, one of the biggest strengths of IBM is that they have applications. When you talk about Red Hat, they're really working, not only at the infrastructure layer, but working with developers, and working in that environment. The biggest weakness of VMware, is they don't own the applications. I'm paying licenses to VMware. And in a multi-cloud world, why do I need VMware? As opposed to Red Hat and IBM, or Amazon, or Microsoft, which have a much more natural affinity for the applications and the data in the future. >> And what about the arms dealers? HPE and Dell, in particular, and of course, Lenovo. Wouldn't they prefer Red Hat being independent? >> Absolutely, they would prefer that they're gonna stay independent. As long as it doesn't seem to customers that IBM is trying to twist everybody's arms, and get you on to Z, or Power, or something like that. And continues to allow partnerships with the HPEs, Dells, Lenovos of the world. I think they'll be okay. So I'd say middling impact. But absolutely, Red Hat, as an independent, was really the Switzerland of the marketplace. >> Ginni Rometty had cited three growth areas. One was Red Hat scale and go-to-market. I think there's no question about that. IBM could help with Red Hat's go-to-market. The other growth vector was IBM's products and software on the Red Hat stack. I'm less optimistic there, because I think that it's the strength of IBM's products, in and of themselves, that are largely gonna determine that success. And then the third was Services. I think IBM Services is a huge winner here.
Having the bat phone into Red Hat is a big win for IBM Services. They can now differentiate. And this is where I think it's gonna be really interesting to see the posture of Accenture and those other big guys. I think IBM can now somewhat differentiate from those guys, saying, "Well wait, we have exclusive, or not exclusive, but inside baseball access to Red Hat." So that's gonna be an interesting dynamic to watch. Your final thoughts here. >> Yeah, yeah, Dave, absolutely. On the product integration piece, the question would be, you're gonna have OpenAPIs. This is all gonna work with the entire ecosystem. Couldn't IBM have done more of this without having to pay $34 billion and put things together? Services, absolutely, will be the measurement as to whether this is successful or not. That's probably gonna be the line item in the financials that we're gonna have to look at. Because, Dave, going back to, what is hybrid, and how do we measure it? What is success for this whole acquisition down the line? Any final pieces to what we should watch and how we measure that? >> So I think that, first of all, IBM's really good with acquisitions, so keep an eye on that. I'm not so concerned about the debt. IBM's got strong free cash flow. Red Hat throws off a billion dollars a year in free cash flow. This should be an accretive acquisition. In terms of operating profits, it might take a couple of years. But certainly from a standpoint of free cash flow and revenue growth, I think it's gonna help near-term. If it doesn't, that's something that's really important to watch. And then the last thing is culture. You know a lot of people at these companies. I know a lot of people at these companies. Look, the Red Hat culture drinks the Kool-Aid of open. You know this. Do they see IBM as the steward of open, and are they gonna face a brain drain? That's why it's no coincidence that Whitehurst and Rometty were down in North Carolina today.
And Arvind and Paul Cormier were in Boston today. This is where a lot of employees are for Red Hat. And they're messaging. And so that's very, very important. IBM's not foolish. So that, to me, Stu, is a huge thing, is the culture. Dave, IBM is no longer the navy suit with the red tie, and everybody buttoned down. People are concerned about like, oh, IBM's gonna give the Red Hat people a dress code. Sure, the typical IBMer is not in a graphic tee and a hoodie. But, Dave, you've seen such a transformation in IBM over the last couple of decades. >> Yeah, definitely. And I think this really does, in my view, cement, now, the legacy of Ginni Rometty, which was kinda hanging on Watson, and Cognitive, and this sort of bespoke set of capabilities, and the SoftLayer acquisition. It, now, all comes together. This is a major pivot by IBM. I think, strategically, it's the right move for IBM. And I think, if in fact, IBM can maintain Red Hat's independence and that posture, and maintain its culture and employee base, I think it does change the game for IBM. So I would say, smart move, good move. Expensive but probably worth it. >> Yeah, where else would they have put their money, Dave? >> Yeah, right. Alright, Stu, thank you very much for unpacking this announcement. And thank you for watching. We'll see you next time. (mellow electronic music)

Published Date : Oct 29 2018



Scott Hebner, IBM | Change the Game: Winning With AI



>> Live from Times Square in New York City, it's theCUBE. Covering IBM's Change the Game: Winning With AI. Brought to you by IBM. >> Hi, everybody, we're back. My name is Dave Vellante and you're watching theCUBE. The leader in live tech coverage. We're here with Scott Hebner who's the VP of marketing for IBM analytics and AI. Scott, it's good to see you again, thanks for coming back on theCUBE. >> It's always great to be here, I love doing these. >> So one of the things we've been talking about for quite some time on theCUBE now, we've been following the whole big data movement since the early Hadoop days. And now AI is the big trend and we always ask is this old wine in a new bottle? Or is it something substantive? And the consensus is, it's real, it's real innovation because of the data. What's your perspective? >> I do think it's another one of these major waves, and if you kind of go back through time, there's been a series of them, right? We went from sort of centralized computing into client server, and then we went from client server into the whole world of e-business and the internet, back around the 2000 time frame or so. Then we went from internet computing to cloud. Right? And I think the next major wave here, the next step, is AI. And machine learning, and applying all this intelligent automation to the entire system. So I think, and it's not just an evolution, it's a pretty big change that's occurring here. Particularly the value that it can provide businesses is pretty profound. >> Well it seems like that's the innovation engine for at least the next decade. It's not Moore's Law anymore, it's applying machine intelligence and AI to the data and then being able to actually operationalize that at scale. With the cloud-like model, whether it's on-prem or off-prem, your thoughts on that? >> Yeah, I mean I think that's right on 'cause, if you kind of think about what AI's going to do, in the end it's going to be about just making much better decisions.
Evidence-based decisions, the ability to get to data that was previously unattainable, right? 'Cause it can discover things in real time. So it's about decision making and it's about fueling better, and more intelligent business processing. Right? But I think, what's really driving, sort of under the covers of that, is this idea that, are clients really getting what they need from their data? 'Cause we all know that the data's exploding in terms of growth. And what we know from our clients and from studies is only about 15% of business leaders believe that they're getting what they need from their data. Yet most businesses are sitting on about 80% of their data that's either inaccessible, unanalyzed, or untrusted, right? So, what they're asking themselves is how do we first unlock the value of all this data. And they know they have to do it in new ways, and I think the new ways start with cloud native architectures, containerization, things of that nature. Plus, artificial intelligence. So, I think what the market is starting to tell us is, AI is the way to unlock the value of all this data. And it's time to really do something significant with it otherwise, it's just going to be marginal progress over time. They need to make big progress. >> But data is plentiful, insights aren't. And part of your strategy has always been to bring insights out of that data, and obviously focus on client outcomes. But, a big part of your role is not only communicating IBM's analytics and AI strategy, but also helping shape that strategy. How do you, sort of, summarize that strategy? >> Well we talk about the ladder to AI, 'cause one thing when you look at the actual clients that are ahead of the game here, and the challenges that they've faced to get to the value of AI, what we've learned, very, very clearly, is that the hardest part of AI is actually making your data ready for AI. It's about the data.
It's sort of this notion that there's no AI without an information architecture, right? You have to build that architecture to make your data ready, 'cause bad data will be paralyzing to AI. And actually there was a great MIT Sloan study that they did earlier in the year that really dives into all these challenges and if I remember correctly, about 81% of them said that the number one challenge they had is their data. Is their data ready? Do they know what data to get to? And that's really where it all starts. So we have this notion of the ladder to AI, it's several very prescriptive steps, that we believe through best practices, you need to actually take to get to AI. And once you get to AI then it becomes about how you operationalize it in a way that it scales, that you have explainability, you have transparency, you have trust in what the model is. But it really is a systematic approach here that we believe clients are going to get there in a much faster way.
So the days of trying to move data around all over the place or, heavy duty replication and integration, let it sit where it is, but be able to virtualize it and collect it and containerize it, so it can be more accessible and usable. And that kind of goes to the point that 80% of the enterprise data is inaccessible, right? So it all starts first with, are you getting all the data collected appropriately, and getting it into a way that you can use it. And then we start feeding things in like IOT data, and sensors, and it becomes real time data that you have to do this against, right? So, notions of replicating and integrating and moving data around become not very practical. So that's step one. Step two is, once you collect all the data doesn't necessarily mean you trust it, right? So when we say trust, we're talking about business ready data. Do people know what the data is? Are there business entities associated with it? Has it been cleansed, right? Have all the duplicates been taken out? What do you do in a situation where you have sources of data that are telling you different things? Like, I think we've all been on a treadmill where the phone, the watch, and the treadmill will actually tell you different distances, I mean what's the truth? The whole notion of organizing is getting it ready to be used by the business, in applying the policies, the compliance, and all the protections that you need for that data. Step three is the ability to build out all this, the ability to analyze it. To do it at scale, right, and to do it in a way that everyone can leverage the data. So not just the business analysts, but you need to enable everyone through self-service. And that's the advancements that we're getting in new analytics capabilities that make mere mortals able to get to that data and do their analysis.
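The treadmill reconciliation and de-duplication ideas described here can be sketched in a few lines of plain Python. This is a toy illustration of the "organize" step with made-up data, not any IBM tooling:

```python
from statistics import median

# Three sources report different values for the same run: the phone,
# the watch, and the treadmill (all numbers hypothetical).
readings = {"phone": 5.2, "watch": 5.4, "treadmill": 5.0}

# One simple "business-ready" policy: trust the median reading.
trusted_distance = median(readings.values())

# De-duplication: the same record arrives from two upstream systems;
# keep one copy per business key.
records = [
    {"id": 1, "name": "Ada"},
    {"id": 2, "name": "Grace"},
    {"id": 1, "name": "Ada"},  # duplicate
]
deduped = list({r["id"]: r for r in records}.values())

print(trusted_distance, len(deduped))  # 5.2 2
```

Real data-prep pipelines apply many such policies (cleansing, survivorship, lineage), but the shape is the same: conflicting and duplicated inputs in, one trusted record out.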
>> And if I could inject, the challenge with the sort of traditional decision support world is you had maybe two, or three people that were like, the data gods. You had to go through them, and they would get the analysis. And it's just, the agility wasn't there. >> Right. >> So you're trying to democratize that, putting it in the hands. >> Absolutely. >> Maybe the business user's not as much of an expert as the person who can build the cube, but they could find new use cases, and drive more value, right? >> Actually, from a developer, that needs to get access, and analytics infused into their applications, to the other end of the spectrum which could be a marketing leader, a finance planner, someone who's planning budgets, a supply chain planner. Right, so it's that whole spectrum, not only allowing them to tap into, and analyze the data and gain insights from it, but allow them to customize how they do it and do it in a more self-service way. So that's the notion of scale, on demand insights. It's really a cultural thing enabled through the technology. With that foundation, then you have the ability to start infusing, where I think the real power starts to kick in here. So I mean, all that's kind of making your data ready for AI, right? Then you start to infuse machine learning, everywhere. And that's when you start to build these models that are self-learning, that start to automate the ability to get to these insights, and to the data. And uncover what has previously been unattainable, right? And that's where the whole thing starts to become automated and more real time and more intelligent. And that's where those models then allow you to do things you couldn't do before. With the data, they're saying they're not getting access to. And then of course, once you get the models, just because you have good models doesn't mean that they've been operationalized, that they've been embedded in applications, embedded in business process.
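What "building a model" means at its smallest can be illustrated with a toy sketch: a one-weight model fit by gradient descent, in plain Python rather than any Watson tooling. All numbers are invented, and here "explainability" is as simple as reading off the learned weight:

```python
# Toy sketch of model training: fit y ~ w*x + b by gradient descent
# on made-up data (roughly y = 2x). Nothing here is an IBM API.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    # Gradients of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

# The model is explainable by inspection: one weight, one bias.
print(round(w, 1))  # 2.0
```

Production models have millions of parameters instead of two, which is exactly why the trust, transparency, and explainability questions at the top of the ladder become hard.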
That you have trust and transparency and explainability of what it's telling you. And that's the top tier of the ladder: it's really about embedding it into your business process in a way that you trust it. So, we have a systematic set of approaches to that, best practices. And of course we have the portfolio that would help you step up that ladder. >> So the fat middle of this bell curve, this maturity curve, is kind of the organize and analyze phase, that's probably where most people are today. And what's the big challenge of getting up that ladder, is it the algorithms, what is it? >> Well I think it clearly, with most movements like this, starts with culture and skills, right? And the ability to just change the game within an organization. But putting that aside, I think what's really needed here is an information architecture that's based in the agility of a cloud native platform, that gives you the productivity, and truly allows you to leverage your data, wherever it resides. So whether it's in the private cloud, the public cloud, on premise, dedicated, no matter where it sits, you want to be able to tap into all that data. 'Cause remember, the challenge with data is it's always changing. I don't mean the sources, but the actual data. So you need an architecture that can handle all that. Once you stabilize that, then you can start to apply better analytics to it. And so yeah, I think you're right. That is sort of the bell curve here. And with that foundation that's when the power of infusing machine learning and deep learning and neural networks, I mean those kinds of AI technologies and models into it all, just takes it to a whole new level. But you can't do those models until you have those bottom tiers under control.
>> What developers of AI applications, particularly those that have been successful, have told us pretty clearly, is that building the actual algorithms is not necessarily the hard part. The hard part is making all the data ready for that. And in fact I was reading a survey the other day of actual data scientists and AI developers and 60% of them said the thing they hate the most is all the data collection, data prep. 'Cause it's so hard. And so, a big part of our strategy is just to simplify that. Make it simple and accessible so that you can really focus on what you want to do and where the value is, which is building the algorithms and the models, and getting those deployed. >> Big challenge and hugely important, I mean IBM is a 100-year-old company that's going through its own digital transformation. You know, we've had Inderpal Bhandari on talking about how to essentially put data at the core of the company, it's a real hard problem for a lot of companies who were not born, you know, five or seven years ago. And so, putting data at that core and putting human expertise around it as opposed to maybe having whatever else as the core. Humans or the plant or the manufacturing facility, that's a big change for a lot of organizations. Now at the end of the day, IBM sells strategy, but in the analytics group you're in the software business, so what offerings do you have to help people get there? >> Well in the collect step, it's essentially our hybrid data management portfolio. So think DB2, DB2 Warehouse, DB2 Event Store, which is about IOT data. So there's a set of, and that's where big data and Hadoop and all that with Hortonworks, that's where that all fits in. So building the ability to access all this data, virtualize it, do things like Queryplex, things of that nature, is where that all sits. >> Queryplex being that data virtualization capability. >> Yeah. >> Get to the data no matter where it is.
>> Define a query and don't worry about where it resides, we'll figure that out for you, kind of thought, right? In the organize step, that is InfoSphere, so that's basically the unified governance and integration part of our portfolio. So again, that is taking the collected data and organizing it, and making sure you're compliant with whatever policies. And making it, you know, business ready, right? And so InfoSphere's where you should look to understand that portfolio better. When you get into scale and analytics on demand, that's Cognos Analytics, and it is our Planning Analytics portfolio. And that's essentially our business analytics part of all this. And some data science tools like SPSS, if we're doing statistical analysis, and SPSS Modeler, if we're doing statistical modeling, things of that nature, right? When you get into the automate and the ML everywhere, that's Watson Studio, which is the integrated development environment, right? Not just for IBM Watson, it has a huge array of open technologies in it like TensorFlow and Python, and all those kinds of things. So that's the development environment, and Watson Machine Learning is the runtime that will allow you to run those models anywhere. So those are the two big pieces of that. And then from there you'll see IBM building out more and more of what we already have. But we have Watson applications. Like Watson Assistant, Watson Discovery. We have a huge portfolio of Watson APIs for everything from tone to speech, things of that nature. And then the ability to infuse that all into the business processes. Sort of where you're going to see IBM heading in the future here.
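The data-virtualization idea discussed here, defining one query without worrying about where each table resides, can be illustrated with stdlib SQLite as a stand-in. This is a toy sketch of the concept, not Queryplex itself; the tables and values are hypothetical:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
sales_path = os.path.join(tmp, "sales.db")
crm_path = os.path.join(tmp, "crm.db")

# Two separate "data sources", stand-ins for systems living in
# different places (different clouds, on-prem, wherever).
sales = sqlite3.connect(sales_path)
sales.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
sales.executemany("INSERT INTO orders VALUES (?, ?)",
                  [("acme", 100.0), ("globex", 250.0)])
sales.commit()
sales.close()

crm = sqlite3.connect(crm_path)
crm.execute("CREATE TABLE customers (name TEXT, region TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?)",
                [("acme", "east"), ("globex", "west")])
crm.commit()
crm.close()

# "Virtualize" both sources behind one connection: attach the second
# database and query across both as if they were one.
db = sqlite3.connect(sales_path)
db.execute("ATTACH DATABASE ? AS crm", (crm_path,))
rows = db.execute(
    "SELECT c.region, SUM(o.amount) FROM orders o "
    "JOIN crm.customers c ON c.name = o.customer "
    "GROUP BY c.region ORDER BY c.region"
).fetchall()
print(rows)  # [('east', 100.0), ('west', 250.0)]
```

The point of a real virtualization layer is that the join above would span genuinely remote sources, with no replication or bulk data movement, which is the "let it sit where it is" argument made earlier.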
Where are you on this maturity curve, here's how we can help with products and services. And then the other thing I'll mention, you know, we kind of learned when we spoke to some others this week, and we saw in some of your announcements previously, the Red Hat component, which allows you to bring that cloud experience no matter where you are. And you've got technologies to do that. Obviously, you know, with Red Hat you guys have been sort of birds of a feather in open source. Because your data is going to live wherever it lives, whether it's on-prem, whether it's in the cloud, whether it's at the edge, and you want to bring sort of a common model, whether it's containers, Kubernetes, being able to bring that cloud experience to the data. Your thoughts on that?

>> And this is where the big deal comes in. For each one of those tiers, so the Db2 family, Infosphere, business analytics, Cognos and all that, and Watson Studio, you can get started, purchase those technologies and start to use them, right, as individual products or software as a service. What we're also doing, and this is the more important step into the future, is building all those capabilities into one integrated, unified cloud platform. That's called IBM Cloud Private for Data. Think of that as a unified, collaborative team environment for AI and data science, completely built on a cloud-native architecture of containers and microservices, that will support a multi-cloud environment. So IBM Cloud, other clouds, you mention Red Hat with OpenShift. So over time, by adopting IBM Cloud Private for Data, you'll get those steps of the ladder all integrated into one unified environment. So you have the ability to buy the unified environment, get involved in that, and it's all integrated, no assembly required, kind of thought. Or you could assemble it by buying the individual components, or some combination of both.
So a big part of the strategy is a great deal of flexibility in how you acquire these capabilities and deploy them in your enterprise. There's no one size fits all. We give you a lot of flexibility to do that.

>> And that's a true hybrid vision. I don't have to have just IBM and IBM Cloud; you're recognizing other clouds out there, you're not exclusive like some companies, and that's really important.

>> It's a multi-cloud strategy, it really is, it's a multi-cloud strategy. And that's exactly what we need. We recognize that very few businesses have standardized on only one cloud provider, right? Most of them have multiple clouds, and then it breaks out into dedicated, private, public. And so our strategy is to enable this capability, think of it as a cloud data platform for AI, across all these clouds, regardless of what you have.

>> All right, Scott, thanks for taking us through the strategy. I've always loved talking to you 'cause you're a clear thinker, and you explain things really well in simple terms. A lot of complexity here, but it is really important as the next wave sets up. So thanks very much for your time.

>> Great, always great to be here, thank you.

>> All right, good to see you. All right, thanks for watching everybody. We are now going to bring it back to CubeNYC, so thanks for watching, and we will see you in the afternoon. We've got the panel, the influencer panel, that I'll be running with Peter Burris and John Furrier. So keep it right there, we'll be right back. (upbeat music)

Published Date : Sep 13 2018



John Thomas, IBM | IBM CDO Summit Spring 2018


 

>> Narrator: Live from downtown San Francisco, it's theCUBE, covering IBM Chief Data Officer Strategy Summit 2018, brought to you by IBM.

>> We're back in San Francisco, here at the Parc 55 at the IBM Chief Data Officer Strategy Summit. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante. IBM holds its Chief Data Officer Strategy Summits on both coasts, one in Boston and one in San Francisco, a couple of times each year, with about 150 chief data officers coming in to learn how to apply their craft, learn what IBM is doing, and share ideas. Great peer networking, really senior audience. John Thomas is here, he's a distinguished engineer and director at IBM. Good to see you again, John.

>> Same to you.

>> Thanks for coming back in theCUBE. So let's start with your role. Distinguished engineer, we've had this conversation before, but it just doesn't happen overnight; you've got to be accomplished, so congratulations on achieving that milestone. But what is your role?

>> The road to distinguished engineer is long, but these days I spend a lot of my time working on data science, and in fact I am part of what is called a data science elite team. We work with clients on data science engagements. This is not consulting, this is not services; this is where a team of data scientists works collaboratively with a client on a specific use case, and we build it out together. We bring data science expertise, machine learning, deep learning expertise. We work with the business and build out a set of tangible assets that are relevant to that particular client.

>> So this is not a for-pay service. This is, hey, you're a great customer, a great client of ours, we're going to bring together some resources, you'll learn, we'll learn, we'll grow together, right?

>> This is an investment IBM is making. It's a major investment, for our top clients, working with them on their use cases.

>> This is a global initiative?

>> This is global, yes.
>> We're talking about, what, hundreds of clients? Thousands of clients?

>> Well, eventually thousands, but we're starting small. We are trying to scale now, and obviously, once you get into these engagements, you find out that it's not just about building some models. There are a lot of challenges that you've got to deal with in an enterprise setting.

>> Dave: What are some of the challenges?

>> Well, in any data science engagement, the first thing is to have clarity on the use case that you're engaging in. You don't want to build models for models' sake. Just because TensorFlow or scikit-learn is great at building models, that doesn't serve a purpose. That's the first thing: do you have clarity on the business use case itself? Then comes data. Now, I cannot stress this enough, Dave: there is no data science without data. And you might think this is the most obvious thing, of course there has to be data, but when I say data I'm talking about access to the right data. Do we have governance over the data? Do we know who touched the data? Do we have lineage on that data? Because garbage in, garbage out, you know this. Do we have access to the right data, in the right control setting, for the machine learning models we build? These are challenges, and then there's another challenge: okay, I built my models, but how do I operationalize them? How do I weave those models into the fabric of my business? So these are all challenges that we have to deal with.

>> That's interesting what you're saying about the data. It does sound obvious, but having the right data model matters as well. I think about when I interact with Netflix: I don't talk to their customer service department or their marketing department or their sales department or their billing department. It's one experience.

>> You just have an experience, exactly.

>> This notion of incumbent disruptors: is that a logical starting point for these guys to get to the point where they have a data model that is a single data model?
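The governance questions above, who touched the data and what is its lineage, boil down to bookkeeping of roughly this shape. This is a toy sketch with invented names, far thinner than a real governance catalog such as the Infosphere portfolio discussed earlier in this transcript.

```python
# Toy lineage log: every transformation appends who did what to the dataset,
# so "who touched the data?" has an answer. Illustrative only.

class Dataset:
    def __init__(self, rows, source):
        self.rows = rows
        self.lineage = [("created", source)]

    def transform(self, fn, actor):
        """Apply fn to every row and record the actor and operation."""
        self.rows = [fn(r) for r in self.rows]
        self.lineage.append(("transformed by " + actor, fn.__name__))
        return self

def double(x):
    return x * 2

ds = Dataset([1, 2, 3], "warehouse_extract").transform(double, "etl_job_7")
print(ds.rows, ds.lineage)
```

The design point is that lineage is appended automatically by the access path itself, not reconstructed after the fact; that is what makes "garbage in, garbage out" auditable.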
>> Single data model. (laughs)

>> Dave: What does that mean, right? At least from an experience standpoint.

>> Once we know this is the kind of experience we want to target, what are the relevant data sets and data pieces that are necessary to make that experience happen or come together? Sometimes there's core enterprise data that you have; in many cases, it has been augmented with external data. Do you have a strategy around handling your internal and external data, your structured transactional data, your semi-structured data, your newsfeeds? All of these need to come together in a consistent fashion for that experience to be true. It is not just about, I've got my credit card transaction data, but what else is augmenting that data? You need a model, you need a strategy around that.

>> I talk to a lot of organizations and they say, we have a good back-end reporting system. We have Cognos, we can build cubes, and all kinds of financial data that we have. But then it doesn't get down to the front line. Can we instrument the front line? We talk about IoT, and that portends change there, but there's a lot of data that either isn't persisted or stored, or doesn't even exist. So is that one of the challenges that you see enterprises dealing with?

>> It is a challenge. Do I have access to the right data, whether that is data at rest or in motion? Am I persisting it in a way I can consume later? Or am I just moving big volumes of data around because the analytics is there, or the machine learning is there, and I have to move data out of my core systems into that area? That is just a waste of time, complexity, cost, often hidden costs, 'cause people don't usually think about the hidden costs of moving large volumes of data around. But instead of that, can I bring analytics and machine learning and data science itself to where my data is? Not necessarily move it around all the time.
Whether you're dealing with streaming data or large volumes of data in your Hadoop environment or mainframes or whatever, can I do ML in place and get the most value out of the data that is there?

>> What's happening with all that Hadoop? Nobody talks about Hadoop anymore. Hadoop largely became a way to store data for less, but there's all this data now in a data lake. How are customers dealing with that?

>> This is such an interesting thing. People used to talk about big data, you're right. We jumped from there to the cognitive era, but it's not like that, right? Without the data there is no cognition, there is no AI, there is no ML. In terms of existing investments in Hadoop, for example, you have to absolutely be able to tap in and leverage those investments. For example, many large clients have investments in large Cloudera or Hortonworks environments, or other Hadoop environments. So if you're doing data science, how do you push down, how do you leverage that for scale, for example? How do you access the data using the same access control mechanisms that are already in place? Maybe you have Kerberos as your mechanism, how do you work with that? How do you avoid moving data off of that environment? How do you push down data prep into the Spark cluster? How do you do model training in that Spark cluster? All of these become important in terms of leveraging your existing investments. It is not just about accessing data where it is, it's also about leveraging the scale that the company has already invested in. You have hundred-node, 500-node Hadoop clusters; well, make the most of them in terms of scaling your data science operations. So push down and access data as much as possible in those environments.

>> So Beth Smith talked today about Watson's law, and she made a little joke about that, but to me it's poignant because we are entering a new era. For decades this industry marched to the cadence of Moore's law, then of course Metcalfe's law in the internet era.
I want to make an observation and see if it resonates. It seems like innovation is no longer going to come from doubling microprocessor speed, and the network is there, it's built out, the internet is built. It seems like innovation comes from applying AI to data to get insights, and then being able to scale, so it's cloud economics: marginal costs go to zero, massive network effects, and scale, the ability to track innovation. That seems to be the innovation equation, but how do you operationalize that?

>> To your point, Dave, when we say cloud scale, we want the flexibility to do that in an off-prem public cloud, or in a private cloud, or in between, in a hybrid cloud environment. When you talk about operationalizing, there are a couple of different things. People think, say, I've got a super Python programmer and he's great with TensorFlow or scikit-learn or whatever, and he builds these models. Great, but what happens next? How do you actually operationalize those models? You need to be able to deploy those models easily. You need to be able to consume those models easily. For example, you have a chatbot. A chatbot is dumb until it actually calls these machine learning models, real time, to make decisions on which way the conversation should go. So how do you make that chatbot intelligent? It's when it consumes the ML models that have been built. So deploying models, consuming models. You create a model, you deploy it, you've got to push it through the development, test, staging, production phases, with just the same rigor that you would have for any applications that are deployed. Then another thing is, a model is great on day one. Let's say I built a fraud detection model; it works great on day one. A week later, a month later, it's useless, because the data it trained on is not what the fraudsters are using now. So patterns have changed, and the model needs to be retrained. How do I make sure the performance of the model stays good over time? How do I do monitoring?
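The fraud-model story here, great on day one and useless a month later, is the classic drift problem. A hedged sketch of the monitoring check might look like the following; the tolerance and the shape of the feedback are invented for illustration.

```python
# Hedged sketch of model lifecycle monitoring: compare live accuracy on a
# recent window against the accuracy at deployment time, and flag a retrain
# when it degrades past a tolerance. Thresholds are illustrative only.

def needs_retraining(baseline_accuracy, recent_outcomes, tolerance=0.10):
    """recent_outcomes: list of booleans, True = the model was right."""
    if not recent_outcomes:
        return False
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return (baseline_accuracy - recent_accuracy) > tolerance

# A fraud model that shipped at 95% accuracy, now right 7 times out of 10:
print(needs_retraining(0.95, [True] * 7 + [False] * 3))
```

Real monitoring adds statistical care (window sizes, confidence intervals, input-distribution drift as well as outcome drift), but the loop is the same: measure, compare to baseline, trigger retraining.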
How do I retrain the models? How do I do the life cycle management of the models, and then scale? Which is, okay, I deployed this model and it's great, every application is calling it, maybe I have partners calling these models. How do I automatically scale, whether with what you are using behind the scenes or with external clusters? Technologies like Spectrum Conductor, from our HPC background, are very interesting counterparts to this. How do I scale? How do I burst? How do I go from an on-prem to an off-prem environment? How do I build something behind the firewall but deploy it into the cloud, for a chatbot or some other cloud-native application? All of these things become interesting in the operationalizing.

>> So how do all these conversations that you're having with these global elite clients, and the challenges that you're unpacking, get back into innovation for IBM? What's that process like?

>> It's an interesting place to be in, because I am hearing and experiencing first hand real enterprise challenges, and there we see, does our product handle this particular thing now? That is an immediate circling back with offering management and development: hey guys, we need this particular function, because I'm seeing this happening again and again in customer engagements. So that helps us shape our products, shape our data science offerings. And running with the flow of what everyone is doing, we'll look at that: what do our clients want? Where are they headed? And shape the products that way.

>> Excellent. Well, John, thanks very much for coming back in theCUBE. It's a pleasure to see you again; I appreciate your time.

>> Thank you, Dave.

>> All right, good to see you. Keep it right there everybody, we'll be back with our next guest. We're live from the IBM CDO Strategy Summit in San Francisco. You're watching theCUBE.

Published Date : May 1 2018



Daniel Hernandez, IBM | IBM Think 2018


 

>> Narrator: Live from Las Vegas, it's theCUBE, covering IBM Think 2018. Brought to you by IBM.

>> We're back at Mandalay Bay in Las Vegas. This is IBM Think 2018, and this is day three of theCUBE's wall-to-wall coverage. My name is Dave Vellante, I'm here with Peter Burris. You're watching theCUBE, the leader in live tech coverage. Daniel Hernandez is here. He's the Vice President of IBM Analytics, and a CUBE alum. It's great to see you again, Daniel.

>> Thanks.

>> Dave: Thanks for coming back on.

>> Happy to be here.

>> Big tech show, consolidating a bunch of shows. You guys kind of used to have your own sort of analytics show, but now you've got all the clients here. How do you like it? Compare and contrast.

>> IBM Analytics loves to share, so having all our clients in one place, I actually like it. We're going to work out some of the kinks a little bit, but I think one show where you can have a conversation around artificial intelligence, data, analytics, power systems, is beneficial to all of us, actually.

>> Well, in many respects the whole industry is munging together. Folks focus more on workloads as opposed to technology or even roles. So having an event like this, where folks can talk about what they're trying to do, the workloads they're trying to create, the role that analytics, AI, et cetera is going to play in informing those workloads, is not a bad place to get that crosspollination. What do you think?

>> Daniel: Totally. You talk to a client, there are so many problems. Problems are a combination of stuff that we have to offer in analytics and stuff that our friends in Hybrid Integration have to offer. So for me, logistically, I could say, oh, Mike Gilfix, business process automation, go talk to him. And he's here. That's happened probably at least a dozen times so far in not even two days.

>> All right, so I've got to ask about your tagline: making data ready for AI. What does that mean?

>> We get excited about amazing tech. Artificial intelligence is amazing technology.
I remember when Watson beat Jeopardy, just being inspired by all the things that I thought it could do to solve problems that matter to me. And if you look over the last many years, virtual assistants, image recognition systems that solve pretty big problems, like catching bad guys, are inspirational pieces of work that were inspired a lot by what we did then. And in business, it's triggered a wave of, artificial intelligence can help me solve business-critical issues. And I will tell you that many clients simply aren't ready to get started. And because they're not ready, they're going to fail. And so our attitude is, through IBM Analytics, we're going to deliver the critical capabilities you need to be ready for AI. And if you don't have that, 100% of your projects will fail.

>> But how do you get the business ready to think about data differently? You can do a lot to say, the technology you need to do this looks different, but you also need to get the organization to acculturate, to appreciate that their business is going to run differently as a consequence of data and what you do with it. How do you get the business to start making adjustments?

>> I think you just said the magic word: the business. Which is to say, in all the conversations I have with my customers, they can't even tell that I'm from analytics, because I'm asking them about the problems. What are you trying to do? How would you measure success? What are the critical issues that you're trying to solve? Are you trying to make money, save money, those kinds of things. And by focusing on that, we can then advise them how we can help. So the data culture that you're describing, I think it's a fact: you become data aware and understand the power of it by doing. You do by starting with the problems, developing successes, and then iterating.

>> An approach to solving problems.

>> Yeah.

>> So that's kind of a step zero to getting data ready for AI.

>> Right.
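The discipline described above, fix the problem and the success metric first, then iterate, can be caricatured as an experiment loop that scores every candidate against one agreed metric. The metric and the candidates below are invented toy stand-ins, not anything from the interview.

```python
# Toy version of "measure success, then iterate": score every candidate
# against the agreed metric and rank them, best first.

def run_experiments(candidates, metric):
    """Return (score, name) pairs for every candidate, best score first."""
    scored = [(metric(c), name) for name, c in candidates.items()]
    scored.sort(reverse=True)
    return scored

# Hypothetical problem: pick the cutoff that best separates two populations.
def accuracy_for_cutoff(cutoff, lows=(1, 2, 3), highs=(7, 8, 9)):
    correct = sum(1 for v in lows if v < cutoff) + sum(1 for v in highs if v >= cutoff)
    return correct / (len(lows) + len(highs))

candidates = {"cutoff_2": 2, "cutoff_5": 5, "cutoff_8": 8}
ranking = run_experiments(candidates, accuracy_for_cutoff)
print(ranking)
```

The loop is trivial; the hard part, as the interview stresses, is agreeing on the metric before any model is built.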
But no conversation that leads to success ever starts with, we're going to do AI or machine learning; what problem are we going to solve? It's always the other way around. And when we do that, our technology is then easily explainable. It's like, okay, you want to build a system for better customer interactions in your call center. Well, what does that mean? You need data about how they have interacted with you, products they have interacted with; you might want predictions that anticipate what their needs are before they tell you. And so we can systematically address them through the capabilities we've got.

>> Dave, if I could amplify one thing: it makes the technology easier when you put it in these contexts. I think that's a really crucial, important point.

>> It's super simple. All of us have had to have it, if we're in technology. Going the other way around, my stuff is cool, here's why it's cool, what problems can you solve? That's not helpful for most of our clients.

>> I wonder if you could comment on this, Daniel. I feel like the last ten years were about cloud, mobile, social, big data. We seem to be entering an era now of sense, speak, act, optimize, see, learn. This sort of pervasive AI, if you will. Is that a reasonable notion, that we're entering that era, and what do you see clients doing to take advantage of it? What's their mindset like when you talk to them?

>> I think the evidence is there. You've just got to look around the show and see what's possible, technically. The Watson team has been doing quite a bit of stuff around speech, around image. It's fascinating tech, stuff that feels magical to me. And I know how this stuff works, and it still feels kind of fascinating. Now the question is, how do you apply that to solve problems?
I think it's only a matter of time before most companies are implementing artificial intelligence systems in business-critical and core parts of their processes, and they're going to get there by starting, by doing what they're already doing now with us. And that is: what problem am I solving? What data do I need to get that done? How do I control and organize that information so I can exploit it? How can I exploit machine learning and deep learning and all these other technologies to then solve that problem? How do I measure success? How do I track that? And just systematically running these experiments, I think, crescendos to a critical mass.

>> Let me ask you a question, because you're a technologist, and you said it's amazing, it's like magic even to you. Imagine a non-technologist, what it's like to them. There's a black-box component of AI, and maybe that's okay. I'm just wondering, is that a headwind? Are clients comfortable with that? If you have to describe how you really know it's a cat... I mean, I know a cat when I see it. And the machine can tell me it's a cat, or not a hot dog, Silicon Valley reference. (Peter laughs) But to tell me actually how it works, to figure that out, there's a black-box component. Does that scare people? Or are they okay with that?

>> You've probably given me too much credit, so I really can't explain how all that just works. But what I can tell you is, let's take regulated industries, like banks and insurance companies, that are building machine learning models throughout their enterprise. They've got to explain to a regulator that they are offering considerations around anti-discrimination; basically, that they're not building systems that cause them to do things that are against the law, effectively. So what are they doing?
Well, they're using tools, like ones from IBM, to build these models and to track the process of creating them, which includes what data they used and how the training was done; to prove that the inputs and outputs are not discriminatory; and to actually go through their own internal general counsel and regulators to get it done. So whether you can explain the model, in this particular case, doesn't matter. What they're trying to prove is that the effect is not violating the law, which the tool sets, and the process around those tool sets, allow you to get done today.

>> Well, let me build on that, because one of the ways that it does work is that, as Ginni Rometty said yesterday, there's always going to be a machine-plus-human component to it. And so the way it typically works is, a machine says, I think this is a cat, and a human validates it or not. The machine still doesn't really know if it's a cat. But coming back to this point, one of the key things that we see, anyway, and one of the advantages that IBM likely has, is that today the folks running operational systems, the core of the business, trust their data sources.

>> Do they?

>> They trust their Db2 database, they trust their Oracle database, they trust the data that's in the applications.

>> Dave: So it's the data that's in their data lake?

>> I'm not saying they do, but that's the key question. At what point in time, and I think this is the really important part of your question, at what point in time do the hardcore people allow AI to provide a critical input that's going to significantly or potentially dramatically change the behavior of the core operational systems? That seems a really crucial point. What kind of feedback do you get from customers as you talk about turning AI from something that has an insight every now and then into, effectively, an element essential to the operation of the business?
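The regulator-facing record keeping described above, proving what data a model was trained on and how, comes down to capturing auditable metadata at training time. The sketch below is a toy stand-in for real model-governance tooling; the function, field names, and metric are all hypothetical.

```python
import hashlib
import json

# Sketch of an audit trail for a regulated model: record a digest of the
# training data plus the evaluation metrics, so a reviewer can later verify
# exactly which inputs produced which model. Illustrative only.

def training_record(model_name, training_rows, metrics):
    digest = hashlib.sha256(
        json.dumps(training_rows, sort_keys=True).encode()
    ).hexdigest()
    return {"model": model_name, "data_sha256": digest, "metrics": metrics}

rec = training_record(
    "credit_risk_v1",
    [{"age": 40, "approved": True}],
    {"auc": 0.81},
)
print(rec["model"], rec["data_sha256"][:8])
```

Because the digest is deterministic, anyone holding the original training set can recompute it and confirm that the deployed model really came from the audited data, which is the point of the compliance process described in the interview.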
>> One of the critical issues in getting machine learning models, especially, integrated into business-critical processes and workflows is getting those models running where that work is done. So if you look, when I was here last time, we were focused on portfolio simplification and bringing machine learning to where the data was. We brought machine learning to private cloud, we brought it onto Hadoop, we brought it onto the mainframe. I think it is a critical, necessary ingredient that you need to deliver that outcome: bring the technology to where the data is. Otherwise it just won't work. Why? As soon as you move, you've got latency. As soon as you move, you've got data quality issues you're going to have to contend with. That's going to exacerbate whatever mistrust you might have.

>> Or the stuff's not cheap to move. It's not cheap to ingest.

>> Yeah. By the way, the Machine Learning on Z offering that we launched last year, in March and April, was one of our highest, most successful offerings last year.

>> Let's talk about some of the offerings. I mean, at the end of the day you're in the business of selling stuff. You've talked about Machine Learning on Z, on whatever platform. Cloud Private, I know you've got perspectives on that. Db2 Event Store is something that you're obviously familiar with. SPSS is part of the portfolio.

>> 50 years, the anniversary.

>> Give us the update on some of these products.

>> Making data ready for AI requires a design principle of simplicity. We launched in January three core offerings that help clients benefit from the capabilities we deliver to capture data, to organize and control that data, and to analyze that data. So we delivered a Hybrid Data Management offering, which gives you everything you need to collect data, anchored by Db2. We have the Unified Governance and Integration portfolio, which gives you everything you need to organize and control that data, anchored by our Information Server product set.
And we've got our Data Science and Business Analytics portfolio, which is anchored by our Data Science Experience, SPSS, and Cognos Analytics. So clients that want to mix and match those capabilities, in support of artificial intelligence systems or otherwise, can benefit from that easily. We just announced here an even more radical step forward in simplification, which we thought there already was. So, you want to move to the public cloud but can't, or don't want to, for whatever reason. And we think, by the way, you should try to run as much workload as you can on public cloud, because of the benefits of it. But if for whatever reason you can't, we need to deliver those benefits behind the firewall, where those workloads are. So last year the Hybrid Integration team, led by Denis Kennelly, introduced an IBM Cloud Private offering. It's basically application PaaS behind the firewall, running on a Kubernetes environment; your applications do buildouts and migrations of existing workloads to it. What we did with IBM Cloud Private for Data is build the data companion for that. IBM Cloud Private was a runaway success for us. You could imagine the data companion to that just being, well, what application doesn't need data? It's peanut butter and jelly for us.

>> Last question. Oh, you had another point?

>> It's all right. I wanted to talk about Db2 and SPSS.

>> Oh yes, let's go there, yeah.

>> Db2 Event Store, I forget if anybody- it has a 100x performance improvement on ingest relative to the current state of the art. You say, why does that matter? If you do analysis or analytics, machine learning, artificial intelligence, you're only as good as whatever data you have captured of your reality. Currently our databases don't allow you to capture everything you would want. So Db2 Event Store, with that ingest rate, lets you capture more than you could ever imagine you would want.
250 billion events per year is basically what it's rated at. So we think that's a massive improvement in database technology, and it happens to be based on open source, so the programming model is something that developers find familiar. SPSS is celebrating its 50th anniversary. It's the number one digital offering inside of IBM. It had 510,000 users trying it out last year. We just renovated the user experience and made it even simpler on Statistics. We're doing the same thing on Modeler, and we're bringing SPSS and our Data Science Experience together so that there's one tool chain for data science, end to end, in the private cloud. It's pretty phenomenal stuff. >> Okay great, appreciate you running down the portfolio for us. Last question. It's kind of a, get out your telescope. When you talk to clients, when you think about technology from a technologist's perspective, how far can we take machine intelligence? Think 20-plus years: how far can we take it, and how far should we take it? >> Can they ever really know what a cat is? (chuckles) >> I don't know what the answer to that question is, to be honest. >> Are people asking you that question, in the client base? >> No. >> Are they still figuring out, how do I apply it today? >> Surely they're not asking me, probably because I'm not the smartest guy in the room. They're probably asking some of the smarter guys-- >> Dave: Well, Elon Musk is talking about it. Stephen Hawking was talking about it. >> I think it's so hard to anticipate. I think where we are today is magical, and I couldn't have anticipated it seven years ago, to be honest, so I can't imagine. >> It's really hard to predict, isn't it? >> Yeah. I've been wrong on three- to four-year horizons. I can't do 20 realistically. So I'm sorry to disappoint you. >> No, that's okay.
Because it leads to my real last question, which is: what kinds of things can machines do that humans can't? And you don't even have to answer this, but I just want to put it out there for the audience to think about. How are they going to complement each other? How are they going to compete with each other? These are some of the big questions that I think society is asking. And IBM has some answers: we're going to apply it here, here, and here, and you guys are clear about augmented intelligence, not replacement. But there are big questions that I think we want to get out there and have people ponder. I don't know if you have a comment. >> I do. I think there are relationships between data, expressing some part of your reality, that are non-obvious to human beings but that a machine, through machine learning, can see. Now, what does it mean? Do you take action on it? Is it simply an observation? Is it something that a human being can do? So I think that combination is something that companies can take advantage of today. Those non-obvious relationships inside of your data, non-obvious insights into your data, are what machines can get done now. That's how machine learning is being used today. Is it going to be able to reason about what to do next? Not yet, so you still need human beings in the middle, too, especially when you deal with consequential decisions. >> Yeah, but nonetheless, I think the impact on industry is going to be significant. Other questions we ask: are retail stores going to be the exception versus the norm? Will banks lose control of the payment systems? Will cyber be the future of warfare? Et cetera, et cetera. These are really interesting questions that we try and cover on theCUBE, and we appreciate you helping us explore them. Daniel, it's always great to see you. >> Thank you, Dave. Thank you, Peter. >> Alright, keep it right there, buddy. We'll be back with our next guest right after this short break. (electronic music)

Published Date : Mar 21 2018



Day One Kickoff | PentahoWorld 2017


 

>> Narrator: Live from Orlando, Florida, it's theCUBE. Covering PentahoWorld 2017. Brought to you by Hitachi Vantara. >> We are kicking off day one of PentahoWorld. Brought to you, of course, by Hitachi Vantara. I'm your host, Rebecca Knight, along with my co-hosts. We have Dave Vellante and James Kobielus. Guys, I'm thrilled to be here in Orlando, Florida, kicking off PentahoWorld with theCUBE. >> Hey Rebecca, twice in one week. >> I know, this is very exciting, very exciting. So we were just listening to the keynotes. We heard a lot about the big three, the power of the big three, which is internet of things, predictive analytics, big data. So the question for you both is: where is Hitachi Vantara in this marketplace? And are they doing what they need to do to win? >> Well, so the first big question everyone is asking is: what the heck is Hitachi Vantara? (laughing) What is that? >> Maybe we should have started there. >> We joke, some people say it sounds like an SUV, Japanese company, blah blah blah. When we talked to Brian-- >> Jim: A well-engineered SUV. >> So Brian Householder told us, well, you know, it really is about vantage and vantage points. And when you listen to their angles on insights and data: anywhere and however you want it. So they're trying to give their customers an advantage and a vantage point on data and insights. So that's kind of interesting and cool branding. The second big point, I think, is that Hitachi has undergone a massive transformation itself. Certainly Hitachi America, which is really not a brand they use anymore, but Hitachi Data Systems. Brian Householder talked in his keynote about how, when he came in 14 years ago, Hitachi was 80 percent hardware, and infrastructure, and storage. And they've transformed that. They were about 50/50 last year, in terms of infrastructure versus software and services. But what they've done, in my view, is now taken the next step.
I think Hitachi has said, alright, listen: storage is going to the cloud, Dell and EMC are knocking each other's heads off, China is coming into play. Do we really want to try and dominate that business? Rather, why don't we play from our strengths? Which is devices, internet of things, the industrial internet. So they buy Pentaho two years ago, and we're going to talk more about that, bringing in an analytics platform. And this sort of marrying of IT and OT, information technology and operational technology, together to go attack what is a trillion-dollar marketplace. >> That's it, so Pentaho was a very strategic acquisition. For Hitachi, of course, Hitachi Data Systems plus Hitachi Insight Group plus Pentaho equals Hitachi Vantara. Pentaho was one of the pioneering vendors more than a decade ago in the whole open source analytics arena. If you cast your mind back to the middle of the millennium's first decade, open source was starting to come into its own. Of course, we already had Linux and so forth, but in terms of the data world, we're talking about the pre-Hadoop era, the pre-Spark era, the pre-TensorFlow era. Pentaho at that time, which is, by the way, now a product group within Hitachi Vantara, not a stand-alone company, established itself as the spearhead for open-source predictive analytics and data mining. They made something called Weka, which is an open-source data mining toolkit that was actually developed initially in New Zealand. Their offering in many ways made them a core player in analytics as a service and so forth, but they very much established themselves as an up-and-coming solution provider taking a more or less by-the-book open source approach to delivering solutions to market. But they were entering a market that was already fairly mature in terms of data mining, because you are talking about the mid-2000s.
You already had SAS, and SPSS, and some of the others that had been in that space and done quite well for a long time. And so cut ahead to the present day. Pentaho had evolved to incorporate some fairly robust data integration, data transformation, all the ETL capabilities into their portfolio. They had become a big data player in their own right, with a strong focus on embedded analytics, as the keynoters indicated this morning. There's a certain point in this decade where it became clear that they couldn't go any further in terms of differentiating themselves in this space, a space that's dominated by Hadoop and Spark and AI things like TensorFlow, unless they were part of a more diversified solution provider that offered, I think the critical thing was, the edge orientation of the industrial internet of things. Which is really where many of the opportunities are now for a variety of new markets that are opening up, including autonomous vehicles, which was the focus of here all-- >> Let's clarify some things a little bit. So Pentaho actually started before the whole Hadoop movement. >> Yeah, yeah. >> That's kind of interesting. You know, they were a young company when Hadoop just started to take off. And they said, alright, we can adopt these techniques and processes as well. So they weren't true legacy, right? >> Jim: No. >> So they were able to ride that sort of modern wave. But essentially they're in the business of data, I call it data management, and maybe that's not the right term. They do ingest, they're doing ETL, transformation anyway. They're embedding, they've got analytics, they're embedding analytics. Like you said, they're building on top of Weka. >> James: In the first flush, when BI was a hot topic in the market in the mid-2000s, they became a fairly substantial BI player. That actually helped them to grow in terms of revenue and customers. >> So they're one of those companies that touches on a lot of different areas. >> Yes.
>> So who do we sort of compare them to? Obviously, you think of guys like Informatica. >> Yeah, yeah. >> Who do heavy ETL. >> Yes. You mentioned BI, you mentioned before, like, guys like SAS. What about Tableau? >> Well, BI would be, like, there's Tableau, and QlikView and so forth. But there's also very much-- >> Talend. >> Cognos under IBM. And, of course, there's the BusinessObjects portfolio under SAP. >> David: Right. And Talend would be? >> In fact, I think Talend in many ways is the closest analog >> Right. >> to Pentaho in terms of a predominantly open-source go-to-market approach that involves both the robust data integration and cleansing and so forth on the back end, and also a deep dive of open source analytics on the front end. >> So their differentiation, they sort of claim, is that they're sort of end-to-end integration. >> Jim: Yeah. >> Which is something we've been talking about at Wikibon for a while. And George is doing some work there, you probably are too. It's an age-old thing in software: do you do best-of-breed or do you do sort of an integrated suite? Now, the interesting thing about Pentaho is they don't own their own cloud. Hitachi Vantara doesn't own their own cloud. So they do a lot of, it's an integrated pipeline, but it doesn't include its own database and other tooling. >> Jim: Yeah. >> Right, and so there is an interesting dynamic occurring, which we want to talk to Donna Perlik about obviously: how they position relative to roll-your-own, and then how they position, sort of, in the cloud world. >> And we should also ask how they're positioning now in the world of deep learning frameworks. I mean, they don't provide, near as I know, their own deep learning framework to compete with the likes of TensorFlow, or MXNet, or CNTK or so forth. So where are they going in that regard? I'd like to know.
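The integrated pipeline being discussed here, ingest, then ETL-style transformation, then analytics embedded on top, can be sketched end to end in miniature. The records, field names, and cleansing rule below are all hypothetical; this illustrates the pattern, not Pentaho's actual Data Integration tooling.

```python
# Minimal extract-transform-load sketch: raw records in, cleansed
# rows out, with a small "embedded analytics" step at the end.
# All data and field names are hypothetical.

raw = [
    {"device": "pump-1", "temp_f": "212", "ts": "2017-10-26T10:00:00"},
    {"device": "pump-2", "temp_f": "bad", "ts": "2017-10-26T10:00:05"},
    {"device": "pump-1", "temp_f": "194", "ts": "2017-10-26T10:00:10"},
]

def extract(records):
    # In a real pipeline this would read from files, queues, or databases.
    yield from records

def transform(records):
    for r in records:
        try:
            temp_c = (float(r["temp_f"]) - 32) * 5 / 9   # cleanse + convert
        except ValueError:
            continue                                     # drop unparseable rows
        yield {"device": r["device"], "temp_c": round(temp_c, 1)}

def load(rows, table):
    table.extend(rows)

warehouse = []
load(transform(extract(raw)), warehouse)

# "Embedded analytics": a simple aggregate over the loaded rows.
avg = sum(r["temp_c"] for r in warehouse) / len(warehouse)
print(warehouse)
print(round(avg, 1))  # 95.0
```

The bad row is dropped during transformation, two clean rows land in the warehouse, and the analytics step runs directly against the loaded result, which is the end-to-end shape the panel is describing.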
I mean, there are some others that are big players in this space, like IBM, who don't offer their own deep learning framework but support more than one of the existing frameworks in a portfolio that includes much of the other componentry. So in other words, what I'm saying is you don't need to have your own deep learning framework, or even an open-source deep learning code base, to compete in this new marketplace. And perhaps Pentaho, or Hitachi Vantara, in its roadmapping, will take an IBM-like approach, where they'll bundle support, or incorporate support, for two or more of these third-party tools or open source code bases into their solution. Weka is not theirs either. It's open source. I mean, Weka is an open source tool that they've supported from the get-go. And they've done very well by it. >> It's just kind of like early-day machine learning. >> David: Yeah. >> Okay, so we've heard about Hitachi's transformation internally. And then their messaging today was, of course-- >> Exactly, that's where I really wanted to go next. We're talking about it from the product and the technology standpoint, but one of the things we kept hearing about today was this idea of the double bottom line. And this is how Hitachi Vantara is really approaching the marketplace: by really focusing on better business, better outcomes, for their customers, and obviously for Hitachi Vantara, too, but also on bettering society. And that's what we're going to see on theCUBE today. We're going to have a lot of guests who will come on and talk about how they're using Pentaho to solve problems in healthcare data, in keeping kids from dropping out of college, in getting computing and other kinds of internet power to underserved areas. I think that's another really important approach that Hitachi Vantara is taking in its model.
The fact that Hitachi Vantara, I know, acquired a Pentaho solution that has been on the market for so long, and that they have such a wide range of reference customers all over the world, in many verticals. >> Rebecca: That's a great point. >> The most vertical. Willing to go on camera and speak at some length about how they're using it inside their business and so forth. That speaks volumes about a solution provider. Meaning, they do good work. They provide good offerings. They're companies that have invested a lot of money in it, and are willing to vouch for them. That says a lot. >> Rebecca: Right. >> And so the acquisition was in 2015. I don't believe it was a public number. It's Hitachi Limited. I don't think they had to report it, but the number I heard was about a half a billion. >> Jim: Uh-hm. >> Which, for a company with the potential of Pentaho, is actually pretty cheap, believe it or not. You see a lot of unicorns, billion-dollar-plus companies. But the more important thing is it allows Hitachi to further its transformation and really go after this trillion-dollar business. Which is really going to be interesting to see how that unfolds. Because while Hitachi has a long-term view, it always takes a long-term view, you still have got to make money. It's fuzzy how you make money in IoT these days. Obviously, you can make money selling devices.
A gateway where you're sort of aggregating data, and then ultimately the third tier is the cloud. And that cloud, I think, vectors into two areas. One is Onprem and one was public cloud. What's interesting with Brian from Forester was saying that basically said that puts the nail in the coffin of Onprem analytics and Onprem big data. >> Uh-hm >> I don't buy that. >> I don't buy that either. >> No, I think the cloud is going to go to your data. Wherever the data lives. The cloud model of self-service and agile and elastic is going to go to your data. >> Couple of weeks ago, of course we Wikibon, we did a webinar for our customers all around the notion of a true private cloud. And Dave, of course, Peter Burse were on it. Explaining that hybrid clouds, of course, public and private play together. But where the cloud experience migrates to where the data is. In other words, that data will be both in public and in private clouds. But you will have the same reliability, high availability, scaleability, ease of programming, so forth, wherever you happen to put your data assets. In other words, many companies we talk to do this. They combine zonal architecture. They'll put some of their resources, like some of their analytics, will be in the private cloud for good reason. The data needs to stay there for security and so forth. But much in the public cloud where its way cheaper quite often. Also, they can improve service levels for important things. What I'm getting at is that the whole notion of a true private cloud is critically important to understand that its all datacentric. Its all gravitating to where the data is. And really analytics are gravitating to where the data is. And increasingly the data is on the edge itself. Its on those devices where its being persistent, much of it. Because there's no need to bring much of the raw data to the gateway or to the cloud. If you can do the predominate bulk of the inferrencing on that data at edge devices. 
And more and more the inferrencing, to drive things like face recognition from you Apple phone, is happening on the edge. Most of the data will live there, and most of the analytics will be developed centrally. And then trained centrally, and pushed to those edge devices. That's the way it's working. >> Well, it is going to be an exciting conference. I can't wait to hear more from all of our guests, and both of you, Dave Vellante and Jim Kobielus. I'm Rebecca Knight, we'll have more from theCUBE's live coverage of Pentaho World, brought to you by Hitachi Vantara just after this.

Published Date : Oct 26 2017



Wikibon Presents: Software is Eating the Edge | The Entangling of Big Data and IIoT


 

>> So as folks make their way over from Javits, I'm going to give you the least interesting part of the evening, and that's my segment, in which I welcome you here, introduce myself, and lay out what we're going to do for the next couple of hours. So first off, thank you very much for coming. As all of you know, Wikibon is a part of SiliconANGLE, which also includes theCUBE, so if you look around, this is what we have been doing for the past couple of days here on theCUBE. We've been inviting some significant thought leaders from over on the show and, in incredibly expensive limousines, driving them up the street to come onto theCUBE and spend time with us and talk about some of the things that are happening in the industry today that are especially important. We tore it down, and we're having this party tonight. So we want to thank you very much for coming, and we look forward to having more conversations with all of you. Now what are we going to talk about? Well, Wikibon is the research arm of SiliconANGLE. So we take data that comes out of theCUBE and other places and incorporate it into our research. And we work very closely with large end users and large technology companies regarding how to make better decisions in this incredibly complex, incredibly important, transformative world of digital business. What we're going to talk about tonight, and I've got a couple of my analysts assembled, and we're also going to have a panel, is this notion of software is eating the Edge. Now most of you have probably heard Marc Andreessen, the venture capitalist and original developer of Netscape many years ago, talk about how software's eating the world. Well, if software is truly going to eat the world, it's going to take the big chunks, the big bites, at the Edge. That's where the actual action's going to be. And what we want to talk about specifically is the entangling of the internet, or the industrial internet of things and IoT, with analytics.
So that's what we're going to talk about over the course of the next couple of hours. To do that, I've already blown the schedule, that's on me. But to do that, I'm going to spend a couple minutes talking about what we regard as the essential digital business capabilities, which includes analytics and Big Data, and includes IIoT, and we'll explain, at least in our position, why those two things come together the way that they do. But I'm going to ask the august and revered Neil Raden, Wikibon analyst, to come on up and talk about harvesting value at the Edge. 'Cause there are some, not now Neil, when we're done, when I'm done. So I'm going to ask Neil to come on up and we'll talk, he's going to talk about harvesting value at the Edge. And then Jim Kobielus will follow up with him, another Wikibon analyst; he'll talk specifically about how we're going to take that combination of analytics and Edge and turn it into the new types of systems and software that are going to sustain this significant transformation that's going on. And then after that, I'm going to ask Neil and Jim to come up, and invite some other folks up, and we're going to run a panel to talk about some of these issues and do a real question and answer. So the goal here, before we break for drinks, is to create a community feeling within the room. That includes smart people here and smart people in the audience having a conversation, ultimately, about some of these significant changes, so please participate, and we look forward to talking about the rest of it. All right, let's get going! What is digital business? One of the nice things about being an analyst is that you can reach back to people who were significantly smarter than you and build your points of view on the shoulders of those giants, including Peter Drucker. Many years ago Peter Drucker made the observation that the purpose of business is to create and keep a customer. Not better shareholder value, not anything else.
It is about creating and keeping your customer. Now you can argue with that, but at the end of the day, if you don't have customers, you don't have a business. What we've added to that is the observation that the difference between business and digital business is essentially one thing. That's data. A digital business uses data to differentially create and keep customers. That's the only difference. If you think about the difference between taxi cab companies here in New York City, every cab that I've been in in the last three days has bothered me about Uber. The reason, the difference between Uber and a taxi cab company, is data. That's the primary difference. Uber uses data as an asset. And we think this is the fundamental feature of digital business that everybody has to pay attention to. How is a business going to use data as an asset? Is the business using data as an asset? Is a business driving its engagement with customers, the role of its product, et cetera, using data? And if they are, they are becoming a more digital business. Now when you think about that, what we're really talking about is how are they going to put data to work? How are they going to take their customer data and their operational data and their financial data and any other kind of data and ultimately turn that into superior engagement, or improved customer experience, or more agile operations, or increased automation? Those are the kinds of outcomes that we're talking about. But it is about putting data to work. That's fundamentally what we're trying to do within a digital business. Now that leads to an observation about the crucial strategic business capabilities that every business that aspires to be more digital, or to be digital, has to put in place. And I want to be clear: when I say strategic capabilities, I mean something specific.
When you talk about, for example, technology architecture or information architecture, there is this notion of: what capabilities does your business need? Your business needs capabilities to pursue and achieve its mission. And in the digital business these are the capabilities that are now additive to this core question, ultimately, of whether or not the company is a digital business. What are the three capabilities? One, you have to capture data. Not just do a good job of it, but better than your competition. You have to capture data better than your competition, in a way that is ultimately less intrusive on your markets and on your customers. That's, in many respects, one of the first priorities of the internet of things and people: the idea of using sensors and related technologies to capture more data. Once you capture that data you have to turn it into value. You have to do something with it that creates business value so you can do a better job of engaging your markets and serving your customers. And that essentially is what we regard as the basis of Big Data. Including operations, including financial performance and everything else, but ultimately it's taking the data that's being captured and turning it into value within the business. The last point here is that once you have generated a model, or an insight, or some other resource that you can act upon, you then have to act upon it in the real world. We call that systems of agency: the ability to enact based on data. Now I want to spend just a second talking about systems of agency, 'cause we think it's an interesting concept and it's something Jim Kobielus is going to talk about a little bit later. When we say systems of agency, what we're saying is increasingly machines are acting on behalf of a brand. Or systems, combinations of machines and people, are acting on behalf of the brand. And this whole notion of agency is the idea that ultimately these systems are now acting as the business's agent.
They are at the front line of engaging customers. It's an extremely rich proposition that has subtle but crucial implications. For example, I was talking to a senior decision maker at a business today and they made a quick observation. They talked about how, on their way here to New York City, they had followed a woman who was going through security, opened up her suitcase, and took out a bird. And then went through security with the bird. And the reason why I bring this up now is that, as TSA was trying to figure out how exactly to deal with this, the bird started talking and repeating things that the woman had said, and many of those things, in fact, might have put her in jail. Now in this case the bird is not an agent of that woman. You can't put the woman in jail because of what the bird said. But increasingly we have to ask ourselves, as we ask machines to do more on our behalf, digital instrumentation and elements to do more on our behalf, it's going to have blowback and an impact on our brand if we don't do it well. I want to draw that forward a little bit, because I suggest there's going to be a new lifecycle for data. The way that we think about it is we have the internet, or the Edge, which is comprised of things and, crucially, people, using sensors, whether they be small processors in control towers or whether they be phones that are tracking where we go. And the crucial element here is something that we call information transducers. Now a transducer in the traditional sense is something that takes energy from one form to another so that it can perform new types of work. By information transducer I essentially mean something that takes information from one form to another so it can perform another type of work. This is a crucial feature of data. One of the beauties of data is that it can be used in multiple places at multiple times and not engender significant net new costs. It's one of the few assets you can say that about.
So the concept of an information transducer's really important because it's the basis for a lot of transformations of data as data flies through organizations. So we end up with the transducers storing data in the form of analytics, machine learning, business operations, other types of things, and then it goes back and it's transduced back into the real world as we program the real world, turning it into these systems of agency. So that's the new lifecycle. And increasingly, that's how we have to think about data flows. Capturing it, turning it into value and having it act on our behalf in front of markets. That could have enormous implications for how ultimately money is spent over the next few years. So Wikibon does a significant amount of market research in addition to advising our large user customers. And that includes doing studies on cloud, public cloud, but also studies on what's happening within the analytics world. And if you take a look at it, what we basically see happening over the course of the next few years is significant investments in software and also services to get the word out. But we also expect there's going to be a lot of hardware. A significant amount of hardware that's ultimately sold within this space. And that's because of something that we call true private cloud. This concept of a business increasingly being designed and architected around the idea of data assets means the physical realities of how data operates come to the fore: how much it costs to store it or move it, the issues of latency, the issues of intellectual property protection, as well as things like the regulatory regimes that are being put in place to govern how data gets used between locations. All of those factors are going to drive increased utilization of what we call true private cloud. On-premises technologies that provide the cloud experience but act where the data naturally needs to be processed. I'll come a little bit more to that in a second.
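The lifecycle described here, capture at the Edge, transduce into analytic form, then act back on the world through systems of agency, can be sketched as a toy pipeline. Every function, reading and threshold below is a hypothetical illustration, not anything specified in the talk:

```python
# Toy sketch of the capture -> transduce -> act lifecycle described above.
# Every function, reading and threshold here is a hypothetical illustration.

def capture(sensor_readings):
    """Edge capture: keep the raw readings from things and people."""
    return [r for r in sensor_readings if r is not None]

def transduce(raw):
    """Information transducer: convert raw data into an analytic form
    (here, a simple average) so it can perform a new type of work."""
    return sum(raw) / len(raw) if raw else 0.0

def act(insight, threshold=75.0):
    """System of agency: enact a decision in the real world on the
    business's behalf, based on the transduced insight."""
    return "throttle_down" if insight > threshold else "steady_state"

readings = [70.2, 71.0, None, 80.5, 82.3]   # hypothetical temperatures
decision = act(transduce(capture(readings)))
print(decision)                              # -> throttle_down (mean is 76.0)
```

The point of the sketch is the shape: the data changes form at each stage so it can perform a new type of work, which is exactly what an information transducer does.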
So we think that it's going to be a relatively balanced market, a lot of stuff is going to end up in the cloud, but as Neil and Jim will talk about, there's going to be an enormous amount of analytics that pulls an enormous amount of data out to the Edge 'cause that's where the action's going to be. Now one of the things I want to also reveal to you is we've done a fair amount of data, we've done a fair amount of research around this question of where or how will data guide decisions about infrastructure? And in particular the Edge is driving these conversations. So here is a piece of research that one of our cohorts at Wikibon did, David Floyer. Taking a look at IoT Edge cost comparisons over a three year period. And it showed on the left-hand side an example where the sensor towers and other types of devices were streaming data back into a central location in a wind farm, stylized wind farm example. Very very expensive. Significant amounts of money end up being consumed, significant resources end up being consumed by the cost of moving the data from one place to another. Now this is even assuming that latency does not become a problem. The second example that we looked at is if we kept more of that data at the Edge and processed it at the Edge. And literally it is an 85-plus percent cost reduction to keep more of the data at the Edge. Now that has enormous implications for how we think about big data, how we think about next generation architectures, et cetera. But it's these costs that are going to be so crucial to shaping the decisions that we make over the next two years about where we put hardware, where we put resources, what type of automation is possible, and what types of technology management have to be put in place. Ultimately we think it's going to lead to a structure, an architecture in the infrastructure as well as applications that is informed more by moving cloud to the data than moving the data to the cloud.
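The Floyer-style comparison reduces to simple arithmetic: pay to move all the raw data, or pay to process it locally and move only the residue. Every figure below is an invented placeholder chosen to illustrate the shape of the calculation, not a number from the Wikibon study:

```python
# Back-of-envelope version of the Edge-versus-central cost comparison.
# Every figure below is a hypothetical placeholder, not a number from
# the Wikibon study; only the shape of the arithmetic is the point.

SENSORS = 100
GB_PER_SENSOR_PER_DAY = 5
DAYS = 3 * 365                       # three-year window, as in the study

transport_cost_per_gb = 0.09         # hypothetical backhaul $/GB
central_compute_per_gb = 0.02        # hypothetical central processing $/GB
edge_compute_per_gb = 0.012          # hypothetical edge processing $/GB
kept_at_edge = 0.95                  # fraction of data never leaving the Edge

total_gb = SENSORS * GB_PER_SENSOR_PER_DAY * DAYS

# Option A: stream all raw data to a central location and process it there.
central = total_gb * (transport_cost_per_gb + central_compute_per_gb)

# Option B: process at the Edge and ship only the residual summaries.
edge = total_gb * edge_compute_per_gb \
     + total_gb * (1 - kept_at_edge) * transport_cost_per_gb

savings = 1 - edge / central
print(f"central=${central:,.0f}  edge=${edge:,.0f}  savings={savings:.0%}")
```

With these invented rates the savings land right around the 85 percent figure cited; the real study's inputs differ, but the dominant term, transporting raw data, is the same.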
That's kind of our fundamental proposition is that the norm in the industry has been to think about moving all data up to the cloud because who wants to do IT? It's so much cheaper, look what Amazon can do. Or what AWS can do. All true statements. Very very important in many respects. But most businesses today are starting to rethink that simple proposition and asking themselves do we have to move our business to the cloud, or can we move the cloud to the business? And increasingly what we see happening as we talk to our large customers about this, is that the cloud is being extended out to the Edge, we're moving the cloud and cloud services out to the business. Because of economic reasons, intellectual property control reasons, regulatory reasons, security reasons, any number of other reasons. It's just a more natural way to deal with it. And of course, the most important reason is latency. So with that as a quick backdrop, if I may quickly summarize, we believe fundamentally that the difference today is that businesses are trying to understand how to use data as an asset. And that requires an investment in new sets of technology capabilities that are not cheap, not simple, and require significant thought, a lot of planning, and a lot of change within IT and business organizations. How we capture data, how we turn it into value, and how we translate that into real world action through software. That's going to lead to a rethinking, ultimately, based on cost and other factors, about how we deploy infrastructure. How we use the cloud so that the data guides the activity, rather than the choice of cloud supplier determining or limiting what we can do with our data. And that's going to lead to this notion of true private cloud and elevate the role the Edge plays in analytics and all other architectures. So I hope that was perfectly clear. And now what I want to do is I want to bring up Neil Raden. Yes, now's the time Neil!
So let me invite Neil up to spend some time talking about harvesting value at the Edge. Can you see his, all right. Got it. >> Oh boy. Hi everybody. Yeah, this is a really, this is a really big and complicated topic so I decided to just concentrate on something fairly simple, but I know that Peter mentioned customers. And he also had a picture of Peter Drucker. I had the pleasure in 1998 of interviewing Peter and photographing him. Peter Drucker, not this Peter. Because I'd started a magazine called Hired Brains. It was for consultants. And Peter said, Peter said a number of really interesting things to me, but one of them was his definition of a customer was someone who wrote you a check that didn't bounce. He was kind of a wag. He was! So anyway, he had to leave to do a video conference with Jack Welch and so I said to him, how do you charge Jack Welch to spend an hour on a video conference? And he said, you know I have this theory that you should always charge your client enough that it hurts a little bit or they don't take you seriously. Well, I had the chance to talk to Jack's wife, Suzie Welch recently and I told her that story and she said, "Oh he's full of it, Jack never paid "a dime for those conferences!" (laughs) So anyway, all right, so let's talk about this. To me, things about, engineered things like the hardware and network and all these other standards and so forth, we haven't fully developed those yet, but they're coming. As far as I'm concerned, they're not the most interesting thing. The most interesting thing to me in Edge Analytics is what you're going to get out of it, what the result is going to be. Making sense of this data that's coming. And while we're on data, something I've been thinking a lot lately because everybody I've talked to for the last three days just keeps talking to me about data. I have this feeling that data isn't actually quite real. 
That any data that we deal with is the result of some process that's captured it from something else that's actually real. In other words it's proxy. So it's not exactly perfect. And that's why we've always had these problems about customer A, customer A, customer A, what's their definition? What's the definition of this, that and the other thing? And with sensor data, I really have the feeling, when companies get, not you know, not companies, organizations get instrumented and start dealing with this kind of data what they're going to find is that this is the first time, and I've been involved in analytics, I don't want to date myself, 'cause I know I look young, but the first, I've been dealing with analytics since 1975. And everything we've ever done in analytics has involved pulling data from some other system that was not designed for analytics. But if you think about sensor data, this is data that we're actually going to catch the first time. It's going to be ours! We're not going to get it from some other source. It's going to be the real deal, to the extent that it's the real deal. Now you may say, ya know Neil, a sensor that's sending us information about oil pressure or temperature or something like that, how can you quarrel with that? Well, I can quarrel with it because I don't know if the sensor's doing it right. So we still don't know, even with that data, if it's right, but that's what we have to work with. Now, what does that really mean? Is that we have to be really careful with this data. It's ours, we have to take care of it. We don't get to reload it from source some other day. If we munge it up it's gone forever. So that has, that has very serious implications, but let me, let me roll you back a little bit. The way I look at analytics is it's come in three different eras. And we're entering into the third now. The first era was business intelligence. It was basically built and governed by IT, it was system of record kind of reporting. 
And as far as I can recall, it probably started around 1988 or at least that's the year that Howard Dresner claims to have invented the term. I'm not sure it's true. And things happened before 1988 that were sort of like BI, but 88 was when they really started coming out, that's when we saw BusinessObjects and Cognos and MicroStrategy and those kinds of things. The second generation just popped out on everybody else. We were all looking around at BI and we were saying why isn't this working? Why are only five people in the organization using this? Why are we not getting value out of this massive license we bought? And along come companies like Tableau doing data discovery, visualization, data prep, and Line of Business people are using this now. But it's still the same kind of data sources. It's moved out a little bit, but it still hasn't really hit the Big Data thing. Now we're in the third generation, so we not only have Big Data, which has come and hit us like a tsunami, but we're looking at smart discovery, we're looking at machine learning. We're looking at AI induced analytics workflows. And then all the natural language cousins. You know, natural language processing, natural language, what's? Oh Q, natural language query. Natural language generation. Anybody here know what natural language generation is? Yeah, so what you see now is you do some sort of analysis and that tool comes up and says this chart is about the following and it used the following data, and it's blah blah blah blah blah. I think it's kind of wordy and it's going to be refined some, but it's an interesting thing to do. Now, the problem I see with Edge Analytics and IoT in general is that most of the canonical examples we talk about are pretty thin. I know we talk about autonomous cars, I hope to God we never have them, 'cause I'm a car guy. Fleet Management, I think Qualcomm started Fleet Management in 1988, that is not a new application. Industrial controls.
I seem to remember, I seem to remember Honeywell doing industrial controls at least in the 70s and before that I wasn't, I don't want to talk about what I was doing, but I definitely wasn't in this industry. So my feeling is we all need to sit down and think about this and get creative. Because the real value in Edge Analytics or IoT, whatever you want to call it, the real value is going to be figuring out something that's new or different. Creating a brand new business. Changing the way an operation happens in a company, right? And I think there's a lot of smart people out there and I think there's a million apps that we haven't even talked about so, if you as a vendor come to me and tell me how great your product is, please don't talk to me about autonomous cars or Fleet Managing, 'cause I've heard about that, okay? Now, hardware and architecture are really not the most interesting thing. We fell into that trap with data warehousing. We've fallen into that trap with Big Data. We talk about speeds and feeds. Somebody said to me the other day, what's the narrative of this company? This is a technology provider. And I said as far as I can tell, they don't have a narrative they have some products and they compete in a space. And when they go to clients and the clients say, what's the value of your product? They don't have an answer for that. So we don't want to fall into this trap, okay? Because IoT is going to inform you in ways you've never even dreamed about. Unfortunately some of them are going to be really stinky, you know, they're going to be really bad. You're going to lose more of your privacy, it's going to get harder to get, I dunno, mortgage for example, I dunno, maybe it'll be easier, but in any case, it's not going to all be good. So let's really think about what you want to do with this technology to do something that's really valuable. Cost takeout is not the place to justify an IoT project. 
Because number one, it's very expensive, and number two, it's a waste of the technology because you should be looking at, you know the old numerator denominator thing? You should be looking at the numerators and forget about the denominators because that's not what you do with IoT. And the other thing is you don't want to get over confident. Actually this is good advice about anything, right? But in this case, I love this quote by Derek Sivers He's a pretty funny guy. He said, "If more information was the answer, "then we'd all be billionaires with perfect abs." I'm not sure what's on his wishlist, but you know, I would, those aren't necessarily the two things I would think of, okay. Now, what I said about the data, I want to explain some more. Big Data Analytics, if you look at this graphic, it depicts it perfectly. It's a bunch of different stuff falling into the funnel. All right? It comes from other places, it's not original material. And when it comes in, it's always used as second hand data. Now what does that mean? That means that you have to figure out the semantics of this information and you have to find a way to put it together in a way that's useful to you, okay. That's Big Data. That's where we are. How is that different from IoT data? It's like I said, IoT is original. You can put it together any way you want because no one else has ever done that before. It's yours to construct, okay. You don't even have to transform it into a schema because you're creating the new application. But the most important thing is you have to take care of it 'cause if you lose it, it's gone. It's the original data. It's the same way, in operational systems for a long long time we've always been concerned about backup and security and everything else. You better believe this is a problem. I know a lot of people think about streaming data, that we're going to look at it for a minute, and we're going to throw most of it away. Personally I don't think that's going to happen. 
I think it's all going to be saved, at least for a while. Now, the governance and security, oh, by the way, I don't know where you're going to find a presentation where somebody uses a newspaper clipping about Vladimir Lenin, but here it is, enjoy yourselves. I believe that when people think about governance and security today they're still thinking along the same grids that we thought about it all along. But this is very very different and again, I'm sorry I keep thrashing this around, but this is treasured data that has to be carefully taken care of. Now when I say governance, my experience has been over the years that governance is something that IT does to make everybody's lives miserable. But that's not what I mean by governance today. It means a comprehensive program to really secure the value of the data as an asset. And you need to think about this differently. Now the other thing is you may not get to think about it differently, because some of the stuff may end up being subject to regulation. And if the regulators start regulating some of this, then that'll take some of the degrees of freedom away from you in how you put this together, but you know, that's the way it works. Now, machine learning, I think I told somebody the other day that claims about machine learning in software products are as common as twisters in trailer parks. And a lot of it is not really what I'd call machine learning. But there's a lot of it around. And I think all of the open source machine learning and artificial intelligence that's popped up, it's great because all those math PhDs who work at Home Depot now have something to do when they go home at night and they construct this stuff. But if you're going to have machine learning at the Edge, here's the question, what kind of machine learning would you have at the Edge? As opposed to developing your models back at, say, the cloud, when you transmit the data there. The devices at the Edge are not very powerful.
And they don't have a lot of memory. So you're only going to be able to do things that have been modeled or constructed somewhere else. But that's okay. Because machine learning algorithm development is actually slow and painful. So you really want the people who know how to do this working with gobs of data creating models and testing them offline. And when you have something that works, you can put it there. Now there's one thing I want to talk about before I finish, and I think I'm almost finished. I wrote a book about 10 years ago about automated decision making and the conclusion that I came up with was that little decisions add up, and that's good. But it also means you don't have to get them all right. But you don't want computers or software making decisions unattended if it involves human life, or frankly any life. Or the environment. So when you think about the applications that you can build using this architecture and this technology, think about the fact that you're not going to be doing air traffic control, you're not going to be monitoring crossing guards at the elementary school. You're going to be doing things that may seem fairly mundane. Managing machinery on the factory floor, I mean that may sound great, but really isn't that interesting. Managing well heads, drilling for oil, well I mean, it's great to the extent that it doesn't cause wells to explode, but they don't usually explode. What it's usually used for is to drive the cost out of preventative maintenance. Not very interesting. So use your heads. Come up with really cool stuff. And any of you who are involved in Edge Analytics, the next time I talk to you I don't want to hear about the same five applications that everybody talks about. Let's hear about some new ones. So, in conclusion, I don't really have anything in conclusion except that Peter mentioned something about limousines bringing people up here. 
On Monday I was slogging up and down Park Avenue and Madison Avenue with my client and we were visiting all the hedge funds there because we were doing a project with them. And in the miserable weather I looked at him and I said, for godsake Paul, where's the black car? And he said, that was the 90s. (laughs) Thank you. So, Jim, up to you. (audience applauding) This is terrible, go that way, this was terrible coming that way. >> Woo, don't want to trip! And let's move to, there we go. Hi everybody, how ya doing? Thanks Neil, thanks Peter, those were great discussions. So I'm the third leg in this relay race here, talking about of course how software is eating the world. And focusing on the value of Edge Analytics in a lot of real world scenarios. Programming the real world for, to make the world a better place. So I will talk, I'll break it out analytically in terms of the research that Wikibon is doing in the area of the IoT, but specifically how AI intelligence is being embedded really to all material reality potentially at the Edge. But mobile applications and industrial IoT and the smart appliances and self driving vehicles. I will break it out in terms of a reference architecture for understanding what functions are being pushed to the Edge to hardware, to our phones and so forth to drive various scenarios in terms of real world results. So I'll move a pace here. So basically AI software or AI microservices are being infused into Edge hardware as we speak. What we see is more vendors of smart phones and other, real world appliances and things like smart driving, self driving vehicles. What they're doing is they're instrumenting their products with computer vision and natural language processing, environmental awareness based on sensing and actuation and those capabilities and inferences that these devices just do to both provide human support for human users of these devices as well as to enable varying degrees of autonomous operation. 
So what I'll be talking about is how AI is a foundation for data driven systems of agency of the sort that Peter is talking about. Infusing data driven intelligence into everything, or potentially so. As more of this capability, all these algorithms for things like, ya know, doing real time predictions and classifications, anomaly detection and so forth, as this functionality gets diffused widely and becomes more commoditized, you'll see it burned into an ever-wider variety of hardware architectures, neurosynaptic chips, GPUs and so forth. So what I've got here in front of you is a sort of a high level reference architecture that we're building up in our research at Wikibon. So AI, artificial intelligence is a big term, a big paradigm, I'm not going to unpack it completely. Of course we don't have oodles of time so I'm going to take you fairly quickly through the high points. It's a driver for systems of agency. Programming the real world. Transducing digital inputs, the data, to analog real world results. Through the embedding of this capability in the IoT, but pushing more and more of it out to the Edge with points of decision and action in real time. And there are four AI-enabled capabilities that are absolutely critical to software being pushed to the Edge: sensing, actuation, inference and learning. Sensing and actuation, like Peter was describing, it's about capturing data from the environment within which a device or user is operating or moving. And then actuation is the fancy term for doing stuff, ya know, like industrial IoT, it's obviously machine controlled, but clearly, you know, for self driving vehicles it's steering a vehicle and avoiding crashing and so forth. Inference is the meat and potatoes, as it were, of AI. Analytics does inferences. It infers from the data the logic of the application.
Predictive logic, correlations, classification, abstractions, differentiation, anomaly detection, recognizing faces and voices. We see that now with Apple and the latest version of the iPhone is embedding face recognition as a core, as the core multifactor authentication technique. Clearly that's a harbinger of what's going to be universal fairly soon which is that depends on AI. That depends on convolutional neural networks, that is some heavy hitting processing power that's necessary and it's processing the data that's coming from your face. So that's critically important. So what we're looking at then is the AI software is taking root in hardware to power continuous agency. Getting stuff done. Powered decision support by human beings who have to take varying degrees of action in various environments. We don't necessarily want to let the car steer itself in all scenarios, we want some degree of override, for lots of good reasons. They want to protect life and limb including their own. And just more data driven automation across the internet of things in the broadest sense. So unpacking this reference framework, what's happening is that AI driven intelligence is powering real time decisioning at the Edge. Real time local sensing from the data that it's capturing there, it's ingesting the data. Some, not all of that data, may be persistent at the Edge. Some, perhaps most of it, will be pushed into the cloud for other processing. When you have these highly complex algorithms that are doing AI deep learning, multilayer, to do a variety of anti-fraud and higher level like narrative, auto-narrative roll-ups from various scenes that are unfolding. A lot of this processing is going to begin to happen in the cloud, but a fair amount of the more narrowly scoped inferences that drive real time decision support at the point of action will be done on the device itself. 
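That division, local sensing feeding narrowly scoped inference that drives action on the device itself, can be sketched as a minimal control loop. The readings, the stand-in inference rule and the action names are all invented for illustration:

```python
# Minimal sense -> infer -> actuate loop at the Edge. The readings,
# the stand-in inference rule and the action names are all hypothetical.

def sense(stream):
    """Sensing: capture one reading from the environment."""
    return next(stream, None)

def infer(reading, history, k=3.0):
    """Inference: flag a reading that deviates sharply from the recent
    mean (a stand-in for a model trained elsewhere and pushed down)."""
    if not history:
        return False
    mean = sum(history) / len(history)
    return abs(reading - mean) > k

def actuate(anomalous):
    """Actuation: do something at the point of action."""
    return "open_relief_valve" if anomalous else "no_op"

stream = iter([20.0, 20.4, 19.9, 31.5, 20.1])   # hypothetical pressures
history, actions = [], []
while (reading := sense(stream)) is not None:
    actions.append(actuate(infer(reading, history)))
    history.append(reading)

print(actions)   # the 31.5 spike is the only reading flagged as anomalous
```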
Contextual actuation, so it's the sensor data that's captured by the device, along with other data that may be coming down in real time streams through the cloud, that will provide the broader contextual envelope of data needed to drive actuation, to drive various models and rules and so forth that are making stuff happen at the point of action, at the Edge. Continuous inference. What it all comes down to is that inference is what's going on inside the chips at the Edge device. And what we're seeing is a growing range of hardware architectures, GPUs, CPUs, FPGAs, ASICs, neurosynaptic chips of all sorts, playing in various combinations that are automating more and more very complex inference scenarios at the Edge. And not just individual devices, swarms of devices, like drones and so forth, are essentially an Edge unto themselves. You'll see these tiered hierarchies of Edge swarms that are playing and doing inferences of an ever more complex dynamic nature. And much of this capability, the fundamental capabilities that are powering them all, will be burned into the hardware that powers them. And then adaptive learning. Now I use the term learning rather than training here; training is at the core of it. Training means everything in terms of the predictive fitness of your AI services for whatever task (predictions, classifications, face recognition) you've built them for. But I use the term learning in a broader sense. What makes your inferences get better and better, more accurate over time, is that you're training them with fresh data in a supervised learning environment. But you can have reinforcement learning if you're doing, say, robotics and you don't have ground truth against which to train the data set. You know, there you maximize a reward function versus minimize a loss function, the latter being the standard approach for supervised learning.
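That "minimize a loss function" framing of supervised learning can be made concrete in one dimension: fit a single weight by gradient descent on squared error (reinforcement learning flips the sign and climbs a reward instead). The data and learning rate below are hypothetical:

```python
# Supervised learning as loss minimization, in one dimension. We fit a
# single weight w so that w*x matches the labels, by gradient descent
# on mean squared error. Data and learning rate are hypothetical.

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # (x, label) pairs, roughly y = 2x

def loss(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w):
    # d/dw of the mean squared error: 2 * mean(x * (w*x - y))
    return 2 * sum(x * (w * x - y) for x, y in data) / len(data)

w, lr = 0.0, 0.05
for _ in range(200):
    w -= lr * grad(w)       # step against the gradient: minimize the loss
                            # (reinforcement learning would instead step
                            # up the gradient of a reward function)

print(round(w, 2))          # settles near 2.0, the slope of the data
```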
There's also, of course, the approach of unsupervised learning, with cluster analysis critically important in a lot of real world scenarios. So Edge AI algorithms: clearly, deep learning, which is multilayered machine learning models that can do abstractions at higher and higher levels. Face recognition is a high level abstraction. Faces in a social environment is an even higher level of abstraction in terms of groups. Faces over time and bodies and gestures, doing various things in various environments, is an even higher level abstraction in terms of narratives that can be rolled up, are being rolled up, by deep learning capabilities of great sophistication. Convolutional neural networks for processing images, recurrent neural networks for processing time series. Generative adversarial networks for doing essentially what are called generative applications of all sorts, composing music, and a lot of it's being used for auto programming. These are all deep learning. There's a variety of other algorithm approaches I'm not going to bore you with here. Deep learning is essentially the enabler of the five senses of the IoT. Your phone has a camera, it has a microphone, and of course it has geolocation and navigation capabilities. It's environmentally aware, it's got an accelerometer and so forth embedded therein. The reason that your phone and all of these devices are getting scary sentient is that they have the sensory modalities and the AI, the deep learning, that enable them to make environmentally correct decisions in a wider range of scenarios. So machine learning is the foundation of all of this; of deep learning, artificial neural networks are the foundation.
But there are other approaches for machine learning I want to make you aware of, because support vector machines and these other established approaches for machine learning are not going away, but really what's driving the show now is deep learning, because it's scary effective. And so that's where most of the investment in AI is going these days: into deep learning. AI Edge platforms, tools and frameworks are just coming along like gangbusters. Much development of AI, of deep learning, happens in the context of your data lake. This is where you're storing your training data. This is the data that you use to build, test and validate your models. So we're seeing a deepening stack of Hadoop, and there's Kafka and Spark and so forth, that are driving the training of AI models that power all these Edge Analytic applications, so that lake will continue to broaden and deepen in terms of the scope and range of data sets and AI modeling it supports. Data science is critically important in this scenario because the data scientist, the data science teams, the tools and techniques and flows of data science are the fundamental development paradigm or discipline or capability that's being leveraged to build, to train, to deploy and iterate all this AI that's being pushed to the Edge. So clearly data science is at the center; data scientists of an increasingly specialized nature are necessary to the realization of this value at the Edge. AI frameworks are coming along, like, you know, a mile a minute. TensorFlow, which is open source (most of these are open source), has achieved sort of a de facto standard status, and I'm using the word de facto in air quotes. There's Theano and Keras and MXNet and CNTK and a variety of other ones. We're seeing a range of AI frameworks come to market, most open source. Most are supported by most of the major tool vendors as well.
So at Wikibon we're definitely tracking that, we plan to go deeper in our coverage of that space. And then next best action, which powers recommendation engines. I mean, next-best-action decision automation of the sort Neil's covered in a variety of contexts in his career is fundamentally important to Edge Analytics, to systems of agency, 'cause it's driving the process automation, decision automation, sort of the targeted recommendations that are made at the Edge to individual users as well as to automated processes. That's absolutely necessary for self driving vehicles to do their jobs and for industrial IoT. So what we're seeing is more and more recommendation engine or recommender capabilities powered by ML and DL going to the Edge, already at the Edge, for a variety of applications. Edge AI capabilities, like I said, there's sensing. And sensing at the Edge is becoming ever more rich, with mixed reality Edge modalities of all sorts for augmented reality and so forth. We're just seeing a growth in the range of sensory modalities that are enabled, or filtered and analyzed, through AI being pushed to the Edge, into the chip sets. Actuation, that's where robotics comes in. Robotics is coming into all aspects of our lives. And you know, it's brainless without AI, without deep learning and these capabilities. Inference, autonomous edge decisioning. Like I said, there's a growing range of inferences being done at the Edge. And that's where it has to happen 'cause that's the point of decision. Learning, training: much training, most training, will continue to be done in the cloud because it's very data intensive. It's a grind to train and optimize an AI algorithm to do its job.
It's not something that you necessarily want to do or can do at the Edge, at Edge devices, so the models that are built and trained in the cloud are pushed down through a dev ops process down to the Edge, and that's the way it will work pretty much in most AI environments, Edge analytics environments. You centralize the modeling, you decentralize the execution of the inference models. The training engines will be in the cloud. Edge AI applications. I'll just run you through sort of a core list of the ones that are coming into, have already come into, the mainstream at the Edge. Multifactor authentication, clearly the Apple announcement of face recognition is just a harbinger of the fact that that's coming to every device. Computer vision, speech recognition, NLP, digital assistants and chat bots powered by natural language processing and understanding, it's all AI powered. And it's becoming very mainstream. Emotion detection, face recognition, you know I could go on and on, but these are like the core things that everybody has access to or will by 2020, and they're on core devices, mass market devices. Developers, designers and hardware engineers are coming together to pool their expertise to build and train not just the AI, but also the entire package of hardware and UX and the orchestration of real world business scenarios or life scenarios that all this intelligence, the embedded intelligence, enables, and much of what they build in terms of AI will be containerized as microservices through Docker and orchestrated through Kubernetes as full cloud services in an increasingly distributed fabric. That's coming along very rapidly. We can see a fair amount of that already on display at Strata in terms of what the vendors are doing or announcing or who they're working with. The hardware itself, the Edge, you know at the Edge, some data will be persistent, needs to be persistent to drive inference.
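The centralize-the-modeling, decentralize-the-inference split can be illustrated with a trivial sketch: the "cloud" side exports trained weights as an artifact, and the "edge" side only loads and scores. Pure Python stands in here for a real framework-to-device pipeline; the file name and the linear model are invented for illustration:

```python
import json

# --- Cloud side: training produces a model artifact -----------------
def export_model(weights, path):
    """Serialize trained weights -- the artifact a dev ops pipeline
    would push down to the Edge devices."""
    with open(path, "w") as f:
        json.dump(weights, f)

# --- Edge side: load the artifact and run inference only ------------
def load_model(path):
    with open(path) as f:
        return json.load(f)

def infer(model, features):
    """Linear score; no training ever happens on the device."""
    return sum(w * x for w, x in zip(model["w"], features)) + model["b"]

# Pretend the cloud trained these weights.
export_model({"w": [0.4, -0.2], "b": 0.1}, "model.json")
edge_model = load_model("model.json")
print(infer(edge_model, [1.0, 2.0]))
```

The only thing that crosses the wire is the artifact; the heavy, data-intensive grind of training stays centralized.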
That's, and you know, to drive a variety of different application scenarios that need some degree of historical data related to what that device in question happens to be sensing or has sensed in the immediate past or you know, whatever. The hardware itself is geared towards both sensing and increasingly persistence and Edge driven actuation of real world results. The whole notion of drones and robotics being embedded into everything that we do. That's where that comes in. That has to be powered by low cost, low power commodity chip sets of various sorts. What we see right now in terms of chip sets is, it's GPUs; Nvidia has gone real far and GPUs have come along very fast in terms of powering inference engines, you know like the Tesla cars and so forth. But GPUs are in many ways the core hardware substrate for inference engines in DL so far. But to become a mass market phenomenon, it's got to get cheaper and lower powered and more commoditized, and so we see a fair number of CPUs being used as the hardware for Edge Analytics applications. Some vendors are fairly big on FPGAs; I believe Microsoft has gone fairly far with FPGAs in its DL strategy. ASICs, I mean, there's neurosynaptic chips, like IBM's got one. There's at least a few dozen vendors of neurosynaptic chips on the market, so at Wikibon we're going to track that market as it develops. And what we're seeing is a fair number of scenarios where it's a mixed environment where you use one chip set architecture at the inference side of the Edge, and other chip set architectures that are driving the DL as processed in the cloud, playing together within a common architecture. And we see a fair number of DL environments where the actual training is done in the cloud on Spark using CPUs and parallelized in memory, but pushing TensorFlow models that might be trained through Spark down to the Edge where the inferences are done in FPGAs and GPUs.
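Part of what makes those cheaper, lower-powered chip sets viable is shrinking the model itself, for example by quantizing float weights down to 8-bit integers before deployment. A toy illustration of linear quantization follows; this is not any vendor's actual scheme, just the core space-versus-precision trade-off:

```python
# Toy 8-bit linear quantization: map float weights into the int8 range,
# then dequantize at inference time. Real FPGA/ASIC pipelines are far
# more involved; this only shows the basic idea.

def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]   # int8-sized values
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.75, -0.31, 0.02, -0.88]          # invented float weights
q, scale = quantize(weights)
restored = dequantize(q, scale)

assert all(-128 <= v <= 127 for v in q)       # fits in one byte each
print(max(abs(a - b) for a, b in zip(weights, restored)))  # small error
```

Four bytes of weights instead of sixteen, at the cost of a rounding error bounded by half the scale step, which is the kind of bargain a commodity Edge chip is built around.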
Those kinds of mixed hardware scenarios are very, very, likely to be standard going forward in lots of areas. So analytics at the Edge power continuous results is what it's all about. The whole point is really not moving the data, it's putting the inference at the Edge and working from the data that's already captured and persistent there for the duration of whatever action or decision or result needs to be powered from the Edge. Like Neil said cost takeout alone is not worth doing. Cost takeout alone is not the rationale for putting AI at the Edge. It's getting new stuff done, new kinds of things done in an automated consistent, intelligent, contextualized way to make our lives better and more productive. Security and governance are becoming more important. Governance of the models, governance of the data, governance in a dev ops context in terms of version controls over all those DL models that are built, that are trained, that are containerized and deployed. Continuous iteration and improvement of those to help them learn to do, make our lives better and easier. With that said, I'm going to hand it over now. It's five minutes after the hour. We're going to get going with the Influencer Panel so what we'd like to do is I call Peter, and Peter's going to call our influencers. >> All right, am I live yet? Can you hear me? All right so, we've got, let me jump back in control here. We've got, again, the objective here is to have community take on some things. And so what we want to do is I want to invite five other people up, Neil why don't you come on up as well. Start with Neil. You can sit here. On the far right hand side, Judith, Judith Hurwitz. >> Neil: I'm glad I'm on the left side. >> From the Hurwitz Group. >> From the Hurwitz Group. Jennifer Shin who's affiliated with UC Berkeley. Jennifer are you here? >> She's here, Jennifer where are you? >> She was here a second ago. 
>> Neil: I saw her walk out she may have, >> Peter: All right, she'll be back in a second. >> Here's Jennifer! >> Here's Jennifer! >> Neil: With 8 Path Solutions, right? >> Yep. >> Yeah 8 Path Solutions. >> Just get my mic. >> Take your time Jen. >> Peter: All right, Stephanie McReynolds. Far left. And finally Joe Caserta, Joe come on up. >> Stephie's with Alation >> And to the left. So what I want to do is I want to start by having everybody just go around introduce yourself quickly. Judith, why don't we start there. >> I'm Judith Hurwitz, I'm president of Hurwitz and Associates. We're an analyst research and thought leadership firm. I'm the co-author of eight books. Most recent is Cognitive Computing and Big Data Analytics. I've been in the market for a couple years now. >> Jennifer. >> Hi, my name's Jennifer Shin. I'm the founder and Chief Data Scientist of 8 Path Solutions LLC. We do data science analytics and technology. We're actually about to do a big launch next month, with Box actually. >> We're apparent, are we having a, sorry Jennifer, are we having a problem with Jennifer's microphone? >> Man: Just turn it back on? >> Oh you have to turn it back on. >> It was on, oh sorry, can you hear me now? >> Yes! We can hear you now. >> Okay, I don't know how that turned back off, but okay. >> So you got to redo all that Jen. >> Okay, so my name's Jennifer Shin, I'm founder of 8 Path Solutions LLC, it's a data science analytics and technology company. I founded it about six years ago. So we've been developing some really cool technology that we're going to be launching with Box next month. It's really exciting. And I have, I've been developing a lot of patents and some technology as well as teaching at UC Berkeley as a lecturer in data science. >> You know Jim, you know Neil, Joe, you ready to go? >> Joe: Just broke my microphone. >> Joe's microphone is broken. >> Joe: Now it should be all right. >> Jim: Speak into Neil's. >> Joe: Hello, hello?
>> I just feel not worthy in the presence of Joe Caserta. (several laughing) >> That's right, master of mics. If you can hear me, Joe Caserta, so yeah, I've been doing data technology solutions since 1986, almost as old as Neil here, but been doing specifically like BI, data warehousing, business intelligence type of work since 1996. And been doing, wholly dedicated to Big Data solutions and modern data engineering since 2009. Where should I be looking? >> Yeah I don't know where is the camera? >> Yeah, and that's basically it. So my company was formed in 2001, it's called Caserta Concepts. We recently rebranded to only Caserta 'cause what we do is way more than just concepts. So we conceptualize the stuff, we envision what the future brings and we actually build it. And we help clients large and small who are just, want to be leaders in innovation using data specifically to advance their business. >> Peter: And finally Stephanie McReynolds. >> I'm Stephanie McReynolds, I head product marketing as well as corporate marketing for a company called Alation. And we are a data catalog, so we help bring together not only a technical understanding of your data, but we curate that data with human knowledge and use automated intelligence internally within the system to make recommendations about what data to use for decision making. And some of our customers like the City of San Diego, a large automotive manufacturer working on self driving cars, and General Electric use Alation to help power their solutions for IoT at the Edge. >> All right so let's jump right into it. And again if you have a question, raise your hand, and we'll do our best to get it to the floor. But what I want to do is I want to get seven questions in front of this group and have you guys discuss, slog, disagree, agree. Let's start here. What is the relationship between Big Data, AI and IoT?
Now Wikibon's put forward its observation that data's being generated at the Edge, that action is being taken at the Edge, and then increasingly the software and other infrastructure architectures need to accommodate the realities of how data is going to work in these very complex systems. That's our perspective. Anybody, Judith, you want to start? >> Yeah, so I think that if you look at AI, machine learning, all these different areas, you have to be able to have the data learned. Now when it comes to IoT, I think one of the issues we have to be careful about is not all data will be at the Edge. Not all data needs to be analyzed at the Edge. For example if the light is green and that's good and it's supposed to be green, do you really have to constantly analyze the fact that the light is green? You actually only really want to be able to analyze and take action when there's an anomaly. Well if it goes purple, that's actually a sign that something might explode, so that's where you want to make sure that you have the analytics at the edge. Not for everything, but for the things where there is an anomaly and a change. >> Joe, how about from your perspective? >> For me I think the evolution of data is really becoming, eventually oxygen is just, I mean data's going to be the oxygen we breathe. It used to be very very reactive and there used to be like a latency. You do something, there's a behavior, there's an event, there's a transaction, and then you go record it and then you collect it, and then you can analyze it. And it was very very waterfallish, right? And then eventually we figured out to put it back into the system. Or at least human beings interpret it to try to make the system better and that is really completely turned on its head, we don't do that anymore. Right now it's very very, it's synchronous, whereas we're actually making these transactions, the machines, we don't really need, I mean human beings are involved a bit, but less and less and less.
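Judith's traffic-light example is easy to make concrete: an Edge-side filter that stays silent while readings match expectations and escalates only the anomalies. A minimal sketch with invented states:

```python
# Edge-side filter: report nothing while readings match expectations,
# escalate only anomalies -- so the routine green light is never
# shipped upstream for analysis.

EXPECTED = {"green", "yellow", "red"}

def edge_filter(readings):
    """Yield only the readings worth sending to the cloud."""
    for r in readings:
        if r not in EXPECTED:      # e.g. "purple" -> possible fault
            yield r

stream = ["green", "green", "yellow", "green", "purple", "green"]
print(list(edge_filter(stream)))   # only the anomaly survives
```

Six readings in, one reading out: the analytics run at the Edge precisely so that only the change worth acting on ever moves.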
And it's just a reality, it may not be politically correct to say but it's a reality that my phone in my pocket is following my behavior, and it knows without telling a human being what I'm doing. And it can actually help me do things like get to where I want to go faster depending on my preference if I want to save money or save time or visit things along the way. And I think that's all integration of big data, streaming data, artificial intelligence and I think the next thing that we're going to start seeing is the culmination of all of that. I actually, hopefully it'll be published soon, I just wrote an article for Forbes with the term of ARBI and ARBI is the integration of Augmented Reality and Business Intelligence. Where I think essentially we're going to see, you know, hold your phone up to Jim's face and it's going to recognize-- >> Peter: It's going to break. >> And it's going to say exactly you know, what are the key metrics that we want to know about Jim. If he works on my sales force, what's his attainment of goal, what is-- >> Jim: Can it read my mind? >> Potentially based on behavior patterns. >> Now I'm scared. >> I don't think Jim's buying it. >> It will, without a doubt be able to predict what you've done in the past, you may, with some certain level of confidence you may do again in the future, right? And is that mind reading? It's pretty close, right? >> Well, sometimes, I mean, mind reading is in the eye of the individual who wants to know. And if the machine appears to approximate what's going on in the person's head, sometimes you can't tell. So I guess, I guess we could call that the Turing machine test of the paranormal. >> Well, face recognition, micro gesture recognition, I mean facial gestures, people can do it. Maybe not better than a coin toss, but if it can be seen visually and captured and analyzed, conceivably some degree of mind reading can be built in. I can see when somebody's angry looking at me so, that's a possibility. 
That's kind of a scary possibility in a surveillance society, potentially. >> Neil: Right, absolutely. >> Peter: Stephanie, what do you think? >> Well, I hear a world of it's the bots versus the humans being painted here and I think that, you know at Alation we have a very strong perspective on this and that is that the greatest impact, or the greatest results, is going to be when humans figure out how to collaborate with the machines. And so yes, you want to get to the location more quickly, but the machine as in the bot isn't able to tell you exactly what to do and you're just going to blindly follow it. You need to train that machine, you need to have a partnership with that machine. So, a lot of the power, and I think this goes back to Judith's story, is then what is the human decision making that can be augmented with data from the machine, but then the humans are actually training the training side and driving machines in the right direction. I think that's when we get true power out of some of these solutions so it's not just all about the technology. It's not all about the data or the AI, or the IoT, it's about how that empowers human systems to become smarter and more effective and more efficient. And I think we're playing that out in our technology in a certain way and I think organizations that are thinking along those lines with IoT are seeing more benefits immediately from those projects. >> So I think we have a general agreement on some of the things you talked about, IoT crucial to capturing information and then having action being taken, AI being crucial to defining and refining the nature of the actions that are being taken, Big Data ultimately powering how a lot of that changes. Let's go to the next one. >> So actually I have something to add to that. So I think it makes sense, right, with IoT, why we have Big Data associated with it. If you think about what data is collected by IoT. We're talking about serial information, right?
It's over time, it's going to grow exponentially just by definition, right, so every minute you collect a piece of information that means over time, it's going to keep growing, growing, growing as it accumulates. So that's one of the reasons why the IoT is so strongly associated with Big Data. And also why you need AI to be able to differentiate between one minute versus next minute, right? Trying to find a better way rather than looking at all that information and manually picking out patterns. To have some automated process for being able to filter through that much data that's being collected. >> I want to point out though based on what you just said Jennifer, I want to bring Neil in at this point, that this question of IoT now generating unprecedented levels of data does introduce this idea of the primary source. Historically what we've done within technology, or within IT certainly is we've taken stylized data. There is no such thing as a real world accounting thing. It is a human contrivance. And we stylize data and therefore it's relatively easy to be very precise on it. But when we start, as you noted, when we start measuring things with a tolerance down to thousandths of a millimeter, whatever that is, metric system, now we're still sometimes dealing with errors that we have to attend to. So, the reality is we're not just dealing with stylized data, we're dealing with real data, and it's more, more frequent, but it also has special cases that we have to attend to as in terms of how we use it. What do you think Neil? >> Well, I mean, I agree with that, I think I already said that, right. >> Yes you did, okay let's move on to the next one. >> Well it's a doppelganger, the digital twin doppelganger that's automatically created by your very fact that you're living and interacting and so forth and so on. It's going to accumulate regardless. 
Now that doppelganger may not be your agent, or might not be the foundation for your agent unless there's some other piece of logic like an interest graph that you build, a human being saying this is my broad set of interests, and so all of my agents out there in the IoT, you all need to be aware that when you make a decision on my behalf as my agent, this is what Jim would do. You know I mean there needs to be that kind of logic somewhere in this fabric to enable true agency. >> All right, so I'm going to start with you. Oh go ahead. >> I have a real short answer to this though. I think that Big Data provides the data and compute platform to make AI possible. For those of us who dipped our toes in the water in the 80s, we got clobbered because we didn't have the, we didn't have the facilities, we didn't have the resources to really do AI, we just kind of played around with it. And I think that the other thing about it is if you combine Big Data and AI and IoT, what you're going to see is people, a lot of the applications we develop now are very inward looking, we look at our organization, we look at our customers. We try to figure out how to sell more shoes to fashionable ladies, right? But with this technology, I think people can really expand what they're thinking about and what they model and come up with applications that are much more external. >> Actually what I would add to that is also it actually introduces being able to use engineering, right? Having engineers interested in the data. Because it's actually technical data that's collected not just say preferences or information about people, but actual measurements that are being collected with IoT. So it's really interesting in the engineering space because it opens up a whole new world for the engineers to actually look at data and to actually combine both that hardware side as well as the data that's being collected from it. 
>> Well, Neil, you and I have talked about something, 'cause it's not just engineers. We have in the healthcare industry for example, which you know a fair amount about, there's this notion of empirical based management. And the idea that increasingly we have to be driven by data as a way of improving the way that managers do things, the way the managers collect or collaborate and ultimately collectively how they take action. So it's not just engineers, it's supposed to also inform business, what's actually happening in the healthcare world when we start thinking about some of this empirical based management, is it working? What are some of the barriers? >> It's not a function of technology. What happens in medicine and healthcare research is, I guess you can say it borders on fraud. (people chuckling) No, I'm not kidding. I know the New England Journal of Medicine a couple of years ago released a study and said that at least half their articles that they published turned out to be written, ghost written by pharmaceutical companies. (man chuckling) Right, so I think the problem is that when you do a clinical study, the one that really killed me about 10 years ago was the women's health initiative. They spent $700 million gathering this data over 20 years. And when they released it they looked at all the wrong things deliberately, right? So I think that's a systemic-- >> I think you're bringing up a really important point that we haven't brought up yet, and that is is can you use Big Data and machine learning to begin to take the biases out? So if you let the, if you divorce your preconceived notions and your biases from the data and let the data lead you to the logic, you start to, I think get better over time, but it's going to take a while to get there because we do tend to gravitate towards our biases. >> I will share an anecdote. So I had some arm pain, and I had numbness in my thumb and pointer finger and I went to, excruciating pain, went to the hospital. 
So the doctor examined me, and he said you probably have a pinched nerve, he said, but I'm not exactly sure which nerve it would be, I'll be right back. And I kid you not, he went to a computer and he Googled it. (Neil laughs) And he came back because this little bit of information was something that could easily be looked up, right? Every nerve in your spine is connected to your different fingers so the pointer and the thumb just happens to be your C6, so he came back and said, it's your C6. (Neil mumbles) >> You know an interesting, I mean that's a good example. One of the issues with healthcare data is that the data set is not always shared across the entire research community, so by making Big Data accessible to everyone, you actually start a more rational conversation or debate on well what are the true insights-- >> If that conversation includes what Judith talked about, the actual model that you use to set priorities and make decisions about what's actually important. So it's not just about improving, this is the test. It's not just about improving your understanding of the wrong thing, it's also testing whether it's the right or wrong thing as well. >> That's right, to be able to test that you need to have humans in dialog with one another bringing different biases to the table to work through okay is there truth in this data? >> It's context and it's correlation and you can have a great correlation that's garbage. You know if you don't have the right context. >> Peter: So I want to, hold on Jim, I want to, >> It's exploratory. >> Hold on Jim, I want to take it to the next question 'cause I want to build off of what you talked about Stephanie and that is that this says something about what is the Edge. And our perspective is that the Edge is not just devices. 
That when we talk about the Edge, we're talking about human beings and the role that human beings are going to play both as sensors or carrying things with them, but also as actuators, actually taking action which is not a simple thing. So what do you guys think? What does the Edge mean to you? Joe, why don't you start? >> Well, I think it could be a combination of the two. And specifically when we talk about healthcare. So I believe in 2017 when we eat we don't know why we're eating, like I think we should absolutely by now be able to know exactly what is my protein level, what is my calcium level, what is my potassium level? And then find the foods to meet that. What have I depleted versus what I should have, and eat very very purposely and not by taste-- >> And it's amazing that red wine is always the answer. >> It is. (people laughing) And tequila, that helps too. >> Jim: You're a precision foodie is what you are. (several chuckle) >> There's no reason why we should not be able to know that right now, right? And when it comes to healthcare is, the biggest problem or challenge with healthcare is no matter how great of a technology you have, you can't, you can't, you can't manage what you can't measure. And you're really not allowed to use a lot of this data so you can't measure it, right? You can't do things very very scientifically right, in the healthcare world and I think regulation in the healthcare world is really burdening advancement in science. >> Peter: Any thoughts Jennifer? >> Yes, I teach statistics for data scientists, right, so you know we talk about a lot of these concepts. I think what makes these questions so difficult is you have to find a balance, right, a middle ground. For instance, in the case of are you being too biased through data, well you could say like we want to look at data only objectively, but then there are certain relationships that your data models might show that aren't actually a causal relationship. 
For instance, if there's an alien that came from space and saw earth, saw the people, everyone's carrying umbrellas right, and then it started to rain. That alien might think well, it's because they're carrying umbrellas that it's raining. Now we know from real world that that's actually not the way these things work. So if you look only at the data, that's the potential risk. That you'll start making associations or saying something's causal when it's actually not, right? So that's one of the, one of the I think big challenges. I think when it comes to looking also at things like healthcare data, right? Do you collect data about anything and everything? Does it mean that A, we need to collect all that data for the question we're looking at? Or that it's actually the best, more optimal way to be able to get to the answer? Meaning sometimes you can take some shortcuts in terms of what data you collect and still get the right answer and not have maybe that level of specificity that's going to cost you millions extra to be able to get. >> So Jennifer as a data scientist, I want to build upon what you just said. And that is, are we going to start to see methods and models emerge for how we actually solve some of these problems? So for example, we know how to build a system for stylized process like accounting or some elements of accounting. We have methods and models that lead to technology and actions and whatnot all the way down to that that system can be generated. We don't have the same notion to the same degree when we start talking about AI and some of these Big Datas. We have algorithms, we have technology. But are we going to start seeing, as a data scientist, repeatability and learning and how to think the problems through that's going to lead us to a more likely best or at least good result? >> So I think that's a bit of a tough question, right? 
Because part of it is, it's going to depend on how many of these researchers actually get exposed to real world scenarios, right? Research looks into all these papers, and you come up with all these models, but if it's never tested in a real world scenario, well, I mean we really can't validate that it works, right? So I think it is dependent on how much of this integration there's going to be between the research community and industry and how much investment there is. Funding is going to matter in this case. If there's no funding in the research side, then you'll see a lot of industry folk who feel very confident about their models that, but again on the other side of course, if researchers don't validate those models then you really can't say for sure that it's actually more accurate, or it's more efficient. >> It's the issue of real world testing and experimentation, A B testing, that's standard practice in many operationalized ML and AI implementations in the business world, but real world experimentation in the Edge analytics, what you're actually transducing are touching people's actual lives. Problem there is, like in healthcare and so forth, when you're experimenting with people's lives, somebody's going to die. I mean, in other words, that's a critical, in terms of causal analysis, you've got to tread lightly on doing operationalizing that kind of testing in the IoT when people's lives and health are at stake. >> We still give 'em placebos. So we still test 'em. All right so let's go to the next question. What are the hottest innovations in AI? Stephanie I want to start with you as a company, someone at a company that's got kind of an interesting little thing happening. We start thinking about how do we better catalog data and represent it to a large number of people. What are some of the hottest innovations in AI as you see it? 
>> I think it's a little counterintuitive about what the hottest innovations are in AI, because we're at a spot in the industry where the most successful companies that are working with AI are actually incorporating them into solutions. So the best AI solutions are actually the products that you don't know there's AI operating underneath. But they're having a significant impact on business decision making or bringing a different type of application to the market and you know, I think there's a lot of investment that's going into AI tooling and tool sets for data scientists or researchers, but the more innovative companies are thinking through how do we really take AI and make it have an impact on business decision making, and that means kind of hiding the AI from the business user. Because if you think a bot is making a decision instead of you, you're not going to partner with that bot very easily or very readily. I worked at, way at the start of my career, I worked in CRM when recommendation engines were all the rage online and also in call centers. And the hardest thing was to get a call center agent to actually read the script that the algorithm was presenting to them, that algorithm was 99% correct most of the time, but there was this human resistance to letting a computer tell you what to tell that customer on the other side even if it was more successful in the end. And so I think that the innovation in AI that's really going to push us forward is when humans feel like they can partner with these bots and they don't think of it as a bot, but they think about it as assisting their work and getting to a better result-- >> Hence the augmentation point you made earlier. >> Absolutely, absolutely. >> Joe how 'about you? What do you look at? What are you excited about? >> I think the coolest thing at the moment right now is chat bots. Like to be able, like to have voice be able to speak with you in natural language, to do that, I think that's pretty innovative, right?
And I do think that eventually, for the average user, not for techies like me, but for the average user, I think keyboards are going to be a thing of the past. I think we're going to communicate with computers through voice and I think this is the very very beginning of that and it's an incredible innovation. >> Neil? >> Well, I think we all have myopia here. We're all thinking about commercial applications. Big, big things are happening with AI in the intelligence community, in military, the defense industry, in all sorts of things. Meteorology. And that's where, well, hopefully not on an every day basis with military, you really see the effect of this. But I was involved in a project a couple of years ago where we were developing AI software to detect artillery pieces in terrain from satellite imagery. I don't have to tell you what country that was. I think you can probably figure that one out right? But there are legions of people in many many companies that are involved in that industry. So if you're talking about the dollars spent on AI, I think the stuff that we do in our industries is probably fairly small. >> Well it reminds me of an application I actually thought was interesting about AI related to that, AI being applied to removing mines from war zones. >> Why not? >> Which is not a bad thing for a whole lot of people. Judith what do you look at? >> So I'm looking at things like being able to have pre-trained data sets in specific solution areas. I think that that's something that's coming. Also the ability to, to really be able to have a machine assist you in selecting the right algorithms based on what your data looks like and the problems you're trying to solve. Some of the things that data scientists still spend a lot of their time on, but can be augmented with some, basically we have to move to levels of abstraction before this becomes truly ubiquitous across many different areas. >> Peter: Jennifer? >> So I'm going to say computer vision. 
>> Computer vision? >> Computer vision. So computer vision ranges from image recognition to be able to say what content is in the image. Is it a dog, is it a cat, is it a blueberry muffin? Like a sort of popular post out there where it's like a blueberry muffin versus like I think a chihuahua and then it compares the two. And can the AI really actually detect the difference, right? So I think that's really where a lot of people who are in this space of being in both the AI space as well as data science are looking to for the new innovations. I think, for instance, Cloud Vision, I think that's what Google still calls it. The Vision API they've released in beta allows you to actually use an API to send your image and then have it be recognized, right, by their API. There's another startup in New York called Clarifai that also does a similar thing, as well as, you know, Amazon has their Rekognition platform as well. So I think, from images being able to detect what's in the content, as well as from videos, being able to say things like how many people are entering a frame? How many people enter the store? Not having to actually go look at it and count it, but having a computer actually tally that information for you, right? >> There's actually an extra piece to that. So if I have a picture of a stop sign, and I'm an automated car, is it a picture on the back of a bus of a stop sign, or is it a real stop sign? So that's going to be one of the complications. >> Doesn't matter to a New York City cab driver. How 'about you Jim? >> Probably not. (laughs) >> Hottest thing in AI is Generative Adversarial Networks, GANs. What's hot about that? Well, I'll be very quick: most AI, most deep learning, machine learning is analytical, it's distilling or inferring insights from the data. Generative takes that same algorithmic basis but to build stuff.
In other words, to create realistic looking photographs, to compose music, to build CAD/CAM models essentially that can be constructed on 3D printers. So GANs, they're a huge research focus all around the world, and they're increasingly used for natural language generation. In other words it's institutionalizing or having a foundation for nailing the Turing test every single time, building something with machines that looks like it was constructed by a human and doing it over and over again to fool humans. I mean you can imagine the fraud potential. But you can also imagine just the sheer, like it's going to shape the world, GANs. >> All right so I'm going to say one thing, and then we're going to ask if anybody in the audience has an idea. So the thing that I find interesting is traditional programs, or when you tell a machine to do something you don't need incentives. When you tell a human being something, you have to provide incentives. Like how do you get someone to actually read the text. And this whole question of elements within AI that incorporate incentives as a way of trying to guide human behavior is absolutely fascinating to me. Whether it's gamification, or even some things we're thinking about with blockchain and bitcoins and related types of stuff. To my mind that's going to have an enormous impact, some good, some bad. Anybody in the audience? I don't want to lose everybody here. What do you think sir? And I'll try to do my best to repeat it. Oh we have a mic. >> So my question's about, okay, so the question's pretty much about what Stephanie's talking about, which is human-in-the-loop training, right? I come from a computer vision background. That's the problem, we need millions of images trained, we need humans to do that. And that's like you know, the workforce is essentially people that aren't necessarily part of the AI community, they're people that are just able to use that data and analyze the data and label that data.
That's something that I think is a big problem everyone in the computer vision industry at least faces. I was wondering-- >> So again, the problem is the difficulty of methodologically bringing together people who have domain expertise and people who have algorithm expertise, and getting them working together? >> I think the expertise issue comes in healthcare, right? In healthcare you need experts to be labeling your images. With contextual information, where essentially augmented reality applications are coming in, you have ARKit and everything coming out, but there is a lack of context-based intelligence. And all of that comes through training images, and all of that requires people to do it. And that's kind of like the foundational basis of AI coming forward, it's not necessarily an algorithm, right? It's how well the data is labeled. Who's doing the labeling and how do we ensure that it happens? >> Great question. So for the panel. So if you think about it, a consultant talks about being on the bench. How much time are they going to have to spend on trying to develop additional business? How much time should we set aside for executives to help train some of the assistants? >> I think that the key is to think of the problem a different way: you could have people manually label data, and that's one way to solve the problem. But you can also look at what is the natural workflow of that executive, or that individual? And is there a way to gather that context automatically using AI, right? And if you can do that, it's similar to what we do in our product, we observe how someone is analyzing the data and from those observations we can actually create the metadata that then trains the system in a particular direction. But you have to think about solving the problem differently of finding the workflow that then you can feed into to make this labeling easy without the human really realizing that they're labeling the data.
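The approach Stephanie describes, harvesting labels from an analyst's natural workflow instead of asking for explicit annotations, can be sketched in a few lines. Everything here is invented for illustration: the class name, the action vocabulary, and the `min_uses` cutoff are assumptions, not any product's actual API.

```python
from collections import Counter

class WorkflowLabeler:
    """Derive weak labels from passive observation of an analyst's actions.

    Rather than asking anyone to tag columns as "important", count how
    often each column is touched (filtered, sorted, plotted) and treat
    frequently used columns as implicitly relevant.
    """

    def __init__(self, min_uses=2):
        self.min_uses = min_uses
        self.usage = Counter()

    def observe(self, action, column):
        # Every interaction is one implicit vote for the column; this
        # sketch ignores the action type, though a real system might
        # weight a plot more heavily than a sort.
        self.usage[column] += 1

    def weak_labels(self):
        # Columns used at least `min_uses` times get label 1 (relevant).
        return {col: int(n >= self.min_uses) for col, n in self.usage.items()}

labeler = WorkflowLabeler(min_uses=2)
for action, column in [("filter", "revenue"), ("sort", "revenue"),
                       ("plot", "region"), ("filter", "notes")]:
    labeler.observe(action, column)

print(labeler.weak_labels())  # revenue touched twice -> 1, the rest -> 0
```

The point is the one made on stage: the user never knows they are labeling, because the labels fall out of work they were doing anyway.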
>> Peter: Anybody else? >> I'll just add to what Stephanie said, so in the IoT applications, all those sensory modalities, the computer vision, the speech recognition, all that, that's all potential training data. So it cross checks against all the other models that are processing all the other data coming from that device. So that the natural language process of understanding can be reality checked against the images that the person happens to be commenting upon, or the scene in which they're embedded, so yeah, the data's embedded-- >> I don't think we're, we're not at the stage yet where this is easy. It's going to take time before we do start doing the pre-training of some of these details so that it goes faster, but right now, there're not that many shortcuts. >> Go ahead Joe. >> Sorry so a couple things. So one is like, I was just caught up on your incentivizing programs to be more efficient like humans. You know Ethereum, which is a blockchain, has this concept of gas. Where like as the process becomes more efficient it costs less to actually run, right? It costs less ether, right? So it actually is kind of, the machine is actually incentivized and you don't really know what it's going to cost until the machine processes it, right? So there is like some notion of that there. But as far as like vision, like training the machine for computer vision, I think it's through adoption and crowdsourcing, so as people start using it more they're going to be adding more pictures. Very very organically. And then the machines will be trained, and right now it's a very small handful doing it, and it's very proactive by the Googles and the Facebooks and all of that. But as we start using it, as they start looking at my images and Jim's and Jen's images, it's going to keep getting smarter and smarter through adoption and through a very organic process. >> So Neil, let me ask you a question.
Who owns the value that's generated as a consequence of all these people ultimately contributing their insight and intelligence into these systems? >> Well, to a certain extent the people who are contributing the insight own nothing because the systems collect their actions and the things they do and then that data doesn't belong to them, it belongs to whoever collected it or whoever's going to do something with it. But the other thing, getting back to the medical stuff. It's not enough to say that the systems, people will do the right thing, because a lot of them are not motivated to do the right thing. The whole grant thing, the whole oh my god I'm not going to go against the senior professor. A lot of these, I knew a guy who was a doctor at University of Pittsburgh and they were doing a clinical study on the tubes that they put in little kids' ears who have ear infections, right? And-- >> Google it! Who helps out? >> Anyway, I forget the exact thing, but he came out and said that the principle investigator lied when he made the presentation, that it should be this, I forget which way it went. He was fired from his position at Pittsburgh and he has never worked as a doctor again. 'Cause he went against the senior line of authority. He was-- >> Another question back here? >> Man: Yes, Mark Turner has a question. >> Not a question, just want to piggyback what you're saying about the transfixation of maybe in healthcare of black and white images and color images in the case of sonograms and ultrasound and mammograms, you see that happening using AI? You see that being, I mean it's already happening, do you see it moving forward in that kind of way? I mean, talk more about that, about you know, AI and black and white images being used and they can be transfixed, they can be made to color images so you can see things better, doctors can perform better operations. >> So I'm sorry, but could you summarize down? What's the question? 
Summarize it just, >> I had a lot of students, they're interested in the cross pollenization between AI and say the medical community as far as things like ultrasound and sonograms and mammograms and how you can literally take a black and white image and it can, using algorithms and stuff be made to color images that can help doctors better do the work that they've already been doing, just do it better. You touched on it like 30 seconds. >> So how AI can be used to actually add information in a way that's not necessarily invasive but is ultimately improves how someone might respond to it or use it, yes? Related? I've also got something say about medical images in a second, any of you guys want to, go ahead Jennifer. >> Yeah, so for one thing, you know and it kind of goes back to what we were talking about before. When we look at for instance scans, like at some point I was looking at CT scans, right, for lung cancer nodules. In order for me, who I don't have a medical background, to identify where the nodule is, of course, a doctor actually had to go in and specify which slice of the scan had the nodule and where exactly it is, so it's on both the slice level as well as, within that 2D image, where it's located and the size of it. So the beauty of things like AI is that ultimately right now a radiologist has to look at every slice and actually identify this manually, right? The goal of course would be that one day we wouldn't have to have someone look at every slice to like 300 usually slices and be able to identify it much more automated. And I think the reality is we're not going to get something where it's going to be 100%. And with anything we do in the real world it's always like a 95% chance of it being accurate. So I think it's finding that in between of where, what's the threshold that we want to use to be able to say that this is, definitively say a lung cancer nodule or not. 
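The threshold tradeoff Jennifer describes can be made concrete: sweep a cutoff over a model's scores and watch sensitivity (catching real nodules) trade off against specificity (not raising false alarms). The scores and labels below are made up purely for illustration; they stand in for the per-slice outputs of a real detector.

```python
def confusion_at_threshold(scores, labels, threshold):
    # Count outcomes when every slice scoring >= threshold is called a nodule.
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return tp, fp, fn, tn

def sensitivity_specificity(scores, labels, threshold):
    tp, fp, fn, tn = confusion_at_threshold(scores, labels, threshold)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # real nodules we caught
    specificity = tn / (tn + fp) if tn + fp else 0.0  # healthy slices we cleared
    return sensitivity, specificity

# Invented model scores for 8 scan slices (label 1 = confirmed nodule).
scores = [0.95, 0.80, 0.40, 0.30, 0.90, 0.20, 0.60, 0.10]
labels = [1,    1,    1,    0,    1,    0,    0,    0]

for t in (0.5, 0.7, 0.9):
    sens, spec = sensitivity_specificity(scores, labels, t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Raising the cutoff from 0.5 to 0.9 on this toy data trades missed nodules for fewer false alarms; choosing the operating point is exactly the "what's the threshold" judgment call being discussed.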
I think the other thing to think about is in terms of how their using other information, what they might use is a for instance, to say like you know, based on other characteristics of the person's health, they might use that as sort of a grading right? So you know, how dark or how light something is, identify maybe in that region, the prevalence of that specific variable. So that's usually how they integrate that information into something that's already existing in the computer vision sense. I think that's, the difficulty with this of course, is being able to identify which variables were introduced into data that does exist. >> So I'll make two quick observations on this then I'll go to the next question. One is radiologists have historically been some of the highest paid physicians within the medical community partly because they don't have to be particularly clinical. They don't have to spend a lot of time with patients. They tend to spend time with doctors which means they can do a lot of work in a little bit of time, and charge a fair amount of money. As we start to introduce some of these technologies that allow us to from a machine standpoint actually make diagnoses based on those images, I find it fascinating that you now see television ads promoting the role that the radiologist plays in clinical medicine. It's kind of an interesting response. >> It's also disruptive as I'm seeing more and more studies showing that deep learning models processing images, ultrasounds and so forth are getting as accurate as many of the best radiologists. >> That's the point! >> Detecting cancer >> Now radiologists are saying oh look, we do this great thing in terms of interacting with the patients, never have because they're being dis-intermediated. The second thing that I'll note is one of my favorite examples of that if I got it right, is looking at the images, the deep space images that come out of Hubble. 
Where they're taking data from thousands, maybe even millions of images and combining it together in interesting ways you can actually see depth. You can actually move through to a very very small scale a system that's 150, well maybe that, can't be that much, maybe six billion light years away. Fascinating stuff. All right so let me go to the last question here, and then I'm going to close it down, then we can have something to drink. What are the hottest, oh I'm sorry, question? >> Yes, hi, my name's George, I'm with Blue Talon. You asked earlier there the question what's the hottest thing in the Edge and AI, I would say that it's security. It seems to me that before you can empower agency you need to be able to authorize what they can act on, how they can act on, who they can act on. So it seems if you're going to move from very distributed data at the Edge and analytics at the Edge, there has to be security similarly done at the Edge. And I saw (speaking faintly) slides that called out security as a key prerequisite and maybe Judith can comment, but I'm curious how security's going to evolve to meet this analytics at the Edge. >> Well, let me do that and I'll ask Jen to comment. The notion of agency is crucially important, slightly different from security, just so we're clear. And the basic idea here is historically folks have thought about moving data or they thought about moving application function, now we are thinking about moving authority. So as you said. That's not necessarily, that's not really a security question, but this has been a problem that's been in, of concern in a number of different domains. How do we move authority with the resources? And that's really what informs the whole agency process. But with that said, Jim. >> Yeah actually I'll, yeah, thank you for bringing up security so identity is the foundation of security. Strong identity, multifactor, face recognition, biometrics and so forth. 
Clearly AI, machine learning, deep learning are powering a new era of biometrics and you know it's behavioral metrics and so forth that's organic to people's use of devices and so forth. You know getting to the point that Peter was raising is important, agency! Systems of agency. Your agent, you have to, you as a human being should be vouching in a secure, tamper proof way, your identity should be vouching for the identity of some agent, physical or virtual, that does stuff on your behalf. How can that, how should that be managed within this increasingly distributed IoT fabric? Well a lot of that's been worked out. It all ran through webs of trust, public key infrastructure, formats and you know SAML for single sign-on and so forth. It's all about assertion, strong assertions and vouching. I mean there's the whole workflows of things. Back in the ancient days when I was actually a PKI analyst three analyst firms ago, I got deep into all the guts of all those federation agreements, something like that has to be IoT scalable to enable systems of agency to be truly fluid. So we can vouch for our agents wherever they happen to be. We're going to keep on having as human beings agents all over creation, we're not even going to be aware of everywhere that our agents are, but our identity-- >> It's not just-- >> Our identity has to follow. >> But it's not just identity, it's also authorization and context. >> Permissioning, of course. >> So I may be the right person to do something yesterday, but I'm not authorized to do it in another context in another application. >> Role based permissioning, yeah. Or persona based. >> That's right. >> I agree. >> And obviously it's going to be interesting to see the role that blockchain or its follow-on technology is going to play here. Okay so let me throw one more question out. What are the hottest applications of AI at the Edge? We've talked about a number of them, does anybody want to add something that hasn't been talked about?
Or do you want to get a beer? (people laughing) Stephanie, you raised your hand first. >> I was going to go, I bring something mundane to the table actually because I think one of the most exciting innovations with IoT and AI are actually simple things like City of San Diego is rolling out 3200 automated street lights that will actually help you find a parking space, reduce the amount of emissions into the atmosphere, so has some environmental change, positive environmental change impact. I mean, it's street lights, it's not like a, it's not medical industry, it doesn't look like a life changing innovation, and yet if we automate streetlights and we manage our energy better, and maybe they can flicker on and off if there's a parking space there for you, that's a significant impact on everyone's life. >> And dramatically suppress the impact of backseat driving! >> (laughs) Exactly. >> Joe what were you saying? >> I was just going to say you know there's already the technology out there where you can put a camera on a drone with machine learning within an artificial intelligence within it, and it can look at buildings and determine whether there's rusty pipes and cracks in cement and leaky roofs and all of those things. And that's all based on artificial intelligence. And I think if you can do that, to be able to look at an x-ray and determine if there's a tumor there is not out of the realm of possibility, right? >> Neil? >> I agree with both of them, that's what I meant about external kind of applications. Instead of figuring out what to sell our customers. Which is most what we hear. I just, I think all of those things are imminently doable. And boy street lights that help you find a parking place, that's brilliant, right? >> Simple! >> It improves your life more than, I dunno. Something I use on the internet recently, but I think it's great! That's, I'd like to see a thousand things like that. >> Peter: Jim? 
>> Yeah, building on what Stephanie and Neil were saying, it's ambient intelligence built into everything to enable fine grain microclimate awareness of all of us as human beings moving through the world. And enable reading of every microclimate in buildings. In other words, you know you have sensors on your body that are always detecting the heat, the humidity, the level of pollution or whatever in every environment that you're in or that you might be likely to move into fairly soon and either A can help give you guidance in real time about where to avoid, or give that environment guidance about how to adjust itself to your, like the lighting or whatever it might be to your specific requirements. And you know when you have a room like this, full of other human beings, there has to be some negotiated settlement. Some will find it too hot, some will find it too cold or whatever but I think that is fundamental in terms of reshaping the sheer quality of experience of most of our lived habitats on the planet potentially. That's really the Edge analytics application that depends on everybody having, being fully equipped with a personal area network of sensors that's communicating into the cloud. >> Jennifer? >> So I think, what's really interesting about it is being able to utilize the technology we do have, it's a lot cheaper now to have a lot of these ways of measuring that we didn't have before. And whether or not engineers can then leverage what we have as ways to measure things and then of course then you need people like data scientists to build the right model. So you can collect all this data, if you don't build the right model that identifies these patterns then all that data's just collected and it's just made a repository. So without having the models that supports patterns that are actually in the data, you're not going to find a better way of being able to find insights in the data itself. 
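Jennifer's point that collected data is just a repository until a model captures the patterns in it can be sketched minimally: even a trivial "model" (the average reading per hour of day) turns raw sensor logs into a baseline that surfaces outliers. The step-count readings and the tolerance cutoff below are synthetic assumptions for illustration.

```python
from statistics import mean

def hourly_baseline(readings):
    # Average value per hour-of-day: a deliberately tiny model of the
    # daily pattern hiding in the raw collected data.
    by_hour = {}
    for hour, value in readings:
        by_hour.setdefault(hour % 24, []).append(value)
    return {h: mean(vs) for h, vs in by_hour.items()}

def anomalies(readings, baseline, tolerance):
    # Flag readings that deviate from the learned hourly pattern by more
    # than `tolerance` (an arbitrary cutoff for this sketch).
    return [(h, v) for h, v in readings
            if abs(v - baseline[h % 24]) > tolerance]

# Day one of synthetic step-count readings trains the baseline;
# day two (hours 24-47) is checked against it.
day_one = [(8, 100), (12, 300), (20, 50)]
day_two = [(32, 110), (36, 290), (44, 260)]  # hour 44 maps to 20:00

base = hourly_baseline(day_one)
print(anomalies(day_two, base, tolerance=100))  # only the 20:00 spike is flagged
```

Without the model, day two is just six numbers; with it, the evening spike stands out, which is the "insights in the data itself" distinction being made.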
So I think what will be really interesting is to see how existing technology is leveraged, to collect data and then how that's actually modeled as well as to be able to see how technology's going to now develop from where it is now, to being able to either collect things more sensitively or in the case of say for instance if you're dealing with like how people move, whether we can build things that we can then use to measure how we move, right? Like how we move every day and then being able to model that in a way that is actually going to give us better insights in things like healthcare and just maybe even just our behaviors. >> Peter: Judith? >> So, I think we also have to look at it from a peer to peer perspective. So I may be able to get some data from one thing at the Edge, but then all those Edge devices, sensors or whatever, they all have to interact with each other because we don't live, we may, in our business lives, act in silos, but in the real world when you look at things like sensors and devices it's how they react with each other on a peer to peer basis. >> All right, before I invite John up, I want to say, I'll say what my thing is, and it's not the hottest. It's the one I hate the most. I hate AI generated music. (people laughing) Hate it. All right, I want to thank all the panelists, every single person, some great commentary, great observations. I want to thank you very much. I want to thank everybody that joined. John in a second you'll kind of announce who's the big winner. But the one thing I want to do is, is I was listening, I learned a lot from everybody, but I want to call out the one comment that I think we all need to remember, and I'm going to give you the award Stephanie. And that is increasing we have to remember that the best AI is probably AI that we don't even know is working on our behalf. The same flip side of that is all of us have to be very cognizant of the idea that AI is acting on our behalf and we may not know it. 
So, John why don't you come on up. Who won the, whatever it's called, the raffle? >> You won. >> Thank you! >> How 'about a round of applause for the great panel. (audience applauding) Okay, we had you put the business cards in the basket, we're going to have that brought up. We're going to have two raffle gifts, some nice Bose headsets and a speaker, Bluetooth speaker. Got to wait for that. I just want to say thank you for coming and for the folks watching, this is our fifth year doing our own event called Big Data NYC which is really an extension of the landscape beyond the Big Data world that's Cloud and AI and IoT and other great things happen and great experts and influencers and analysts here. Thanks for sharing your opinion. Really appreciate you taking the time to come out and share your data and your knowledge, appreciate it. Thank you. Where's the? >> Sam's right in front of you. >> There's the thing, okay. Got to be present to win. We saw some people sneaking out the back door to go to a dinner. >> First prize first. >> Okay first prize is the Bose headset. >> Bluetooth and noise canceling. >> I won't look, Sam you got to hold it down, I can see the cards. >> All right. >> Stephanie you won! (Stephanie laughing) Okay, Sawny Cox, Sawny Allie Cox? (audience applauding) Yay look at that! He's here! The bar's open so help yourself, but we got one more. >> Congratulations. Picture right here. >> Hold that I saw you. Wake up a little bit. Okay, all right. Next one is, my kids love this. This is great, great for the beach, great for everything, portable speaker, great gift. >> What is it? >> Portable speaker. >> It is a portable speaker, it's pretty awesome. >> Oh you grabbed mine. >> Oh that's one of our guys. >> (laughing) But who was it? >> Can't be related! Ava, Ava, Ava. Okay Gene Penesko (audience applauding) Hey! He came in! All right look at that, the timing's great. >> Another one?
(people laughing) >> Hey thanks everybody, enjoy the night, thank Peter Burris, head of research for SiliconANGLE, Wikibon, and the great guests and influencers and friends. And you guys for coming in the community. Thanks for watching and thanks for coming. Enjoy the party and some drinks and that's out, that's it for the influencer panel and analyst discussion. Thank you. (logo music)

Published Date : Sep 28 2017

Day Two Open - Inforum 2017 - #Inforum2017 - #theCUBE


 

(upbeat digital music) >> Announcer: Live, from the Javits Center in New York City, it's theCube, covering Inforum 2017. Brought to you by Infor. >> Welcome to day two of theCube's live coverage of Inforum 2017 here in New York City at the Javits Center. I'm your host, Rebecca Knight, along with my co-hosts, Dave Vellante, and Jim Kobielus, who is the lead analyst at Wikibon for AI. So we're here in day two, fellas. We just heard the keynote. Any thoughts on what your expectations are for today, Jim, and what you're hoping to uncover, or at least get more insight on what we learned already in day one? >> I'd like to have Infor unpack a bit more of the Coleman announcement. I wrote a blog last night that I urge our listeners to check out on wikibon.com. There's a number of unanswered issues in terms of their strategy going forward to incorporate Coleman AI and their technology. You know, I suspect that Infor, like most companies, is working out that strategy as they go along, piece by piece, they've got a good framework then. We have Duncan Angove on right after this segment. Dave and I and you, we'll grill Duncan on that and much more, but that in particular. You know, I mean, AI is great. AI is everybody's secret sauce, now. There's a lot of substance behind what they're doing at Infor that sets them apart from their competitors in the ERP space. I want to go deeper there. >> So, yeah, so I'm looking at the blog right now. But what are the particular questions that you have regarding Coleman, in terms of how it's going to work? >> Yeah, well, first of all, I want to know, do they intend to incorporate Coleman AI in their premises-based software offerings? You know, for, I'm sure the vast majority of their customers want to know when, if ever, they're going to get access to Coleman, number one. Number two is, when are they going to complete the process of incorporating Coleman in their CloudSuite portfolio, which is vast and detailed? 
And then, really number three, are they going to do all the R&D themselves? I mean, they've got AWS as a major partner. AWS has significant intellectual property in AI. Will they call on others to work with them on co-developing these capabilities? You know, those are, like, the high-level things that I want to get out of today. >> Rebecca: Okay, okay. >> Well, so a couple things. So, I mean, the keynote today was okay. It wasn't, like, mind-blowing. We had customer appreciation, which was great. Alexis, who is from Foot Locker, cube alum was up there, and B of A got customer of the year. I met those guys last night at one of the customer appreciation dinners, so that was kind of cool. They all got plaques, or you know, that's nice, little trophies. I heard a lot about design thinking, and they shared some screen shots, essentially, of this new UI, started talking about AI is the new UI. It was very reminiscent of the conversation that we had in May at the ServiceNow Knowledge conference, where they're bringing consumer-like experience to the enterprise. It's always been something that ServiceNow has focused on, and certainly, Charles Phillips and Hook and Loop have been focused on that. The difference is, quite frankly, that ServiceNow showed an actual demo, got a lot of claps as a result. Infor said this is ready to be tested and downloaded, but they didn't show any demo. So that was sort of like, hmm. >> Jim: They haven't shown any demos. >> Rebecca: Yeah. >> Is it really baked out? Steve Lucas was up there. He killed it, very high energy guy. You know, again, another cube alum. He's been in our studio, and he's an awesome dude. >> Jim: He's awesome. >> And I thought he did a really good job. >> From Marketo. >> Talking about, you know, the whole engagement economy, you know, we think it's going a little bit beyond engagement to more action, and systems of an action, I think, is a term you guys use. >> Systems of agency or enablement, yeah. 
Bringing more of the IoT into it and robotics and so forth, yeah. >> And then DSW was up there. I said yesterday, "I love DSW." I tweeted out that, you know, the CIO had a picture, Ashlee had a picture of DSW, and I said, "Okay, when the girls and I go to DSW, "I break left, they go middle-right, "we meet at the checkout to negotiate "what actually goes home," so that was good. It was kind of fun. And then a lot of talk about digital transformation. Marc Scibelli was talking about that, and IoT and AI and data. So that's sort of, you know, kind of a summary there. As you know, Rebecca, I've been kind of trying to make the math work on the $2-plus billion investment from Koch. >> Rebecca: Yes, this is your-- >> And the messaging that Infor is putting forth is this is a source of new capital for us, but I'm-- >> Rebecca: You're skeptical. >> You know, as a private company, they have the right not to divulge everything, and they're not on a 90-day shot clock. Charles Phillips, I think, said yesterday, "We're on a 10-year shot clock." I said, "Okay." I think what happened is, so I found, I scanned 10-Qs, and I've been doing so for the last couple of days. There is virtually no information about how much, exactly, of the cash went in and what they're doing with it. And so, I suspect, but there are references to Golden Gate Capital and some of the management team taking some money off the table. Cool, that's good. I'm just, it's unclear to me that there's any debt being retired. I think there is none. And it's unclear to me how much cash there is for the business, so the only reference I was able to find, believe it or not, was on Wikipedia, and it says, "Citation still needed," okay? And the number here, and the math works, is $2.68 billion for 66.6% of the company, and a valuation of $10 billion, which Charles Phillips told us off-camera yesterday, it was $10.5 billion. 
So you can actually make the math work if you take that $10 billion and subtract off the $6 billion in debt. Then the numbers work, and they get five out of 11 board seats, so they've got about 45% or 49%, I think, is the actual number, you know, voting control of the company. So here's the question. What's next? And now, a couple billion for Koch is nothing. It's like the money in my pocket, I mean, it's really-- >> Rebecca: Right, right, right, the empty, yeah, exactly. >> And I suspect what happened is, 'cause it always says "$2 billion plus." So in squinting through this, my guess is, this is a pure guess, we'll try to confirm this, is that what happened is, Koch provided the additional funding to buy Birst recently. That upped their share to 66%, and maybe that's how Koch is going to operate going forward. When they see opportunities to help invest, they're going to do that. Now, one might say, "Well, that's going to further dilute "the existing Infor shareholders," but who cares, as long as the valuation goes up? And that's the new model of private equity. The old model of private equity is suck as much cash out of the company as possible and leave the carcass for somebody else to deal with. The new model of private equity is to invest selectively, use, essentially, what is a zero-interest loan, that $6 billion debt is like free money for Infor, pay down that debt over time with the cashflow of the company, and then raise the valuation of the company, and then at some point, have some kind of public market exit, and everybody's happy and makes a ton of dough. So, I think that's the new private equity play, and I think it's quite brilliant, actually, but there's not a lot of information. So a lot of this, have to be careful, is speculation on my part. >> Right, right. 
>> Well, the thing is, will the Coleman plan, initiative raise the valuation of the company in the long term if it's, you know, an attrition war in ERP, and they've got SAP, Oracle, Microsoft, all of whom have deep pockets, deeper than Infor, investing heavily in this stuff? Will Coleman be a net-net, just table stakes? >> Well, so I think again, there's a couple ways in the tech business, as you guys know, to make money, and one is to invest in R&D and translate that R&D into commercial products. Some companies are really good at that, some companies aren't so good at that. The other way to make money is to do acquisitions and tuck-ins, and many, many companies have built value doing that, certainly Oracle, certainly IBM has, EMC back in the day, with its VMware acquisition, hit probably the biggest home run ever, and Infor has done a very good job of M&A, and I think, clearly, has raised the value of the company. And the other way is to resell technologies and generate cash and keep your costs low. I think a software company like Infor has the opportunity to innovate, to do tuck-in acquisitions, and to drive software marginal economics, so I think, on paper, that's all good, if, to answer your question, they can differentiate. And their differentiation is the way in which they're embedding AI into their deep, vertical, last-mile approach, and that is unique in the software business. Now, the other big question you have is beautiful UIs, and it sounds really great and looks really great, well, when you talk to the customers, they say, "Yeah, it's a little tough to implement sometimes," so it's still ERP, and ERP is complicated, alright? So, you know, it's not like Infor is shielded from some of the complexities of Oracle and SAP. It might look prettier, they might be moving a little faster in certain areas, they might, they clearly have some differentiation. At the end of the day, it's still complicated enterprise software.
>> Right, exactly, and we heard that over and over again from the people, from Infor themselves, and also from customers, is that it isn't seamless. It's complicated, it involves a lot of change management initiatives, people have to be on board, and that's not always easy. >> Well, and that's why I'm encouraged to see some of the larger SIs, you know, you see Grant Thornton, Capgemini, I think Accenture's here, Deloitte-- >> Rebecca: We're having Capgemini later on the program. >> Deloitte's coming on as well. And so, those guys, even though I always joke they love to eat at the trough and do big, complex things, but, this is maybe not as lucrative as some of the other businesses, but it's clearly a company with momentum, and some tailwind that, in the context of digital transformations and AI, the big SIs and some of the smaller SIs, you know, like Avaap, that we had on yesterday, can do pretty well and actually help companies and customers add value. >> And with a fellow like Charles Phillips at the helm, I mean, he is just an impressive person who, as you have pointed out multiple times, is a real visionary when it comes to this stuff. >> Yeah, except when he's shooting hoops. He's not impressive on the hoop court, no. >> No? Oh! (laughing) >> I tweeted out last night, "He's got Obama's physique, "but not his hoop game." >> Oh! (laughing) >> So don't hate me for saying that, Charles. But yes, I think he's, first of all, he's a software industry guru. I think he, you know, single-handedly changed, I shouldn't say that, single-handedly, but he catalyzed the major change in the software business when Oracle went on its acquisition spree, and he architected that whole thing. It was interesting to hear his comments yesterday about what he sees. He said, "You'll see a lot more tech industry "CEOs running non-tech-industry companies "because they're all becoming SaaS companies." >> If they have been so invested in understanding the vertical, they really get it.
You can see someone who worked on a retail vertical here going in and being the CEO of Target or Walmart or something. >> Yes, I thought that was a pretty interesting comment from somebody who's got some chops in that business, and again, very impressive, I mean, the acquisitions that this company has done and continues to do. You and I both like the Birst acquisition. It's modern-day BI, it's not sort of just viz, and I don't mean to deposition Qlik and Tableau, they've done a great job, you know, but it's not, it doesn't solve all your enterprise-grade, BI sort of problems. And, you know, you talk to the Cognos customer base, as great of an acquisition as that was for IBM, that is a big, chewy, heavy lift into which IBM is trying to inject Watson and Watson Analytics. I mean, you know, you used to work at IBM, Jim. And they're doing a pretty good job of that, improving the UI, but it's still big, chunky, Cognos BI. Build cubes, wait for results.
So, I'm Rebecca Knight, for Dave Vellante and Jim Kobielus, we will have more from Inforum just after this. (digital music) (pensive electronic music)

Published Date : Jul 12 2017


Day One Kickoff - Inforum 2017 - #Inforum2017 - #theCUBE


 

>> Announcer: Live from the Javits Center in New York City, it's theCUBE! Covering Inforum 2017. Brought to you by Infor. >> Welcome to day one of theCUBE's coverage of Inforum here at the Javits Center in New York City. I'm your host, Rebecca Knight, along with my co-host, Dave Vellante. We are also joined by Jim Kobielus, who is the lead analyst for artificial intelligence at Wikibon. Thanks so much. It's exciting to be here, day one. >> Yeah, good to see you again, Rebecca. Really, our first time, we really worked a little bit at Red Hat Summit. >> Exactly, first time on the desk together. >> It's our very first time. I first met you a little while ago, and already you're an old friend. >> This is the third time we've done Inforum. The first time we did it was in New Orleans, and then Infor decided to skip a year. And then, last year, they decided to have it in the middle of July, which is kind of a strange time to have a show, but there are a lot of people here. I don't know what the number is, but it looks like several thousand, maybe as many as 4000 to 5000. I don't know what you saw.
For those of you who aren't familiar with Infor, their strategy from early on was to really focus on the micro-verticals. We've talked about that a little bit. Just a quick bit of history. Charles Phillips, former president of Oracle, orchestrator of the M&A at Oracle, PeopleSoft, Siebel and many others, left, started Infor to roll up, funded by Golden Gate Capital and other private equity, substantial base of Lawson Software customers, and then, many, many other acquisitions. Today, fast forward, you got a basically almost $3 billion company with a ton of debt, about $5 billion in debt, notwithstanding the Koch brothers' investment, which is almost $2.5 billion, which was to retire some of the equity that Golden Gate had, some of the owners, Charles and the three other owners took some money off the table, but the substantial amount of the investment goes into running the company. Here's what's interesting. Koch got a 2/3 stake in the company, but a 49% voting share, which implies a valuation of about, I want to say, just under four billion. Let's call it 3.7, 3.8 billion. For a $2 billion to $3 billion company, that's not a software company with 28% operating margin. That's not a huge valuation. So, we'll ask Charles Phillips about that, I mean, some of this wonky stuff in the financials, you know, we want to get through. I'm sure Infor doesn't want to talk too much about that. >> But it is true. It is, for a unicorn, for a privately-held company, this is one of them. This is up there with Uber and Airbnb, and it's a question that, why isn't it valued at more? >> My only assumption here is they went to Koch and said, "Okay, here's the deal. "We want $2 billion plus. "You only get 49%, only. "If you get 49% of the company in terms of voting rights, "we'll give you 2/3 in terms of ownership. "It's a sweetheart deal. "Of course, it's a lot of dough. "You get a board seat." Maybe two board seats, I can't remember.
"And we'll pump this thing up, we'll build up the equity, "and we'll float it someday in the public markets, "and we'll all make a bunch of dough "and our shareholders will all be happy." That's the only thing I can assume, was this sort of conversation that went on. Well, again, we'll ask Charles Phillips, see if he answers that. But James, you sat in yesterday at the analyst event, you got sort of the history of the company, and the fire hose of information leading up to what was announced today, Coleman AI. What were your impressions as an analyst? >> Well, first of all, my first impression was a thought, a question. Is Infor with Coleman AI simply playing catch-up in a very, I call it a war of attrition in the ERP space? Really, it's four companies now. It's SAP, it's Microsoft, it's Oracle, and it's Infor duking it out. SAP, Microsoft and Oracle all have fairly strong AI capabilities and strategies and investments, and clearly they're infused, I was at Microsoft Build a few months ago. They're infusing those capabilities into all of their offerings. With Coleman, sounds impressive, though it's just an early announcement, they've only begun to trickle it out to their vast suite. I want to get a sense, and probably later today we'll talk to Mr. Angove, Duncan Angove. I want to get a sense for how does, or does, Infor intend to differentiate their suite in this fiercely competitive ERP world? How will Coleman enable them to differentiate it? Right now it seems like everything they're announcing about Coleman is great in terms of digital assistance, conversational interface, everybody does this, too, now, with chatbots and so forth, in-line providing recommendations. Everybody's doing that. Essentially, everybody wants to go there. How are they going to stand apart with those capabilities, number one? Number two is just the timeline.
They have this vast suite, and we just came from the keynote, where Charles and the other execs laid out in minute detail the micro-vertical applications. What is their timeline for rolling out those Coleman capabilities throughout the suite so customers can realize they have value? And is there a layered implementation? They talked about augmentation versus automation, and versus assistance. I'd like to see sort of a layer of capabilities in an architecture with a sense for how they're going to invest in each of those capabilities. For example, they talked about open source, like with TensorFlow, which is a new deep learning framework from Google Open Source. I just want to get a deep dive into where the investment funds that they're getting from Koch and others, especially from Koch, where that's going in terms of driving innovation going forward in their portfolio. I'm not cynical about it, I think they're doing some really interesting things. But I want some more meat on the bones of their strategy. >> Well, it's interesting, because I think Infor came into the show wanting to message innovation. They're not known as an innovative company. But you heard Charles Phillips up there talking, today he was talking about quantum computing, he was talking about the end of Moore's Law, he was obviously talking about AI. They named Coleman after Katherine Coleman Johnson. >> Here's my speculation. My speculation, of course, they recently completed the acquisition of Birst. Brad Peters did a really good discussion of Birst, the BI startup that's come along real fast. My sense, and I want to get confirmation, is that, possibly, Birst and Brad Peters and his team, will they drive the Coleman strategy going forward? It seems likely, 'cause Birst has some AI assets that Brad Peters brought us up to speed on yesterday. I want to get a sense for how Birst's AI and Coleman AI are going to come together into a convergence. 
>> But wouldn't they say that it's quote-unquote embedded, embedded AI? >> Jim: It'll be invisible, it has to be. >> You know, buried within the software suite? We saw, like you said, in gory detail the application portfolio that Infor had. I think one of the challenges the company has, it's like some of my staff meetings. Not everything is relevant to everybody. Very clearly, they have a lot of capabilities that most people aren't aware of. The question is, how much can they embed AI across those, and where are the use cases, and what's the value? And it's early days, right? >> Oh, yeah, very much. And you know, in some of those applications, probably many of them, the automation capabilities that they described for Coleman will be just as important as the human augmentation capabilities. In other words, micro-verticalize their AI in diverse ways going forward across their portfolio. In other words, one AI brush, broad brush of AI across every application probably won't make sense. The applications are quite different. >> I want to talk about the use cases, here. The selling points for these things are making the right decision all the time, more quickly. >> Jim: Productivity accelerators for knowledge workers, all that. >> And one of the other points that was made is that there are fewer arguments, because we are all looking at the same data, and we trust the data. Where do you see Birst and Coleman? Give me an example of where you can see this potentially transforming the industry? >> "We all trust data." Actually, we don't all trust data, because not all data is created the same. Birst comes into the portfolio not just to, really great visualizations and dashboarding and so forth, but they've got a well-built data management backend for data governance and so forth, to cleanse the data. 'Cause if you have dirty data, you can't derive high-quality decisions from the data. >> Rebecca: Excellent point, right. >> That's really my general take on where it's going. 
In terms of the Birst, I think the Birst acquisition will become pivotal in terms of them taking their data-driven functionality to the next level of consumability, 'cause Birst has done a really good job of making their capability consumable for the general knowledge worker audience. >> Well, a couple things. Actually, let me frame. Charles Phillips, I thought, did a good job framing the strategy. Sort of his strategy stack, if you will, starting with, at the bottom of the stack, the micro-verticals strategy, and then moving up the next layer was their decision to go all cloud, AWS Cloud. The third was the network. Infor made an acquisition of a company called GT Nexus, which is a commerce platform that has 18 years of commerce data and transaction data there. And the next layer was analytics, which is Birst, and I'll come back to that. And then the top layer is Coleman AI. The Birst piece is interesting, because we saw the ascendancy of Tableau and its land-and-expand strategy, and Christian Chabot, the CEO of Tableau, used to talk about, and they said this yesterday, the slow BI, you know, cubes, and the life cycle of actually getting an answer. By the time you get the answer, the market has changed. And that's what Tableau went after, and Tableau did very, very well. But it turned out Tableau was largely a desktop tool. Wasn't available in the Cloud. It is now. And it had its limitations. It was basically a visualization tool. What Infor has done with Birst is they're positioning the old Cognos, which is now IBM, and the MicroStrategys of the world as the old guard. They're depositioning Tableau, and they didn't use that specific name, Tableau, but that's what they're talking about, Tableau and Qlik, as less than functional. Sort of spreadsheet plus. And they are now the rich, robust platform that both scales and has visualization, and has all the connections into the enterprise software world. So I thought it was interesting positioning.
Would love to talk to some customers and see what that really looks like. But that, essentially, was the strategy stack that Charles Phillips laid out. I guess the last point I'd make as I come back to the decision to go AWS, you saw the application portfolio. Those are hardcore enterprise apps which everybody says don't live in the Cloud. Well, 55% of Infor's revenue is from the Cloud, so, clearly, it's not true. A lot of these apps are becoming cloud-enabled. >> Jim: Yeah, most of them. >> Most of them? >> Most of them are, yeah. BI, most predictive analytics, most AI. Machine learning is going in the Cloud. >> 'Cause Oracle's argument is, Oracle will be the only one who can put those apps in the Cloud. >> 'Cause the data lives in the Cloud. It's trained on the data. >> Not all the data lives in the Cloud. >> It's like GT Nexus. That's EDI, that's rich EDI data, as they've indicated for training this new generation of neural networks, machine learning and deep learning models continuously from fresh transaction data. You know that's where GT Nexus and e-commerce network fits into this overall strategy. It's a massive pile stream of data for mining. >> But, you know, SAP has struggled in the Cloud. SuccessFactors, obviously, is their SaaS play. Most of their stuff remains on-prem. Oracle again claims they have the only end-to-end hybrid. You see Microsoft finally shipping Azure Stack, or at least claiming to soon be shipping Azure Stack. They've obviously got a strategy there with their productivity estate. But here you have Infor-- >> Don't forget IBM. They've got a very rich, high-rated portfolio. >> Well, you heard, I don't know if it was Charles, somebody took a swipe at IBM today, saying that the company's competitors have purchased all these companies, these SaaS companies, and they don't have a way to really stitch them together. Well, that's not totally true. Bluemix is IBM's way. Although, that's been a heavy lift.
We saw with Oracle Fusion, it took over a decade and they're still working on that. So, Infor, again, I want to talk to customers and find out, okay, how much of this claim that everything's seamless in the Cloud is actually true? I think, obviously, a large portion of the install base is still that legacy on-prem Lawson base that hasn't modernized. That's always, in my view, presented big challenges. How do you get that base, leverage that install base to move, and then attract new customers? By all accounts, they're doing a pretty good job of it. >> I don't think what's going on, I don't think a lot of lift-and-shift is going on. Legacy Lawson customers are not moving in droves to the Cloud with their data and all that. There's not a massive lift-and-shift. It's all the new greenfield applications for these new use cases, in terms of predictive analytics. They're being born and living their entire lives in the Cloud. >> And a lot of HR, a lot of HCM, obviously, competing with Workday and PeopleSoft. That stuff's going into the Cloud. We're going to be unpacking this all day today, and tomorrow. Two days here of coverage. >> Indeed, yes indeed. >> Dave: Excited to be here. >> It's going to be a great show. Bruno Mars is performing the final day. >> Jim: Bruno Mars? >> I know, very-- >> You know a company's doing good, Infor, when they can pay for the likes of a Bruno Mars, who's still having mega hits on the radio. I wish I was staying long enough to catch that one. >> I know, indeed, indeed. Well, for Dave and Jim, I'm Rebecca Knight, and we'll be back with more from Inforum 2017 just after this. (fast techno music)

Published Date : Jul 11 2017


Wrap Up | IBM Fast Track Your Data 2017


 

>> Narrator: Live from Munich, Germany, it's theCUBE, covering IBM, Fast Track Your Data. Brought to you by IBM. >> We're back. This is Dave Vellante with Jim Kobielus, and this is theCUBE, the leader in live tech coverage. We go out to the events. We extract the signal from the noise. We are here covering a special presentation of IBM's Fast Track Your Data, and we're in Munich, Germany. It's been a day-long session. We started this morning with a panel discussion with five senior-level data scientists that Jim and I hosted. Then we did CUBE interviews in the morning. We cut away to the main tent. Kate Silverton did a very choreographed, scripted, but very well done, main keynote set of presentations. IBM made a couple of announcements today, and then we finished up theCUBE interviews. Jim and I are here to wrap. We're actually running on IBMgo.com. We're running live. Hilary Mason talking about what she's doing in data science, and also we got a session on GDPR. You got to log in to see those sessions. So go ahead to IBMgo.com, and you'll find those. Hit the schedule and go to the Hilary Mason and GDPR channels, and check that out, but we're going to wrap now. Jim, two main announcements today. I hesitate to call them big announcements. I mean they were you know just kind of ... I think the word you used last night was perfunctory. You know I mean they're okay, but they're not game changing. So what did you mean? >> Well first of all, when you look at ... Though IBM is not calling this a signature event, it's essentially a signature event. They do these every June or so. You know in the past several years, the signature events have had like a one-track theme, whether it be IBM announcing they're investing deeply in Spark, or IBM announcing that they're focusing on investing in R as the core language for data science development.
This year at this event in Munich, it's really a three-track event, in terms of the broad themes, and I mean they're all important tracks, but none of them is like game-changing. Perhaps IBM doesn't intend them to be, it seems like. One of which is obviously Europe. We're holding this in Munich. And a couple of things of importance to European customers, first and foremost GDPR. The deadline next year, in terms of compliance, is approaching. So sound the alarm as it were. And IBM has rolled out compliance and governance tools you can download and go with, from the information governance catalog and so forth. Now announcing the consortium with Hortonworks to build governance on top of Apache Atlas, but also IBM announcing that they've opened up a DSX center in England and a machine-learning hub here in Germany, to help their European clients, in those countries especially, to get deeper down into data science and machine learning, in terms of developing those applications. That's important for the audience, the regional audience here. The second track, which is also important, and I alluded to it. It's governance. In all of its manifestations you need a master catalog of all the assets for building and maintaining and controlling your data applications and your data science applications. The catalog, the consortium, the various offerings IBM has announced and discussed in great detail. They've brought in customers and partners like Northern Trust to talk about the importance of governance, not just as a compliance mandate, but also as a potential strategy for monetizing your data. That's important. Number three is what I call cloud-native data applications, and how the state of the art in developing data applications is moving towards containerized and orchestrated environments that involve things like Docker and Kubernetes. The IBM DB2 developer community edition. Been in the market for a few years. The latest version they announced today includes Kubernetes support.
Includes support for JSON. So it's geared towards a new generation of cloud and data apps. What I'm getting at ... Those three core themes are Europe, governance, and cloud-native data application development. Each of them is individually important, but none of them is a game changer. And one last thing. Data science and machine learning is one of the overarching envelope themes of this event. They've had Hilary Mason. A lot of discussion there. My sense is I was a little bit disappointed because there weren't any significant new announcements related to IBM evolving their machine learning portfolio into deep learning or artificial intelligence, in an environment where their direct competitors like Microsoft and Google and Amazon are making a huge push in AI, in terms of their investments. There's a bit of a discussion, and Rob Thomas got to it this morning, about DSX. Working with PowerAI, the IBM platform, I would like to hear more going forward about IBM investments in these areas. So I thought it was an interesting bunch of announcements. I'll backtrack on perfunctory. I'll just say it was good that they had this for a lot of reasons, but like I said, none of these individual announcements is really changing the game. In fact like I said, I think I'm waiting for the fall, to see where IBM goes in terms of doing something that's actually differentiating and innovative. >> Well I think that the event itself is great. You've got a bunch of partners here, a bunch of customers. I mean it's active. IBM knows how to throw a party. They always have. >> And the sessions are really individually awesome. I mean in terms of what you learn. >> The content is very good. I would agree. The two announcements that were sort of you know DB2, sort of what I call community edition. Simpler, easier to download. Even Dave can download DB2. I really don't want to download DB2, but I could, and play with it I guess.
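The JSON-in-the-database pattern Jim describes for the DB2 developer community edition can be sketched in a few lines. This is an illustrative sketch only: sqlite3 stands in for DB2, and the table, field names, and documents are all invented. The point is the pattern itself, JSON documents kept in an ordinary relational table alongside your SQL data.

```python
import json
import sqlite3
from collections import Counter

# sqlite3 is a stand-in for DB2 here; the schema is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, doc TEXT)")

docs = [
    {"user": "alice", "action": "login"},
    {"user": "bob", "action": "search"},
    {"user": "alice", "action": "purchase"},
]
conn.executemany("INSERT INTO events (doc) VALUES (?)",
                 [(json.dumps(d),) for d in docs])

# Pull the documents back and aggregate on a JSON field. A database with
# first-class JSON support would push this extraction into the SQL itself.
counts = Counter(json.loads(doc)["user"]
                 for (doc,) in conn.execute("SELECT doc FROM events"))
print(counts)  # Counter({'alice': 2, 'bob': 1})
```

The appeal for the cloud-native developer is exactly what the transcript suggests: document-style flexibility without leaving the relational engine.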
You know I'm not a database guy, but those of you out there that are, go check it out. And the other one was the sort of unified data governance. They tried to tie it in. I think they actually did a really good job of tying it into GDPR. We're going to hear over the next, you know, 11 months, just a ton of GDPR readiness fear, uncertainty and doubt, from the vendor community, kind of like we heard with Y2K. We'll see what kind of impact GDPR has. I mean it looks like it's the real deal, Jim. I mean it looks like you know this 4%-of-turnover penalty. The penalties are much more onerous than any other sort of, you know, regulation that we've seen in the past, where you could just sort of fluff it off. Say yeah, just pay the fine. I think you're going to see a lot of, well, pay the lawyers to delay this thing and battle it. >> And one of the people on theCUBE that we interviewed said it exactly right. It's like GDPR is the inverse of Y2K. In Y2K everybody was freaking out. It was actually nothing when it came down to it. Whereas here nobody on the street is really buzzing. I mean the average person is not buzzing about GDPR, but it's hugely important. And like you said, I mean some serious penalties may be in the works for companies that are not complying, companies not just in Europe, but all around the world who do business with European customers. >> Right, okay, so now bring it back to sort of machine learning, deep learning. You basically said to Rob Thomas, I see machine learning here. I don't see a lot of the deep learning stuff quite yet. He said stay tuned. You know you were talking about TensorFlow and things like that. >> Yeah they supported that ... >> Explain. >> So Rob indicated that IBM very much, like with PowerAI and DSX, provides an open framework or toolkit for plugging in your preferred machine learning or deep learning toolkit of an open source nature.
And there's a growing range of open source deep learning toolkits beyond, you know, TensorFlow, including Theano and MXNet and so forth, that IBM is supporting within the overall DSX framework, but also within the PowerAI framework. In other words they've got those capabilities. They're sort of burying that message under a bushel basket, at least in terms of this event. Also one of the things that ... I said this to Mena Scoyal. Watson data platform, which they launched last fall, very important product. Very important platform for collaboration among data science professionals, in terms of the machine learning development pipeline. I wish there was more about the Watson data platform here, about where they're taking it, what the customers are doing with it. Like I said a couple of times, I see Watson data platform as very much a DevOps tool for the new generation of developers that are building machine learning models directly into their applications. I'd like to see IBM, going forward, turn Watson data platform into a true DevOps platform, in terms of continuous integration of machine learning and deep learning and other statistical models. Continuous training, continuous deployment, iteration. I believe that's where they're going, or probably where they will be going. I'd like to see more. I'm expecting more along those lines going forward. What I just described about DevOps for data science is a big theme that we're focusing on at Wikibon, in terms of where the industry is going. >> Yeah, yeah. And I want to come back to that again, and get an update on what you're doing within your team, and talk about the research. Before we do that, I mean one of the things we talked about on theCUBE, in the early days of Hadoop, is that the guys who are going to make the money in this big data business are the practitioners. They're not going to see, you know, these multi-hundred billion dollar valuations come out of the Hadoop world. And so far that prediction has held up well.
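The continuous training and deployment loop Jim describes, DevOps for data science, can be sketched minimally. Everything here is invented for illustration: the "model" is just a decision threshold, and the champion/challenger promotion rule is one common way to do it, not a description of the Watson data platform's actual mechanics.

```python
# Toy continuous-training loop: retrain on each incoming batch, evaluate the
# candidate on a fixed holdout, and only "deploy" it if it beats the champion.

def train(batch):
    # The "model" is a threshold: the midpoint between the two class means.
    pos = [x for x, y in batch if y == 1]
    neg = [x for x, y in batch if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, holdout):
    return sum((x >= threshold) == (y == 1) for x, y in holdout) / len(holdout)

holdout = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
champion, champion_acc = None, 0.0

batches = [
    [(0, 0), (1, 0), (9, 1), (10, 1)],    # clean batch -> threshold 5.0
    [(4, 0), (4.5, 0), (5, 1), (5.5, 1)], # noisy batch -> threshold 4.75
]
for batch in batches:
    candidate = train(batch)
    cand_acc = accuracy(candidate, holdout)
    if cand_acc > champion_acc:  # promote only on measurable improvement
        champion, champion_acc = candidate, cand_acc

print(champion, champion_acc)  # 5.0 1.0
```

The design choice worth noting is that promotion is gated on the holdout metric, which is the "continuous integration" discipline Jim is arguing the tooling should bake in.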
It's the Airbnbs and the Ubers and the Spotifys and the Facebooks and the Googles, the practitioners who are applying big data, that are crushing it and making all the money. You see Amazon now buying Whole Foods. That in our view is a data play, but who's winning here, in either the vendor or the practitioner community? >> Who's winning are the startups with a hot new idea that's changing, that's disrupting some industry, or set of industries, with machine learning, deep learning, big data, etc. For example everybody's waiting, with bated breath, for you know self-driving vehicles. And as that ecosystem develops, somebody's going to clean up. And one or more companies, companies we probably never heard of, leveraging everything we're describing here today, data science and containerized distributed applications that involve, you know, deep learning for image analysis and sensor analysis and so forth. Putting it all together in some new fabric that changes the way we live on this planet. But as you said, the platforms themselves, whether they be Hadoop or Spark or TensorFlow, whatever, they're open source. You know and the fact is, by its very nature, open source based solutions, in terms of profit margins on selling those, inexorably migrate to zero. So you're not going to make any money as a tool vendor, or a platform vendor. You got to make money ... If you're going to make money, you make money, for example, from providing an ecosystem within which innovation can happen.
When I say next-generation application development, it's the development of AI, deep learning, machine learning, and the deployment of those data-driven statistical assets into all manner of applications. And you look at the hot stuff, like chatbots for example. Transforming the experience in e-commerce on mobile devices. Siri and Alexa and so forth. Hugely important. So what we're doing is we're focusing on AI and everything. We're focusing on containerization and the building of AI micro-services, and the ecosystem of the pipelines and the tools that allow you to do that. DevOps for data science, distributed training, federated training of statistical models, so forth. We are also very much focusing on the whole distributed containerized ecosystem, Docker, Kubernetes and so forth. Where that's going, in terms of changing the state of the art, in terms of application development. Focusing on the API economy. All of those things that you need to wrap around the payload of AI to deliver it into every ... >> So you're focused on that intersection between AI and the related topics and the developer. Who is winning in that developer community? Obviously Amazon's winning. You got Microsoft doing a good job there. Google, Apple, who else? I mean how's IBM doing for example? Maybe name some names. Who impresses you in the developer community? But specifically let's start with IBM. How is IBM doing in that space? >> IBM's doing really well. IBM has, for quite a while, been very good about engaging with a new generation of developers, using Spark and R and Hadoop and so forth to build applications rapidly and deploy them rapidly into all manner of applications. So IBM has very much reached out to, in the last several years, the Millennials for whom all of this, these new tools, have been their core repertoire from the very start. And I think in many ways, like today's DB2 developer community edition, it is very much geared to that market.
Saying, you know, to the cloud-native application developer, take a second look at DB2. There's a lot in DB2 that you might bring into your next application development initiative, alongside your Spark toolkit and so forth. So IBM has startup envy. They're a big old company. Been around more than a hundred years. And they're trying to, very much, bootstrap and restart their brand in this new context, in the 21st century. I think they're making a good effort at doing it. In terms of community engagement, they have a really good community engagement program, all around the world, in terms of hackathons and developer days, you know, meetups here and there. And they get lots of turnout and very loyal customers, and IBM's got the broadest portfolio. >> So you still bleed a little bit of blue. So I got to squeeze it out of you now here. So let me push a little bit on what you're saying. So DB2 is the emphasis here, trying to position DB2 as appealing for developers, but why not some of the other, you know, acquisitions that they've made? I mean you don't hear that much about Cloudant, dashDB, and things of that nature. You would think that that would be more appealing to some of the developer communities than DB2. Or am I mistaken? Is it IBM sort of going after the core, trying to evolve that core, you know, constituency? >> No, they've done a lot of strategic acquisitions like Cloudant, and they've acquired graph databases and brought them into their platform. IBM has every type of database or file system that you might need for web or social or Internet of Things. And so for all of the development challenges, IBM has got a really high-quality, fit-the-purpose, best-of-breed underlying data platform for it. They've got huge amounts of developers energized all around the world working on this platform. DB2, in the last several years they've taken all of their platforms, their legacy ... That's the wrong word.
All their existing mature platforms, like DB2, and brought them into the IBM cloud. >> I think legacy is the right word. >> Yeah, yeah. >> These things have been around for 30 years. >> And they're not going away because they're field-proven and ... >> They are evolving. >> And customers have implemented them everywhere. And they're evolving. If you look at how IBM has evolved DB2 in the last several years into ... For example they responded to the challenge from SAP HANA. They brought BLU Acceleration, in-memory technology, into DB2 to make it screamingly fast and so forth. IBM has done a really good job of turning around these product groups and the product architecture, making them cloud-first. And then reaching out to a new generation of cloud application developers. Like I said today, things like DB2 developer community edition, it's just the next chapter in this ongoing saga of IBM turning itself around. Like I said, each of the individual announcements today is like, okay, that's interesting. I'm glad to see IBM showing progress. None of them is individually disruptive. I think last week though, I think Hortonworks was disruptive in the sense that IBM recognized that BigInsights didn't really have a lot of traction in the Hadoop space, not as much as they would have wished. Hortonworks very much does, and IBM has cast its lot to work with HDP, but HDP and Hortonworks recognize they haven't achieved any traction with data scientists, therefore DSX makes sense as part of the Hortonworks portfolio. Likewise Big SQL makes perfect sense as the SQL front end to HDP. I think the teaming of IBM and Hortonworks is propitious of further things that they'll be doing in the future, not just governance, but really putting together a broader cloud portfolio for the next generation of data scientists doing work in the cloud. >> Do you think Hortonworks is a legitimate acquisition target for IBM? >> Of course they are. >> Why would IBM ...
You know, educate us. Why would IBM want to acquire Hortonworks? What does that give IBM? Open source mojo, obviously. >> Yeah, mojo. >> What else? >> Strong loyalty in the Hadoop market, with developers. >> The developer angle would supercharge the developer angle, and maybe make it more relevant outside of some of those legacy systems. Is that it? >> Yeah, but also remember that Hortonworks came from Yahoo, the team that developed much of what became Hadoop. They've got an excellent team. Strategic team. So in many ways, you can look at Hortonworks as one part acqui-hire, if they ever do that, and one part really substantial and growing solution portfolio that in many ways is complementary to IBM. Hortonworks is really deep on the governance of Hadoop. IBM has gone there, but I think Hortonworks is even deeper, in terms of their laser focus. >> Ecosystem expansion, and it actually really wouldn't be that expensive of an acquisition. I mean it's, you know, north of ... Maybe a billion dollars might get it done. >> Yeah. >> You know, so would you pay a billion dollars for Hortonworks? >> Not out of my own pocket. >> No, I mean if you're IBM. You think that would deliver that kind of value? I mean you know how IBM thinks about acquisitions. They're good at acquisitions. They look at the IRR. They have their formula. They blue-wash the companies and they generally do very well with acquisitions. Do you think Hortonworks would fit that profile, that monetization profile? >> I wouldn't say that Hortonworks, in terms of monetization potential, would match, say, what IBM has achieved by acquiring Netezza. >> Cognos. >> Or SPSS. I mean SPSS has been an extraordinarily successful ... >> Well the day IBM acquired SPSS they tripled the license fees. As a customer I know, ouch, it worked. It was incredibly successful. >> Well, yeah. Cognos was. Netezza was. And SPSS.
Those three acquisitions in the last ten years have been extraordinarily pivotal and successful for IBM to build what they now have, which is really the most comprehensive portfolio of fit-to-purpose data platforms. So in other words all those acquisitions prepared IBM to duke it out now with their primary competitors in this new field, which are Microsoft, who's newly resurgent, and Amazon Web Services. In other words, the two Seattle vendors. Seattle has come on strong, to the point where, in big data and the cloud, Seattle is almost eclipsing Silicon Valley, in terms of, you know ... It's like the locus of innovation and really of customer adoption in the cloud space. >> Quite amazing. Well, Google is still hanging in there. >> Oh yeah. >> Alright, Jim. Really a pleasure working with you today. Thanks so much. Really appreciate it. >> Thanks for bringing me on your team. >> And Munich crew, you guys did a great job. Really well done. Chuck, Alex, Patrick, wherever he is, and our great makeup lady. Thanks a lot. Everybody back home. We're out. This is Fast Track Your Data. Go to IBMgo.com for all the replays. Youtube.com/SiliconANGLE for all the shows. TheCUBE.net is where we tell you where theCUBE's going to be. Go to wikibon.com for all the research. Thanks for watching everybody. This is Dave Vellante with Jim Kobielus. We're out.

Published Date : Jun 25 2017


Marc Altshuller, IBM - IBM Fast Track Your Data 2017


 

>> Announcer: Live from Munich, Germany; it's The Cube! Covering IBM Fast Track Your Data, brought to you by IBM. >> Welcome back to Munich, Germany everybody. This is The Cube, the leader in live tech coverage. We're covering Fast Track Your Data, IBM's signature moment here in Munich. Big themes around GDPR, data science, data science being a team sport. I'm Dave Vellante, I'm here with my co-host Jim Kobielus. Marc Altshuller is here, he's the general manager of IBM Business Analytics. Good to see you again, Marc. >> Hey, always great to see you. Welcome, it's our first time together. >> Okay so we heard your keynote, you were talking about the caveats of correlations, you were talking about rear-view-mirror analysis versus sort of looking forward, something that I've been sort of harping on for years. You know, I mean I remember the early days of "decision support" and the promises of 360 degree views of the customer, and predictive analytics, and I've always said it, "DSS really never lived up to that", y'know? "Will big data live up to that?" and we're kind of living that now, but what's your take on where we're at in this whole data journey? >> I mean look, different customers are at different ends of the spectrum, but people are really getting value. They're becoming these data-driven businesses. I like what Rob Thomas talked about on stage, right. Visiting companies a few years ago where they'd say "I'm not a technology company." Now, how can you possibly say you're not a technology company, regardless of the industry? Your competitors will beat you if they are using data and you're not. >> Yeah, and everybody talks about digital transformation. And you hear that a lot at conferences, you guys haven't been pounding that theme, other than, y'know, below the surface. And to us, digital means data, right? And if you're going to transform digitally, then it's all about the data, you mentioned data driven.
What are you seeing, I mean most organizations in our view aren't "data driven," they're sort of reactive. Their CEOs maybe want to be data driven, maybe there are board conversations as to how to get there, but they're mostly focused on "Alright, how do we keep the lights on, how do we meet our revenue targets, how do we grow a little bit, and then whatever money we have leftover we'll try to, y'know, transform." What are you seeing? Is that changing? >> I would say, look, I can give you an example right from my own space, the software space. For years we would have product managers, offering managers, maybe interviewing clients, on gut feel deciding what features to put at what priority within the next release. Now we have all these products instrumented behind the scenes with data, so we can literally see the friction points, the exit points, how frequently they come back, how long their sessions are, we can even see them effectively graduating within the system, where they continue to learn, and where they had shorter sessions, they're now going to longer sessions. That's really, really powerful for us in terms of trying to maximize our outcome from a software perspective. So that's where we kind of, like, drink our own champagne. >> I got to ask you, so in around 2003, 2004 HBR had an article, front page, y'know, cover article of how "gut feel beats data and analytics", now this is 2003, 2004, software development as you know has a lot of art involved, so my question is how are you doing? Is the data informing you in ways that are nonintuitive? And is it driving, y'know, business outcomes for IBM? >> It is, look, you see, I'll see like GMs of sports teams talking about maybe pushing back a little bit on the data.
It's not all data driven, there's a little bit of gut, like is the guy going to, is he a checker in hockey or whatever that happens to be, and I would say, when you actually look at what's going on within baseball, and you look at the data, when you watched baseball growing up, the commentator might say something along the lines of "the pitcher has their stuff," right? "Does the pitcher have their stuff or not?" Now they literally know, the release point based on elevation, IoT within the stadium, the release point, the spin velocity of the ball, where they mathematically know "does the pitcher have their stuff?", are they hitting their locations? So all that stuff has become data driven, and if you don't want to embrace it, you get beat, right? I mean even in baseball, I remember talking to one of these Moneyball-type guys where I said like "Doesn't weather impact baseball?" And they're like "Yeah, we've looked at that, it absolutely impacts it." 'Cause you always hear of football, and remember the old Peyton Manning thing? Don't play Peyton Manning in cold weather, don't bet on Peyton Manning in cold weather. So I'm like, "Isn't it the same in baseball?" And he's like, absolutely it's the same in baseball, players perform differently based on the climate. Do any managers change their lineup based on that? Never. >> Speaking of HBR, I mean in the last few years there was also an article or two by Michael Shrage about the whole notion of real world experimentation and e-commerce, driven by data, y'know, in line, to an operational process, like tuning the design iteratively of, say, a shopping cart within your e-commerce environment, based on the stats on what works and what does not work. So, in many ways I mean AB testing, real world experimentation thrives on data science. Do you see AB testing becoming a standard business practice everywhere, or only in particular industries, like, you know, the Wal-marts of the world?
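Marc's "does the pitcher have their stuff?" check can be made concrete with a toy score. Everything below is invented for illustration: the tracked metrics, the baseline numbers, the equal weighting, and the tolerance are all hypothetical, not how any real team scores pitches.

```python
# Toy "stuff" check: compare tonight's pitch metrics to the pitcher's
# season baseline; large fractional deviations mean the stuff isn't there.

def stuff_score(pitch, baseline):
    keys = ("velocity_mph", "spin_rpm", "release_height_ft")
    devs = [abs(pitch[k] - baseline[k]) / baseline[k] for k in keys]
    return 1.0 - sum(devs) / len(devs)  # 1.0 means exactly at baseline

baseline = {"velocity_mph": 95.0, "spin_rpm": 2400.0, "release_height_ft": 6.0}
sharp = {"velocity_mph": 95.5, "spin_rpm": 2420.0, "release_height_ft": 6.0}
flat = {"velocity_mph": 90.0, "spin_rpm": 2100.0, "release_height_ft": 5.4}

for name, pitch in (("sharp", sharp), ("flat", flat)):
    score = stuff_score(pitch, baseline)
    verdict = "has their stuff" if score > 0.97 else "off tonight"
    print(name, round(score, 3), verdict)
```

The judgment the commentator used to make by eye becomes a threshold on measured deviations, which is the shift from gut to data that Marc is describing.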
>> Yeah, look, so, AB testing, multi-variant testing, they're pervasive, pretty much anyone who has a website ought to be doing this if they're not doing it already. Maybe some startups aren't quite into it. They prioritize it in different spots, but mainstream Fortune 500 companies are doing this, the tools have made it really easy. I would say maybe the Achilles heel, or the next frontier, is that it's effectively creating one pattern of user, putting everyone in a single bucket, right? "Does this button perform better when it's orange or when it's green?" "Oh, it performs better orange." Really, does it perform better, orange versus green, for every segment, or just a certain segment? So that next kind of frontier is going to be, how do we segment it, know a little bit more about you when you're coming in, so that AB testing starts to build these kinds of sub-profiles, sub-segmentations. >> Micro-segmentation, and of course, the end extreme of that dynamic is one-to-one personalization of experiences and engagements based on knowing 360 degrees about you and what makes you tick as well, so yeah. >> Altshuller: And add onto that context, right? You have your business, let's even keep it really simple, right, you've got your business life, you've got your social life, and your profile of what you're looking for when you're shopping in your social life is very different than when you're shopping in your business life. We have to personalize it to the idea where, I don't want to say schizophrenic, but you do have multiple personalities from an online perspective, right? From a digital perspective it all depends, in the moment, what is it that you're actually doing, right? And what are you, who are you acting for? >> Marc, I want to ask you, you're homies, your peeps are the business people. >> Yes. >> That's where you spend your time. I'm interested in the relationship between those business people and the data science teams.
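Marc's orange-versus-green point, that one aggregate bucket can mislead, is worth seeing in numbers. The counts below are invented to exhibit Simpson's paradox: orange "wins" overall only because it was shown mostly to a high-converting segment, while green actually beats it within every segment.

```python
# Conversions by (variant, segment): each entry is (conversions, impressions).
# Invented counts, constructed so the aggregate and per-segment answers flip.
data = {
    "orange": {"mobile": (10, 200), "desktop": (240, 800)},
    "green": {"mobile": (48, 800), "desktop": (64, 200)},
}

def rate(conversions, impressions):
    return conversions / impressions

for variant, segs in data.items():
    conv = sum(c for c, s in segs.values())
    shown = sum(s for c, s in segs.values())
    per_seg = {seg: rate(c, s) for seg, (c, s) in segs.items()}
    print(variant, "overall:", rate(conv, shown), per_seg)
# orange overall: 0.25  {'mobile': 0.05, 'desktop': 0.3}
# green  overall: 0.112 {'mobile': 0.06, 'desktop': 0.32}
```

The single-bucket test declares orange the winner; the segmented view shows green winning on both mobile and desktop. That allocation imbalance is exactly why the "next frontier" Marc describes is sub-segmentation rather than one global A/B verdict.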
We all hear about how data scientists and unicorns are hard to find, difficult to get the skills, citizen data science is sort of a nirvana. But how are you seeing businesses bring in the domain expertise of the business and blend that with data science? >> So, they do it, I have some cautionary tales that I've experienced in terms of how they're doing it. They feel like, let's just assign the subject matter expert, they'll work with the data scientist, they'll give them context as they're doing their project, but unfortunately what I've seen time and time again is that subject matter expert right out of the gate brings a tremendous amount of bias based on the types of analysis they've done in the past. >> Vellante: That's not how we do it here. >> Yeah, exactly, like "Did you test this?" "Oh yeah, there's no correlation there, we've tried it." Well, just because there's no correlation, as I talked about onstage, doesn't mean it's not part of the pattern, in terms of, like, you don't want someone in there right off the bat dismissing things. So I always coach, when the business-user subject matter experts become involved early, they have to be tremendously open-minded, and not all of them can be. I like bringing them in later, because that data scientist, they are unbiased, like they see this data set, it doesn't mean anything to them, they're just numerically telling you what the data set says. Now the business user can then add some context, maybe they grabbed a field that really is an irrelevant field, and they can give them that context afterwards. But we just don't want them shutting down routes too early in the process. >> You know, we've been talking for a couple of years now within our community about this digital matrix, this digital fabric that's emerged, and you're seeing these horizontal layers of technology, whether it's cloud or, you know, security, you all OAuth in with LinkedIn, Facebook, and Twitter.
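Marc's caution above, that "there's no correlation there, we've tried it" is not grounds for dismissal, can be made concrete: a relationship that a linear correlation test cannot see at all.

```python
# "No correlation" is not "no pattern": y = x**2 on a symmetric range has a
# Pearson correlation of exactly zero, yet y is fully determined by x.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [-3, -2, -1, 0, 1, 2, 3]
ys = [x ** 2 for x in xs]  # a perfect, deterministic relationship
print(pearson(xs, ys))  # 0.0
```

A subject matter expert who "already tested that" with a correlation check would have thrown out a variable that fully explains the outcome, which is exactly the bias Marc wants kept out of the early stage.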
There's a data fabric that's emerging, and you're seeing all these new business models, whether it's Uber or Airbnb or WAZE, et cetera, and then you see this blockbuster announcement last week, Amazon buying Whole Foods. And it's just fascinating to us, and it's all about the data: a company like Amazon can be a content company, could be a retail company, and now it's becoming a grocer; you see Apple getting into financial services. So you're seeing companies being able to traverse industries, and it's all because of the data, so these conversations absolutely are going on in boardrooms. It's all about the digital transformation, the digital disruption, so how do you see, you know, your clients trying to take advantage of that, or defend against that? >> Yeah look, I mean, you have to be proactive. You have to be willing to disrupt yourself; in all these tech industries it's just moving too quickly. I read a similar story, I think yesterday, around Blockchain potentially disrupting ridesharing programs, right? Why do you need the intermediary if you have this open ledger and these secure transactions you can do back and forth within this ecosystem? So there's another interesting disruption. Now do the ridesharing guys proactively get into that and promote it, or do they, almost in slow motion, get replaced by it at some point? So yeah, I think it's incumbent on all of us; you don't remain a market leader forever, every market leader gets disrupted at some point. The key is, do you disrupt yourself and remain the market leader, or do you let someone else disrupt you? And if you get disrupted, how quickly can you recover? >> Well you know, you talk to banking executives and they're all talking Blockchain. Blockchain is the future; Bitcoin was designed to disintermediate the bank, so many, many banks are embracing it, and so it comes back to the data.
So my question, the discussion I'd like to have, is how organizations are valuing data. You can't put data as a value, y'know, as an asset on your balance sheet; the accounting industry standards don't exist, and they probably won't for decades. So how are companies, y'know, quantifying data value? Is it limiting their ability to move toward a data-driven economy, is it a limiting factor that they don't have a good way to value their data and understand how to monetize it? >> So I have heard of cases where companies have put data on their balance sheet. It's not mainstream at this point, but I mean, you've seen it sometimes, even in some bankruptcy proceedings, where a company that's in bankruptcy protection says, "Hey, but this data asset is really where the value is." >> Vellante: And it's certainly implicit in valuations. >> Correct, I mean you see buyouts all the time based on the actual data sets, so yeah, that data set, they definitely treasure it, and they realize that a lot of their answers are within that data set. And they also, I think, understand that there's a lot of peeling the onion that goes on when you're starting to work through that data, right? You have your initial thoughts, then you correct something based on what the data told you to do, and then the new data comes in based on what your new experience is, and then all of a sudden you see what your next friction point is. You continue to knock down these things, so it is also very iterative, working with that data asset. But yeah, these companies are seeing it's very valuable when they collect the data. The other thing, though, is that the signal of what's driving your business may not be in your data; more and more often it may be in market data that's out there. So you think about social media data, you think about weather data, and being able to go and grab that information.
I remember watching the show Billions, where the hedge fund guys run satellites over, like, Wal-mart parking lots to try to predict the results for the quarter, right? Like, you're collecting all this data, but it's out there. >> Or maybe the value is not so much in the data itself, but in what it enables you to develop as a derivative asset, meaning a statistical predictive model or machine learning model that shows the patterns that you can then drive into recommendation engines and your target marketing, y'know, applications. So do you see any clients doing their valuation of data on those derivative assets? >> Altshuller: Yeah. >> In lieu of... >> In these new business models I see within corporations that have been around for decades, it's actual data offers that they make to, maybe, their ecosystem, their channel. "Here's data we have, here's how you interpret it, we'll continue to collect it, we'll continue to curate it, we'll make it available." And this is really what's driving your business. So yeah, data assets become something that companies are figuring out how to monetize. >> Of course, those derived assets will decay if those models, for example machine learning models, are not trained with fresh, y'know, data from the sources. >> And if we're not testing for new variables too, right? Like, if a variable was never in the model, you still have to have this discovery process that's always going on, to see what new variables might be out there, what new data sets, right? Like, if a new IoT sensor in the baseball stadium becomes available, maybe that one I talked about with elevation of the pitcher, like, until you have it you can't use it, but once you have it you have to figure out how to use it. >> Alright, let's bring it back to your business. What can I buy from you, what do you sell, what are your products?
>> Yeah, so our offerings in business analytics are Cognos Analytics, Watson Analytics, Watson Analytics for Social Media, and Planning Analytics. Cognos is the "what": what's going on in my business. Watson Analytics is the "why," and Planning Analytics is "what do we think is going to happen?" We're starting to do more and more smarter "what do we think's going to happen," based on these predictive models instead of just guessing what's going to happen. And then social media really gets into this idea of trying to find the signal, the sentiment. Not just around your own brand; it could be a competitor recall, and what the intent of that customer is now: are they going to start buying other products, or are they going to stick with the recall company? >> Vellante: Okay, so the starting point of your business being Cognos, one of the largest acquisitions ever in IBM's history, and of course it was all about CFOs and reporting, and Sarbanes-Oxley was a huge boon to that business, but as I was saying before, it never really got us to that predictive era. So you're layering those predictive pieces on top. >> That's what you saw on stage. >> Yes, that's right. So, we saw on stage, and then, are you selling to the same constituencies? Or how is the constituency that you sell to changing? >> Yeah, no, it's actually the same. Well, Cognos BI historically was selling to IT, and Cognos Analytics is selling to the business. If we take that leap forward, we're now in the market, and have been for a few years now, with Cognos Analytics. Yeah, that capability we showed onstage, where we talked about not only what's going on, why it's going on, what will happen next, and what we ought to do about it. We're selling that capability for them, the business user; the dashboard becomes like a piece of glass to them. And that glass is able to call services that they don't have to be proficient in, they just want to be able to use them.
It calls the weather service, it calls the optimization service, it calls the machine learning data science service, and it actually gives them information that's forward-looking and highly accurate, so they love it, 'cause it's cool, they haven't had anything like that before. >> Vellante: Alright, Marc Altshuller, thanks very much for coming back on The Cube, it's great to see you. >> Thank you. >> "You can't measure heart," as we say in Boston, but you better start measuring. Alright, keep right there everybody, Jim and I will be right back after this short break. This is The Cube, we're live from Fast Track Your Data in Munich. We'll be right back. (upbeat jingle) (thoughtful music)

Published Date : Jun 24 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jim Kobielus | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Marc Altshuller | PERSON | 0.99+
Michael Shrage | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Apple | ORGANIZATION | 0.99+
Marc | PERSON | 0.99+
Jim | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Munich | LOCATION | 0.99+
Peyton Manning | PERSON | 0.99+
Wal-mart | ORGANIZATION | 0.99+
Rob Thomas | PERSON | 0.99+
2004 | DATE | 0.99+
Cognos Analytics | ORGANIZATION | 0.99+
2003 | DATE | 0.99+
360 degrees | QUANTITY | 0.99+
yesterday | DATE | 0.99+
two | QUANTITY | 0.99+
360 degree | QUANTITY | 0.99+
last week | DATE | 0.99+
Millions | TITLE | 0.99+
Cognos | ORGANIZATION | 0.99+
one pattern | QUANTITY | 0.99+
Uber | ORGANIZATION | 0.99+
IBM Business Analytics | ORGANIZATION | 0.99+
Whole Foods | ORGANIZATION | 0.98+
Munich, Germany | LOCATION | 0.98+
first time | QUANTITY | 0.98+
LinkedIn | ORGANIZATION | 0.98+
Airbnb | ORGANIZATION | 0.98+
Altshuller | PERSON | 0.98+
2017 | DATE | 0.97+
HBR | ORGANIZATION | 0.97+
companies | QUANTITY | 0.97+
Vellante | PERSON | 0.97+
Facebook | ORGANIZATION | 0.96+
Cognos analytics | ORGANIZATION | 0.94+
Twitter | ORGANIZATION | 0.94+
GDPR | TITLE | 0.93+
decades | QUANTITY | 0.92+
DSS | ORGANIZATION | 0.9+
boston | LOCATION | 0.9+
last few years | DATE | 0.89+
Cognos BI | ORGANIZATION | 0.89+
Wal-marts | ORGANIZATION | 0.88+
one | QUANTITY | 0.87+
Watson analytics | ORGANIZATION | 0.86+
few years ago | DATE | 0.86+
Watts analytics | ORGANIZATION | 0.85+
2003, | DATE | 0.8+
single bucket | QUANTITY | 0.79+
OAuth | TITLE | 0.76+
Vellante | ORGANIZATION | 0.75+
WAZE | ORGANIZATION | 0.72+
Sarbanes-Oxley | ORGANIZATION | 0.7+
couple of years | QUANTITY | 0.67+
baseball | TITLE | 0.66+
years | QUANTITY | 0.61+
The Cube | COMMERCIAL_ITEM | 0.57+
fortune 500 | ORGANIZATION | 0.47+
Cube | PERSON | 0.46+
Cube | TITLE | 0.42+