
Search Results for The Computer Vision Group:

Bill Schmarzo, Hitachi Vantara | CUBE Conversation, August 2020



>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We are still getting through the year of 2020. It's still the year of COVID, and there's no end in sight, I think, until we get to a vaccine. That said, we're really excited to have one of our favorite guests. We haven't had him on for a while. I haven't talked to him for a long time. He used to, I think, hold the record for the most CUBE appearances of probably any CUBE alumni. We're excited to have him joining us from his house in Palo Alto. Bill Schmarzo, you know him as the Dean of Big Data, and he's got more titles. He's the chief innovation officer at Hitachi Vantara. We used to call him the Dean of Big Data kind of for fun, but Bill goes out and writes a bunch of books. And now he teaches at the University of San Francisco School of Management as an executive fellow. He's an honorary professor at NUI Galway, I think he just likes to go to that side of the pond, and he's a many-time author now. Go check out his author profile on Amazon: the "Big Data MBA," "The Art of Thinking Like A Data Scientist," and another Big Data book, kind of a workbook. Bill, great to see you. >> Thanks, Jeff, you know, I miss my time on theCUBE. These conversations have always been great. We've always kind of poked around the edges of things. A lot of our conversations have always been, I thought, very leading edge, and the title Dean of Big Data is courtesy of theCUBE. You guys were the first ones to give me that name, out of one of the very first Strata Conferences, where you dubbed me the Dean of Big Data because I taught a class there called the Big Data MBA, and look what's happened since then. >> I love it. >> It's all on you guys. >> I love it, and we've outlasted Strata. Strata doesn't exist as a conference anymore. So, you know, part of that I think is because Big Data is now everywhere, right? It's not the standalone thing. But there's a topic, and I'm holding in my hands a paper that you worked on with a colleague, Dr. Sidaoui, talking about: what is the value of data? What is the economic value of data? And this is a topic that's been thrown around quite a bit. I think you list a total of 28 reference sources in this document, so it's a well-researched piece of material, but it's a really challenging problem. So before we kind of get into the details, you know, from your position, having done this for a long time, and I don't know what you're doing today, but you used to travel every single week to go out and visit customers and actually do implementations and really help people think these through. When you think about the value, the economic value, how did you start to kind of frame that to make sense and make it kind of a manageable problem to attack? >> So, Jeff, the research project was eye-opening for me. And one of the advantages of being a professor is, you have access to all these very smart, very motivated, very free research sources. And one of the problems that I've wrestled with as long as I've been in this industry is, how do you figure out what data is worth? And so what I did is I took these research students and I stuck them on this problem. I said, "I want you to do some research. Let me understand, what is the value of data?" I've seen all these different papers and analysts and consulting firms talk about it, but nobody's really got this thing licked.
And so we launched this research project at USF, Professor Mouwafac Sidaoui and I together, and we were bumping along the same old path that everyone else had, which was hinged on: how do we get data on our balance sheet? That was always the motivation, because as a company we're worth so much more because our data is so valuable, and how do I get it on the balance sheet? So we're headed down that path, trying to figure out how you get it on the balance sheet. And then one of my research students, she comes up to me and she says, "Professor Schmarzo," she goes, "data is kind of an unusual asset." I said, "Well, what do you mean?" She goes, "Well, think about data as an asset. It never depletes, it never wears out. And the same dataset can be used across an unlimited number of use cases at a marginal cost equal to zero." And when she said that, it's like, "Holy crap." The light bulb went off. It's like, "Wait a second, I've been thinking about this entirely wrong for the last 30-some years of my life in this space. I've had the wrong frame. I keep thinking about this as an accounting conversation, and accounting determines valuation based on what somebody is willing to pay for it." So if you go back to Adam Smith, 1776, "Wealth of Nations," he talks about valuation techniques. And one of the valuation techniques he talks about is valuation in exchange. That is, the value of an asset is what someone's willing to pay you for it. So the value of this bottle of water is what someone's willing to pay you for it. So everybody fixates on this asset valuation-in-exchange methodology. That's how you put it on the balance sheet, that's how you run depreciation schedules, that dictates everything. But Adam Smith also talked about, in that book, another valuation methodology, which is valuation in use, which is an economics conversation, not an accounting conversation. And when I realized that my frame was wrong, yeah, I had the right book. I had Adam Smith, I had "Wealth of Nations," I had all that good stuff, but I hadn't read the whole book. I had missed this whole concept about the economic value, where value is determined not by how much someone's willing to pay you for it, but by the value you can drive by using it. So, Jeff, when that person made that comment, the entire research project, and I've got to tell you, my entire life, did a total 180, right? Just a total 180-degree change of how I was thinking about data as an asset. >> Right, well, Bill, it's funny though, that's kind of captured, I always think of kind of finance versus accounting, right? And you're right on accounting. And we learn a lot of things in accounting, basically we learn more that we don't know, but it's really hard to put it in an accounting framework, because as you said, it's not like a regular asset. You can use it a lot of times, you can use it across lots of use cases, it doesn't degrade over time. In fact, it used to be a liability, 'cause you had to buy all this hardware and software to maintain it. But if you look at the finance side, if you look at the pure-play internet companies like Google, like Facebook, like Amazon, and you look at their valuation, right? We used to have this thing, we still have this thing, called goodwill, which was kind of this capture between what the market established the value of the company to be, but which wasn't reflected when you summed up all the assets on the balance sheet, and you had this leftover thing, you could just plug in goodwill.
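A minimal sketch of the two valuation lenses contrasted above, valuation in exchange versus valuation in use. All figures, use-case names, and costs below are hypothetical, invented purely to show why value in use, unlike a one-time sale price, grows with every additional use case when the marginal cost of reusing the same dataset is roughly zero:

```python
# Toy comparison of the two valuation lenses (all figures hypothetical).
sale_price = 1_000_000  # value in exchange: what a buyer would pay for the dataset, once

use_cases = {  # hypothetical use cases and the annual value each drives by using the data
    "churn_reduction": 750_000,
    "demand_forecasting": 500_000,
    "targeted_promotions": 400_000,
}
acquisition_cost = 300_000  # one-time cost to collect and curate the data
marginal_reuse_cost = 0     # the same dataset serves every additional use case

value_in_use = (sum(use_cases.values())
                - acquisition_cost
                - marginal_reuse_cost * (len(use_cases) - 1))

print(f"value in exchange: ${sale_price:,}")
print(f"value in use:      ${value_in_use:,}")  # grows as use cases are added
```

Under the second lens, the dataset's worth scales with the number of use cases it feeds, which is the 180-degree reframe Schmarzo describes.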
And I would hypothesize that for these big giant tech companies, the market has baked in the value of the data, has kind of put in that present value on it for a long period of time over multiple projects. And we see it captured probably in goodwill, versus being kind of called out as an individual balance sheet item. >> So I don't know accounting. I'm not an accountant, thank God, right? And I know that goodwill is one of those things, if I remember from my MBA program, where when you buy a company and you look at the value you paid versus what it was worth, it gets stuck into this category called goodwill, because no one knew how to figure it out. So the company at book value was a billion dollars, but you paid five billion for it. Well, you're not an idiot, so that four billion extra you paid must be in goodwill, and they'd stick it in goodwill. And I think there's actually a way that goodwill gets depreciated as well. So it could be that, but I'm totally away from the accounting framework. I think that's distracting. Trying to work within the GAAP rules is more of an inhibitor. And we talk about the Googles of the world and the Facebooks of the world and the Netflixes of the world and the Amazons, companies that are great at monetizing data. Well, they're great at monetizing it because they're not selling it, they're using it. Google is using their data to dominate search, right? Netflix is using it to be the leader in on-demand videos. And it's how they use all the data, how they use the insights about their customers, their products, and their operations to really drive new sources of value. So to me, when you start thinking about it from an economics perspective, for example: why is the same car that I buy and an Uber driver buys more valuable to the Uber driver than it is to me? Well, the bottom line is, Uber drivers are going to use that car to generate value, right? That $40,000 car they bought is worth a lot more, because they're going to use it to generate value. For me it sits in the driveway and the birds poop on it. So, right, so it's this value-in-use concept. And by the way, most organizations really struggle with this. They struggle with this value-in-use concept. When you talk to them about data monetization, they think about the chief data officer trying to sell data, knocking on doors, shaking their tin cup, saying, "Buy my data." No, no one wants your data. Your data is more valuable for how you use it to drive your operations than it is to sell to somebody else. >> Right, right. Well, one of the other things that's really important from an economics concept is scarcity, right? And a whole lot of economics is driven around scarcity, and how do you price for scarcity so that the market evens out and the price matches up to the supply? What's interesting about the data concept is, there is no scarcity anymore. And you know, you've outlined, and everyone has, giant numbers going up and to the right in terms of the quantity of the data and how much data there is and is going to be. But what you point out very eloquently in this paper is that the scarcity is around the resources to actually do the work on the data to get the value out of the data. And I think there's just this interesting step function between just raw data, which has really no value in and of itself, right? Until you start to apply some concepts to it, you start to analyze it.
And most importantly, that you have some context by which you're doing all this analysis to then drive that value. And I thought it was a really interesting part of this paper, which is: get beyond the arguing that we're kind of discussing here and get into some specifics where you can measure value around a specific business objective. And not only that, but then now the investment of the resources on top of the data to be able to extract the value to then drive your business process with it. So it's a really different way to think about scarcity, not on the data per se, but on the ability to do something with it. >> You're spot on, Jeff, because organizations don't fail because of a lack of use cases. They fail because they have too many. So how do you prioritize? Now that scarcity is not an issue on the data side, but it is this issue on the people resources side, you don't have unlimited data scientists, right? So how do you prioritize and focus on those opportunities that are most important? I'll tell you, that's not a data science conversation, that's a business conversation, right? And figuring out how you align organizations to identify and focus on those use cases that are most important. Like in the paper, we go through several different use cases using Chipotle as an example. The reason why I picked Chipotle is because, well, I like Chipotle, so I could go there and I could write it off as research. But think about the number of use cases where a company like Chipotle or any other company can leverage their data to drive their key business initiatives and their key operational use cases. It's almost unbounded, which, by the way, is a huge challenge. In fact, I think part of the problem we see with a lot of organizations is that because they do such a poor job of prioritizing and focusing, they try to solve the entire problem with one big fell swoop, right? It's like the old ERP big bang projects. Well, I'm just going to spend $20 million to buy this analytic capability from company X, and I'm going to install it, and then magic is going to happen. And then magic is going to happen, right? And magic never happens. We get crickets instead, because the biggest challenge isn't around how do I leverage the data, it's about where do I start? What problems do I go after? And how do I make sure the organization is bought in to, basically use case by use case, building out your data and analytics architecture and capabilities? >> Yeah, and you start backwards from really specific business objectives in the use cases that you outline here, right? I want to increase my average ticket by X. I want to increase my frequency of visits by X. I want to increase the amount of items per order from X to 1.2X or 1.3X. So from there you get a nice kind of big revenue hit that you can plan around, and then work backwards into the amount of effort that it takes, and then you can come up with, "Is this a good investment or not?" So it's a really different way to get back to the value of the data, and more importantly, the analytics and the work to actually call out the information. >> The data and analytic technologies available to us, the very composable nature of these, allow us to take this use case by use case approach. I can build out my data lake one use case at a time. I don't need to stuff 25 data sources into my data lake and hope there's something valuable in there.
I can use the first use case to say, "Oh, I need these three data sources to solve that use case. I'm going to put those three data sources in the data lake. I'm going to go through the entire curation process of making sure the data has been transformed and cleansed and aligned and enriched, and the metadata, all the other governance, all that kind of stuff that goes on." But I'm going to do that use case by use case, 'cause a use case can tell me which data sources are most important for that given situation, and I can build up my data lake and build up my analytics one use case at a time. And there is a huge impact then, a huge impact, when I build out use case by use case, that does not happen otherwise. Let me throw in something that's not really covered in the paper, but it is very much covered in my new book that I'm working on, which is: in knowledge-based industries, the economies of learning are more powerful than the economies of scale. Now think about that for a second. >> Say that again, say that again. >> Yeah, the economies of learning are more powerful than the economies of scale. And what that means is, what I learned on the first use case that I build out, I can apply that learning to the second use case, to the third use case, to the fourth use case. So when I put my data into my data lake for my first use case, and the paper covers this, well, once it's in my data lake, the cost of reusing that data in the second, third, and fourth use cases is basically, you know, a marginal cost of zero. So I get this ability to learn about which datasets are most important and to reapply that across the organization. So this learning concept, I learn use case by use case. I don't have to do a big economies-of-scale approach and start with 25 datasets, of which only three or four might be useful, while I'm incurring the overhead for all those other non-important datasets because I didn't take the time to go through and figure out what are my most important use cases and what data do I need to support those use cases. >> I mean, should people even think of the data per se, or should they really readjust their thinking around the application of the data? Because the data in and of itself means nothing, right? 55, is that fast or slow? Is that old or young? Well, it depends on a whole lot of things. Am I walking or am I in a brand new Corvette? So it's funny to me that the data in and of itself really doesn't have any value and doesn't really provide any direction into a decision, or a higher-order predictive analytic, until you start to manipulate the data. So is it even the wrong discussion? Is data the right discussion? Or should we really be talking about the capabilities to do stuff with it, and really get people focused on that? >> So Jeff, there's so many points to hit on there. So the application of data is where the value is, and theCUBE, you guys used to be famous for saying, "Separating noise from the signal." >> Signal from the noise. >> Signal from the noise, right. Well, how do you know in your dataset what's signal and what's noise? Well, the use case will tell you. If you don't know the use case, you have no way of figuring out what's important. One of the things I still rail against, and it happens still: somebody will walk up to my data science team and say, "Here's some data, tell me what's interesting in it." Well, how do you separate signal from noise if you don't know the use case? So I think you're spot on, Jeff.
The way to think about this is: don't become data-driven, become value-driven, and value is driven from the use case, or the application or the use of the data to solve that particular use case. So organizations get fixated on being data-driven, and I hate the term data-driven. It's as if there's some sort of frigging magic from having data. No, data has no value. It's how you use it to derive customer, product, and operational insights that drive value. >> Right, so there's an interesting step function, and we talk about it all the time. You're out in the weeds, working with Chipotle lately to increase their average ticket by 1.2X. We talk more here kind of conceptually, and one of the great kind of conceptual holy grails within a data-driven economy is kind of working up this step function. And you've talked about it here: it's from descriptive, to diagnostic, to predictive, and then the holy grail, prescriptive, where you're way ahead of the curve. This comes up in tons of stuff around unscheduled maintenance, and you know, there's a lot of specific applications. But do you think we spend too much time kind of shooting for that fourth order, the greatest impact, instead of kind of focusing on the small wins? >> Well, you certainly have to build your way there. I don't think you can get to prescriptive without doing predictive, and you can't do predictive without doing descriptive and such. But let me throw a really interesting one at you, Jeff. I think there's even one beyond prescriptive, one we're talking more and more about: autonomous analytics, right? And one of the things that paper talked about that didn't click with me at the time was this idea of orphaned analytics. You and I kind of talked about this before the call here. And one thing we noticed in the research was that a lot of these very mature organizations, who had advanced from the retrospective analytics of BI to the descriptive, to the predictive, to the prescriptive, were building one-off analytics to solve a problem and getting value from it, but never reusing those analytics over and over again. They were done one-off and then they were thrown away, and these organizations were so good at data science and analytics that it was easier for them to just build from scratch than to try to dig around and find something that was never actually built to be reused. And so I have this whole idea of orphaned analytics, right? It didn't really occur to me, it didn't make any sense to me, until I read this quote from Elon Musk, and Elon Musk made this statement. He says, "I believe that when you buy a Tesla, you're buying an asset that appreciates in value, not depreciates, through usage." I was thinking, "Wait a second, what does that mean?" He didn't actually say "through usage." He said he believes you're buying an asset that appreciates, not depreciates, in value. And of course the first response I had was, "Oh, it's like a 1964-and-a-half Mustang. It's rare, so everybody is going to want these things. So buy one, stick it in your garage, and 20 years later you bring it out and it's worth more money." No, no, there's 600,000 of these things roaming around the streets, they're not rare. What he meant is that he is building an autonomous asset, and the more that it's used, the more valuable it's getting: the more reliable, the more efficient, the more predictive, the more safe this asset's getting.
So there is this level beyond prescriptive where we can think about how we leverage artificial intelligence, reinforcement learning, deep learning, to build these assets that, the more that they are used, the smarter they get. That's beyond prescriptive. That's an environment where these things are learning, and in many cases they're learning with minimal or no human intervention. That's the real aha moment. That's what I missed with orphaned analytics, and why it's important to build analytics that can be reused over and over again. Because every time you use these analytics in a different use case, they get smarter, they get more valuable, they get more predictive. To me that's the aha moment that blew my mind. I realized I had missed that in the paper entirely, and it took me basically two years to realize, d'oh, I missed the most important part of the paper. >> Right, well, it's an interesting take, really, on why the valuation, I would argue, is reflected in Tesla, which is a function of the data. And there's a phenomenal video, if you've never seen it, where they have autonomous vehicle day; it might be a year or so old. And he's got his number one engineer from, I think, the Microprocessor Group, The Computer Vision Group, as well as the autonomous driving group. And there's a couple of really great concepts I want to follow up on from what you said. One is that they have this thing called The Fleet. To your point, there's hundreds of thousands of these things, if they haven't hit a million, that are calling home, reporting home every day, as to exactly how everyone took the northbound 101 on-ramp off of University Avenue. How fast did they go? What line did they take? What G-forces did they take? And every one of those cars feeds into the system, so that when they do the autonomous update, not only are they using all the regular things that they would use to map out that 101 northbound entry, but they've got all the data from all the cars that have been doing it. And you know, when that other car, the autonomous car, a couple years ago hit the pedestrian, I think in Phoenix, which is not good: sad, killed a person, dark, tough situation. But we were doing an autonomous vehicle show, and a guy there made a really interesting point, right? When something like that happens, typically, if I'm in a car wreck or you're in a car wreck, hopefully not, I learn, the person that we hit learns, and maybe a couple of witnesses learn, maybe the inspector. >> But nobody else learns. >> But nobody else learns. But now with the autonomy, every single person can learn from every single experience, with every vehicle contributing data within that fleet. To your point, it's just an order-of-magnitude different way to think about things. >> Think about a 1% improvement compounded 365 times: it equals, I think, a 38X improvement. The power of 1% improvements over these 600,000-plus cars that are learning. By the way, even when the autonomous FSD, the full self-driving module, isn't turned on, it runs in shadow mode. So it's learning from the human drivers, the human overlords; it's constantly learning. And by the way, not only are they collecting all this data, I did a little research, I pulled out some of their job ads, and they've built a giant simulator, right? And they're basically every night simulating billions and billions more driven miles because of the simulator.
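The compounding arithmetic quoted above holds up on a quick back-of-the-envelope check, assuming "compounded 365 times" means 365 successive 1% gains:

```python
# 365 successive 1% improvements: (1.01)^365
daily_gain = 1.01
print(daily_gain ** 365)  # -> 37.78..., roughly the "38X" quoted above
```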
They are building, he's going to have, a simulator not only for driving; think about all the data he's capturing as these cars are riding down the road. By the way, they don't use lidar, they use video, right? So he's driving by malls, and he knows how many cars are in the mall. He's driving down roads, and he knows how old the cars are and which ones should be replaced. I mean, he's sitting on this incredible wealth of data. If anybody could simulate what's going on in the world and figure out how to get out of this COVID problem, it's probably Elon Musk and the data he's captured, courtesy of all those cars. >> Yeah, yeah, it's really interesting, and we're seeing it now. There's a new autonomous drone out, the Skydio, and they just announced their commercial product. And again, it completely changes the way you think about how you use that tool, because you've just eliminated the complexity of driving. I don't want to drive that, I want to tell it what to do. And so you're seeing this whole application space, with companies around things like measuring piles of coal and measuring these huge assets that are volumetrically measured, that these things can go and map out, and farming, et cetera, et cetera. So the autonomy piece, that's really insightful. I want to shift gears a little bit, Bill, and talk about, you had some theories in here about thinking of data as an asset, data as a currency, data as monetization. I mean, how should people think of it? 'Cause I don't think currency is very good; it's really not the kind of exchange of value we do with a classic asset. I think data as oil is horrible, right? To your point, oil gets burned up once and can't be used again; data can be used over and over and over. It's basically like feedstock for all kinds of stuff, but the feedstock never goes away. So again, is that even the right way to think about it? Do we really need to shift our conversation and get past the idea of data, and get much more into the idea of information, and actionable information, and useful information that, oh, by the way, happens to be powered by data under the covers? >> Yeah, good question, Jeff. Data is an asset in the same way that a human is an asset. But just having humans in your company doesn't drive value; it's how you use those humans. And so it's really, again, the application of the data around the use cases. So I still think data is an asset, but I'm not fixated on putting it on my balance sheet. The minute we talk about putting it on a balance sheet, I immediately put the blinders on. It inhibits what I can do. I want to think about this as an asset that I can use to drive value, value to my customers. So I'm trying to learn more about my customers' tendencies and propensities and interests and passions, and try to learn the same thing about my cars' behaviors and tendencies, and my operations' tendencies. And so I do think data is an asset, but it's a latent asset in the sense that it has potential value, but it actually has no value per se from putting it on a balance sheet. So I think it's an asset, but I worry about the accounting concept immediately hijacking what we can do with it. To me, the value of data becomes how it interacts with, maybe, other assets. So maybe data itself is not so much an asset as it is fuel for driving the value of assets. So, you know, it fuels my use cases. It fuels my ability to retain and get more out of my customers.
It fuels my ability to predict when my products are going to break down, and even to have products that self-monitor, self-diagnose, and self-heal. So data is an asset, but it's only a latent asset, in the sense that it sits there and it doesn't have any value until you actually put something to it and shock it into action. >> So let's shift gears a little bit, away from the data, and talk about the human factors. 'Cause you said one of the challenges is people trying to bite off more than they can chew, and we have the role of chief data officer now, and to your point, maybe that mucks things up more than it helps. But in all the customer cases that you've worked on, is there a consistent kind of pattern of behavior, personality, or types of projects that enables some people to grab those resources to apply to their data and have successful projects? Because to your point, there's too much data and there's too many projects, and you talk a lot about prioritization. But there are a lot of assumptions in the prioritization model, that you know a whole lot of things, especially if you're comparing project A over in group A with project B in group B, and the two may not really know the economics across that. But for an individual person who sees the potential, what advice do you give them? What kind of characteristics do you see, either in the type of the project, the type of the boss, or the type of the individual, that really lend themselves to a higher probability of a successful outcome? >> So first off, you need to find somebody who has a vision for how they want to use the data, and not just collect it, but how they're going to try to change the fortunes of the organization. So it always takes a visionary. It may not be the CEO; it might be somebody who's the head of marketing or the head of logistics, or it could be a CIO, it could be a chief data officer as well. But you've got to find somebody who says, "We have this latent asset we could be doing more with, and we have a series of organizational problems and challenges against which I could apply this asset, and I need to be the matchmaker that brings these together." Now, the most powerful tool I've found for marrying the latent capabilities of data with all the revenue-generating opportunities on the application side, because there's a countless number of them, is design thinking. Now, the reason why I think design thinking is so important is that one of the things design thinking does a great job of is giving everybody a voice in the process of identifying, validating, valuing, and prioritizing the use cases you're going to go after. Let me say that again: the challenge organizations have is identifying, validating, valuing, and prioritizing the use cases they want to go after. Design thinking is a marvelous tool for driving organizational alignment around where we're going to start, what's going to be next, why we're going to start there, and how we're going to bring everybody together. Big data and data science projects don't die because of technology failure. Most of them die because of passive-aggressive behaviors in the organization, because you didn't bring everybody into the process and everybody's voice didn't get a chance to be heard. And that one person whose voice didn't get a chance to be heard, they're going to get you. They may own a certain piece of data.
They may own something, but they're just lying there, waiting for their chance to come up and snag it. So what you've got to do is proactively bring these people together. This is part of our value engineering process: we have a value engineering process around envisioning, where we bring all these people together. We help them to understand how data in itself is a latent asset, but how it can be used, from an economics perspective, to drive all that value. We get them all fired up on how it can solve any one of these use cases. But you've got to start with one, and you've got to embrace this idea that I can build out my data and analytic capabilities one use case at a time. And the first use case I go after and solve makes my second one easier, and makes my third one easier, right? When you start going use case by use case, two really magical things happen. Number one, your marginal costs flatten. That is, because you're building out your data lake one use case at a time, and you're bringing all the important data into that data lake one use case at a time, at some point you've got most of the important data you need, and you don't need to add another data source. You've got what you need, so your marginal costs start to flatten. And by the way, if you build your analytics as composable, reusable, continuously learning analytic assets, not as orphaned analytics, pretty soon you have all the analytics you need as well. So your marginal costs flatten. But effect number two is that, because you have the data and the analytics, I can accelerate time to value and I can de-risk projects as I go, use case by use case. And so then the biggest challenge becomes not the data and the analytics; it's getting all the business stakeholders to agree on: here's the roadmap we're going to go after. This one's first, and this one is going first because it helps to drive the value of the second and third one. And then this one drives this one, and you create a whole roadmap that ripples through of how the data and analytics are driving value across all these use cases at a marginal cost approaching zero. >> So should we have chief design thinking officers instead of chief data officers, to really actually move the data process along? I mean, I first heard about design thinking years ago, actually interviewing Dan Gordon from Gordon Biersch, and he had just hired a couple of Stanford grads, I think that's where they pioneered it, and they were doing some work around introducing, I think it was a new apple-based alcoholic beverage, an apple cider, and they talked a lot about it. And it's pretty interesting. But I mean, are you seeing design thinking proliferate into the organizations that you work with, either formally as design thinking or as some derivation of it that pulls in some of those attributes that you highlighted that are so key to success? >> So I think we're seeing the birth of this new role that's marrying the capabilities of design thinking with the capabilities of data and analytics, and they're calling this dude or dudette the chief innovation officer. Surprise. >> Title for someone we know. >> And I've got to tell a little story. So I have a very experienced design thinker on my team.
Every one of our data science projects has a design thinker on it, because the nature of how you build and successfully execute a data science project models almost exactly how design thinking works. I've written several papers on it, and it's a marvelous way; design thinking and data science are different sides of the same coin. But my respect for design thinking took a major shot in the arm, a major boost, when the design thinker on my team, whose name is John Morley, introduced me to a senior data scientist at Google. I bought him coffee, and, this is back before I even joined Hitachi Vantara, I said, "So tell me the secret to Google's data science success? You guys are marvelous, you're doing things that no one else was even contemplating. What's your key to success?" And he giggles and laughs and he goes, "Design thinking." I go, "What the hell is that? Design thinking? I've never even heard of the stupid thing before." He goes, "I'll make a deal with you. Friday afternoon, let's pop over to Stanford's B-school and I'll teach you about design thinking." So I went with him on a Friday to the d.school, the design school over at Stanford, and I was blown away, not just by how design thinking was used to ideate and to explore, but by how powerful that concept is when you marry it with data science. What is data science in its simplest sense? Data science is about identifying the variables and metrics that might be better predictors of performance. It's that "might" that's the real key. And who are the people who have the best insights into what variables or metrics or KPIs you might want to test? It ain't the data scientists; it's the subject matter experts on the business side. And when you use design thinking to bring these subject matter experts together with the data scientists, all kinds of magic stuff happens. It's unbelievable how well it works. All of our projects leverage design thinking. Our whole value engineering process is built around marrying design thinking with data science, around this prioritization, around these concepts of: all ideas are worthy of consideration, and all voices need to be heard. And the idea of how you embrace ambiguity and diversity of perspectives to drive innovation, it's marvelous. But I feel like I'm a lone voice out in the wilderness, crying out, "Yeah, Tesla gets it, Google gets it, Apple gets it, Facebook gets it." But, you know, most other organizations in the world, they don't think like that. They think design thinking is this woo-woo thing. Oh yeah, you're going to bring people together and sing Kumbaya. It's like, "No, I'm not singing Kumbaya. I'm picking their brains, because they're going to help make the data science team much more effective in knowing what problems we're going to go after, and how I'm going to measure success and progress." >> Maybe that's the next Dean title for the next 10 years, the Dean of design thinking instead of data science, and who knew they're one and the same? Well, Bill, that's super insightful. I mean, it's so validated and supported by the trends that we see all over the place, just in terms of democratization, right? Democratization of the tools, more people having access to data, more opinions, more perspectives; more people that have the ability to manipulate the data and basically experiment does drive better business outcomes. And it's so consistent.
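A minimal sketch of the "might be better predictors" screening loop described above: subject matter experts propose candidate metrics in a design-thinking session, and the data science side scores each one against the business outcome. The metric names and numbers are invented, and simple correlation stands in for whatever validation a real project would use:

```python
# First-pass screen of SME-proposed candidate metrics (hypothetical data).
from statistics import correlation  # Python 3.10+

candidates = {  # candidate KPIs proposed in a design-thinking workshop (made up)
    "visits_per_month": [4, 6, 5, 9, 2, 7],
    "promo_redemptions": [1, 0, 2, 3, 0, 2],
    "app_sessions": [10, 14, 11, 22, 5, 18],
}
outcome = [38, 52, 44, 75, 20, 61]  # business outcome, e.g. average ticket, same six customers

# Rank candidates by |correlation| with the outcome, a crude test of which
# variables "might" matter, to be validated with proper modeling afterwards.
ranked = sorted(candidates.items(),
                key=lambda kv: abs(correlation(kv[1], outcome)),
                reverse=True)
for name, values in ranked:
    print(f"{name:18s} r = {correlation(values, outcome):+.2f}")
```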
>> If I could add one thing, Jeff: I think what's really powerful about design thinking is, when I think about what's happening with artificial intelligence or AI, there are all these conversations about, "Oh, AI is going to wipe out all these jobs, it's going to take all these jobs away." And what we're actually finding is that if we think about machine learning driven by AI, and human empowerment driven by design thinking, we're seeing the opportunity to exploit these economies of learning at the front lines, where every customer engagement, every operational execution, is an opportunity to gather not only more data, but to gather more learnings, to empower the humans at the front lines of the organization to constantly be seeking, to try different things, to explore, and to learn from each of these engagements. AI to me is incredibly powerful, and I think about it as a source of driving more learning, continuous learning and a continuously adapting organization, where it's not just the machines that are doing this, but the humans who've been empowered to do that. And chapter nine in my new book, Jeff, is all about team empowerment, because nothing you do with AI is going to matter squat if you don't have empowered teams who know how to take and leverage that continuous learning opportunity at the front lines of customer and operational engagement. >> Bill, I couldn't have said it better. I think we'll leave it there; that's a great close. When is the next book coming out? >> So today I do my second-to-last final review. Then it goes back to the editor, he does a review, and we start looking at formatting. So I think we're probably four to six weeks out. >> Okay, well, thank you so much, and congratulations on all the success. I just love how the Dean is really the Dean now, teaching all over the world, sharing the knowledge, and attacking some of these big problems. And like all great economics problems, often the answer is not economics at all; you completely twist the lens and don't think of it in that construct at all. >> Exactly. >> All right, Bill, thanks again and have a great week. >> Thanks, Jeff. >> All right. He's Bill Schmarzo, I'm Jeff Frick. You're watching theCUBE. Thanks for watching, we'll see you next time. (gentle music)

Published Date : Aug 3 2020


Greg Hughes, Veritas | Veritas Vision Solution Day NYC 2018



>> From Tavern on the Green in Central Park, New York, it's theCUBE, covering Veritas Vision Solution Day. Brought to you by Veritas. (robotic music) >> We're back in the heart of Central Park. We're here at Tavern on the Green, a beautiful location for the Veritas Vision Day. You're watching theCUBE, my name is Dave Vellante. We go out to the events, we extract the signal from the noise, and we've got the CEO of Veritas here, Greg Hughes, newly minted, nine months in. Greg, thanks for coming on theCUBE. >> It's great to be here, Dave, thank you. >> So let's talk about your nine months. What was your agenda for your first nine months? You know, they talk about the 100-day plan. What was your nine-month plan? >> Yeah, well look, I've been here for nine months, but I'm a boomerang. I was here from 2003 to 2010. I ran all of global services during that time and became the chief strategy officer after that. I was here during the merger with Symantec, and then ran the Enterprise Product Group. So I had all the products and all the engineering teams for all the enterprise products. And really my starting point is the customer. I really like to hear directly from the customer, so I've spent probably 50% of my time out and about, meeting with customers. And at this point, I've met with 100 different accounts all around the world. And what I'm hearing makes me even more excited to be here. Digital transformation is real. These customers are investing a lot in digitizing their companies, and that's driving an explosion of data. That data all needs to be available and recoverable, and that's where we step in. We're the best at that. >> Okay, so that was sort of alluring to you. You're right, everybody's trying to get digital transformation right. It changes the whole data protection equation. It kind of reminds me, on a much bigger scale, of virtualization. You remember, everybody had to rethink their backup strategies because you now had fewer physical resources. This is a whole different set of pressures, isn't it? It's like you can't go down, you have to always have access to data. Data is-- >> 24 by seven. >> Increasingly valuable. >> Yep. >> So talk a little bit more about the importance of data, the role of data, and where Veritas fits in. >> Well, our customers are driving new applications throughout the enterprise. So machine learning, AI, big data, internet of things. And that's all driving the use of new data management technologies: Cassandra, Hadoop, open source SQL, MongoDB. You've heard all of these, right? And then that's driving the use of new platforms: hyper-converged, virtual machines, the cloud. So all this data is popping up in all these different areas. And without Veritas, it can exist, but it'll just be in silos, and that becomes very hard to manage and protect. All that data needs to be protected. We're there to protect everything, and that's really how we think about it. >> The big message we heard today was you've got a lot of different clouds; you don't want to have a different data protection strategy for each cloud. So you've got to simplify that for people. Sounds easy, but from an R&D perspective, you've got a large install base, you've been around for a long, long time, so you've got to make investments to actually see that through. Talk about your R&D and investment strategy. >> Well, our investment strategy's very simple. We are the market share leader in data protection and software-defined storage, and that scale gives us a tremendous advantage.
We can use that scale to invest more aggressively than anybody else in those areas. So we can cover all the workloads, we can cover wherever our customers are putting their data, and we can help them standardize on one provider of data protection, and that's us. So they don't have to have the complexity of point products in their infrastructure. >> So I wonder if we could take just a little veer here and talk about the private equity play. You guys are the private equity exit, and you're seeing a lot of high-profile PE companies. It used to be where companies would go to die, and now it's becoming a way for the PE guys to actually get step-ups and make a lot of money by investing in companies, building communities, investing in R&D. Some of the stuff we've covered: we've followed Syncsort, BMC, Infor, a really interesting company, which is kind of an exit from PE, right? Dell, the biggest one of all. Riverbed, and of course Veritas. So there's like a new private equity playbook. It's something you know well from your Silver Lake days. Describe what that dynamic is like, and how it's changed. >> Oh look, private equity's been involved in software for 10 or 15 years. It's been a very important area of investment in private equity. I've worked for private equity firms, I've worked for software companies, so I know it very well. And the basic idea is: continue the investment. Continue the investment in the core products and the core customers, to make sure that there is continued enhancement and innovation in the core products. With that, there'll be continuity in customer relationships, and those customer relationships are very valuable. That's really the secret, if you will, of the private equity playbook. >> Well, and public markets are very fickle. I mean, they want growth now, they don't care about profits. I see you've got a very nice cash flow, you and some of the brethren that I mentioned. So that could be very attractive, particularly when, you know, public markets ebb and flow. The key is value for customers, and that's going to drive value for shareholders. >> That's absolutely right. >> So talk about the TAM. Part of a CEO's job is to continually find new ways; you're a strategy guy, so TAM expansion is part of the role. How do you look at the market? Where are the growth opportunities? >> We see our TAM, our total addressable market, as being around $17 billion, cutting across all of our areas, probably growing in the high single digits, 8%. That's kind of a big-picture view of it. When I think about it, I like to think about it from the themes I'm hearing from customers. What are our customers doing? They're trying to leverage the cloud. Most of our customers, which are large enterprises (we work with the blue-chip enterprises on the planet), are going to move to a hybrid approach. They're going to have on-premise infrastructure and multiple cloud providers. So that's really what they're doing. The second thing our customers are worried about is ransomware and ransomware attacks. Spear phishing works, the bad guys are going to get in, and they're going to put some bad malware in your environment. The key is to be resilient and to be able to restore at scale. That's another area of significant investment. The third, they're trying to automate. They're trying to make investments in automation to take out manual labor and reduce error rates. In this whole world, tape should go away.
So one of the things our customers are doing is trying to get rid of tape backup in their environment; tape is a long-term retention strategy. And then finally, if you get rid of tape and you have all your secondary data on disk or in the cloud, what becomes really cool is you can analyze all that data out of band, from the primary storage. That's one of the bigger changes I've seen since I've returned to Veritas. >> So $17 billion, obviously, that transcends backup. Frankly, going back to the early days of Veritas, I always thought of it as a data management company, and it's sort of returned to those roots. >> Backup, software-defined storage, compliance, all those areas are key to what we do. >> You mentioned automation. When you think about cloud and digital transformation, automation is fundamental. We had NBCUniversal on earlier, and the customer was talking about scripts, and how scripts are fragile and they need to be maintained and it doesn't scale. So he wants to drive automation into his processes as much as possible, using a platform. Sort of API-based, modern, microservices, containers, kind of using all those terms. What does that mean for you guys in terms of your R&D roadmap, in terms of the investments that you're making in those types of software innovations? >> Well, actually, one of the things we're talking about today is our latest release, NetBackup 8.1.2, which had a significant investment in APIs that allow our customers to use the product and automate processes, and tie it together with their infrastructure, like ServiceNow, or whatever they have. And we're going to continue full throttle on APIs. Just having lunch with some customers today, they want us to go even further with our APIs. So that's really core to what we're doing. >> So you guys are a little bit like the New England Patriots. You're the leader, and everybody wants to take you down. So you always start-- >> Nobody's confused me for Tom Brady. Although my wife looks... I'll stack her up against Gisele anytime, but I'm no Tom Brady. >> So okay, how do you maintain your leadership and your relevance for customers? A lot of VC money coming into the marketplace. Like I said, everybody wants to take the leader down. How do you maintain your leadership? >> We've been around for 25 years. We're very honored that 95% of the Fortune 100 are our customers. If you go to any large country in the world, it's very much like that. We work with the bluest of blue chips, the biggest companies, the most complex, the most demanding (chuckling), the most highly regulated. Those are our customers. We steer the ship based on their input, and that's why we're relevant. We're listening to them, and our customers are extremely relevant. We're going to help them protect, classify, and archive their data, wherever it is. >> So the first nine months was all about hearing from customers. So what's the next 12 to 18 months about for you? >> We're continuing to invest, and I'm delighted to talk about partnerships and where those are going as well. I think that's going to be a major emphasis for us, to continue to drive our partnerships. We can't do this alone. Our customers use products from a variety of other players. Today we had Henry Axelrod from Amazon Web Services here, talking about how we're working closely with Amazon. We announced a really cool partnership with Pure Storage. Our customers that use Pure Storage's all-flash arrays know their data's backed up and protected with Veritas and with NetBackup.
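As a rough illustration of the script-free, API-driven automation Hughes describes, here is a sketch of triggering and tracking a backup job over REST. The endpoint paths, payload fields, and token handling are hypothetical placeholders, not the actual NetBackup 8.1.2 API surface; the real routes live in the vendor's API documentation:

```python
# Hypothetical REST automation sketch (requires the third-party requests package).
import requests

BASE = "https://backup.example.com/api"  # assumed gateway URL
TOKEN = "..."                            # assumed pre-issued API token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def trigger_backup(policy: str, client: str) -> str:
    """Kick off a backup job and return its job id (illustrative endpoint)."""
    resp = requests.post(f"{BASE}/backups",
                         json={"policy": policy, "client": client},
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["jobId"]

def job_status(job_id: str) -> str:
    """Poll a job so a ticketing system such as ServiceNow can track it."""
    resp = requests.get(f"{BASE}/jobs/{job_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["state"]
```

The point is the one Hughes makes: once the operations are exposed as APIs, the fragile hand-maintained scripts the NBCUniversal guest complained about can be replaced with calls that a workflow system drives and retries.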
It's to continually make sure that, across this ecosystem of partners, we are the one player that can help our large customers. >> Great, and thank you for mentioning that; the ecosystem is a key part of it. The channel, that's how you continue to grow; you get a lot of leverage out of that. Well, Greg, thanks very much for coming on theCUBE. Congratulations on your-- >> Dave, thank you. >> On the new role. We are super excited for you guys, and we'll be watching. >> I enjoyed it, thank you. >> All right, keep it right there, everybody, we'll be back with our next guest. This is Dave Vellante, we're here in Central Park with Veritas Vision. Be right back. (robotic music)

Published Date : Oct 11 2018


VMworld Day 1 General Session | VMworld 2018


 

>> Announcer: From Las Vegas, it's theCUBE, covering VMworld 2018, brought to you by VMware and its ecosystem partners. Ladies and gentlemen, VMware would like to thank its global diamond sponsors and its platinum sponsors for VMworld 2018. With over 125,000 members globally, the VMware User Group connects VMware customers, partners, and employees to VMware information resources, knowledge sharing, and networking. To learn more, visit the booth in the Solutions Exchange or in VMvillage, and become a part of the community today. This presentation includes forward-looking statements that are subject to risks and uncertainties. Actual results may differ materially as a result of various risk factors, including those described in the 10-Ks, 10-Qs, and 8-Ks VMware files with the SEC. Ladies and gentlemen, please welcome Pat Gelsinger. >> Welcome to VMworld! Good morning. Let's try that again. Good morning! And I'll just say, it is great to be here with you today. I'm excited about the sixth year of being CEO. It was on this stage six years ago that Paul Maritz handed me the clicker, and that's the last he was seen. We have 20,000-plus here on site in Vegas, and on behalf of everyone at VMware, we're just thrilled that you would be with us — it's a joy and a thrill to be able to lead such a community. We have a lot to share with you today, and we really think about it as a community. It's my 23,000-plus employees, the souls that I'm responsible for, and it's our partners, the thousands — we kicked off our partner day yesterday — but most importantly, the VMware community is centered on you. We're very aware that this event would be nothing without you, and the role that we play at VMware is to build these cool, breakthrough innovations that enable you to do incredible things. You're the ones who take our stuff and do amazing things. Altogether, we have truly changed the world over the last two decades — and it is two decades. It's our anniversary: in 1998, five people started VMware, exactly 20 years ago. I was thinking about this over the weekend and it struck me: an anniversary, that's like old people. We're here, we're having our birthday, and it's a party. We can't have a drink yet, but next year — yeah, we can do that then. And I'll just say, the culture of this community is something that truly is amazing. In my 38 years — 38 years in tech, that sort of sounds like I'm getting old or something — the passion, the loyalty, almost cult-like behavior that we see in this team of people is simply thrilling to us. And we put together a little video to summarize the 20 years, some of that history, and some of the unique and quirky aspects of our culture. Let's watch that now. >> (Video) We knew we had something unique, and then we demonstrated that what was unique was also some of the reasons that we love VMware — like the community out there. So great. The technology — I love it. VMware is solid and much needed. I do love VMware. It's awesome. Super awesome. There's always someone that wants to listen and learn from us, and we've learned so much from them as well. And we reached out to VMware to help us start building what that future world looks like.
Since we're doing really cutting-edge stuff, there's really no better people to call, and VMware has been known for continuous innovation. There's no better way to learn how to do new things in IT than being with a company that's at the forefront of technology. >> What do you think? Don't you love that commitment? In the prep sessions for this, I thought, boy, what can I do to take my commitment to the next level? So, coming in a couple of days early, I went down the street to Bad Ass Tattoo. So it's time for all of us to take our commitment up a level, and sometimes, what happens in Vegas, you take home. Thank you. VMware has had this unique role in the industry over these 20 years, and we've seen just incredible things happen over this period of time. It's truly extraordinary what we've accomplished together. As we think back, what VMware has uniquely been able to do is, I'll say, bridge across. We've seen time and again that areas of innovation emerge and rapidly move forward, but then, as they become utilized by our customers, they create this natural tension: the business wants the flexibility to work across these silos of innovation. And from the start of our history, we have collectively had this uncanny ability to bridge across these cycles of innovation. Act one was clearly the server generation. It may seem a little bit of an ancient memory now, but you remember: you used to walk into your data center and it looked like the Louvre, the museum of IT past. You had your old pSeries and your zSeries, your SPARCs and your PA-RISCs and your x86 clusters, and you had to decide, well, which architecture am I going to deploy and run this on? We bridged across, and that was the magic of ESX. It just changed the industry when that occurred. I sort of call the early days of ESX and vSphere the intelligence test: if you weren't using it, you failed. Because yup — 10 servers become one, months become minutes. I still have people today who come up to me and reflect on their first experience of vSphere or vMotion, and it was like a holy moment in their lives and in their careers. Amazing. And act two: BYOD. Can we bridge across these devices? Users wanted to be able to come in and say, I have my device and I'm productive on it; I don't want to be forced to use the corporate standard. And maybe more than anything it was the power of the iPhone, introduced in 2007 — suddenly every employee said, this is exciting and compelling, I want to use it so I can be more productive. BYOD was the rage, and again it was a tough challenge, and once again VMware helped bridge across that seemingly insurmountable challenge. Clearly our Workspace ONE community today is bridging across these silos — not just managing devices, but truly enabling employee engagement and productivity. Maybe act three was the network. For 30 years we were bound to a physical view of what the network would be, and in that network we were bound to specific protocols. We had to wait months for network upgrades, and firewall rules — once every two weeks we'd upgrade them.
If you had a new application that needed a firewall rule — sorry, come back next month. Deep frustration among developers and CIOs; everyone was ready to break the chains. And that's exactly what we did with NSX and Nicira. The day we acquired it, Cisco's stock dropped, and the industry realized that networking had changed in a fundamental way — it will never be the same again. Maybe act four was this idea of cloud migration. If we were here three years ago, it was "student body right" to the public cloud: everything is going there. And I remember meeting with a federal CIO, and he comes up to me and says, I tried for the last two years to replatform my 200 applications — I got two done. All of a sudden the question was: how do I do cloud migration in an effective and powerful way? Once again we bridged across; we brought these two worlds together and eliminated this gap between private and public cloud, and we'll talk a lot more about that today. Maybe our next act is what we'll call the multi-cloud era. A recent survey by Deloitte said that the average business today is using eight public clouds, expected to grow to 10-plus. As you're managing different tools, different teams, different architectures and solutions, how do you again bridge across? This is what we will do in the multi-cloud era: we will help our community bridge across and take advantage of these powerful cycles of innovation, but be able to use them across a consistent infrastructure and operational environment. We'll have a lot more to talk about on this topic today. And maybe the last thing to bridge across is the most important: people and profit. Too often we think about this as an either-or question. As a business leader, am I worried about the people, or the planet, or the profits? Milton Friedman probably set us up for this issue decades ago when he said the sole purpose of a business is to make profits. You want to create a multi-decade dilemma for business leaders? Ask: could I have both people and profits? Could I do well and do good? And particularly for technology, I think we don't have a choice but to think about these together. We are permeating every aspect of business and society; we have the responsibility to do both. Of all the things that VMware has accomplished, I think this might be the one that I'm most proud of: we have demonstrated, with vSphere and the hypervisor alone, that we have saved over 540 million tons of CO2 emissions. That is what you have done. Can you believe that? 540 million tons is enough to power 68 percent of all households for a year. Wow. Thank you for what you have done. Or another translation: that's enough to drive a trillion miles in the average car, or to go to and from Jupiter — just in case that was on your itinerary — a thousand times. It's just incredible what we have done, and as a result, we were thrilled to accept this recognition on behalf of you: VMware was recognized as number 17 on the Fortune Change the World list last week. We really view it as accepting this honor on behalf of what you have done with our products and technology — tech as a force for good.
We believe fundamentally that is our opportunity, if not our obligation. Fundamentally, tech is neutral; we together must shape it for good. The printing press, by Gutenberg in 1440, was used to create mass education and learning materials; it can also be used for extremist propaganda. The technology itself is neutral, and our ecosystem has a critical role to play in shaping technology as a force for good. As we think about that, tomorrow we'll have the opportunity to host a very special guest, and I really encourage you to be here, on time, tomorrow morning: in Sanjay's session we'll have Malala, the Nobel Peace Prize winner, and for that there will be a bit of extra security as you come in — you understand. I just encourage you not to be late, because we see tech being a force for good in everything that we do at VMware, and I'm quite looking forward to the session tomorrow. Now, as we think about the future, I like to put it in this context: the superpowers of tech. Thirty-eight years in the industry, and I am so excited, because I think everything that we've done over the last four decades is creating a foundation that allows us to do more and go faster together. We're unlocking game-changing opportunities that have not been available to any people in the history of humanity. And I think about these four. Cloud: you have unimaginable scale. Literally, with your Amex card, you can go rent 10,000 cores for $100 per hour — or if you have Michael's Amex card, we can rent a million cores for $10,000 an hour. Thanks, Michael. But we also know that in many ways we're just getting started, and we have tremendous issues to bridge across incompatible clouds. Mobile: unprecedented scale. Literally, your application can reach half the humans on the planet today. But we also know that the other half of humanity, those in the lowest income brackets, are less than five percent penetrated. And we have customer examples of mobile phones raising impoverished farmers in Africa out of poverty, just by having a smartphone with proper crop information and field and weather guidance — that one tool alone lifting them out of poverty. AI: I really love the topic of AI. In 1986 I was the chief architect of the 80486 — some of you remember what that was. Yeah, you're my folk. And for those of you who don't, it was a really important chip at the time. My marketing manager comes running into my office and says, Pat, Pat, we must make the 486 a great AI chip. This is 1986. What happened? Nothing. AI is today a 30-year overnight success, because the algorithms and the data have gotten so much bigger that we can produce results and bring intelligence to everything. We're seeing dramatic breakthroughs in areas like healthcare — radiology, new drugs, diagnosis tools, and designer treatments. We're just scratching the surface, but AI has so many gaps yet; in many cases we don't even know why it works — we'll call that explainable AI. And edge and IoT: we're connecting the physical and the digital worlds as never before possible. We're bridging technology into every dimension of human progress, and today we're largely just hooking up things.
We have so much to do yet to make them intelligent, network-secured, automated, patched — bringing world-class IT to IoT. But it's not just that these are superpowers; each one of them is a superpower in its own right, and they're making each other more powerful as well. Cloud enables mobile connectivity; mobile creates more data; more data makes the AI better; AI enables more edge use cases; and more edge requires more cloud to store the data and do the computing. They're reinforcing each other, and with that, we know that we are speeding up. These superpowers are reshaping every aspect of society, from healthcare to education to transportation to financial institutions. This is how it all comes together. Just a simple example: how many of you have ever worn a hardhat? Pretty boring thing, and it has one purpose — keep things from smacking me in the head. Here's the modern hardhat: a complete heads-up display with AR and VR capabilities that gives the worker — safety workers, factory workers, supply people — the ability to see through walls and understand what's going on inside the equipment. I always wondered when I was a kid what it would be like to have X-ray vision — some of my thoughts weren't good about why I wanted it — but now you can have it. And imagine, in this environment, the complex application that sits behind it: you're accessing maybe 50-year-old building plans, you're accessing HVAC systems, through modern AR and VR capabilities and new containerized displays. Think about that application. John Gage famously said, "the network is the computer." Pat today says, "the application is now a network" — and typically a pretty complicated one. This is the VMware vision: to make that kind of environment realizable in every aspect of our business and community, and we simply have been on this journey — any device, any application, any cloud, with intrinsic security. This vision has been consistent; for those of you who have been joining us for a number of years, you've seen this picture, but it's been slowly evolving as we've worked piece by piece to refine and extend it, and we're going to use it as the compass for our discussion today. We're going to start with a focus on any cloud. As we think about this cloud topic, we see a multi-cloud world: hybrid cloud, public cloud, but increasingly edge and telco becoming clouds in their own right. We're not going to spend much time on it today, but this area of telco is an enormous opportunity for us and our community. Data centers and cloud today are over 80 percent virtualized; the telco network is less than 10 percent virtualized. Wow — an industry that's almost as big as ours, entirely unvirtualized, although the technologies we've created here can be applied over there in telco, and we have an enormous buildout coming with 5G environments emerging. What an opportunity for us: a virgin market right next to us, and we're getting some early mega-wins in this area using the technologies that you have helped us curate in the market. So we're quite excited about this topic area as well. So let's look at this full view of the multi-cloud, any-cloud journey.
We see that businesses are on a multi-cloud journey, and today we see this fundamentally in two paths: a hybrid cloud and a public cloud. These paths are complementary and coexisting, but today each is being driven by unique requirements and unique teams. Largely, the hybrid cloud is being driven by IT and operations; the public cloud is being driven more by developers and line-of-business requirements. So how do we deliver upon that multi-cloud environment? Let's start by digging in on the hybrid cloud. We've been talking about this subject for a number of years, and I want to give a very specific and crisp definition: the hybrid cloud is the public cloud and the private cloud cooperating, with consistent infrastructure and consistent operations. Simply put, a seamless path to and from the cloud, so that my workloads don't care if they're here or there; I'm able to run them in an agile, scalable, flexible, efficient manner across those two environments, whether it's my data center or someone else's. Bringing them together to make that work is the magic of VMware Cloud Foundation. VMware Cloud Foundation brings together compute — vSphere, the core of why we are here — combined with networking and storage, delivered through a layer of management and automation. The rule of the cloud is: ruthlessly automate everything. We laid out this vision of the software-defined data center seven years ago, and we've been steadfastly working on it. VMware Cloud Foundation provides this consistent infrastructure and operations, with integrated lifecycle management, automation, and patching. VMware Cloud Foundation is the simplest path to the hybrid cloud, and the fastest way to get VMware Cloud Foundation is hyperconverged infrastructure. With this, we've combined, integrated, and validated hardware as a building block. There are three ways that we deliver that integrated hyperconverged infrastructure solution, and we have by far the broadest ecosystem of partners to do it. First, validated hardware: a broad set of vSAN Ready Nodes from essentially everybody in the industry. Secondly, integrated appliances: VxRail, which we have co-engineered with our partners at Dell Technologies — and today, in fact, Dell is releasing new PowerEdge servers, a major step that will again be powering VxRail and VxRack systems. And third, we deliver hyperconverged infrastructure through a broad set of VMware cloud partners as well. At the heart of hyperconverged infrastructure is vSAN, and simply put, vSAN has been the engine that's been moving rapidly to take over the entire integration of compute and storage and expand into more and more areas. We have incredible momentum: over 15,000 customers for vSAN today, and for those of you who've joined us, we say thank you for what you have done with this product. Really amazing — 50 percent of the Global 2000 using it. VMware vSAN and VxRail are clearly becoming the standard for how hyperconverged is done in the industry. In our cloud partner programs, over 500 cloud partners are using vSAN in their solutions. And finally, vSAN is the largest in HCI software revenue.
Simply put, vSAN is the software-defined storage technology of choice for the industry, and we're seeing customers put it to work in amazing ways. VMware and Dell Technologies believe in tech as a force for good, and that it can have a major impact on the quality of life for every human on the planet — particularly for the most underdeveloped parts of the world, those who live on less than $2 per day. In fact, at this moment, five billion people worldwide do not have access to modern, affordable surgery. Mercy Ships is working hard to change the global surgery crisis. With greater than 400 volunteers, Mercy Ships operates the largest NGO hospital ship, delivering free medical care to the poorest of the poor in Africa. Let's hear from them now. >> (Video) When the ship shows up to port, literally people line up for days to receive state-of-the-art, life-changing, life-saving surgeries — tumors, cleft lips, blindness, birth defects. And not only that, the personnel are educating and training the local healthcare providers with new skills and infrastructure, so they can care for their own after the ship has left. Mercy Ships runs on VMware and Dell technologies, with VxRail, Dell Isilon, and data protection — we are the IT platform for Mercy Ships. Mercy Ships is now building their next-generation ship, called Global Mercy, which will more than double its lifesaving capacity. It's the largest charity hospital ship ever. It will go live in 2020, serving Africa. >> I personally plan on being there for its launch. It is truly amazing what they are doing with our technology. Thanks. So, we've seen this picture of the hybrid cloud, and we've talked about how we do that for the private cloud. Let's look over at the public cloud and dig into this a little more deeply. We're taking the incredible power of VMware Cloud Foundation and making it available to the leading cloud providers in the world — and with that, the partnership we announced almost two years ago with Amazon. On this stage last year, we announced the first generation of products; there's no better example of the hybrid cloud. And for that, it's my pleasure to bring to the stage my friend, my partner, the CEO of AWS. Please welcome Andy Jassy. >> Thank you, Andy. You honor us with your presence, and it really is a pleasure to come in front of this audience and talk about what our teams have accomplished together over the last year. Can you give us some perspective on that, Andy, and what customers are doing with it? >> Well, first of all, thanks for having me — I really appreciate it. It's great to be here with all of you. The offering that we have together, VMware Cloud on AWS, is very appealing to customers because it allows them to use the same software they've been using to manage their infrastructure for years and deploy it on AWS. We see a lot of customer momentum, in every imaginable vertical business segment: in transportation, you see it with Stagecoach; in media and entertainment, with Discovery Communications; in education, MIT and Caltech; in consulting, Accenture and Cognizant and DXC. You see it in every imaginable vertical business segment, and the number of customers using the offering is doubling every quarter.
So people are really excited about it, and probably the number one use case we see so far — although there are a lot of them — is customers looking to migrate on-premises applications to the cloud. A good example of that is MIT. They're right now in the process of migrating; in fact, they just migrated 3,000 VMs from their data centers to VMware Cloud on AWS. That would have taken years to do in the past, but they did it in just three months. It was really spectacular, and they're just a fun company to work with, the team there. But we're also seeing other use cases as well, and probably the second most common example is on-demand capabilities for things like disaster recovery. >> We have great examples of customers there — one in particular is Brink's. You've seen the Brink's security trucks, the armored trucks, coming by. They had a critical need to retire a secondary data center that they were using for DR, so we quickly built a DR protection environment for 600 VMs. They migrated their mission-critical workloads, and voilà — stable and consistent DR at 10 to 15 percent of the cost. It was just a great deal, and now they're eliminating that site and looking at other migrations as well. One of the things I believe, Andy, is that customers should never spend capital on DR ever again — with this kind of capability in place, that is just game-changing. And obviously we've been working on expanding our reach. A year ago we promised to make the service available with the global footprint of Amazon, and now we've delivered on that promise. In fact, today — or yesterday, if you're an Aussie down under — we announced Sydney as well, and now we're in the US, Europe, and APJ. >> Yeah, it's very exciting. Of course, Australia is one of the most virtualized places in the world, and it's pretty remarkable how fast European customers have started using the offering in just the quarter it's been out there. Of the many requests customers have had, probably the number one has been that we make the offering available in all the regions AWS has, and I can tell you, by the end of 2019 we will largely be there, including GovCloud. >> GovCloud — that's been huge for you guys. >> Yeah. It's a government-only region that a lot of federal government workloads live in, and we're pretty close to having the offering get FedRAMP authority to operate, which is a big deal and a game-changer for governments: they'll be able to use the familiar VMware tools, not just to run their workloads on premises but also in the cloud, with the data privacy and security requirements they need. So it's a real game-changer for government, too. >> Yeah, and as you can see by the picture here, basically before the end of next year, everywhere that you are and have an availability zone, we're going to be there. >> Yup. >> Yeah, let's get with it — we're a team, go faster! Okay. And it's not just making it available; it's this pace of innovation, and you guys have really taught us a few things in this respect. Since we went live in the Oregon region, we've been on a quarterly cadence of major releases. M2 was really about mission-critical at scale, and we added our second region.
We added our Hybrid Cloud Extension with M3. We began the global rollout and launched in Europe with M4, where we added a lot of these mission-critical governance aspects and started to attack all of the industry certifications. And today we're announcing M5, and with that, I think we have a little cool thing to talk about. >> Yeah. Two of the most important priorities for customers are cost and performance, and we have a couple of things today that hit both of those. On the storage side, we've combined the elasticity of Amazon Elastic Block Store, or EBS, with VMware's vSAN, and we've now provided a storage option that is very high capacity and much more cost-effective. You'll see this initially on the VMware Cloud on AWS R5 instances, which are memory-optimized compute instances. This will change the cost equation: you'll be able to use EBS by default, and it will be much more cost-effective for storage- or memory-intensive workloads. It's something you guys have asked for, it's been very frequently requested, and it hits preview today. And the other thing is that we've worked really hard together to integrate VMware's NSX with AWS Direct Connect, to provide private, even higher-performance connectivity between on premises and the cloud. So, very exciting new capabilities showing deep integration between the companies. >> Yeah, and that deep integration is really the thing we committed to. We have large engineering teams that are working literally every day on bringing these platforms together, fusing them in a deep and intimate way, so that we can deliver new services — just like Elastic DRS and this EBS integration — really powerful capabilities. And that pace of innovation continues. So next, maybe M6? I don't know; we'll see. We're continuing this pace of innovation: completing all of the capabilities of NSX, full integration for all of the Direct Connect capabilities and really expanding that, improving license capabilities on the platform, and we'll be adding PKS on top for expanded developer capabilities. Oh, thank you. Anyway, we're continuing this pace of innovation going forward, but I think we also have a few other things to talk about today, Andy. >> Yeah, I think we have some news that hopefully people here will be pretty excited about. We have a pretty big database business at AWS, on both the relational and the non-relational side, and the business is billions of dollars in revenue for us. On the relational side, we have a service called Amazon Relational Database Service, or Amazon RDS, that hundreds of thousands of customers are using, because it makes it much easier for them to set up, operate, and scale their databases. So many companies now are operating in hybrid mode, and will be for a while, and a lot of those customers have asked us: can you give us the ease of manageability of those databases, but on premises?
And so we talked about it, we thought about it, and we worked with our partners at VMware, and I'm excited to announce, today, right now, Amazon RDS on VMware. That will bring all the capabilities of Amazon RDS to VMware's customers for their on-premises environments. What you'll be able to do is provision databases, and scale the compute, memory, or storage for those database instances. You'll be able to patch the operating system or database engines. You'll be able to create read replicas to scale your database reads, and you can deploy those replicas either on premises or in AWS. You'll be able to deploy in a highly available configuration by replicating the data to different VMware clusters. You'll be able to create online backups that live either on premises or in AWS. And then, if you eventually want to move those databases to AWS, you'll be able to do so rather easily — you have a pretty smooth path. This is going to be available in a few months, for Oracle, SQL Server, MySQL, PostgreSQL, and MariaDB. I think it's very exciting for our customers, and it's also a good example of where we're continuing to deepen the partnership, listen to what customers want, and innovate on their behalf. >> Absolutely. Thank you, Andy. It is thrilling to see this, and as we said when we began the partnership, it was about a deep integration of our offerings and our go-to-market, but also about building this bi-directional hybrid highway to give customers the capabilities they wanted: cloud to on-premise, on-premise to cloud. It really is a unique partnership that we've built, in the momentum we're feeling with our customer base and the cool innovations that we're doing. Andy, thank you so much for joining us.
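For context on what "all the capabilities of Amazon RDS" looks like in practice, here is a minimal sketch using boto3, the AWS SDK for Python. The calls shown — provision an instance, attach a read replica, take a snapshot — are standard RDS APIs; the identifiers, sizes, and region are illustrative placeholders, and the exact parameters for targeting an on-premises VMware cluster are not shown, since they aren't covered in the announcement itself.

```python
# A minimal sketch of the RDS lifecycle operations described above,
# using boto3. All identifiers and sizes are illustrative placeholders.
import boto3

rds = boto3.client("rds", region_name="us-west-2")

# Provision a MySQL instance with automated backups retained for 7 days.
rds.create_db_instance(
    DBInstanceIdentifier="warehouse-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    BackupRetentionPeriod=7,
)

# Scale reads by attaching a read replica to the primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="warehouse-db-replica",
    SourceDBInstanceIdentifier="warehouse-db",
)

# Take a manual snapshot (an online backup) of the primary.
rds.create_db_snapshot(
    DBSnapshotIdentifier="warehouse-db-snap-001",
    DBInstanceIdentifier="warehouse-db",
)
```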
We really have just seen incredible momentum, and as you might have heard from our earnings call for the quarter we just finished, we're seeing customer momentum accelerating. It's really exciting to see how customers are starting to do the hybrid cloud at scale, with VMware Cloud Foundation available on Amazon and available on premise — very powerful. But it's not just the partnership with Amazon. We are thrilled to see the momentum of our VMware Cloud Provider Program, and this idea of VMware cloud providers has continued to gain momentum in the industry. Over five years, this program has accumulated more than 4,200 cloud partners in over 120 countries around the globe. It gives you choice: your local provider, specialty offerings, some of your local trusted partners, giving you the greatest flexibility to choose cloud providers that meet your unique business requirements. And last year we launched a program called VMware Cloud Verified, the most complete embodiment of the VMware Cloud Foundation offering by our cloud partners. This logo tells you that a provider has achieved the highest standard for cloud infrastructure, and that you can scale and deliver your hybrid cloud in partnership with them. In particular, we've been thrilled to see the momentum that we've had with IBM as a huge partner. Our business with them has grown extraordinarily rapidly — triple digits — not just in customer count, which is now over 1,700, but also in the depth of customers moving large portions of their workloads. And as you see by the picture, we're very proud of the scope of our partnerships on a global basis: the highest standard of hybrid cloud for you, the VMware Cloud Verified partners. Now, when we come back to this picture, we're growing in our definition of what the hybrid cloud means. Through VMware Cloud Foundation, we've been able to unify the private and the public cloud as never before, but many of you are also interested in how to extend that infrastructure further and farther, and we'll simply call that the edge. How do we move data center resources and capacity closer to where the data is being generated and the operations need to be performed? Simply: the edge. We'll dig into that a little bit more, but one of the things we offer today, with what we just talked about with Amazon and our VCPP partners, is that they can consume the full VMware Cloud Foundation as a service — but only in the public cloud. Project Dimension allows us to extend that as-a-service delivery to private, public, and the edge. Today we're announcing the tech preview of Project Dimension: VMware Cloud Foundation in a hyperconverged appliance. We've partnered deeply with Dell EMC and Lenovo as the first partners to bring this to the marketplace, built on that same proven infrastructure with a hybrid cloud control plane. Literally, just like we manage VMware Cloud today, we're able to do that for your on-premise, your small or remote office, or your edge infrastructure, through that exact same as-a-service management and control plane — a complete VMware-operated, end-to-end environment. This is Project Dimension: taking the full VMware Cloud Foundation stack and making it available in the cloud, at the edge, and on premise, a powerful solution operated by VMware. Project Dimension gives us a fundamental building block in our approach to making customers even more agile, flexible, and scalable, and it's a key component of our strategy. So let's click into that edge a little bit more. We think about the edge in the following layers. First, the compute edge: how do we get data, operations, and applications closer to where they need to be? If you remember, last year I talked about this pendulum swinging between centralization and decentralization; edge is a decentralization force. We're also excited to be moving to the edge of the devices, and we're doing that in two ways: one with Workspace ONE for human-optimized devices, and the second is Project Pulse, or VMware Pulse. Today we're announcing Pulse 2.0, which you can now consume as a service, with integrated security, and we've now scaled Pulse to support 500 million devices. Isn't that incredible? This is getting to scale — billions and billions. And finally, networking is a key component. We're stretching the networking platform
and evolving how that edge operates in a more cloud-like, as-a-service way, and this is where NSX SD-WAN by VeloCloud is such a key component of delivering edge network services as well. Taken together — the device side, the compute edge, and rethinking and evolving the networking layer — that is the VMware edge strategy. In summary, we see businesses on this multi-cloud journey: how do we bring their private and public clouds together in the hybrid cloud? But they're also on a journey for how they work and operate across the public clouds. In the public cloud we have this torrid innovation — Andy's here; he's announcing 1,500 new services a year of extraordinary innovation, and it's the same for Azure or Google or IBM Cloud — but it also creates complexity. As we said, businesses are using multiple public clouds, so how do I operate them? How do I make them work? How do I keep track of my accounts and users? That creates a set of cloud operations problems, and in the complexity of doing that, we see these common themes of cloud cost, compliance, and analytics keep coming up. We're seeing in our customers that a new role is emerging: the cloud operations role, the person who's figuring out how to make these multi-cloud environments work and keep track of who's using what, and which data is landing where. Today I'm thrilled to tell you that VMware is acquiring the leader in this space: CloudHealth Technologies. Thank you. CloudHealth Technologies supports Amazon, Azure, and Google today. They have some 3,500 customers, some of the largest and most respected brands, and a SaaS business with rapidly expanding feature sets. We will take CloudHealth and make it a fundamental platform and branded offering from VMware. We will add many of the other VMware components into this platform, such as our Wavefront analytics and our CloudCoreo compliance, and many of the other VMware products will become part of the CloudHealth suite of services. We will enable that through our enterprise channels as well as through our MSP and VCPP partners. Simply put, we will make CloudHealth the cloud operations platform of choice for the industry. I'm thrilled today to have Joe Kinsella, the CTO and founder — Joe, please stand up. Thank you, Joe. To you and your team of a couple hundred, mostly in Boston: welcome to the VMware family, the VMware community. It is a thrill to have you part of our team. We're also announcing today — and you can think of this much like we had vRealize Operations and vRealize Automation — the complement to CloudHealth's operations: VMware Cloud Automation. Some of you might have heard of this in the past as Project Tango. Today we're announcing the initial availability of VMware Cloud Automation services: assemble and manage complex applications, automate their provisioning and cloud services, and manage them through a brokerage service.
Together, the initial availability of Cloud Automation services and the acquisition of CloudHealth as a platform give VMware the most complete set of multi-cloud management tools in the industry, and we're going to do so much more. So, we've seen this picture of the multi-cloud journey that our customers are on, and we're working hard to bridge across these worlds of innovation. We're doing many other things you're going to hear about at the show this year: we're also giving a tech preview of the VMware Cloud Marketplace for our partners and customers, and today Dell Technologies is announcing their cloud marketplace, providing a self-service portfolio of Dell EMC technologies. We're fundamentally in a unique position to accelerate your multi-cloud journey. So we've built out this any-cloud piece, but right in the middle of any cloud is the network, and when we think about the network, we're just so excited about what we have done and what we're seeing in the industry. Let's click into this a little further. We've gotten a lot done over the last five years in networking. Look at these numbers: 80 million switch ports have been shipped. We are now 10x larger than number two in software-defined networking. We have over 7,500 customers running on NSX, and maybe the stat that I'm most proud of: 82 percent of the Fortune 100 has now adopted NSX. You have made NSX the standard in software-defined networking. Thank you very much. When we think about this journey we're on, we started by saying, hey, we've got to break the chains inside the data center, as we said, and NSX became the software-defined networking platform. We started to deliver it through our cloud provider partners — IBM made a huge commitment to partner with us and deliver this to their customers. We then said, boy, we're going to make it fundamental to all of our cloud services, including AWS, and we built this bridge called the Hybrid Cloud Extension. We said we're going to build it natively into what we're doing with telcos, with Azure, and with Amazon as a service. We acquired the SD-WAN leader, VeloCloud — the hottest product in VMware's portfolio today — with the opportunity to fundamentally transform branch and wide-area networking, and we're extending it to the edge. Literally, the world has become this complex network. We have seen the world go from the old network, defined by rigid boundaries, to a distributed world where, simply put, hardware cannot possibly keep up. We're empowering customers to secure their applications and data regardless of where they sit, and when we think of the virtual cloud network, we say it's three fundamental things: a cloud-centric networking fabric, with intrinsic security, all of it delivered in software. The world is moving from data centers to centers of data, and they need to be connected — NSX is how we will do that. Now, VMware is well known for not just talking but also showing, so no VMworld keynote is complete without great demonstrations, because you shouldn't believe me, only what we can actually show. To do that, I'm going to have our CTO come on stage. The CTO is the certified smart guy — he's also known as the chief talking officer — and today he's my demo partner. Please welcome VMware CTO Ray O'Farrell to the stage. >> Good morning, Pat. How are you doing? >> Oh, it's great, Ray, and thanks so much for joining us. I promised that we're going to show off some pretty cool stuff here.
We've covered a lot already, but are you up to the task? >> We're going to try to run through a lot of demos. We're going to do it fast, and you're going to have to keep me on time. >> Ask an awkward question, slow me down — okay, that's my fault if we run long. I got it. Let's jump right in. >> So, I'm a CTO, and I get to meet lots of customers. A few weeks ago I met the CIO of a large distribution company, and she described her IT infrastructure as consisting of a number of central data centers, but she also spoke of a large number of warehouses globally, each of which had local hyperconverged compute and storage, primarily running surveillance and warehouse-management applications. And she posed me four questions. The first question she asked: how do I migrate one of these data centers to VMware Cloud on AWS? I want to get out of one of these data centers. >> Okay, sounds like exactly what Andy and I were just talking about. >> Exactly what you spoke to a few moments ago. She also wanted to simplify the management of the infrastructure in the warehouses themselves. >> Okay — the edge and smaller data centers you've got out there. >> Her applications at the warehouses needed to run locally, but her developers wanted to develop using cloud infrastructure and cloud APIs — a little bit like the RDS announcement we just heard. And her final question was looking to the future: make all this complicated management go away. I want to be able to focus on my applications, because that's what my business is about. So give me some new ways to automate all of this infrastructure, from the edge to the cloud. >> Sounds pretty clear. Can we do it? >> Yes we can. So we're going to dive right into the first demo: VMware Cloud on AWS, the best solution for accelerating this public cloud journey. Can we start the demo, please? What you're looking at here is one of those data centers, and you should be familiar with this product — it's the familiar vSphere Client. You can see it's got a bunch of virtual machines running in there. These are the virtual machines that we now want to migrate to VMC on AWS, so we're going to go through that migration right now. To do that, we use a product you've seen already: HCX. However, HCX has got some new cool features since the last time we demoed it, probably on this stage, last year. One of those in particular is bulk migration — we want to move the data center en masse — and the concept here is Cloud Motion with vSphere Replication. What this does is replicate the underlying storage of the virtual machines using vSphere Replication, so if and when you want to do the final migration, it actually becomes a vMotion. That's what you see going on right here: the replication is in place, and when you want to move those virtual machines, what you do is a vMotion. The key thing to understand is that this is an actual vMotion: the VMs remain live as they're migrating, just as they would in a vMotion across one particular infrastructure. You get a complete application or data center migration with no downtime, with a standard vMotion kind of experience. >> Wow, that is really impressive. >> That's correct.
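For readers who want to see the underlying primitive the demo builds on, here is a minimal sketch using pyVmomi, the Python SDK for the vSphere API. It performs a single live relocation — the vMotion-style move described above. HCX itself layers replication, scheduling, and orchestration on top of this, so this is only the basic building block, and the hostnames, credentials, and object names are illustrative placeholders.

```python
# A minimal sketch of a scripted live VM relocation with pyVmomi.
# All hostnames, credentials, and inventory names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the inventory and return the first managed object with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find_by_name(vim.VirtualMachine, "warehouse-app-01")
dest_host = find_by_name(vim.HostSystem, "esxi-target.example.com")
dest_ds = find_by_name(vim.Datastore, "vsanDatastore")

# RelocateSpec describes where the live VM should land; the relocation
# behaves like a vMotion, so the VM keeps running during the move.
spec = vim.vm.RelocateSpec(host=dest_host, datastore=dest_ds)
task = vm.RelocateVM_Task(spec=spec)
print("Relocation started:", task.info.key)
Disconnect(si)
```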
>> Wow. >> So one of the other things to note is that as we move these virtual machines from the on-prem infrastructure to the VMC on AWS infrastructure — well, when we set up the cloud on VMC on AWS, we only set up four hosts, and that might not be enough, because she is going to move the whole infrastructure of that data center. >> Now, this was something you and Andy referred to briefly earlier: this concept of Elastic DRS. >> What Elastic DRS does is allow VMC on AWS to react to the workloads as they're being created and pulled onto the infrastructure, and automatically pull new hosts into the VMC infrastructure along the way. So what you're seeing here is essentially VMC growing the infrastructure to meet the needs of the workloads themselves. >> Very cool. >> And beyond Elastic DRS, we also see the EBS capabilities — again, you guys spoke about this too. This is the ability to take the huge amount of storage that Amazon has in EBS and front it with vSAN: you get the same experience of vSAN, but with an enormous amount of storage capability behind it. >> Wow, that's incredible. I'm excited about this — it's going to enable customers to migrate faster and larger than ever before. >> Correct. Now, she had a series of other questions. The second question was: what about all those data centers and edge applications that I did not move? And this is where we introduce the project you've heard of already today, called Project Dimension. What this does is give you the simplicity of VMware Cloud, but bring it out to the edge. What's basically going on here is that VMC on AWS is a service which manages your infrastructure in AWS; we now stretch that service out into your infrastructure, in your data center and at the edge, allowing us to manage that infrastructure in the same way. Once again, let's dive into a demo and take a look at what this looks like. What you've got here is a familiar series of services available to you, one of which is Project Dimension. When you enter Project Dimension, you first get a view of all the different infrastructure you have available: your data centers, your edge locations. You can then dive deeply into one of these to get a closer look at what's going on. Here we're diving into one of these warehouses, and we see a problem: there's a networking problem going on in this warehouse. How do we know? We know because VMware is running this as a managed service. We are directly monitoring your infrastructure, we discover something going wrong, and we automatically create the service request, so somebody is dealing with it. You have visibility into what's going on, but the VMware managed service is already chasing the problem for you. >> Oh, very good. So now we're seeing this dispersed infrastructure with Project Dimension, but what's running on it? >> Well, before we get to what's running on it, you've got another problem: if you're managing a lot of infrastructure like this, you need to keep it up to date. And so, once again, this is where the VMware managed service kicks in — we manage that infrastructure in terms of patching and updating it for you.
As an example, when we release a security patch — here's one for the recent L1 Terminal Fault — the VMware managed service is already on it, making sure that your on-prem and edge infrastructure is up to date. >> Very good. Now, what's running? >> So, we mentioned this case of software running at the edge infrastructure itself, and these are workloads running locally in those edge locations. This is a surveillance application — you can see it here at the bottom, it says "warehouse safety monitor." It's an application which gathers images and then stores them in a database; you can see the MySQL database on top there. Now, this is where we leverage the technology you just learned about when Andy and Pat spoke about the ability to take RDS and run it on your on-prem infrastructure. The block of virtual machines you see at the moment are the RDS components from Amazon, running in your infrastructure or in your edge location. This gives your developers the ability to leverage and operate against those APIs, while the actual database infrastructure runs on prem. You might be doing that for performance reasons, because of latency, or simply because this data center is not always connected to the cloud. When you take a look under the hood at what's going on, what you actually see is vSphere — a modified version of vSphere. You see this new concept of a custom availability zone: that is the availability zone running on your infrastructure which supports RDS. What's more interesting is if you flip back to the Amazon portal — this is typically what your developers are going to do — once again you see an availability zone in your Amazon portal. This is the availability zone running on your equipment in your data center. So we've truly taken that RDS infrastructure and moved it to the edge: the developer sees what they're comfortable with, and the infrastructure team sees what they're comfortable with, bridging those two worlds. >> Fabulous. Right, so the final question, of course, was: what's next? How do I begin to look to the future and have all of my infrastructure handled in an automated fashion? And when you think about that, one of the questions is how we leverage new technologies such as AI and ML to do that. >> So what you've got here is: how do I blend AI and ML with the power of what's in the data center itself? >> And we can do that — we're bringing AI and ML and fusing them together as never before, to truly change how the data center operates. >> Correct, and it is this merging of these things together which is extremely powerful in my mind. It's a little bit like a self-driving vehicle: a self-driving vehicle driving down the street is consuming information from all of the environment around it — other vehicles, what's happening, everything from the weather — but it also has a lot of built-in knowledge, built up through self-learning and training along the way. >> And we've been collecting lots of that data for decades. >> Exactly, and we've got all of that from all the infrastructure we have; we can now bring it to bear. So what we're focusing on here is a project called Project Magna.
>> Project Magna leverages all of this infrastructure. What it does is help connect the dots across huge datasets, gaining deep insight across the stack, all the way from the application to the hardware and infrastructure, to the public cloud, and even the edge. It leverages hundreds of control points to optimize your infrastructure on KPIs of cost and performance, and even on user-specified policies. This is the use of machine language — I'm sorry, machine learning; I'm going back to some very early years there — the use of machine learning and AI to fundamentally transform how you automate these data centers. The goal is true automation of your infrastructure, so you get to focus on the applications which really serve the needs of your business. >> Yeah, and maybe you could think about it this way: in the past we would have described the software-defined data center, but in the future we're calling it the self-driving data center. We're taking that same acronym and redefining it, because the self-driving data center — this deep infusion of AI and machine learning into the management and automation, into the storage, into the networking, into vSphere — we believe is fundamentally an enormous advance in how customers can take advantage of new capabilities from VMware. >> Correct, and you're already seeing some of this in pieces, in projects such as some of the work we do in Wavefront. This is how we take that to a new level, and that's what Project Magna will do. >> So let's summarize what we've seen in these demos as we walked through each of them very quickly. First, you saw VMware Cloud on AWS: how do I migrate an entire data center to the cloud with no downtime? Check. We saw Project Dimension: get the simplicity of VMware Cloud in the data center, and manage it at the edge as a managed service. Check. Amazon RDS on VMware — cool demo — seamlessly deploy a cloud service to an on-premises environment; in this case RDS, and we've got that one coming in M5. And then finally Project Magna: looking to the future, how do we leverage AI and ML to self-optimize the virtual infrastructure? Well, how did Ray do as our demo guy? Thank you, Ray. So, coming back to this picture, our GPS for the day: we've covered any cloud, so let's click now into any application. As we think about any application, we really view it as this breadth of traditional, cloud-native, and SaaS. Kubernetes is quickly — maybe spectacularly — becoming seen as the consensus way that containers will be managed and automated, as the framework that modern app teams are looking at for their next-generation environment, and it's quickly emerging as key to how enterprises build and deploy their applications today. Containers are efficient, lightweight, and portable; they have lots of value for developers. But they also need to be run and operated, and they bring many infrastructure challenges as well: management, automation, patching, lifecycle updates. The efficient rollout of new application services can be accelerated with containers, but we still have these infrastructure problems, and one thing we want to make clear is that the best way to run a container environment is on a virtual machine. In fact, every leader in public cloud runs their containers in virtual machines.
Google, the creator and arguably the world leader in containers, runs them all in VMs — both their internal IT and what they run as GKE for external users as well. They just announced GKE On-Prem on VMware for their container environments. Google and all major clouds run their containers in VMs, and simply put, it's the best way to run containers. And through what we have done collectively, we have solved the infrastructure problems. And as we saw earlier, cool new container apps are also typically some ugly combination of cool new and legacy and existing environments as well. How do we bridge those two worlds? And today, as people are rapidly moving forward with containers and Kubernetes, we're seeing a certain set of problems emerge. And Dan Kohn, right, the director of CNCF — the, uh, Cloud Native Computing Foundation, the body for Kubernetes collaboration and the group that sort of stewards the standardization of this capability — he points out these four challenges: how do you secure them, how do you network them, how do you monitor them, and what do you do for the storage underneath them? Simply put, VMware is out to be, is working to be, is on our way to be the dial tone for Kubernetes. Now, some of you who are in your twenties might not know what that means, so wander over to a gray hair or come and see me afterward — we'll explain what dial tone means to you. Or, maybe stated differently: the enterprise-grade standard for Kubernetes. And for that, we are working together with our partners at Google, as well as Pivotal, to deliver VMware PKS — Kubernetes as an enterprise capability. It builds on BOSH, the lifecycle engine that's foundational to the Pivotal offerings today. It builds on, and is committed to staying current with, the latest Kubernetes releases. It builds on NSX, the SDN for container networking, and additional contributions we're making, like Harbor, the VMware open source contribution for the container registry. It packages those together and makes them available in hybrid cloud as well as public cloud environments. With PKS, operators can efficiently deploy, run, and upgrade their Kubernetes environments on SDDC or on all public clouds, while developers have the freedom to embrace and run their applications rapidly and efficiently. Simply put, PKS: the standard for Kubernetes in the enterprise. And underneath that, NSX is emerging as the standard for software-defined networking. But when we think about — and we saw that quote on the challenges of Kubernetes today — we see that networking is one of the huge challenges underneath that, and in a containerized world, things are changing even more rapidly; my network environment is moving more quickly. NSX provides the ability to easily automate networking and security for rapid deployment of containerized environments. It fully supports PKS, fully supports Pivotal's application service, and we're also committed to fully supporting all of the major Kubernetes distributions, such as Red Hat, Heptio, and Docker as well. NSX: the only platform on the planet that can address the complexity and scale of container deployments. Taken together: VMware PKS, the production-grade Kubernetes for the enterprise, available on hybrid cloud, available on major public clouds. Now, let's not just talk about it again — let's see it in action. And please walk up to the stage: Lindy Carter, senior director of cloud native marketing for VMware, with Ray. Thank you.
Hi everybody. So we're going to talk about PKS, because more and more new applications are built using Kubernetes and using containers, and with VMware PKS we get to simplify the deployment and the operation of Kubernetes at scale. Lindy, you're the expert on all of this, right? So can you take us through the scenario of how VMware PKS can really help a developer operating in a Kubernetes environment develop great applications, but also, from an administrator point of view, how I can really handle things like networking, security, and those configurations? Sounds great. I'd love to dive into the demo here. Okay. Our demo is VMware PKS running Kubernetes on vSphere. Now, PKS has a lot of cool functions built in, one of which is NSX, and today what I'm going to show you is how NSX will automatically bring up network objects as Kubernetes namespaces are spun up. So we're going to start with the vSphere client, which has been extended to run PKS-deployed Kubernetes clusters. We're going to go into PKS instance one, and we see that there are five clusters running. We're going to select one of the clusters, called application production, and we see that it is running NSX. Now, a cluster typically has multiple users, and users are assigned namespaces; these namespaces are essentially a way to provide isolation and dedicated resources to the users in that cluster. So we're going to check how many namespaces are running in this cluster, and we've brought up the Kubernetes UI. We're going to click on namespaces, and we see that this cluster currently has four namespaces running. What we're going to do next is bring up a new namespace and show that NSX will automatically bring up the network objects required for that namespace. So to do that, we're going to upload a YAML file — and your developer may actually use the kubectl command to do this as well. We're going to check the namespaces, and there it is: we have a new namespace called pks-rocks. Yeah. Okay. Now, that's great — we have a new namespace, and now we want to make sure it has the network elements assigned to it. So we're going to go to the NSX manager and hit refresh, and there it is: pks-rocks has a logical router and a logical switch automatically assigned to it, and it's up and running. So I want to interrupt here, because you made this look so easy, right? I'm not sure people realize the power of what happened here. The developer went in using the Kubernetes API — infrastructure they're familiar with — added a new namespace, and behind the scenes PKS automatically took care of the networking: a combination of NSX and what we do at PKS to truly automate this function. Absolutely. So this means that if you are on the infrastructure operations side, you don't need to worry about your developers spinning up namespaces, because NSX will take care of bringing the networking up, and then bringing it back down when the namespace is no longer used. So Ray, that's not all. Now, I was in operations before, and I know how hard it is for enterprises to roll out a new product without visibility. Right — so PKS took care of those day-two operational needs as well. While it's running your clusters, it's also exporting metadata, so that your developers and operators can use Wavefront to gain deep visibility into the health of the cluster, as well as resources consumed by the cluster. So here you see the Wavefront UI, and it's showing you the number of nodes running, active pods, inactive pods, et cetera.
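For readers who want the shape of the developer's step in that demo: creating a namespace is a single Kubernetes API call, whether it's made through kubectl, a YAML upload, or code. A minimal sketch using the official Python client — the kubeconfig setup is an assumption, not a detail from the demo:

```python
from kubernetes import client, config

# Assumes a kubeconfig already pointing at the PKS-provisioned cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

# Create the namespace shown in the demo; per the demo, the platform wires up
# the NSX objects (logical router/switch) behind this same API call.
ns = client.V1Namespace(metadata=client.V1ObjectMeta(name="pks-rocks"))
v1.create_namespace(ns)

# List namespaces, mirroring the Kubernetes UI check in the demo.
for item in v1.list_namespace().items:
    print(item.metadata.name)
```

The equivalent one-liner would be `kubectl create namespace pks-rocks`; the point of the demo is that the network plumbing needs no extra step either way.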
In Wavefront you can also dive deeper into the analytics and take a look at information sliced by namespace, so you see pks-rocks there, and you see the number of active nodes running, as well as the CPU utilization and memory consumption of that namespace. So now pks-rocks is ready to run containerized applications and microservices. So you've just given us a very high-level demo here of what PKS is — where can we learn more? We'd love to show you more. Please come by the booth; we have more cool functions running on PKS, and we'd love to have you come by. Excellent. Thank you, Lindy. Thank you. Yeah, so when we look at these types of workloads now running on vSphere — containers, Kubernetes — we also see a new type of workload beginning to appear, and these are workloads which are basically machine learning and AI, and in many cases they leverage a new type of infrastructure: hardware accelerators, typically GPUs. What we're going to talk about here is how NVIDIA and VMware have worked together to give you the flexibility to run sophisticated VDI workloads, but also to leverage those same GPUs for deep learning inference workloads, also on vSphere. So let's dive right into the demo here. What you're seeing here is your standard vRealize Operations product, and you see we've got two sets of applications: a VDI desktop workload and machine learning. The graph is showing what's happening with the VDI desktops. These are office workers leveraging these desktops every day, so of course the infrastructure is super busy during the daytime when they're in the office, but the green area shows it's not being used very heavily outside of those times. So let's take a look at what happens to the machine learning application. In this case, this organization leverages those available GPUs to run the machine learning operations outside the normal working hours. Let's take a little bit of a deeper dive into what the application is, before we see what we can do from an infrastructure and configuration point of view. So this machine learning application processes a vast number of images and it classifies — or sorry, it categorizes — these images, and as it's doing so, it is putting each of these in a database, and you can see it's operating here relatively fast, and it's leveraging some GPUs to do that. So, a typical image-processing type of machine learning problem. Now let's dive in and look at the infrastructure which is making this happen. First of all, we're going to look only at the VDI infrastructure here. So I've got a bunch of these VDI applications running. What I want to do is move these so that I can make this image-processing application run a lot faster. Now, normally you wouldn't do this, but Pat insisted that we do this demo at 10:30 in the morning, when the office workers are in there, so we're going to move all the VDI workloads over to the other cluster, and that's what you see going on right now. So as they move over to this other cluster, what we are now doing is freeing up all of the infrastructure — the GPUs that the VDI workload was using here. We see them moving across, and now you've freed up that infrastructure. So now we want to take a look at this application itself, the machine learning application, and see how we can make use of that.
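The demo is about to "let the application know" about the newly freed GPUs. The keynote doesn't say which framework the image classifier uses, so the following is only an illustrative sketch — assuming a PyTorch-style inference app — of how an application can discover and fan out across whatever GPUs the VM currently exposes:

```python
import torch
import torch.nn as nn

# Discover however many GPUs the hypervisor currently exposes to this VM.
n_gpus = torch.cuda.device_count()
device = torch.device("cuda" if n_gpus > 0 else "cpu")

# Stand-in classifier; the demo's actual model is not specified.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
if n_gpus > 1:
    # Replicate the model so each batch is split across all visible GPUs.
    model = nn.DataParallel(model)
model = model.to(device).eval()

with torch.no_grad():
    batch = torch.randn(64, 3, 224, 224, device=device)  # stand-in for images
    scores = model(batch)                                 # categorize each image
print(f"classified {scores.shape[0]} images on {max(n_gpus, 1)} device(s)")
```

Once vMotion frees the extra GPUs and the VM is reconfigured to see them, re-running this discovery step is essentially all an application needs in order to scale out — which is the throughput jump the demo shows next.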
Now, with the freed-up infrastructure, what we've got here is the application running using one GPU in a vSphere cluster — but I've got three more GPUs available now, because I've moved the VDI workloads. We simply modify the application, let it know that these are available, and you suddenly see an increase in the processing capability because of what we've done here in terms of the flexibility of accessing those GPUs. So what you see here is that the same GPUs that you use for VDI — which you probably have in your infrastructure today — can also be used to run sophisticated machine learning and AI types of applications on your vSphere infrastructure. So let's summarize what we've seen in the various demos in this section. First of all, we saw how VMware PKS simplifies the deployment and operation of Kubernetes at scale. What we've also seen is that, leveraging NVIDIA GPUs, we can now run the most demanding workloads on vSphere. When we think about all of these applications and these new types of workloads that people are running, I want to take one second to speak to another workload that we're seeing beginning to appear in the data center, and this is of course blockchain. We're seeing an increasing number of organizations evaluating blockchains for smart contract and digital consensus solutions. So this technology is really becoming — or potentially becoming — a critical part of how businesses will interact with each other, how they will work together. With Project Concord, an open source project that we're releasing today, you get the choice, performance, and scale of verifiable trust, which you can then bring to bear and run in the enterprise. But this is not just another blockchain implementation. We have focused very squarely on making sure that this is good for enterprises: it focuses on performance, it focuses on scalability. We have seen examples where running consensus algorithms has taken over 80 days on some of the most common and widely used infrastructure in blockchain, and with Project Concord you can do that in two and a half hours. So I encourage you to check out this project on GitHub today; you'll also see lots of activity around the whole conference speaking about this. Now we're going to dive into another section, which is the any-device section, and for that I need to welcome Pat back up. Thank you, Pat. Thanks, Ray. So, diving into the any-device piece of the puzzle: as we think about the superpowers that we have, maybe there is no area where they are more visible than in the any-device aspect of our picture. You know, as we think about the superpowers — think about mobility, right, and how it's enabling new things like desktop as a service; in the mobile area, this breadth of smartphones and devices; AI and machine learning allowing us to manage them and secure them; and this expanding envelope of devices at the edge that need to be connected, and wearables and 3D printers and so on. We've also seen increasing research that says engaged employees are at the center of business success. Engaged employees are the critical ingredient for digital transformation, and frankly, this is how I run VMware, right? You know, I have my device, and my work, all my applications — every one of my 23,000 employees is running on our transformed Workspace ONE environment. Research shows that companies that give employees ready, anytime access are nearly three times more likely to be leaders in digital transformation.
That employees spend 20 percent of their time today on manual processes that can be automated. That team collaboration and the speed of decisions increase by 16 percent with engaged employees on modern devices. Simply put, this is a critical aspect of enabling your business. But you remember this picture, from the silos that we started with: each of these environments has its own tribal communities of management, security, and automation associated with it, and the complexity associated with these is mind-boggling. And as we start to think about these — remember "I'm a PC" and "I'm a Mac"? Well, now you have: I'm an iOS, I'm a Droid, another VDI, and I'm now a connected printer, and I'm a connected watch. You remember Citrix Manager, and Good is now bad, and SCCM a failed model, and VPNs and XenApp. The chaos is now over. At the center of that is VMware Workspace ONE: get out of the business of managing devices, automate them from the cloud, but still have enterprise-secure, cloud-based analytics that brings new capabilities to this critical topic. You'll focus your energy on creating employee and customer experiences. You know, new capabilities — like AirLift, the new capability to help customers migrate from their SCCM environment to modern management — expanding the use of Workspace ONE Intelligence. Last year we announced the Chromebook and a partnership with HP, and today I'm happy to announce the next step in our partnerships, with Dell. Today we're announcing Dell Provisioning for VMware Workspace ONE, as part of Dell's Ready to Work solutions. Dell is taking the next leap and bringing Workspace ONE into the core of their client offerings. And the way you can think about this is literally a Dell drop-ship laptop showing up to a new employee: day one productivity. You give them their credential, and everything else is delivered by Workspace ONE — your image, your software, everything patched and upgraded — transforming your business beginning at that device experience that you give to your customer. And again, we don't want to just talk about it; we want to show you how this works. Please walk to the stage with me, Renu, the head of our desktop products marketing. Thank you. So we just heard from Pat about how Workspace ONE, integrated with Dell laptops, is really set up to manage Windows devices. What we're broadly focused on here is how do we get a truly modern management system for these devices, but one that has intelligence behind it, to make sure that we keep a good understanding of how to keep these devices always up to date and secure. Can we start the demo, please? So what we're seeing here is the front screen of Workspace ONE, and you see you've got multiple devices, a little bit like that demo that Pat showed: I've got iOS, Android, and of course I've got Windows. Renu, can you please take us through how Workspace ONE really changes the ability of an IT administrator to update and manage Windows in their environment? Absolutely. With Windows 10, Microsoft has finally joined the modern management party, and we are really excited about that. Now.
The good news about modern management is the frequency of OS updates and how quickly they come out, because you can address all those security issues that are hitting our radar on a daily basis. But the bad news about modern management is also the frequency of those updates, because all of us IT admins have to test each and every one of our applications with that latest version — we don't want to roll out an update in case it causes any problems. With Workspace ONE, we simply automate and provide you with the app compatibility information right out of the box, so you can now automate that update process. Let's take a quick look. Let's drill down here further into the Windows devices. What we'll see is that only a small percentage of those devices are on the latest version of the operating system. Now, that's not a good thing, because it might have an important security fix. Let's scroll down further and see what the issue is. We find that it's related to app compatibility: in fact, 38 percent of our devices are blocked from being upgraded, and the issue is app compatibility. Now, we were able to find that not by asking the admins to test each and every one of those apps, but by combining Windows analytics data with app intelligence out of the box, and we provide that information right here inside of the console. Let's dig down further and see what those devices and apps look like. So Renu, this is the part that I find most interesting. If I am a system administrator, at this point Workspace ONE is giving me a key piece of information: it says if you proceed with this update, it's going to fail 84, 85 percent of the time. So that's an important piece of information here, but it's not only telling me that — it is telling me, roughly speaking, why it thinks it's going to fail. We've got a number of apps which are not ready to work with this new version, particularly the Mondo Card sales lead tracker app. So what we need to do is get engineering to tackle the problems with this app and make sure that it's updated. So let's get fixing it. In order to fix it, what we'll do is create an automation — and we can do this right out of the box. This automation will open up a Jira ticket right from within the console, to inform the engineers about the problem. Not just that: we can also flag and send a notification to that engineering manager, so that it's top of mind and they can get working on this fix right away. Let's go ahead and save that automation, and right here, Ray, you see there's the automation that we just saved. So what's happening here is essentially that this update is now scheduled. We could go and update all these Windows devices, but Workspace ONE is holding the process of proceeding with that update, waiting for the engineers to update the app which is going to cause the problem. That's going to take them some time, right? So the engineers have been working on this, they have a fix, and let's go back and see what's happened to our devices. So going back into the OS updates, what we'll find is that now we've unblocked those devices from being upgraded: the 38 percent has drastically dropped down. We can rest easy knowing that all of the devices are compliant and on the latest version of the operating system. And again, this is just a snapshot of the power of Workspace ONE. To learn more and see more, I invite you all to join our EUC showcase keynote later this evening. Okay.
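Workspace ONE's automation opens the Jira ticket without any code, but the underlying step is an ordinary Jira REST call. A hedged sketch of what such an integration does — the server URL, project key, credentials, and field values are all hypothetical placeholders, not details from the demo:

```python
import requests

JIRA_BASE = "https://jira.example.com"  # hypothetical Jira server
AUTH = ("automation-bot", "api-token")  # hypothetical service credentials

payload = {
    "fields": {
        "project": {"key": "APPS"},  # hypothetical project key
        "issuetype": {"name": "Bug"},
        "summary": "Sales lead tracker app blocks Windows 10 update",
        "description": (
            "Workspace ONE app-compatibility data predicts an 84-85% "
            "failure rate for the pending OS update; 38% of devices are "
            "blocked until this app is fixed."
        ),
    }
}

# Jira's standard create-issue endpoint (REST API v2).
resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload,
                     auth=AUTH, timeout=30)
resp.raise_for_status()
print("Opened ticket:", resp.json()["key"])
```

The "hold the update until the fix lands" behavior would then be a policy keyed off the ticket's status; the console shown in the demo wires these steps together for you.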
So we've spoken about the presence of these new devices that IT needs to be able to manage and operate across everything that they do. But what we're also seeing is the emergence of a whole new class of computing device, and these are devices which we commonly speak of as being at the edge, or embedded devices, or IoT. In many cases these will be in factories, they'll be in your automobiles, they'll be in buildings — controlling, uh, the building itself, air conditioning, et cetera — quite often in some form of industrial environment. It's something like this, where you've got a wind farm, with compute embedded in each of these turbines. This is a new class of computing which needs to be managed and secured, and we think virtualization can do a pretty good job of that: a new virtualization frontier, right at the edge, for IoT and IoT gateways. And that's going to open up a whole new realm of innovation in that space. Let's dive down and take in the demo in this space. Well, let's do that. What we're seeing here is a wind turbine farm — a very different data center than what we're used to — and all the compute infrastructure is being managed by vCenter, and we see two edge gateway hosts, and they're running a very mission-critical safety watchdog VM right on there. Now, the safety watchdog VM is in FT mode, because it's collecting a lot of the important sensor data and running the mission-critical operations for the turbine. So FT mode — fault tolerance mode — that's a pretty sophisticated virtualization feature, allowing two applications to essentially run in lockstep, so if there's a failure, one of them gets to take over immediately. So this sophisticated virtualization feature can be brought out all the way to the edge. Exactly. So, just like in the data center, we want to perform an update. As we perform that update, the first thing we'll do is suspend FT on that safety watchdog. Next, we'll put host 205 into maintenance mode. Once that's done, we'll see the power of vMotion that we're all familiar with: we'll start to see all the virtual machines vMotion over to the second, backup host. Again, all the maintenance, all the updates, without skipping a heartbeat, without taking down any daily operations. So what we're seeing here is the basic power of virtualization being brought out to the edge — vMotion, maintenance mode, et cetera. Great. What's the big deal? We've been doing that for years. What's the, you know, come on — what's the big deal? So, Pat, when you get to the edge, you're dealing with a whole new class of infrastructure. You're dealing with embedded systems and new types of CPUs and processors. This whole demo has been done on ARM 64: virtualization brought to ARM 64 for embedded devices. So we're doing this on ARM, at the edge? Correct — specifically focused on embedded, for edge OEMs. Okay, now that's good. Okay, thank you, Ray. Actually, we've got a summary here — Pat, just a second before you disappear — a lot to rattle off what we've just seen, right? We've seen Workspace ONE cross-platform management, and we've also seen, of course, ESXi for ARM, to bring the power of ESXi to edge ARM 64-bit platforms. Okay. Okay. Thank you. Thanks.
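A quick technical aside on that edge demo: the maintenance-mode step is the same vCenter API that data center admins script today. As a rough sketch — the vCenter address, credentials, and host name are hypothetical, and this assumes DRS is set to evacuate VMs automatically — the flow might look like:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Hypothetical lab vCenter; verify certificates properly in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    # Hypothetical name for the edge gateway host being updated.
    host = next(h for h in view.view if h.name == "edge-esxi-205.example.com")

    # With DRS fully automated, this triggers the vMotions seen in the demo:
    # VMs drain to the backup host, then this host is safe to patch.
    WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
    print("host in maintenance mode; apply the update, then exit:")
    WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
finally:
    Disconnect(si)
```

The demo's point is that this identical workflow now runs against ARM 64 embedded hosts at a wind farm, not just x86 hosts in a data center.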
Now we've seen a look at a customer who is taking advantage of everything we just saw — and again, a story of a customer that is changing lives in a fundamental way. Let's see: Make-A-Wish. So when a family gets the news that a child is sick, and it's a critical illness — it could be a life-threatening illness — the whole family is turned upside down. Imagine somebody comes to you and they say: what's the one thing you want, that's in your heart? You tell us, and then we make that happen. So I was just calling to give you the good news that we're going to be able to grant Jackson a wish. Make-A-Wish is the largest wish-granting organization in the United States. Make-A-Wish was featured in a CBS 60 Minutes episode; interestingly, it got a lot of hits, but unfortunately for the IT team, the whole website crashed. Make-A-Wish is going through a program right now where we're centralizing technology and putting certain security standards in place at our chapters. So what you're seeing here: we're configuring certain cloud services to make sure that they're always able to deliver on the mission, whether they have a local problem or not. As we continue to grow the partnership and work with VMware, it's enabling us to become more efficient in our processes, and it allows us to grant more wishes. There was a little girl; she had a two-year-old brother. She just wanted a puppy, and she was forthright: "I want to name the puppy my name, so my brother will always have me." That from a five-year-old. We can't change their medical outcome, but we can change their spiritual outcome, and we can transform their lives. Thank you — working together with you, truly making wishes come true. The last topic I want to touch on today, and maybe the most important to me personally, is security. Fundamentally, when we think about this topic of security, I'll say it's broken today. You know, we would just say that the industry got it wrong: we're trying to bolt on, or chasing bad. And when we think about our security spend, we're spending more and we're losing more, right? Every day we're investing more in this aspect of our infrastructure, and we're falling further behind. We believe that we have to have far fewer security products and much more security. You know, fundamentally, if you think about the problem: we build infrastructure, right — generic infrastructure — we then deploy applications, all kinds of applications, and we're seeing all sorts of threats launched at us daily, tens of millions. Your simple virus scanner, right, has tens of millions of rules running and changing many times a day. We simply believe the security model needs to change. We need to move from bolted-on and chasing bad, to an environment that has intrinsic security and is built to ensure good. This is the idea of built-in security. We are taking every one of the core VMware products and we are building security directly into it. We believe with this we can eliminate much of the complexity — many of the sensors and agents and boxes. Instead, they'll directly leverage the mechanisms in the infrastructure, and we're using that infrastructure to lock it down, to behave as we intended it to, to ensure good. Right: on the user side with Workspace ONE, on the network side with NSX and microsegmentation, in storage with native encryption, and on the compute side with App Defense, we are building in security. We're not chasing threats or adding on, but radically reducing the attack surface. When we look at our applications in the data center, you see this collection of machines running inside of it, right? You know, typically running on vSphere, and those machines are increasingly connected
through NSX. And last year we introduced the breakthrough security solution called App Defense. App Defense leverages the unique insight we get into the application, so that we can understand the application and map it into the infrastructure; then you can take that understanding — that manifest of its behavior — and lock those VMs to that intended behavior. And we do that without the operational and performance burden of agents and other rear-view-looking approaches to attack detection. We're shrinking the attack surface, not chasing the latest attack vector. You know, this idea of bolt-on versus chasing bad — you sort of see it right in the network. Machines have lots of connectivity, lots of applications running, and when something bad happens, it basically has unfettered access to move horizontally through the data center — and most of our security is north-south, while most of the attacks are east-west. We introduced this idea of microsegmentation five years ago, and with it we're enabling organizations to segment networks and separate sensitive applications and services as never before. This idea isn't new; it just was never practical before NSX. But we're not standing still. Our teams are innovating to leap beyond microsegmentation, and we see what's next in three simple words: learn, lock, adapt. Imagine a system that can look into the applications and understand their behavior and how they should operate — we're using machine learning and AI, instead of chasing bad, to be able to ensure good — where that system can then lock down its behavior, so the system consistently operates that way. But finally, we know we have a world of increasingly dynamic applications, and as we move to more containerized microservices, we know this world is changing, so we need to adapt; we need more automation, to adapt to the current behavior. Today I'm very excited to have two major announcements that deliver on this vision. The first of those: vSphere Platinum. Our flagship VMware vSphere product now has App Defense built right in. Platinum will enable virtualization teams — yeah, go ahead, yeah, let's use it — Platinum will enable virtualization teams to make an enormous contribution to the security profile of your enterprise. You can see what a VM is for — its purpose, its behavior — and tell the system: that's what it's allowed to do. Dramatically reducing the attack surface, without impact on operations or performance. The capability is so powerful, so profound, that we want you to be able to leverage it everywhere, and that's why we're building it directly into vSphere: vSphere Platinum. I call it the burger and fries — you know, nobody leaves the restaurant without the fries. Who would possibly run a VM in the future without turning security on? That's how we want this to work going forward: vSphere Platinum. And as powerful as microsegmentation has been as an idea, we're taking the next step with what we call adaptive microsegmentation. We are fusing together App Defense and vSphere with NSX, to allow us to align the policies of the application through vSphere and the network. We can then lock down the network and the compute, and enable the automation of the microsegment formation. Taken together: adaptive microsegmentation. But again, we don't want to just tell you about it — we want to show you. Please welcome to the stage VJ Dante, who heads our machine learning team for App Defense. VJ, very good to see you. Thanks for joining us.
So, you know, I talked about this idea, right, of being able to learn, lock, and adapt. Uh, can you show it to us? Great. Yeah. Thank you. With vSphere Platinum, what we have done is put in everything you need to learn, lock, and adapt, right into the infrastructure. The next time you bring up your vSphere client, you'll actually see a difference right in there. Let's go with that demo. There you go. And when you look at App Defense there, what you see is all your guest virtual machines and all your hosts — hundreds of them, and thousands of virtual machines — enabled for App Defense. It's in there. And what that does is immediately get you visibility into the processes running on those virtual machines, and the risk — for the first time. Think about it: for the first time, you're looking at the infrastructure through the lens of an application. Here, for example, is the e-commerce application. You can see the components that make up that application, and how they interact with each other: a specific process, at a specific IP address, on a specific port. That's what you get. So we're learning the behavior? Yes. Yeah, that's very good. But how do you make sure you only learn good behavior? Exactly — how do we make sure that it's not bad? We actually verify and ensure it's all good. We ensure that the binary's reputation is verified; we ensure that the behavior is verified. Let's go to svchost, for example. This process can exhibit hundreds of behaviors across numerous hosts. What we do here is actually verify that behavior for you: machine learning models that have been trained on millions of instances of good and bad, as you said, automatically verify that for you. Okay, so we've learned — simple. Learn. Now, lock: how does that work? Well, once you've learned the application, locking it is as simple as clicking on that verify-and-protect button, and then you can lock both the compute and the network, and it's done. So we've pushed those policies into NSX, microsegmentation has been established, and we've actually locked down the compute — and the operating system? Exactly. Let's first look at compute: the processes and the behaviors are locked down to exactly what is allowed for that application. And we have baked-in policies that program your firewall. This is NSX being configured automatically for you, with one single click. Very good. So we said learn, lock — now, how does this adapt thing work? Well, change is the only constant: modern applications change on a continuous basis. What we do is actually pretty simple. We look at every change as it comes in and determine whether it's good or bad. If it's good, we allow it and update the policies; if it's bad, we deny it. Let's look at an example. This process is exhibiting a behavior that we've not seen during the learning period. Okay — so this machine has never behaved this way before. Right. But our machine learning models have seen thousands of instances of this process; they know this is normal — it talks on port 389 all the time. So it's done a few things: it's lowered the criticality of the alarm — okay, so, false positives; exactly, the bane of security operations, false positives — and it has gone and updated the locks on compute and network to allow for that behavior. The application continues to work. Okay, so we can learn, and adapt, and act, right through the compute and the network.
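The keynote doesn't publish the internals of App Defense's models, so the following is only a toy illustration of the "adapt" step just demonstrated: a classifier trained on labeled examples of process behavior that scores a never-before-seen behavior instead of alarming on it blindly. The features and threshold here are invented for the sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy behavior features: (destination port, connections/min, unsigned binary?)
X = np.array([
    [389,   12, 0],   # LDAP chatter from a system process -> good
    [443,   30, 0],   # HTTPS to a known service           -> good
    [4444, 200, 1],   # odd port, burst rate, unsigned     -> bad
    [389,   15, 0],
    [8443,   5, 0],
    [31337, 90, 1],
])
y = np.array([0, 0, 1, 0, 0, 1])  # 0 = verified good, 1 = known bad

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# svchost exhibits a behavior not observed during the learning period:
# LDAP on port 389 at a normal rate, from a signed binary.
new_behavior = np.array([[389, 14, 0]])
p_bad = clf.predict_proba(new_behavior)[0, 1]

if p_bad < 0.5:
    # Mirror the demo: lower the alarm criticality and extend the lock policy.
    print(f"allow + update policy (p_bad={p_bad:.2f})")
else:
    print(f"deny + raise alarm (p_bad={p_bad:.2f})")
```

A production system would of course use far richer features — process lineage, binary reputation, peer behavior across thousands of hosts — and feed the verdict back into NSX policy, as the demo describes.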
What about the client? Well, we do that with Workspace ONE Intelligence: protect and manage the end-user endpoint. Workspace ONE Intelligence and NSX actually work together to protect your entire data center infrastructure. But don't believe me — you can watch it for yourself tomorrow in Tom Corn's keynote. You want to be there, at 1:00 PM: be there or be nowhere. Love it. Thank you, VJ — great job. Thank you so much. So, the idea of intrinsic security and ensuring good: we believe it will fundamentally change how security is delivered in the enterprise in the future, and change the entire security industry. We've covered a lot today. I'm thrilled, as I stand on stage, to stand before this community that truly has been at the center of changing the world of technology over the last couple of decades in IT. We've talked about this idea of the superpowers of technology, and how they accelerate the huge demand for what you do. You know, in the same way we together created this idea of the virtual infrastructure admin, think about all the jobs that we are spawning from the discussion that we had today — the new skills, the new opportunities for each one of us in this room: quantum programmer, machine learning engineer, IoT and edge expert. We're on the cusp of so many new capabilities, and we need you and your skills to do that — the skills that you possess, the ability that you have to work across these silos of technology, and enable tomorrow. I'll tell you, I am now 38 years in the industry, and I've never been more excited, because together we have the opportunity to build on the things that collectively we have done over the last four decades, and truly have a positive global impact. These are hard problems, but I believe together we can successfully extend the lifespan of every human being. I believe together we can eradicate chronic diseases that have plagued mankind for centuries. I believe we can lift the remaining 10 percent of humanity out of extreme poverty. I believe that we can reskill every worker in the age of the superpowers. I believe that we can give a modern education to every child on the planet, even in the poorest of slums. I believe that together we can reverse the impact of climate change. I believe that together we have the opportunity to make these a reality. I believe this possibility is only possible together, with you. I ask you: please have a wonderful VMworld. Thanks for listening. Happy 20th birthday. Have a great party.

Published Date : Aug 28 2018


Day One Kickoff | Veritas Vision 2017


 

>> Narrator: Live from Las Vegas, it's theCUBE, covering Veritas Vision 2017. Brought to you by Veritas. >> Dave: We're here at Veritas Vision, #VtasVision, The Truth in Information. This is a company that was founded in 1983 and has gone through a very interesting history: acquired by Symantec for around 15 or 16 billion dollars, and then spun back out and purchased by private equity firm Carlyle Group in 2016 for about 7 billion net of cash. It's about a two and a half billion dollar company with a really interesting growth plan, one that involves transforming from what many consider to be a legacy backup company into a multi-cloud, hyperscale, data protection, value-of-information organization. My name is Dave Vellante and I'm here with Stu Miniman. Stu! Good to see you. >> Stu: Great to be here with you, Dave. It's interesting, yeah — Veritas, a company I've known for, I don't know, gosh, about 20 years, and they kind of went under the radar a little bit under the Symantec piece, and now they're back at it. But you know, gosh, it felt like a time warp hearing about, like, NetBackup, you know? A product that you know well, entrenched in the market, has lots of customers. So you know, in talking to the people here — the people on board at Veritas, some, you know, very veteran to the company, a lot of new faces though — they say it's energy, innovation, bringing, as Bill Coleman, who we're going to have on shortly, says, the software-defined, multi-cloud, hyperscale world. So you know, A for hitting all the buzzwords, and I'm excited, in the next two days, to kind of dig in and see where the reality is. >> Dave: Yeah, and you know, Stu, you know me, Stu. I like to look at the structure, the organizational structure, and the market caps and things like that, but I always felt like, you know, Veritas kind of disappeared under Symantec's governance, and now it is breaking out. I love the new private equity play. I want to hear from Bill Coleman about that — what the relationship is with Carlyle. You know, it used to be that private equity would come in and they would just suck all the cash out of a company — I mean, the classic example was ZA, right? They would maybe do some acquiring of companies, they would maybe buy cashflow-positive companies, take on more debt, suck all of the cash out, and leave the carcass. That's not the new private equity way. We see that with Riverbed, we see that with Infor, VMC, and many, many others that have said: you know what, the public markets aren't going to give us the love that we need, so we're going to go private, we're going to get a deal on the company, we're going to invest in that company, invest in R&D, build the asset value of that company, maybe even in some cases do acquisitions, grow it, and then maybe do another exit. And that is a great way — a better way, in fact — for these private equity firms to really cash in, and I think Veritas is an interesting asset in that regard. >> Stu: Yeah, absolutely. I think back, you know, Dave, to when I worked at EMC: Veritas was one of those competitors that EMC was like, we've got to keep an eye on them. Veritas would put out, you know, billboards, and have people running around in shirts that said No Hardware Agenda. One of the reasons I think that Veritas also disappeared a little bit under Symantec is, while they were great for lots of environments, they didn't really hit hard that wave of virtualization.
Interesting thing is that, you know, EMC bought VMware, everybody knows, but the company that almost bought VMware was Symantec, and lots of us say, what if? What if Symantec had bought VMware? Would they, as a software company, really kind of squash that, you know — could Veritas have then really integrated very deep there? And now, Dave, you and I were at the Veeam show earlier this year, and they talked Veeam and, you know, the ten years of virtualization, and now hopping on multi-cloud. Well, you know, a lot of that message I hear from companies like Veeam, companies like NetApp — you know, software-based storage companies: if you're not living in that multi-cloud world, you know, what is your future? So. >> Dave: Well, to your point. >> Stu: Microsoft and Google, Amazon, and how those all fit. >> Dave: To your point, with no hardware agenda, Veritas was always viewed as the company with that sort of open software glue to bring together the data management pieces, and as I said, it sort of got lost over the last several years under Symantec. When you heard the keynotes this morning, you heard a story of information, information value, leveraging that information, information governance, a lot of talk about GDPR, obviously a lot of talk about backup, multi-cloud — really an entirely new vision from the brand that has frankly become Veritas over the last decade, and new management really trying to affect that brand and send a message to customers: that we hear you, that we're self-deprecating — talking about their UX not being what it should be — listening to customers, and putting forth a vision around not just backup, but data management. Now, that's always been the Holy Grail: can you use that data protection backup corpus of data to really leverage it, to turn information into an asset? That's something that we're going to be unpacking all week with executives, partners, customers, analysts, and the like. Last thought before we get to our next guest. >> Stu: Yeah, Dave, absolutely. You know, a bunch of new products are out there. It's that balance of how do they build off of their brand and all of their customer adoption, now that they have a lot of new things going on — how do they fit in that environment, how do they differentiate? Because everyone's trying to partner with the mega clouds, and it's not just the big three that we talk about: IBM and Oracle are two big partners that Veritas is talking about here. And something like hyperconverged infrastructure — Veritas has a play there. They came out with an object story, you know; you're asking me, like, wait, is this an array? Well no, it's Veritas, it's software — it's always going to be software. Joseph Skorupa, who was giving one of the super sessions — we're going to have him on — says your infrastructure does not differentiate you, it is your data, and that is what they want to highlight at the top. I think that's a message we in general agree with, and we're looking forward to digging into it. >> Dave: Okay, so we'll be here for the next two days, and what we like to do in theCUBE is take what we hear in the messaging and then test that messaging — poke at it a little bit with the executives, talk to the customers about it, see how well it aligns, and then opine on where we think this is going. But if you were at VMworld, you knew that data protection was the hottest category — it's an exploding area, a lot of dynamism, because it's all about the data. So we'll be talking about that: digital business.
Keep right there everybody, this is theCUBE. Veritas Vision, #VtasVision. We'll be right back with our next guest, right after this short break. (electronic music)

Published Date : Sep 19 2017


Veeru Ramaswamy, IBM | CUBEConversation


 

(upbeat music) >> Hi, we're at the Palo Alto studio of SiliconANGLE Media and theCUBE. My name is George Gilbert, and we have a special guest with us this week: Veeru Ramaswamy, who is VP of the IBM Watson IoT platform, and he's here to fill us in on the incredible amount of innovation and growth that's going on in that sector of the world. And we're going to talk more broadly about IoT and digital twins as a broad new construct that we're seeing in how to build enterprise systems. So Veeru, good to have you. Why don't you introduce yourself and tell us a little bit about your background. >> Thanks, George, thanks for having me. I've been in the technology space for a long time, and if you look at what's happening in the IoT, in the digital space, it's pretty interesting — the amount of growth, the amount of productivity and efficiency the companies are trying to achieve is just phenomenal, and I think we're now coming off the hype cycle and getting into real action in a lot of businesses. Prior to joining IBM, I was a senior officer and senior VP of data science with Cablevision, where I led the data strategy for the entire company, and prior to that, at GE, I was one of the first two guys who actually built the San Ramon digital center — the GE Digital center, a center of excellence — looking at different kinds of IoT-related projects and products, along with leading some of the UX and the analytics and the collaboration, or the social integration. So that's the background. >> So just to set context, 'cause as we were talking before, there was another era, when Steve Jobs was talking about the NeXT workstation, and he talked about object orientation, and then everything was sprinkled with fairy dust about objects. So help us distinguish between IoT and digital twins — which GE was brilliant in marketing, 'cause that concept everyone could grasp. Help us understand where they fit. >> The idea of the digital twin is: how do you abstract the actual physical entity out there in the world and create an object model out of it? So it's very similar, in that sense, to what happened in the 90s with Steve Jobs, and if you look at that object abstraction, it is what is now happening in the digital twin space, from the IoT angle. The way we look at IoT is: we look at every sensor out there which can actually produce a metric — every device which produces a metric we consider a sensor — so it could be as simple as pressure, temperature, or humidity sensors, or it could be as complicated as cardio sensors in your healthcare, and so on and so forth. The concept of bringing these sensors into the digital world — the data from that physical world into the digital world — is what is making it even more abstract, from a programming perspective. >> Help us understand — so it sounds like we're going to have these fire hoses of data. How do we organize that into something that someone who's going to work on that data, someone who's going to program to it, can make sense of, the way a normal person looks at a physical object? >> That's a great question. We look at a sensor as a device that we can measure from, and we call that a device twin.
Taking the data that's coming from the device, we call that a device twin; and then the physical asset — the physical thing itself, which could be elevators, jet engines, anything, the physical asset that we have — we call the asset twin. And there's a hierarchical model that we believe will have to exist for the digital twin to actually be constructed, from an IoT perspective. The asset twins will basically encompass some of the device twins, and then we actually take that and represent the digital twin of that particular asset in the physical world. >> So that would be sort of like, as we were talking about earlier: an elevator might be the asset, but the devices within it might be the brakes and the pulleys and the panels for operating it. >> Veeru: Exactly. >> And it's then the hierarchy of these — or, in manufacturing terms, the bill of materials — that becomes a critical part of the twin. What are some other components of this digital twin? >> When we talk about the digital twin, we don't just take the blueprint and schematics. We also think about the system, the process, the operation that goes along with that physical asset, and when we capture that and are able to model it in the digital world, that gives you the ability to do a lot of things where you don't have to do them in the physical world. For instance, you don't have to train your people on the physical world — if it is periodical systems and so on and so forth, you could actually train them in the digital world, and then be able to allow them to operate on the physical world whenever it's needed. Or, if you want to increase your productivity or efficiency with predictive models and so forth, you can test all the models in your digital world, and then you actually deploy them in your physical world. >> That's great for context setting. How would you think of — this digital twin is more than just a representation of the structure; it's also got the behavior in there. So in a sense it's a sensor and an actuator, in that you could program the real world. What would that look like? What things can you do with that sort of approach? >> So when you actually have this humongous amount of terabyte data coming from the sensors, once you model it and you get the insights out of that, based on the insight you can take an actionable outcome. That could be turning off an actuator or turning on an actuator — simple things like, in the elevator case: open the door, shut the door, move the elevator up, move the elevator down, et cetera. All of these things can be done from the digital world. That's where it makes a humongous difference. >> Okay, so it's a structured way of interacting with the highly structured world around us. >> Veeru: That's right. >> Okay, so it's not the narrow definition that many of us have been used to, like an airplane engine or the autonomous driving capability of a car. It's more general than that. >> Yeah, it is more general than that. >> Now, having set context with the definition, so everyone knows we're talking about a broader sense of the concept: what are some of the business impacts, in terms of operational efficiency — maybe just the first-order impact — but also the ability to change products into more customizable services that have SLAs, or entirely new business models, including engineered-to-order instead of make-to-stock? Tell us something about that hierarchy of value.
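Before the conversation turns to business value: the device-twin/asset-twin hierarchy Veeru describes maps naturally onto a small object model. A minimal sketch — the class and field names are our own illustration, not IBM Watson IoT API names:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DeviceTwin:
    """Digital counterpart of one sensor/actuator, e.g. an elevator door motor."""
    device_id: str
    readings: Dict[str, float] = field(default_factory=dict)

    def ingest(self, metric: str, value: float) -> None:
        self.readings[metric] = value  # latest reading from the physical device

@dataclass
class AssetTwin:
    """Digital counterpart of the physical asset, composed of device twins."""
    asset_id: str
    devices: List[DeviceTwin] = field(default_factory=list)

    def snapshot(self) -> Dict[str, Dict[str, float]]:
        return {d.device_id: dict(d.readings) for d in self.devices}

# Usage: an elevator asset twin encompassing its device twins.
door = DeviceTwin("door-motor-1")
cab = DeviceTwin("cab-position-1")
elevator = AssetTwin("elevator-42", [door, cab])
door.ingest("temperature_c", 41.5)
cab.ingest("floor", 7.0)
print(elevator.snapshot())
```

The actuation direction Veeru mentions — open the door, move the cab — would be commands flowing back through these same twin objects to the physical devices.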
You're talking about things like operations optimization and predicament and all of that which you can actually do from the digital world it's all on digital twin. You also can look into various kinds of business models now instead of a product, you can actually have a service out of the product and then be able to have different business models like powered by the hour, pay per use and kinds of things. So these kinds of models, business models can be tried out. Think about what's happening in the world of Air BnB and Uber, nobody owns any asset but still be able to make revenue by pay per use or power by the hour. I think that's an interesting model. I don't think it's being tested out so much in the physical asset world but I think that could be interesting model that you could actually try. >> One thing that I picked up at the Genius of Things event in Munich in February was that we really have to rethink about software markets in the sense that IBM's customers become in the way your channel, sometimes because they sell to their customers. Almost like a supply chain master or something similar and also pricing changes from potentially we've already migrated or are migrating from perpetual licenses to service softwares or service but now we could do unit pricing or SLA-based pricing, in which case you as a vendor have to start getting very smart about, you owe your customers the risk in meeting an SLA so it's almost more like insurance, actuarial modeling. >> Correct so the way we want think about is, how can we make our customers more, what do you call, monetizable. Their products to be monetizable with their customers and then in that case, when we enter into a service level agreement with our customers, there's always that risk of what we deliver to make their products and services more successful? There's always a risk component which we will have to work with the customers to make sure that combined model of what our customers are going to deliver is going to be more beneficial, more contributing to both bottom line and top line. >> That implies that your modeling, someone's modeling and risk from you the supplier to your customer as vendor to their customer. >> Right. >> That sounds tricky. >> I'm pretty sure we have a lot of financial risk modeling entered into our SLAs when we actually go to our customers. >> So that's a new business model for IBM, for IBM's sort of supply chain master type customers if that's the right word. As this capability, this technology pervades more industries, customers become software vendors or if not software vendors, services vendors for software enhanced products or service enhanced products. >> Exactly, exactly. >> Another thing, I'd listened to a briefing by IBM Global Services where they thought, ultimately, this might end up where there's far more industries are engineered to order instead of make to stock. How would this enable that? >> I think the way we want think about it is that most of the IoT based services will actually start by co-designing and co-developing with your customers. And that's where you're going to start. That's how you're going to start. You're not going to say, here's my 100 data centers and you bring your billion devices and connect and it's going to happen. We are going to start that way and then our customers are going to say, hey by the way, I have these used cases that we want to start doing, so that's why platform becomes so imortant. 
Once you have the platform, now you can scale individual silos as a vertical use case for them. We provide the platform, and the use cases start driving on top of the platform. So the scaling becomes much easier for the customers. >> So this sounds like the traditional way an application vendor might turn into a platform vendor, which is a difficult transition in itself, but you take a few use cases and then generalize into a platform. >> We call that zone application services. A zone application service basically draws on a portfolio of platform services, which actually provide you the abilities. So for instance, take asset management. Asset management can be done on an oil and gas rig, you can look at asset management in a power turbine, you can look at asset management in a jet engine. You can do asset management across many different verticals, but it is a common horizontal application, so most of the time you get 80% of your asset management APIs, if you will, in common. Then you are able to scale across multiple different vertical applications and solutions. >> Hold that thought, 'cause we're going to come back to joint development and leveraging expertise from vendor and customer and sharing that. Let's talk just at a high level: one of the things that I keep hearing is that in Europe, Industry 4.0 is sort of the hot topic, and in the States, it's more digital twins. Help parse that out for us. >> The way we believe the digital twin should be viewed is as a component view. What we mean by the component view is that you have your knowledge graph representation of the real assets in the digital world, and then you bring in your IoT sensors and connections to the models. Then you have your functional, logical, and physical models that you want to bring into your knowledge graph, and you also want to be able to give the ability to search and visualize, kind of an intelligent experience for the end consumer. Then you want to bring in your simulation models, when you do the actual simulation in the digital world, and then your enterprise asset management, your ERP systems, all of that. When you're able to build that knowledge graph, that's when the digital twin really connects with your enterprise systems. It sort of brings the OT and the IT together. >> So, to try and summarize, 'cause there are a lot of moving parts in there: you've got the product hierarchy, which in product design terms is the bill of materials, sort of the explosion of parts in an assembly and sub-assembly, and that provides like a structure, a data model. Then there are the machine learning models, the different types of models that could represent behavior. And then when you put a knowledge graph across that structure and behavior, is that what makes it simulation ready? >> Yes, so you're talking about entities and connecting these entities with the actual relationships between them. That's the graph that holds the relationships, your nodes and your links. >> And then integrating the enterprise systems, and maybe the lower-level operational systems. That's how you effect business processes. >> Correct. >> For efficiency or optimization, automation.
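As a rough sketch of that component view, the snippet below builds a tiny knowledge graph that ties an asset twin to its device twins, a simulation model, and an ERP record, using the open source networkx library. The node names and relationship labels are invented for illustration, not drawn from any IBM schema.

```python
import networkx as nx

# Build a small knowledge graph: nodes are twins and enterprise records,
# edges carry the relationship between them.
g = nx.DiGraph()
g.add_edge("asset:elevator-42", "device:door-motor-1", relation="comprises")
g.add_edge("asset:elevator-42", "device:hoist-sensor-7", relation="comprises")
g.add_edge("asset:elevator-42", "erp:work-order-9931", relation="maintained_under")
g.add_edge("asset:elevator-42", "sim:load-model-v3", relation="simulated_by")

# Query the graph: what does this asset connect to, and how?
for _, target, attrs in g.out_edges("asset:elevator-42", data=True):
    print(f"elevator-42 --{attrs['relation']}--> {target}")
```

The point of the graph is exactly the connection described above: the same node that aggregates device twins also links out to ERP and simulation records, which is how the OT and IT sides meet.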
>> Yes, take a look at what you can do with something like shop floor optimization. You have the bill of materials you need to know from your existing ERP systems, and then you have the actual real parts that are coming to your shop floor to manage. Now, depending on whether you want to repair, you want to replace, you want to overhaul, you want to modify, whatever it is, you want to look at your existing bill of materials and see, okay, do I have it first? Do we need more? Do we need to order more? So your auditing system naturally gets integrated into that, and then you have to integrate the data that's coming from these models with the availability of the existing assets you have. You can integrate it and say, how fast can you actually start moving these out of your shop? >> Okay, that's where you translate essentially what's intelligent about an object, a rich object, into sort of operational implications. >> Veeru: Yes. >> Okay, operational process. Let's talk about customer engagement so far. There's intense interest in this. I remember in the Munich event, they had to shut off attendance because they couldn't find a big enough venue. >> Veeru: That's true. >> So what are the characteristics of some of the most successful engagements, or the ones that are promising? Maybe it's a little early to say successful. >> So I think the ways you can definitely see success from customer engagement are twofold. One is show what's possible. Show what's possible with the desire to connect, the collection of data, all of that; that's one part of it. The second part is understand the customer. The customer has certain requirements in their existing processes and operations. Understand that, and then deliver based on what solutions they are expecting, what applications they want to build. How you bring those together is what we're thinking about. That Munich center you talked about: we are actually bringing in chip manufacturers, sensor manufacturers, device manufacturers. We are bringing in network providers. We are bringing in SIs, system integrators, all of them into the fold, to show what is possible, and then your partners enable you to get to market faster. That's how we see the engagement with customers happening in a much faster manner, showing them what's possible. >> It sounds like the chip industry's Moore's law: for many years it wasn't deterministic that we would double things every 18 months or two years, it was actually an incredibly complex ecosystem web where everyone's product release cycles were synchronized so as to enable that. And it sounds like you're synchronizing the ecosystem to keep up. >> Exactly. The success of a particular organization's IoT efforts is going to depend on how you build this ecosystem and how you establish that ecosystem to get to market faster. That's going to be extremely key for all your integration efforts with your customers. >> Let's start narrowly with you, IBM. What are the key skills that you feel you need to own, starting from sort of the base rocket scientists who not only work on machine learning models but come up with new algorithms on top of, say, TensorFlow or something like that, all the way up to the folks who are going to work in conjunction with the customer to apply that science to a particular industry? How does that hold together? >> So it all starts on the platform.
On the platform side we have all the developers, the engineers who build the platform, all the device connections and all of that to make the connections work. So you need top software development engineers to build on the platform side, and then you also need the solution builders, who are in front of the customer understanding what kind of solutions they want to build. Solutions could be anything. It could be predictive maintenance, it could be asset management, it could be remote monitoring and diagnostics. It could be any of these solutions that you want to build, and then the solution builders and the platform builders work together to make sure it's a holistic approach for the customer at the final deployment. >> And how much is the solution builder typically, in the early stages, IBM, or is there some expertise that the customer has to contribute, almost like agile development, but not two programmers but like 500 and 500 from different companies? >> 500 is a bit too much. (laughs) I would say this is the concept of co-designing and co-development. We definitely want the developers, the engineers, the subject matter experts from our customers, and we also need our analytics experts and software developers to come and sit together and understand the use case. How do we actually bring in the optimized solution for the customer? >> What level or type of expertise do the developers contributing to this effort have to have? If you're working with manufacturing, let's say auto manufacturing, do they have to have automotive software development expertise, or are they more generically analytics experts while the automotive customer brings in the specific industry expertise? >> It depends. In some cases we have GBS, for instance; we have dedicated services for that particular vertical, so we understand some of that industry knowledge. In some cases we don't, and it actually comes from the customer. But it has to be an aggregation of the subject matter experts with our platform developers and solution developers sitting together, finding the solution. Literally going through, think about how we actually bring in the UX: what does a typical day of a persona look like? By the way, we always believe it's augmented intelligence, which means the human and the machine work together, rather than something complete that gives you the answer for everything you ask. >> It's a debate that keeps coming up. Doug Engelbart sort of had his own answer like 50 years ago, when he set the path for modern computing by saying we're not going to replace people, we're going to augment them, and this is just a continuation of that. >> It's a continuation of that. >> For UX design, it sounds like someone on the IBM side might be talking to the domain expert and the customer to say, how does this workflow work? >> Exactly. So we have these design thinking sessions with our customers, and then based on that we take that knowledge back, we build our mockups, we build our wireframes and visual designs, and the analytics and software that go behind them, and then we provide it on top of the platform. So most of the platform work, the standard, what do you call, table-stakes connections, collection of data, all of that is already existing; then it's one level above, whatever particular solution a customer wants. That's when we actually.
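As one hedged example of the solution types just mentioned, a first-cut predictive-maintenance model can be as simple as the following generic scikit-learn sketch: train on historical sensor readings labeled with failures, then score fresh telemetry from the device twins. The feature names and data are hypothetical, not any customer's actual solution.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical telemetry: [vibration, temperature, hours_since_service]
X_train = np.array([[0.2, 60, 100], [0.3, 65, 400], [1.4, 90, 900], [1.1, 85, 800]])
y_train = np.array([0, 0, 1, 1])  # 1 = asset failed within the following week

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Score a fresh reading from a device twin and flag assets needing service.
new_reading = np.array([[1.2, 88, 850]])
print("failure risk:", model.predict_proba(new_reading)[0][1])
```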
>> In terms of getting the customer organization aligned to make this project successful, what are some of the different configurations? Who needs to be a sponsor? Where does budget typically come from? How long are the pilots? That sort of stuff, to set expectations. >> We believe in all the agile thinking, agile development; we believe in all of that. It's almost a given now. So it depends on where the customer comes from. The customer could actually come directly and sign up to our platform on the existing cloud infrastructure and say, okay, we want to build applications. Then there are some customers, really big customers, large enterprises, who want to say, give me the platform, we have our solution folks, we want to work on board with you, but we also want somebody who understands building solutions. We integrate with their solution developers and then they build on top of that. So you have that model as well, and then you have GBS, which has actually been doing this for years, decades. >> George: Almost like from the silicon. >> All the way up to the application level. >> When the customer is not outsourcing completely, in other words when there's a custom app they need built, is that when they need to go to GBS, Global Business Services, whereas if they want a semi-packaged app, can they go to the industry solutions group? >> Yes. >> I assume it's the IoT Industry Solutions Group. >> Solutions group, yes. >> They then take what's almost maybe a framework, or an existing application that needs customization. >> Exactly. So we have IoT for manufacturing, IoT for retail, IoT for insurance, IoT for you name it. We have all these industry solutions, so there would be some amount of template already existing in some fashion. So when GBS gets a request saying, here is customer X coming and asking for a particular solution, they would come back to the IoT solutions group, which already has some template solutions to start from rather than building from scratch. Your speed to market, again, is much faster, and then, if something has to be customized, both of them work together with the customer to make that happen, and they leverage our platform underneath to do all the connection, collection, data analytics and so on that goes along with that. >> Tell me this: from everything we hear, there's a huge talent shortage. Tell me in which roles is there the greatest shortage, and then how do different members of the ecosystem, platform vendors, solution vendors, the sort of supply-chain master customers and their customers, attract and retain and train? >> It's a fantastic question. One of the difficulties, both in the valley and everywhere, is that there is a skill gap. You want advanced data scientists, you want advanced machine learning experts, you want advanced AI specialists to actually come in. Luckily for us, we have about 1000 data scientists and AI specialists distributed across the globe. >> When you say 1000 data scientists and AI specialists, help us understand which layer they are at-- >> It could be all the way from, like, a BI person to people who can build advanced AI models. >> On top of an engine or a framework. >> We have our Watson APIs from which we build, then we have our Data Science Experience, which actually has some of the models, built on top of what's in the data platform, so we take that as well.
There are many different ways by which we can actually bring in the AI models, the machine learning models, to build on. >> Where do you find those people? Not just the sort of bench strength that's been with IBM for years, but to grow that skill base, and where are they attracted to? >> It's a great question. The valley definitely has a lot of talent, and then we also go outside. We have multiple centers of excellence in Israel, in India, in China, so we have multiple centers of excellence we gather from. It's difficult to get all the talent just from the US or just from one country, so naturally that talent has to be improved and enhanced all the way from fresh graduates out of college to more experienced folks in the actual profession. >> What about, when you say enhancing the pool of talent you have, could it also include productivity improvements, qualitative productivity improvements in the tools that make machine learning more accessible at any level? The old story of rising abstraction layers, where deep learning might help design statistical models by doing feature engineering and optimizing the search for the best model, that sort of stuff. >> Tools are very, very helpful. There are so many, from R tools to Python tools to scikit-learn and all of that, which can help the data scientist. The key part is the knowledge of the data scientist: for data science you need the algorithms, the statistical background, then you need your application software development background, and then you also need the domain expertise and engineering background. You have to bring all of them together. >> We don't have too many Michelangelos who are these all-around geniuses. There's the issue of how you get them to work more effectively together, and then, assuming each of those is in short supply, how do you make them more productive? >> Making them more productive is about giving them the right tools and resources to work with. I think that's the best way to do it, and in some cases in my organization, we just say, okay, we know that a particular person is skilled or upskilled in certain technologies and certain skill sets, and then we give them all the tools and resources to go and build. There's a constant education and training process that goes on; in fact, we have our entire Watson education platform that can be learned on Coursera today. >> George: Interesting. >> So people can go and learn how to build on the platform from Coursera. >> When we start talking with clients and with vendors, one of the things we hear, and we were kind of early in calling foul, is that in the open source big data infrastructure, this notion of mix-and-match and roll-your-own pipelines sounded so alluring, but in the end it was only the big Internet companies and maybe some big banks and telcos that had the people to operate that stuff, and probably even fewer who could build stuff on it.
Do we need to up-level or simplify some of those roles, because mainstream companies can't or won't have enough data scientists, or the other roles needed to make that whole team work? >> I think it will be a combination of both. One is we need to upskill our existing students with a STEM background, that's one thing, and the other aspect is, how do you upskill the existing folks in your companies with the latest tools, and how can you automate more things so that people who may not be formally schooled can still use the tool to deliver, without having to go through a rigorous curriculum to actually be able to deal with it. >> So what does that look like? Give us an example. >> Think of the tools available today. There are a lot of BI folks who can actually build things. BI is usually your trends and graphs and charts that come out of the data, which are simple things. So they understand the distributions and so on, but they may not know what a random forest model is. There are tools today that actually let you build them: once you give the data to the model, the tool gives you the outputs, so they don't really have to dig deep to understand the decision tree model and so on. They have the data, they can give it to tools like that. There are so many different tools which will actually give you the outputs, and then they can start building the app, the analytics application, on top of that rather than worrying about how to write 1000 or 2000 lines of code to actually build that model itself. >> The built-in machine learning models, end to end, integrated into, like, Pentaho, or, what's another example, I'm trying to think, I lost my, I'm having a senior moment. These happen too often now. >> We do have it in our own data science tools. We already have those models supported. You can actually go and call them in your web portal, be able to call the data and then call the model, and then you'll get all that. >> George: Splunk has something like that. >> Splunk does, yes. >> I don't know how functional it is, but it seems to be oriented towards, like, someone who built a dashboard can sort of wire up a model, and it gives you an example of what type of predictions or what type of data you need. >> True. In the Splunk case, I think it is more of a BI tool actually supporting a level of data science model support on the back end. I don't know, maybe I have to look at that, but in our case we have a complete Data Science Experience where, from the minute the data gets ingested, the storage, the transformation, the analytics, all of that can be done in less than 10 lines of code. You can actually do the whole thing; you just call those functions and it will be right there in front of you, end to end. That I think is much more powerful, and there are many, many tools today.
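To give a feel for what "ingestion to analytics in under ten lines" can look like, here is a generic pandas/scikit-learn sketch of that style of workflow. It stands in for, and is not, the actual Data Science Experience API; the inline CSV is a stand-in for a real ingestion source, and the column names are invented.

```python
import io
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical sensor log standing in for a real data source.
raw = io.StringIO(
    "vibration,temp,failed\n"
    "0.20,60,0\n0.30,65,0\n1.40,90,1\n1.10,85,1\n"
    "0.25,62,0\n1.30,88,1\n0.22,61,0\n1.20,86,1\n"
)

df = pd.read_csv(raw).dropna()                       # ingest and cleanse
X, y = df[["vibration", "temp"]], df["failed"]       # pick features and label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)         # train
print("holdout accuracy:", model.score(X_te, y_te))  # evaluate
```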
>> That's different from the specialized skills where you might have a Trifacta or Paxata or something similar for the wrangling, and then something else for the visualizations, like Alteryx or Tableau, and then on into modeling. >> A year or so ago, most data scientists had to spend a lot of their time doing data wrangling, because some of the models they can actually call very directly, but the wrangling is where they spent their time: how do you get the data, crawl the data, cleanse the data, etc. That is all now part of our data platform. It is already integrated into the platform, so you don't have to go through some of these things. >> Where are you finding the first success for that tool suite? >> Today it is almost fully integrated. For instance, I had a case where we exchange the data and integrate it into the Watson data platform, and the Watson APIs are a layer above in the platform where we actually use the analytics tools, the more advanced AI tools, but the simple machine learning models and so on are already integrated as part of the Watson data platform. It is going to become an integrated experience through and through. >> To connect Data Science Experience into the Watson IoT platform, and maybe a little higher, at this quasi-solution layer. >> Correct, exactly. >> Okay, interesting. >> We are doing that today, given the fact that we have so much happening on the edge side of things, which means mission-critical systems today are expecting stream analytics to get insights right there and be able to provide the outcomes at the edge, rather than pushing all the data up to your cloud and then bringing it back down. >> Let's talk about edge versus cloud. Obviously, for latency and bandwidth reasons, we can't forward all the data to the cloud, but there are different use cases. We were talking to Matei Zaharia at Spark Summit, and one of the use cases he talked about was video. You obviously can't send all the video back, and on an edge device you typically wouldn't have heavy-duty machine learning, but for a video camera, you might want to learn what anomalous behavior to call out for that camera. Help us understand some of the different use cases: how much data do you bring back, and how frequently do you retrain the models? >> In the case of video, it's so true that you want to do a lot of object recognition and so on in the video itself. We have tools today, we have cameras outside, where if a van goes by, it detects that particular object in the video live. Real-time streaming analytics, so we can do that today. What I'm seeing today in the market is in the transaction between the edge and the cloud. We believe the edge is an extension of the cloud, closer to the asset or device, and we believe that models are going to get pushed from the cloud closer to the edge, because the compute capacity, storage, and networking capacity are all improving. We are pushing more and more computing to the devices. >> When you talk about pushing more of the processing, you're talking more about prediction and inferencing than the training. >> Correct. >> Okay. >> I don't see so much of the training needing to be done at the edge. >> George: You don't see it. >> No, not yet at least. We see the training happening in the cloud, and then once the model has been trained, you come to a steady-state model, and that is the model you want to push. When you say model, it could be a bunch of coefficients.
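Here is a hedged sketch of what pushing "a bunch of coefficients" to the edge might look like: train in the cloud, serialize only the learned parameters, and evaluate on the device with plain arithmetic, so no ML library is needed on a 128 MB or 32 MB footprint. The payload format is invented for illustration.

```python
import json
import numpy as np
from sklearn.linear_model import LinearRegression

# Cloud side: train, then export only the learned parameters.
X = np.array([[1.0], [2.0], [3.0], [4.0]])   # e.g. motor temperature readings
y = np.array([2.1, 3.9, 6.2, 8.1])           # e.g. observed wear metric
model = LinearRegression().fit(X, y)
payload = json.dumps({
    "coef": model.coef_.tolist(),
    "intercept": float(model.intercept_),
})  # a few dozen bytes, versus shipping raw data or a full ML runtime

# Edge side: no ML library required, just a dot product on the coefficients.
params = json.loads(payload)

def edge_predict(features):
    """Evaluate the pushed model on-device; raw data need not leave the edge."""
    return sum(c * v for c, v in zip(params["coef"], features)) + params["intercept"]

print(edge_predict([3.5]))
```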
Those coefficients can be pushed onto the edge, and then when new data comes in, you evaluate, make decisions on it, create insights, and push them back as actions to the asset, and then that data can be pushed back into the cloud once a day or once a week, whatever it is, whatever the capacity of the device you have. And we believe that the edge can go across multiple scales. It could be as small as 128 MB, or it could be one or two servers sitting in your local data center, on premises. >> I've heard examples of 32 megs in elevators. >> Exactly. >> There might be more like a sort of bandwidth- and latency-oriented platform at the edge, and then throughput and volume in the cloud for training. And then there's the issue of, do you have a model at the edge that corresponds to that instance of a physical asset, and then do you have an ensemble, meaning the model that maps to that instance plus a master canonical model. Does that work? >> In some cases, I think you'll have a master canonical model and other subsidiary models based on the asset. It could be a fleet, so in the fleet of assets which you have, you can ask, does one asset in the fleet behave similarly to another asset in the fleet? Then you could build similarity models on that. But there will also be a model for managing this fleet of assets, which will be a different model compared to the asset similarity model; in terms of operations, in terms of optimization, if I want to make certain operations of that asset work more efficiently, that model could be completely different compared to when you look at the similarity of one asset with another. >> That's interesting, and then that model might fit into the information technology systems, the enterprise systems. Let's go a little lower level now, on the issue of intellectual property, joint development, and sharing and ownership. IBM, it's a nuanced subject, so we get different sorts of answers, definitive answers, from different execs, but at this high level, IBM says, unlike Google and Facebook, we will not take your customer data and make use of it. But there's more to it than that; it's not as black-and-white. Help explain that for us. >> The way you want to think about it, and I would definitely parrot back what our chairman always says: customers' data is customers' data, and customer insights are customer insights. So the way we look at it is, if you look at a black box engine, that could be your analytics engine, whatever it is, the data is the input and the insights are the outputs, and both belong to the customer. We don't take their data and marry it with somebody else's data and so forth, but we use the data to train the models, and the model is an abstract version of what that engine should be; the more we train it, the better the model becomes. We can then use it across many different customers, and as we improve the models, we might go back to the same customers and say, hey, we have an improved model, do you want to deploy this version rather than the previous one? We can go to customer Y and say, here is a model which we believe can take more of your data and be fine-tuned again, and then give it back to them. It is true that we don't actually take the data or the insights from one customer X and share them with another customer Y, but the models do get better. How you make that model more intelligent is what our job is, and that's what we do.
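The fleet-similarity idea mentioned above could be sketched as clustering per-asset behavior profiles, so that assets that behave alike share one subsidiary model under the master canonical model. The features and cluster count below are hypothetical, chosen only to illustrate the technique.

```python
import numpy as np
from sklearn.cluster import KMeans

# One row per asset in the fleet: a hypothetical behavior summary of
# [mean vibration, mean load, trips per day].
fleet = np.array([
    [0.2, 0.5, 120],   # elevator A
    [0.3, 0.6, 115],   # elevator B (behaves like A)
    [1.1, 0.9, 300],   # elevator C (heavy-use outlier)
])

# Group similar assets; each cluster could then share one behavior model.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fleet)
print(labels)  # e.g. [0, 0, 1]: A and B share a model, C gets its own
```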
>> If we go with precise terminology, it sounds like we're talking about the black box having learned from the customer data, with the insights also belonging to the customer. Let's say one of the examples we've heard was architecture, engineering, and consulting for large capital projects: there's a model that's common obviously across that vertical, but also across large capital projects like oil and gas exploration, something like that. There, the model sounds like it's going to get richer with each engagement. Let's pin down what in the model is not exposed to the next customer, and what part of the model that has gotten richer the next customer gets the benefit of. >> When we actually build a model, when we pass the data through, in some cases the model that is built out of customer X's data may not work with customer Y's data, in which case you actually build it from scratch again. Sometimes it does help, because of the similarity of the data in some instances: if the data from company X in oil and gas is similar to company Y's in oil and gas, then when you train that model it becomes more efficient, and the efficiency goes back to both customers. We will do that, but there are places where it would really not work. What we are in fact trying to do is build some kind of knowledge bundles, where what used to be a long process to train the model can now be shortened using that knowledge bundle of what we have actually gained. >> George: Tell me more about how that works. >> In retail, for instance, when we provide analytics from any kind of IoT sensor, whatever sensor data comes in, we train the model, we get analytics used for ads, pushing coupons, whatever it is. That knowledge, what you have gained from that retail work, could be models of models, it could be metamodels, whatever you built. That can actually serve many different customers. But with the first customer who engages with us, you don't have any data for the model; it's almost starting from ground zero, so that would take a longer time. When you are starting with a new industry and you don't have the data, it will take you a longer time to understand what the saturation or optimization point is, where you think the model cannot go any further. In some cases, once you do that, you can take that saturated or near-saturated model and improve it based on more data that comes from other segments. >> When you have a model that has gotten better with engagements, and we've talked about the black box which produces the insights after taking in the customer data: inside that black box, at the highest level we might call it the digital twin, with the broad definition that we started with; then there's a data model, which I guess could also be incorporated into the knowledge graph for the structure; and then would it be fair to call the operational model the behavior? >> Yes, how does the system perform or behave with respect to the data and the asset itself. >> And then underpinning that, the different models that correspond to the behaviors of different parts of this overall asset. So if we were to be really precise about this black box, what can move from one customer to the next, and what won't?
>> The overall model, supposing I'm using a random forest model, remains, but the actual coefficients, the feature vector, or whatever I use, could be totally different for different customers, depending on what kind of data they actually provide us. In data science or in analytics you have a whole plethora of algorithms, all the way from simple classification algorithms to very advanced predictive modeling algorithms. When you take the whole class and start with a customer, you don't know which model is really going to work for a specific use case; the customer might give you some idea, but you will not know exactly which model will work. Once you test it with one customer, the model could remain the same for a similar use case with some other customer, but the actual coefficients, the depth of the tree, will differ: in some cases it might be a two-level decision tree, in other cases it might be a six-level decision tree. >> So it is not like you take the model and the features and then just let different customers tweak the coefficients for the features. >> If you could do that, that would be great, but I don't know whether you can really do it; the data is going to change. The data is definitely going to change at some point in time, and in certain cases it might be directly correlated, where it can help, and in certain cases it might not help. >> What I'm taking away is that this is fundamentally different from traditional enterprise applications, where you could standardize business processes and the transactional data they were producing. Here it's going to be much more bespoke, because I guess the processes, the analytic processes, are not standardized. >> Correct, every business process is unique to a business. >> The Accentures of the world were trying to tell people that when SAP shipped packaged processes, which were pretty much good enough, they then convinced them to spend 10 times the license fee on customization. But is there a qualitative difference between the processes here and the processes in the old ERP era? >> I think it's kind of different from the ERP era and those processes. There, we are talking more about just data management. Here we're talking about data science. In the data management world, you're just moving data or transforming data and things like that: you're taking the data, transforming it into some other form, and then doing basic SQL queries to get some response, blah blah blah. That is a standard process; there is not much intelligence attached to it. But now you are trying to see, from the data, what kind of intelligence you can derive by modeling the characteristics of the data. That becomes a much tougher problem, so it is now one level higher of intelligence that you need to capture from the data itself, to serve a particular outcome from the insights you get from the model. >> This sounds like the differences are based on different business objectives, and perhaps data that's not as uniform: in enterprise applications you would standardize the data, here it's not standardized. >> I think because of the variety, the disparity of the businesses and the kinds of verticals and things like that you're looking at, getting a completely unified business model is going to be extremely difficult.
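A small illustration of that point: fitting the same decision-tree algorithm to two customers' (synthetic) datasets yields trees of different depths and split parameters, which is why the fitted parameters do not simply transfer even when the model family does. The data-generating function below is an invented stand-in for two customers' sensor data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def fit_for_customer(n_features, n_rows):
    # Synthetic stand-in for one customer's sensor data.
    X = rng.normal(size=(n_rows, n_features))
    y = (X[:, 0] + rng.normal(scale=0.3, size=n_rows) > 0).astype(int)
    return DecisionTreeClassifier(random_state=0).fit(X, y)

tree_x = fit_for_customer(n_features=3, n_rows=50)    # customer X
tree_y = fit_for_customer(n_features=3, n_rows=500)   # customer Y, more data

# Same algorithm, different fitted structure per customer.
print("customer X tree depth:", tree_x.get_depth())
print("customer Y tree depth:", tree_y.get_depth())
```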
>> Last question. Back-office systems: the highest level they got to was maybe the CFO, 'cause he had to sign off on a lot of the budget for the license and a much, much bigger budget for the SI, but he was getting something like closing your quarter in three days instead of two weeks. It was a control function. Who do you sell to now for these different systems, and what's the message? How much more strategic is it, and how do you sell the business impact differently? >> For the platform, we directly interact with the CIOs and CTOs, or the head of engineering. And the actual solutions, or the insights, we usually sell to the COOs, the operational folks, because the COO is responsible for showing productivity, efficiency, how much savings you can get on the bottom line and top line. So the insights would go through the COOs, or in some sense through their CTOs to the COOs, but the platform itself will go to the enterprise IT folks, in that order. >> This sounds like it's a platform and a solution sell, which requires, is that different from the sales motions of other IBM technologies, or is this a new approach? >> IBM is transforming along the way. All the strategy and the predictive direction we are aligned towards actually needs to be the key goal, because that's where the world is going. There are folks who, like Jeff Bezos talks about, say that in the olden days you needed 70% of the people to sell a 30% product; today it's a 70% product and you need 30% to actually sell the product. The model is completely changing the way we interact with customers. So I think that's what's going to drive it. We are transforming in that area. We are becoming more conscious about all the strategy and operations that we want to deliver to the market; we want to be able to enable our customers with a much broader value proposition. >> Will the industry solutions group and the Global Business Services teams work on these solutions? They've already been selling line-of-business, CXO-type solutions. So is this more of the same, just better, or is this really a higher level than IBM's ever gotten to in terms of strategic value? >> This is possibly, in decades, I would say, the highest level of value coming from a strategic perspective. >> Okay, on that note, Veeru, we'll call it a day. This was a great discussion, and we look forward to writing it up, clipping all the videos, and showering the internet with highlights. >> Thank you, George. Appreciate it. >> Hopefully we will get you back soon. >> It was a pleasure, absolutely. >> With that, this is George Gilbert. We're in our Palo Alto studio for Wikibon and theCUBE, and we look forward to coming back with Veeru sometime soon. (upbeat music)

Published Date : Aug 23 2017
