Spiros Xanthos, Splunk | Splunk .conf21


 

(Upbeat music) >> Hi everyone, and welcome back to the Cube's coverage of Splunk .conf 2021, virtual. We are here, live in the Splunk studios here in Silicon Valley. I'm John Furrier, host of the Cube. Spiros Xanthos, VP of product management of observability with Splunk, is here inside the Cube. Spiros, thanks for coming on. Great to see you. [Spiros Xanthos] - John, thanks for having me, glad to be here. >> We love observability. Of course we love Kubernetes, but that was before observability became popular. We've been covering KubeCon since it was invented, even before, during the OpenStack days, a lot of open source momentum with you guys with observability and also in the customer base. So I want to thank you for coming on. Give us the update. What is the observability story? It's clearly in the headlines; SiliconANGLE's headline is multi-cloud, observability, security, Splunk doubling down on all three. >> Correct. >> Big part of the story is observability. >> Correct. And you mentioned KubeCon. I was there last week as well. It seems that observability and security are the two most common buzzwords you hear these days, different from how it was when we started. But yeah, Splunk actually has made a huge investment in observability, starting with the acquisition of VictorOps three years ago, and then with Omnition and SignalFx, and last year with Plumbr, a synthetics company called Rigor, and Flowmill, a network monitoring company. Plus a lot of organic investment we've made over the last two years to essentially build an end-to-end observability platform that brings together metrics, traces, and logs, or otherwise infrastructure monitoring, log analytics, application monitoring, digital experience monitoring, all in one platform to monitor, let's say, traditional legacy and modern cloud native apps. >> For the folks that know SiliconANGLE and the Cube, know we've been really following this from the beginning. For SignalFx, remember when they started, they never changed their course. They have the right history, and for Splunk, you guys, same way, open source and cloud were poo-pooed upon, people went like, oh, it's not secure, they never were. Now it's the center of all the action. [Spiros Xanthos] - Yes. >> And so that's really cool. And thanks for doing that. The other thing I want to get your point on is, what does end-to-end observability mean? Because there are a lot of observability companies out there right now saying, hey, we're the solution, we're the utility, we're the tool, but I haven't seen a platform. So what's your answer to that? >> Yes. So observability, in my opinion, in the context of what you're describing, means two things. One is that when we say end-to-end observability, it means that instead of having, let's say, multiple monitoring tools that are siloed, let's say one for monitoring network, one for monitoring infrastructure, a separate one for APM, that do not work with each other, we bring all of this telemetry into one place, we connect it, and exactly because applications and infrastructure themselves are becoming one, you have a way to monitor all of it from one place. So that's observability. But the other thing about observability is that these environments tend to be a lot more complex. It's not just about connecting them, right?
It's also about having enough data and enough analytics to be able to make sense out of those environments and solve problems faster than you could in the past with traditional monitoring. >> That's a great definition. I've got to then ask you, one of the things that came out of KubeCon was clear, is that the personnel to hire, to run this stuff, isn't there for everyone, the skills gap problem. At the same time, automation is at an all-time high, people are automating and doing AIOps, GitOps, whatever you want to call it, a buzzword for basically automating the data and observability into the CI/CD pipeline, a huge trend right now. And the speed of developers is fast now. They're coding fast. They don't want to wait. >> I agree. And that's exactly what's happening, right? We went essentially from traditional IT, where developers would develop something that was deployed months later by some IT professional, to all of this coming together. But we're not stopping there, as you say, right, the shifting left is going earlier into the pipeline. Everyone expects, essentially, monitoring to happen at the speed of deployment. And I guess observability, again, has this now as a requirement. Observability is this idea, let's say, that I should be able to monitor my applications in real time and, you know, get information as soon as something happens. >> With the evolution of the shift-left trend, I would say for the people who don't know what shift left is, you put security at the beginning, not bolted on at the end, and developers can do it with automation, all that good stuff that they have. But how real is that right now in terms of it happening? Can you share some vision and ideas and anecdotal data on how fast shift left is, or are there still bottlenecks in security groups and IT groups? >> So there are bottlenecks for sure. In my opinion, we are, with, let's say, the shift-left or the DevSecOps trend, where IT and dev were maybe a few years ago. This is both a cultural evolution that has to happen, so security teams and developers have to come closer together, understand, let's say, the constraints and the requirements of each other so they can work better together, the way it happened with DevOps, and also a tooling problem, right? Still, observability or monitoring solutions are not working very well with security yet. We at Splunk, of course, make this a priority, and we have the platform to integrate all the data in one place. But I don't think it's generally something that we have achieved as an industry yet, including the cultural aspects of it. >> Is that why you think end-to-end is important, to hit that piece there so that people feel like it's all working together? >> I think end-to-end is important for two reasons, actually. One is that, essentially, as you say, you hit all the pieces from the point of deployment, let's say, all the way to production. But it's also because I think applications and infrastructure, ephemeral infrastructure with Kubernetes and microservices, introduce so much more complexity that you need a step function improvement in the tooling as well, right? So that you can keep up with the complexity. So bringing everything together and applying analytics on top is the way, essentially, to have this step function improvement in how your monitoring solution works, so that it can keep up with the complexity of the underlying infrastructure and application. >> That is a huge, huge point, Spiros.
I've got to double down on that with you and say, let's expand that, because that's the number one problem, taming the complexity without slowing down. Right? So what is the best practice for that? What do people do? Cause, I mean, I know it's evolving, it's getting better, but it's not always there, so what can people do to go faster? >> So, I will add that it's even more complex than just what the cloud native applications introduced, because especially large enterprises have to maintain their existing on-prem footprint, legacy applications that are still in production, and then still expand. So it's additive to what they have today, right? If somebody were to start from a clean slate, let's say started with Kubernetes today, maybe yes, we have the cloud native tooling to monitor that, but that's not the reality of most enterprises out there, right? So I think our goal at Splunk, at least, is to be able to essentially work with our customers through their digital transformation and cloud journey, to be able to support all their existing applications, but also help them bring those to the cloud and develop new applications in a cloud native fashion, let's say, and we have the tooling, I think, to support all of that, between, let's say, our original data platform and our metrics and traces platform that we developed further. >> That's awesome. And then one quick question on the customer side. If I'm a customer, I want observability, I want this, I want everything you just said. How do I tell the difference between a pretender and a player, the good solution and a bad solution? What are the signals that this is the real deal versus a fake product? >> Agreed. So, I mean, everyone obviously believes theirs is the real deal (laughing), I'm not sure if I will... >> You don't want to name names? >> Here's my perspective on what truly is a requirement for observability, right? First of all, I think we have moved past the time where, let's say, proprietary instrumentation and data collection was a differentiator. In fact, it actually is a problem today, if you are deploying that, because it creates silos, right? If I have a proprietary instrumentation approach for my application, that data cannot be connected to my infrastructure or my logs, let's say, right? So that's why we believe OpenTelemetry is the future, and we start there in terms of data collection. Once we standardize, let's say, data collection, then the problem moves to analytics. And that's, I think, where the future is, right? So observability is not just about collecting a bunch of data and bringing it back to the user. It's about making sense out of this data, right? So the name of the game is analytics and machine learning on top of the data, and of course, the more data you can collect, the better it is from that perspective. And of course, when we're talking about enterprises, scale, controls, compliance, all of these matter. And I think real time matters a lot as well, right? We cannot be alerting people minutes after a problem has happened, but within a few seconds, if we want to really be proactive. >> I think one thing I'd like to throw out there, maybe get your reaction to it, I think maybe one other thing might be enabling the customer to code on top of it, because I think trying to own the vertical stack is also risky as a vendor selling to a company, versus having the ability to add programming ability on top of it.
>> I completely agree, actually. >> You do? >> In general, giving more control to the users in what they do with their data, let's say, right? And even allowing them to use open source, whatever is appropriate for them, right? In combination, maybe, with a vendor solution when they don't want to invest themselves. >> Build their own apps, build your own experience. That's the way the world works. That's software. >> I agree. And again, Splunk from the beginning was about that, right? Like, we have thousands of apps built on top of our platform. >> Awesome. Well, I want to talk about open source and the work you're doing with OpenTelemetry. I think that's super important. Again, go back even five, 10 years ago. Oh my God, the cloud's not secure. Oh my God, open source has got security holes. It turns out it's actually the opposite now. So, you know, finally people woke up. No, but it's gotten better. So take us through OpenTelemetry and what you guys are doing with that. >> Yes. So first of all, my belief, my personal belief, is that there is no future where infrastructure is anything but open source, right? Because people do not actually trust closed solutions in terms of security. They prefer open source at this point. So I think that's the future. And in that sense, a few years ago, I guess our belief was that all data collection and instrumentation should be standards-based, first of all, so that the users have control, and second, should be open source. That's why we, at Omnition, the company I co-founded that was acquired by Splunk, were one of the maintainers of OpenCensus, and we brought together OpenCensus and OpenTracing in creating OpenTelemetry. And now OpenTelemetry is pretty much the de facto standard. Every vendor supports it, it's the second most active project in CNCF. And I think it's the future, right? Both because it frees up the data and breaks up the silos, but also because it has support from all the vendors. It's impossible for any single vendor to keep up with all this complexity and compete with the entire industry when we all come together. So I think it's a great success. I guess, kudos to everybody, kudos to CNCF as well, that was able to actually create this, and some others. >> And props to CNCF. Yeah, CNCF has done an amazing job, and having gone to all those events over the years, all the innovation has been phenomenal. I've got to ask about the silos, since you brought it up, it's come up multiple times. And again, I think this is important just to kind of put an exclamation point on: machine learning is based upon data. Okay? If you have silos, you have a high risk of having bad machine learning. >> Yes. >> Okay. You agree with that? >> Completely. >> So customers, they kind of understand this, right? If you have silos, that equals a bad future. >> Correct. >> Because machine learning is baked into everything now.
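For readers who want to see what this kind of vendor-neutral data collection looks like in practice, here is a minimal sketch using the OpenTelemetry Python SDK. It is illustrative only, not Splunk's implementation; the service name, span name, attribute, and console exporter are assumptions for the example, and a real deployment would typically export to an OpenTelemetry Collector or a backend instead.

```python
# Minimal OpenTelemetry tracing sketch (illustrative; not Splunk's implementation).
# The service name, span name, and attribute below are hypothetical, and the
# ConsoleSpanExporter stands in for a real OTLP exporter pointed at a collector.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Each unit of work becomes a span; any backend that speaks OpenTelemetry can
# receive it, which is the "no proprietary instrumentation silo" point above.
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "12345")
    # ... application logic would run here ...
```

Because the instrumentation is standards-based, swapping backends only means swapping the exporter, not re-instrumenting the application.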
And by collecting all the data and by having an ability, let's say, to connect the data together, metrics, traces, logs, events, incidents, then we can actually build much more effective tooling on top to provide answers back to the user with high confidence. So then users can start trusting the answers, as opposed to they themselves always having to figure out what the problem is. And I think that's the future. And we're just starting. >> Spiros, I want to ask you now, my final question is about culture. And you know, when you have scale with the cloud and data goodness, where people actually know the value of data and incorporate it into their applications, you have advantages, competitive advantages in some cases. But developers who are just coding love DevOps because it's infrastructure as code. They don't have to get into the weeds and go under the hood. Data has that same phenomenon right now, where people want access to data, but there are certain departments, like security departments and IT groups, holding back and slowing down the developers, who are waiting days and weeks when they want it in minutes and seconds. So the trend is, well, first of all, there's the culture of people not getting along, and they're hating each other or not liking each other. >> Yes. >> There's a little conflict, it's always kind of been there, but now more than ever, because why wait? >> I agree. >> How can companies shorten that cycle? Make it more cohesive, still decouple the groups, because you've got compliance. How do you maximize the best of a good security group, a good IT group, and enable developers to go as fast as possible? >> I agree with you, by the way, this is primarily cultural. And then, of course, there is a tooling gap as well, right? But I think we have to understand, let's say as a security group and as developers, what the needs of each other are, right? Why we're doing the things we're doing, because everybody has the right intentions to some extent, right? But the truth is there is pain. We ourselves, as we develop our own solutions in a cloud native fashion, see that, right? We want to move as fast as possible, but at the same time, we want to be compliant and secure, right? And we cannot compromise, actually, on security or compliance. I mean, that's really the wrong solution here. So I think we need to come together, understand what each other is trying to do, and actually we need to build better tooling that doesn't get in the way. Today, oftentimes it's painful to have, let's say, a compliance solution or a security solution, because it slows down development. I think we need, again, maybe a step function improvement in the type of tooling we have in this space, so it doesn't get in the way, right? It does the work, it provides, let's say, the guarantees the security team requires, but doesn't get in the way of developers. And today it doesn't happen like this most of the time. So we have some ways to go. >> And Garth was mentioning how you guys have some machine learning around different products, where one policy kind of gives some, you know, open guardrails for the developers to bounce around and do things until they have to put a new policy in place. Is that the answer, automating with automation? >> Big time. Automation is a big part of the answer, right?
I think we need to have tooling that, first of all, works quickly and provides the answers we need, and we'll have to have a way to verify that the answers are in place without slowing down developers. Splunk's, I mean, our view of DevSecOps in particular is around that, right? That we need to do it in a way that doesn't get in the way of, let's say, the developer and the velocity at which they're trying to move, but also, at the same time, collect all the data and make sure, you know, we know what's going on in the environment. >> Are AIOps and DevSecOps and GitOps all the same thing in your mind, or is it all just labels? >> It's not necessarily the same thing, because I think AIOps, in my opinion, applies, let's say, to even more traditional environments, where you're going to automate, let's say, IT workflows in legacy applications and infrastructure. GitOps, in my mind, is maybe the equivalent when you're talking about cloud native solutions, but as a concept, potentially they are very close, I guess. >> Well, great stuff. Great insight. Thanks for coming on the Cube. Final point is, what's your take this year? We're live, in person, but it's virtual, we're streaming out. It's kind of a hybrid media environment. Splunk's now in the media business with the studios and everything, great announcements. What's your takeaway from the keynote this week? What do you have to share with the audience, this week's summary? >> First of all, I really hope next year we're all going to be in one place, but still, given the limitations we had, I think it was a great production, and thanks to everybody who was involved. So my key takeaway is that we truly have moved to the data age, and data is at the heart of everything we do, right? And I think Splunk has always been that as a company, but I think we ourselves really embraced that in everything we do. Most of the problems we solve are data problems, whether it's security, observability, DevSecOps, et cetera. >> Yeah, and I would add to that by saying that my observation during the pandemic, now that we're coming, hopefully, to the end of it, is that you guys have been continuing to ship code, and with real product, not vaporware, the demos were real. And then the success on the open source. Congratulations. >> Thank you. >> All right. Thanks for coming on, we appreciate it. >> Thanks a lot. >> Cube coverage here at .conf, Splunk's annual conference, virtual. This is the Cube. We're here live at the studios, here at Splunk studios, for their event. I'm John Furrier with the Cube. Thanks for watching. (joyful tune)

Published Date: Oct 20, 2021



Veeru Ramaswamy, IBM | CUBEConversation


 

(upbeat music) >> Hi, we're at the Palo Alto studio of SiliconANGLE Media and theCUBE. My name is George Gilbert. We have a special guest with us this week, Veeru Ramaswamy, who is VP of the IBM Watson IoT platform, and he's here to fill us in on the incredible amount of innovation and growth that's going on in that sector of the world, and we're going to talk more broadly about IoT and digital twins as a broad new construct that we're seeing in how to build enterprise systems. So Veeru, good to have you. Why don't you introduce yourself and tell us a little bit about your background. >> Thanks George, thanks for having me. I've been in the technology space for a long time, and if you look at what's happening in the IoT, in the digital space, it's pretty interesting, the amount of growth, the amount of productivity and efficiency the companies are trying to achieve. It is just phenomenal, and I think we're now coming off the hype cycle and getting into real action in a lot of businesses. Prior to joining IBM, I was an officer and senior VP of data science with Cablevision, where I led the data strategy for the entire company, and prior to that I was at GE, one of the first two guys who actually built the San Ramon digital center, the GE digital center of excellence, looking at different kinds of IoT related projects and products, along with leading some of the UX and the analytics and the collaboration or the social integration. So that's the background. >> So just to set context, 'cause as we were talking before, there was another era when Steve Jobs was talking about the NeXT workstation and he talked about object orientation, and then everything was sprinkled with fairy dust about objects. So help us distinguish between IoT and digital twins, which GE was brilliant in marketing, 'cause that concept everyone could grasp. Help us understand where they fit. >> The idea of the digital twin is, how do you abstract the actual physical entity out there in the world and create an object model out of it? So it's very similar, in that sense, to what happened in the 90s for Steve Jobs, and if you look at that object abstraction, it is what is now happening in the digital twin space from the IoT angle. The way we look at IoT is, we look at every sensor which is out there which can actually produce a metric; every device which produces a metric we consider as a sensor, so it could be as simple as pressure, temperature, humidity sensors, or it could be as complicated as cardio sensors in your healthcare, and so on and so forth. The concept of bringing these sensors into the digital world, the data from that physical world to the digital world, is what makes it even more abstract from a programming perspective. >> Help us understand, so it sounds like we're going to have these fire hoses of data. How do we organize that into something that someone who's going to work on that data, someone who's going to program to it, how do they make sense out of it the way a normal person looks at a physical object? >> That's a great question. We're looking at sensors as devices that we can measure from, and that we call a device twin.
Taking the data that's coming from the device, we call that a device twin, and then your physical asset, the physical thing itself, which could be elevators, jet engines, any physical asset that we have, is what we call the asset twin, and there's a hierarchical model that we believe will have to exist for the digital twin to actually be constructed from an IoT perspective. The asset twins will basically encompass some of the device twins, and then we actually take that and represent the digital twin of that particular asset in the physical world. >> So that would be sort of like, as we were talking about earlier, an elevator might be the asset, but the devices within it might be the brakes and the pulleys and the panels for operating it. >> Veeru: Exactly. >> And it's then the hierarchy of these, or in manufacturing terms, the bill of materials, that becomes a critical part of the twin. What are some other components of this digital twin? >> When we talk about the digital twin, we don't just take the blueprint or schematics. We also think about the system, the process, the operation that goes along with that physical asset, and when we capture that and are able to model that in the digital world, then that gives you the ability to do a lot of things where you don't have to do it in the physical world. For instance, you don't have to train your people on the physical world, if it is critical systems and so on and so forth; you could actually train them in the digital world and then be able to allow them to operate on the physical world whenever it's needed. Or if you want to increase your productivity or efficiency with predictive models and so forth, you can test all the models in your digital world and then actually deploy them in your physical world.
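As a rough illustration of the device-twin/asset-twin hierarchy described here, the sketch below models an asset twin as a composition of device twins. All class names, fields, and readings are hypothetical, not the Watson IoT platform's actual data model.

```python
# Hypothetical sketch of the device-twin / asset-twin hierarchy described above.
# Names, fields, and readings are illustrative, not IBM's actual object model.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DeviceTwin:
    """Digital counterpart of a single sensor or device (e.g. a brake sensor)."""
    device_id: str
    latest_readings: Dict[str, float] = field(default_factory=dict)

    def ingest(self, metric: str, value: float) -> None:
        self.latest_readings[metric] = value


@dataclass
class AssetTwin:
    """Digital counterpart of a physical asset (e.g. an elevator or jet engine),
    composed of the device twins that sit beneath it in the hierarchy."""
    asset_id: str
    devices: List[DeviceTwin] = field(default_factory=list)

    def snapshot(self) -> Dict[str, Dict[str, float]]:
        # Roll the device-level state up into one asset-level view.
        return {d.device_id: dict(d.latest_readings) for d in self.devices}


# Usage: an elevator asset twin aggregating two of its device twins.
brake = DeviceTwin("brake-sensor-1")
door = DeviceTwin("door-sensor-1")
brake.ingest("temperature_c", 41.2)
door.ingest("open_close_cycles", 18250.0)
elevator = AssetTwin("elevator-17", devices=[brake, door])
print(elevator.snapshot())
```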
You're talking about things like operations optimization and predictive maintenance and all of that, which you can actually do from the digital world, it's all on the digital twin. You also can look into various kinds of business models; now, instead of a product, you can actually have a service out of the product and then be able to have different business models like power by the hour, pay per use, and those kinds of things. So these kinds of business models can be tried out. Think about what's happening in the world of Airbnb and Uber: nobody owns any asset, but they're still able to make revenue by pay per use or power by the hour. I think that's an interesting model. I don't think it's been tested out so much in the physical asset world, but I think that could be an interesting model that you could actually try. >> One thing that I picked up at the Genius of Things event in Munich in February was that we really have to rethink software markets, in the sense that IBM's customers become, in a way, your channel, sometimes, because they sell to their customers, almost like a supply chain master or something similar, and also pricing changes: we've already migrated, or are migrating, from perpetual licenses to software as a service, but now we could do unit pricing or SLA-based pricing, in which case you as a vendor have to start getting very smart about it, you own your customers' risk in meeting an SLA, so it's almost more like insurance, actuarial modeling. >> Correct, so the way we want to think about it is, how can we make our customers more, what do you call, monetizable, their products to be monetizable with their customers, and in that case, when we enter into a service level agreement with our customers, there's always that risk of whether what we deliver makes their products and services more successful. There's always a risk component, which we will have to work on with the customers to make sure that the combined model of what our customers are going to deliver is going to be more beneficial, contributing more to both bottom line and top line. >> That implies that you're modeling, someone's modeling, the risk from you, the supplier, to your customer, as a vendor, to their customer. >> Right. >> That sounds tricky. >> I'm pretty sure we have a lot of financial risk modeling entered into our SLAs when we actually go to our customers. >> So that's a new business model for IBM, for IBM's sort of supply chain master type customers, if that's the right word. As this capability, this technology, pervades more industries, customers become software vendors, or if not software vendors, services vendors for software-enhanced products or service-enhanced products. >> Exactly, exactly. >> Another thing, I'd listened to a briefing by IBM Global Services where they thought, ultimately, this might end up where far more industries are engineered to order instead of make to stock. How would this enable that? >> I think the way we want to think about it is that most of the IoT based services will actually start by co-designing and co-developing with your customers. That's where you're going to start. That's how you're going to start. You're not going to say, here are my 100 data centers, and you bring your billion devices and connect, and it's going to happen. We are going to start that way, and then our customers are going to say, hey, by the way, I have these use cases that we want to start doing, so that's why the platform becomes so important.
Once you have the platform, now you can scale individual silos as vertical use cases for them. We provide the platform, and the use cases start being driven on top of the platform. So the scale becomes much easier for the customers. >> So this sounds like the traditional way an application vendor might turn into a platform vendor, which is a difficult transition in itself, but you take a few use cases and then generalize into a platform. >> We call those application services. An application service basically draws on what we call platform services, which actually provide you the capabilities. So for instance, asset management. Asset management can be done on an oil and gas rig, you can look at asset management in a power turbine, you can look at asset management in a jet engine. You can do asset management across any different vertical, but that is a common horizontal application, so most of the time you get 80% of your asset management APIs, if you will. Then you're able to scale across multiple different vertical applications and solutions. >> Hold that thought, 'cause we're going to come back to joint development and leveraging expertise from vendor and customer and sharing that. Let's talk just at a high level. One of the things that I keep hearing is that in Europe, Industry 4.0 is sort of the hot topic, and in the States, it's more digital twins. Help parse that out for us. >> So the way we believe the digital twin should be viewed is a component view. What we mean by the component view is that you have your knowledge graph representation of the real assets in the digital world, and then you bring in your IoT sensors and connections to the models, then you have your functional, logical, physical models that you want to bring into your knowledge graph, and then you also want to be able to give the ability to search, visualize, analyze, kind of an intelligent experience for the end consumer, and then you want to bring in your simulation models, when you do the actual simulations in the digital world, and then your enterprise asset management, your ERP systems, all of that, and when you connect them, when you're able to build a knowledge graph, that's when the digital twin really connects with your enterprise systems, sort of bringing the OT and the IT together. >> So this is sort of to try and summarize, 'cause there are a lot of moving parts in there. You've got the product hierarchy, which product guys call the bill of materials, sort of the explosion of parts into assemblies and sub-assemblies, and that provides like a structure, a data model. Then the machine learning models and the different types of models could represent behavior, and then when you put a knowledge graph across that structure and behavior, is that what makes it simulation ready? >> Yes, so you're talking about entities and connecting these entities with the actual relationships between these entities. That's the graph that holds the relationships, your nodes and your links. >> And then integrating the enterprise systems, and maybe the lower level operational systems, that's how you affect business processes. >> Correct. >> For efficiency or optimization, automation. >> Yes, take a look at what you can do with, like, shop floor optimization.
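As a hedged sketch of the component view just described, the snippet below builds a tiny knowledge graph that ties an asset twin to its device twins, its behavioral and simulation models, and an ERP record. The entity names and relationship labels are made up for illustration and are not IBM's schema.

```python
# Toy knowledge graph for the component view of a digital twin.
# Node names and relationship labels are hypothetical, not IBM's schema.
import networkx as nx

g = nx.DiGraph()

# Asset twin and its device twins (the bill-of-materials hierarchy).
g.add_edge("elevator-17", "brake-sensor-1", relation="has_device")
g.add_edge("elevator-17", "door-sensor-1", relation="has_device")

# Functional / behavioral and simulation models attached to the asset.
g.add_edge("elevator-17", "wear-prediction-model", relation="scored_by")
g.add_edge("elevator-17", "door-cycle-simulation", relation="simulated_by")

# Links into enterprise systems (ERP, enterprise asset management).
g.add_edge("elevator-17", "erp-work-order-8842", relation="maintained_under")

# Search / traverse: everything one hop away from the asset twin.
for _, target, data in g.out_edges("elevator-17", data=True):
    print(f"elevator-17 --{data['relation']}--> {target}")
```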
You have all the bill of materials you need to know from your existing ERP systems, and then you actually have the real parts that are coming to your shop floor to manage, and now, depending on whether you want to repair, you want to replace, you want an overhaul, you want to modify, whatever it is, you want to look at your existing bill of materials and see, okay, do I have it first, do we need more? Do we need to order more? So your ordering system naturally gets integrated into that, and then you have to integrate the data that's coming from these models and the availability of the existing assets you have. You can integrate it and say, how fast can you actually start moving these out of your shop, into the... >> Okay, that's where you translate, essentially, what's more like intelligence about an object, or a rich object, into sort of operational implications. >> Veeru: Yes. >> Okay, operational process. Let's talk about customer engagement so far. There's intense interest in this. I remember in the Munich event, they had to shut off attendance because they couldn't find a big enough venue. >> Veeru: That's true. >> So what are the characteristics of some of the most successful engagements, or the ones that are promising? Maybe it's a little early to say successful. >> So, I think the ways you can definitely see success from customer engagement are twofold. One is show what's possible. Show what's possible with, let's say, the device connections, the collection of data, all of that, so that's one part of it. The second part is understand the customer. The customer has certain requirements in their existing processes and operations. Understand that, and then deliver based on what solutions they are expecting, what applications they want to build. How you bring them together is what we're thinking about. That Munich center you talked about, we are actually bringing in chip manufacturers, sensor manufacturers, device manufacturers. We are bringing in network providers. We are bringing in SIs, system integrators, all of them into the fold, and showing what is possible, and then your partners enable you to get to market faster. That's how we see the engagement with customers should happen, in a much faster manner, and show them what's possible. >> It sounds like the chip industry and Moore's law: for many years it wasn't deterministic that we would double things every 18 months or two years, it was actually an incredibly complex ecosystem web where everyone's sort of product release cycles were synchronized so as to enable that. And it sounds like you're synchronizing the ecosystem to keep up.
On the platform side, we have all the developers, the engineers who build the platform, all the device connections and all of that, to make the connections. So you need the best software development engineers to build these on the platform, and then you also need the solution builders, who are in front of the customer understanding what kind of solutions they want to build. Solutions could be anything. It could be predictive maintenance, it could be asset management, it could be remote monitoring and diagnostics. It could be any of these solutions that you want to build, and then the solution builders and the platform builders work together to make sure it's a holistic approach for the customer at the final deployment. >> And how much of the solution building in the early stages is typically IBM, or is there some expertise that the customer has to contribute, almost like agile development, but not two programmers, but like 500 and 500 from different companies? >> 500 is a bit too much. (laughs) I would say this is the concept of co-designing and co-development. We definitely want the developers, the engineers, the subject matter experts from our customers, and we also need our analytics experts and software developers to come and sit together and understand what the use case is. How do we actually bring the optimized solution for the customer? >> What level of expertise, or what type of expertise, are the developers who are contributing to this effort, in terms of, do they have to, if you're working with manufacturing, let's say auto manufacturing, do they have to have automotive software development expertise, or are they more generically analytics, and the automotive customer brings in the specific industry expertise? >> It depends. In some cases, with GBS for instance, we have dedicated people for that particular vertical, so we understand some of this industry knowledge. In some cases we don't, in some cases it actually comes from the customer. But it has to be an aggregation of the subject matter experts with our platform developers and solution developers sitting together, figuring out what the solution is. Literally going through, think about how we actually bring in the UX. What does a typical day of a persona look like? We always, by the way, believe in augmented intelligence, which means the human and the machine work together, rather than a complete replacement that gives you the answer for everything you ask for. >> It's a debate that keeps coming up. Doug Engelbart sort of had his own answer, like 50 years ago, when he sort of set the path for modern computing by saying we're not going to replace people, we're going to augment them, and this is just a continuation of that. >> It's a continuation of that. >> Like UX design, it sounds like someone on the IBM side might be talking to the domain expert and the customer to say, how does this workflow work? >> Exactly. So we have these design thinking sessions with our customers, and then based on that we take that knowledge back, we build our mockups, we build our wireframes, visual designs, and the analytics and software that goes behind it, and then we provide it on top of the platform. So most of the platform work, the standard, what do you call, table stakes connections, collection of data, all of that is already existing, then it's one level above as to what particular solution a customer wants. That's when we actually...
>> In terms of getting the customer organization aligned to make this project successful, what are some of the different configurations? Who needs to be a sponsor? Where does budget typically come from? How long are the pilots? That sort of stuff, to set expectations. >> We believe in all the agile thinking, agile development, we believe in all of that. It's almost a given now. So it depends on where the customer comes from. The customer could actually directly come and sign up to our platform on the existing cloud infrastructure, and then they will say, okay, we want to build applications. Then there are some customers, really big customers, large enterprises, who want to say, give me the platform, we have our solution folks, we want to work and be on board with you, but we also want somebody who understands building solutions. We integrate with our solution developers, and then we build on top of that, they build on top of that, actually. So you have that model as well, and then you have GBS, which actually does this, has been doing this for years, decades. >> George: Almost like from the silicon. >> All the way up to the application level. >> When the customer is not outsourcing completely, the custom app that they need to build, in other words, is when they need to go to GBS, Global Business Services, whereas if they want a semi-packaged app, can they go to the Industry Solutions Group? >> Yes. >> I assume it's the IoT Industry Solutions Group. >> Solutions group, yes. >> They then take, it's almost maybe a framework or an existing application that needs customization. >> Exactly, so we have IoT for manufacturing, IoT for retail, IoT for insurance, IoT for you name it. We have all these industry solutions, so there would be some amount of template which already exists in some fashion, so when GBS gets a request saying here is customer X coming and asking for a particular solution, they would come back to the IoT solutions group, which already has some template solutions we can start from, rather than building from scratch. Your speed to market, again, is much faster, and then based on that, if it's something that has to be customized, both of them work together with the customer to make that happen, and they leverage our platform underneath to do all the connection, collection, data analytics, and so on and so forth that goes along with that. >> Tell me this, from everything we hear, there's a huge talent shortage. Tell me in which roles is there the greatest shortage, and then how do different members of the ecosystem, platform vendors, solution vendors, sort of supply-chain-master customers and their customers, how do they attract and retain and train? >> It's a fantastic question. One of the difficulties, both in the valley and everywhere across, is that there is a skill gap. You want advanced data scientists, you want advanced machine learning experts, you want advanced AI specialists to actually come in. Luckily for us, we have about 1000 data scientists and AI specialists distributed across the globe. >> When you say 1000 data scientists and AI specialists, help us understand which layer are they-- >> It could be all the way from, like, a BI person, all the way to people who can build advanced AI models. >> On top of an engine or a framework. >> We have our Watson APIs from which we build, then we have our Data Science Experience, which actually has some of the models, built on top of the Watson Data Platform, so we take that as well.
There are many different ways by which we can actually bring the AI models, the machine learning models, to build on. >> Where do you find those people? Not just the sort of bench strength that's been with IBM for years, but to grow that skill base, and then where are they also attracted to? >> It's a great question. The valley definitely has a lot of talent, and then we also go outside. We have multiple centers of excellence in Israel, in India, in China. So we have multiple centers of excellence we gather from. It's difficult to get all the talent just from the US or just from one country, so naturally that talent has to be improved and enhanced, all the way from fresh graduates from colleges to more experienced folks in the actual profession. >> What about, when you say enhancing the pool of talent you have, could it also include productivity improvements, qualitative productivity improvements in the tools that make machine learning more accessible at any level? The old story of rising abstraction layers, where deep learning might help design statistical models by doing feature engineering and optimizing the search for the best model, that sort of stuff. >> Tools are very, very helpful. There are so many. We have everything from R tools to Python tools to scikit-learn and all of that, which can help the data scientist. The key part is the knowledge of the data scientist, so for data science, you need the algorithmic, statistical background, then you need your application software development background, and then you also need the domain or engineering background. You have to bring all of them together. >> We don't have too many Michelangelos who are these all-around geniuses. There's the issue of, how do you get them to work more effectively together, and then, assuming even each of those are in short supply, how do you make them more productive? >> So making them more productive is by giving them the right tools and resources to work with. I think that's the best way to do it, and in some cases in my organization, we just say, okay, we know that a particular person is skilled or upskilled in certain technologies and certain skill sets, and then give them all the tools and resources for them to go and build. There's a constant education and training process that goes on. In fact, we have our entire Watson IoT platform that can be learned on Coursera today. >> George: Interesting. >> So people can go and learn how to build on the platform from Coursera. >> When we start talking with clients and with vendors, one of the things we hear, and we were kind of early, I think, in calling foul on it, is that in the open source infrastructure, the big data infrastructure, this notion of mix-and-match and roll-your-own pipelines sounded so alluring, but in the end it was only the big Internet companies, and maybe some big banks and telcos, that had the people to operate that stuff, and probably even fewer who could build stuff on it.
Do we need to up-level or simplify some of those roles, because mainstream companies can't or won't have enough data scientists or the other roles needed to make that whole team work? >> I think it will be a combination of both. One is we need to upskill our existing students with the STEM background, that's one thing, and the other aspect is, how do you upskill the existing folks in your companies with the latest tools, and how can you automate more things so that people who may not be schooled will still be able to use the tool to deliver things, but they don't have to go through a rigorous curriculum to actually be able to deal with it. >> So what does that look like? Give us an example. >> Think of the tools today. There are a lot of BI folks who can actually build. BI is usually your trends and graphs and charts that come out of the data, which are simple things. So they understand the distribution and so on and so forth, but they may not know what a random forest model is. If you look at tools today that actually let you build them, once you give the data to that model, it actually gives you the outputs, so they don't really have to dig deep and understand the decision tree model and so on and so forth. They have the data, they can give it the data, tools like that. There are so many different tools which will actually give you the outputs, and then they can actually start building the app, the analytics application, on top of that, rather than being worried about how do I write 1000 lines of code or 2000 lines of code to actually build that model itself. >> The inbuilt machine learning models, end to end, integrated like Pentaho, or, what's another example, I'm trying to think, I lost my, I'm having a senior moment. These happen too often now. >> We do have it in our own data science tools. We already have those models supported. You can actually go and call those in your web portal and be able to call the data and then call the model, and then you'll get all that. >> George: Splunk has something like that. >> Splunk does, yes. >> I don't know how functional it is, but it seems to be oriented toward, like, someone who built a dashboard can sort of wire up a model, and it gives you an example of what type of predictions or what type of data you need. >> True, in the Splunk case, I think it is more of a BI tool actually supporting a level of data science model support at the back. I do not know, maybe I have to look at it, but in our case we have a complete data science experience where you actually start from the minute the data gets ingested; you can do the storage, the transformation, the analytics, and all of that can be done in less than 10 lines of coding. You can actually do the whole thing. You just call those functions, and it will be right there in front of you. So end to end you can do that. That, I think, is much more powerful, and there are tools, there are many, many tools today. >> So you're saying that Data Science Experience is an end-to-end pipeline and therefore can integrate what were boundaries between separate products. >> The boundary is becoming narrower and narrower in some sense. You can go all the way from data ingestion to the analytics in just a few clicks or a few lines of code. That's what's happening today. An integrated experience, if you will.
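To make the "ingestion to analytics in a few lines" point concrete, here is a minimal sketch using open source tools (pandas and scikit-learn) rather than IBM's Data Science Experience itself; the file name and column names are hypothetical.

```python
# Illustrative "few lines from ingestion to analytics" sketch using open source
# tools; the CSV file and column names are hypothetical, and this is not the
# Data Science Experience product itself.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("sensor_readings.csv")                 # ingest
X = df.drop(columns=["failure_within_30d"])             # features
y = df["failure_within_30d"]                            # label
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = RandomForestClassifier(n_estimators=100)        # model the BI user never hand-codes
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```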
That's different from the specialized skills, where you might have a Trifacta, Paxata, or something similar for the wrangling, and then something else for sort of the visualizations, like Alteryx or Tableau, and then into modeling. >> A year or so ago, most data scientists used to spend a lot of time doing data wrangling, because some of the models they can actually call very directly, but the wrangling is actually where they spend their time. How do you get the data, crawl the data, cleanse the data, etc. That is all now part of our data platform. It is already integrated into the platform, so you don't have to go through some of these things. >> Where are you finding the first success for that tool suite? >> Today it is almost integrated; for instance, I had a case where we exchange the data, we integrate that into the Watson Data Platform, and the Watson APIs are a layer above that in the platform, where we actually use the analytics tools, the more advanced AI tools, but the simple machine learning models and so on and so forth are already integrated as part of the Watson Data Platform. It is going to become an integrated experience through and through. >> To connect Data Science Experience into the Watson IoT platform, and maybe a little higher at this quasi-solution layer. >> Correct, exactly. >> Okay, interesting. >> We are doing that today, given the fact that we have so much happening on the edge side of things, which means mission critical systems today are expecting stream analytics to get insights right there and then be able to provide the outcomes at the edge, rather than pushing all the data up to your cloud and then bringing it back down. >> Let's talk about edge versus cloud. Obviously, for latency and bandwidth reasons, we can't forward all the data to the cloud, but there are different use cases. We were talking to Matei Zaharia at Spark Summit, and one of the use cases he talked about was video. You can't send all the video back, obviously, and you typically on an edge device wouldn't have heavy-duty machine learning, but for a video camera, you might want to learn what is anomalous behavior to call out for that camera. Help us understand some of the different use cases, and how much data do you bring back, and how frequently do you retrain the models? >> In the case of video, it's so true that you want to do a lot of object recognition and so on and so forth in the video itself. We have tools today, we have cameras outside where, if a van goes by, it detects the particular object in the video live. Real-time streaming analytics, so we can do that today. What I'm seeing today in the market is in the interaction between the edge and the cloud. We believe the edge is an extension of the cloud, closer to the asset or device, and we believe that models are going to get pushed from the cloud, closer to the edge, because the compute capacity and storage and the networking capacity are all improving. We are pushing more and more computing to the devices. >> When you talk about pushing more of the processing, you're talking more about prediction and inferencing than the training. >> Correct. >> Okay. >> I don't think so much of the training needs to be done at the edge. >> George: You don't see it. >> No, not yet at least. We see the training happening in the cloud, and then once the model has been trained, you come to a steady-state model, and that is the model you want to push. When you say model, it could be a bunch of coefficients.
Those could be pushed onto the edge, and then when new data comes in, you evaluate, make decisions on that, create insights, and push them back as actions to the asset, and then that data can be pushed back into the cloud once a day or once a week, whatever that is, whatever the capacity of the device you have, and we believe that the edge can go across multiple scales. We believe it could be as small as 128 MB, or it could be a one or two U box sitting in your local data center on premise. >> I've heard examples of 32 megs in elevators. >> Exactly. >> There might be more like a sort of bandwidth- and latency-oriented platform at the edge, and then throughput- and volume-oriented in the cloud for training. And then there's the issue of, do you have a model at the edge that corresponds to that instance of a physical asset, and then do you have an ensemble, meaning the model that maps to that instance, plus a master canonical model? Does that work? >> In some cases, I think you'll have a master canonical model and other subsidiary models based on what the asset is. It could be a fleet, so within the fleet of assets which you have, you can ask, does one asset in the fleet behave similarly to another asset in the fleet, and then you could build similarity models on that. But then there will also be a model to look at, now that I have to manage this fleet of assets, which will be a different model compared to the asset similarity model, in terms of operations, in terms of optimization. If I want to make certain operations of that asset work more efficiently, that model could be completely different compared to when you look at the similarity of one asset with another. >> That's interesting, and then that model might fit into the information technology systems, the enterprise systems. Let's talk, I want to go a little lower level now, about the issue of intellectual property, joint development, and sharing and ownership. With IBM it's a nuanced subject, so we get different sorts of answers, definitive answers, from different execs, but at a high level, IBM says, unlike Google and Facebook, we will not take your customer data and make use of it, but there's more to it than that. It's not as black-and-white. Help explain that for us. >> The way you want to think about it is, I would definitely parrot back what our chairman always says: customers' data is customers' data, and customer insights are customer insights. So the way we look at it is, if you look at a black box engine, that could be your analytics engine, whatever it is, the data is their input and the insights are the output, so the insights and outputs belong to them. We don't take their data and marry it with somebody else's data and so forth, but we use the data to train the models, and the model is an abstract version of what that engine should be, and the more we train, the better the model becomes. And then we can use it across many different customers, and as we improve the models, we might go back to the same customers and say, hey, we have an improved model, do you want to deploy this version rather than the previous version of the model? We can go to customer Y and say, here is a model which we believe can take more of your data and be fine-tuned again, and then give it back to them. It is true that we don't take their data and share the data or the insights from one customer X to another customer Y, but the models do get better.
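Picking up the edge pattern described a moment ago (train in the cloud, ship the model as a bunch of coefficients, score new readings locally, and batch data back up), here is a hedged sketch. The file path, feature layout, and upload cadence are assumptions for illustration, not the Watson IoT edge runtime.

```python
# Hedged sketch of the cloud-trains / edge-scores split described above.
# File path, feature layout, and upload cadence are hypothetical.
import json
import math

from sklearn.linear_model import LogisticRegression


# --- Cloud side: train, then export only the coefficients -------------------
def train_and_export(X, y, path="edge_model.json"):
    model = LogisticRegression().fit(X, y)
    with open(path, "w") as f:
        json.dump({"coef": model.coef_[0].tolist(),
                   "intercept": float(model.intercept_[0])}, f)


# --- Edge side: tiny scorer with no ML framework required -------------------
def load_model(path="edge_model.json"):
    with open(path) as f:
        return json.load(f)


def score(reading, payload):
    # Logistic-regression inference in plain Python: sigmoid(w . x + b).
    z = payload["intercept"] + sum(w * x for w, x in zip(payload["coef"], reading))
    return 1.0 / (1.0 + math.exp(-z))


daily_buffer = []  # readings held locally, uploaded to the cloud once a day
```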
How to make that model more intelligent is what our job is, and that's what we do.
>> If we go with precise terminology, it sounds like the black box has learned from the customer data, and the insights also belong to the customer. One of the examples we've heard was architecture, engineering and consulting for large capital projects, where a model obviously spans that vertical, but also large capital projects like oil and gas exploration. There, the model sounds like it's going to get richer with each engagement. Let's pin down what in the model is not exposed to the next customer, and what part of the model that has gotten richer the next customer gets the benefit of.
>> When we actually build a model and pass the data through, in some cases a model built out of customer X's data may not work with customer Y's data, in which case you build it from scratch again. In other cases it does help, because of the similarity of the data: if the data from company X in oil and gas is similar to company Y's in oil and gas, then when you train that model it becomes more efficient, and that efficiency goes back to both customers. We will do that, but there are places where it really would not work. What we are trying to do is build some kind of knowledge bundles, where what used to be a long process of training the model can now be shortened using the knowledge bundle of what we have already gained.
>> George: Tell me more about how it works.
>> In retail, for instance, when we provide analytics from any kind of IoT sensor, whatever sensor data comes in, we train the model and we get analytics used for ads, pushing coupons, whatever it is. That knowledge gained in retail, it could be models of models, it could be metamodels, whatever you have built, can serve many different customers. But with the first customer who engages with us, you don't have any data for the model; you're almost starting from ground zero, so it takes longer. When you are starting in a new industry and you don't have the data, it takes you longer to understand the saturation or optimization point where you think the model cannot go any further. Once you do that, you can take that saturated or near-saturated model and improve it based on more data that comes from other segments.
>> So you have a model that has gotten better with engagements, and we've talked about the black box which produces the insights after taking in the customer data. Inside that black box, at the highest level we might call it the digital twin, with the broad definition we started with; then there's a data model, which I guess could also be incorporated into the knowledge graph for the structure; and then would it be fair to call the operational model the behavior?
>> Yes, how does the system perform or behave with respect to the data and the asset itself.
>> And then underpinning that, the different models that correspond to the behaviors of different parts of this overall asset. So if we were to be really precise about this black box, what can move from one customer to the next and what won't?
>> The overall model, supposing I'm using a particular modeling approach, that remains, but the actual coefficients or the feature vector, whatever I use, could be totally different across customers, depending on what kind of data they provide us. In data science or analytics you have a whole plethora, all the way from simple classification algorithms to very advanced predictive modeling algorithms. When you start with a customer, you don't know which model is really going to work for a specific use case; the customer might give you some idea, but you will not know exactly which model will work until you test it. With one customer the model could remain the same for the same kind of use case at another customer, but the actual coefficients and the depth differ: in some cases it might be a two-level decision tree, in other cases a six-level decision tree.
>> So it is not like you take the model and the features and then just let different customers tweak the coefficients for the features.
>> If you could do that, that would be great, but I don't know whether you really can; the data is going to change. The data is definitely going to change at some point in time. In certain cases it might be directly correlated, where that can help; in other cases it might not.
>> What I'm taking away is that this is fundamentally different from traditional enterprise applications, where you could standardize the business processes and the transactional data they were producing. Here it's going to be much more bespoke, because the analytic processes are not standardized.
>> Correct, every business process is unique to that business.
>> The Accentures of the world were trying to tell people that SAP shipped packaged processes which were pretty much good enough, but they convinced customers to spend 10 times the license fee on customization. Is there a qualitative difference between the processes here and the processes in the old ERP era?
>> I think it's different. In the ERP era the processes were really just data management; here we're talking about data science. In the data management world you're just moving or transforming data: you take the data, transform it to some other form, and run basic SQL queries to get some response. That is a standard process, and there is not much intelligence attached to it. Now you are trying to see what kind of intelligence you can derive from the data by modeling the characteristics of the data. That becomes a much tougher problem, so it's one level higher of intelligence that you need to capture from the data itself, to serve a particular outcome from the insights you get from the model.
>> This sounds like the differences are based on, one, different business objectives, and perhaps data that's not as uniform; in enterprise applications you would standardize the data, whereas here it's not standardized.
>> I think because of the disparity of the businesses and the kinds of verticals you're looking at, getting a completely unified business model is going to be extremely difficult.
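A small sketch of that "same model class, customer-specific fit" point, assuming two purely synthetic stand-in customer datasets and scikit-learn: the algorithm and the search space are identical for every customer, but the tree depth selected by cross-validation (two levels versus six, in the spirit of the example above) and the fitted parameters come out customer-specific. The dataset names and numbers are hypothetical, not anything from the Watson pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Two stand-in "customer" datasets with deliberately different structure (purely synthetic).
customers = {
    "customer_x": make_classification(n_samples=500, n_features=10, n_informative=3, random_state=1),
    "customer_y": make_classification(n_samples=500, n_features=10, n_informative=8, random_state=2),
}

for name, (X, y) in customers.items():
    # The model class and the search space are the same for every customer...
    search = GridSearchCV(
        DecisionTreeClassifier(random_state=0),
        param_grid={"max_depth": [2, 3, 4, 5, 6]},   # two-level vs. six-level trees, as discussed
        cv=5,
    )
    search.fit(X, y)
    # ...but the depth, splits, and feature importances that come out are customer-specific.
    print(name, "selected depth:", search.best_params_["max_depth"])
```

The reusable piece is the pipeline itself; the fitted tree stays tied to the customer whose data trained it.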
>> Last question. With back-office systems, the highest level they got to was maybe the CFO, because he had to sign off on a lot of the budget for the license and a much bigger budget for the SI, but he was getting something like closing the quarter in three days instead of two weeks. It was a control function. Who do you sell to now for these systems, and what's the message? How much more strategic is it, and how do you sell the business impact differently?
>> For the platform, we interact directly with the CIOs and CTOs or the heads of engineering. The actual solutions or insights we usually sell to the COOs, the operational folks, because the COO is responsible for showing productivity, efficiency, and how much savings you can deliver on the bottom line and the top line. So the insights go through the COOs, or in some sense through their CTOs to the COOs, but the platform itself goes to the enterprise IT folks.
>> This sounds like a platform and a solution sell. Is that different from the sales motions of other IBM technologies, or is this a new approach?
>> IBM is transforming along the way. All the strategies and predictive capabilities we are aligned towards need to be the key goal, because that's where the world is going. As Jeff Bezos talks about, in the olden days you needed 70% of the people to sell a 30% product; today it's a 70% product and you need 30% of the people to sell it. That model is completely changing the way we interact with customers, and I think that's what's going to drive it. We are transforming in that area; we are becoming more conscious about the strategy and operations we want to deliver to the market, and we want to enable our customers with a much broader value proposition.
>> The industry solutions group and the Global Business Services teams work on these solutions; they've already been selling line-of-business, CXO-type solutions. So is this more of the same, just better, or is this really a higher level than IBM has ever gotten to in terms of strategic value?
>> This is possibly, I would say, the highest level of value from a strategic perspective in decades.
>> Okay, on that note, Veeru, we'll call it a day. This has been a great discussion, and we look forward to writing it up, clipping the videos and showering the internet with highlights.
>> Thank you, George. Appreciate it.
>> Hopefully we'll get you back soon.
>> It was a pleasure, absolutely.
>> With that, this is George Gilbert. We're in our Palo Alto studio for Wikibon and theCUBE, and we've been talking to Veeru Ramaswamy, VP of the Watson IoT platform. We look forward to coming back with Veeru sometime soon. (upbeat music)

Published Date : Aug 23 2017
