Simon Crosby & Chris Sachs, SWIM | CUBE Conversation

>> Hi, I'm Peter Burris and welcome to another Cube Conversation. We're broadcasting from our beautiful Palo Alto studios, and this time we've got a couple of great guests from SWIM. One of them is Chris Sachs, who's the founder and lead architect. And the other one is Simon Crosby, who's the CTO. Welcome to the Cube, guys. >> Great to be here. >> Thank you. >> So let's start. Tell us a little bit about yourselves. Well, Chris, let's start with you. >> So my name's Chris Sachs. I'm a co-founder of SWIM, and my background is embedded and distributed systems, and bringing those two worlds together. And I've spent the last three years building software from first principles for edge computing. >> But embedded, very importantly, that's small devices, highly distributed with a high degree of autonomy-- >> Chris: Yes. >> And how they will interact with each other. >> Right. You need both the small footprint and you need to scale down and out, is one thing that we say. People get scaling out in the cloud, and scaling up and out. For the edge, you need to scale down and out. There are similarities to how clouds scale, and some very different principles. >> We're going to get into that. So Simon, CTO. >> Sure, my name is Simon Crosby. I came this way courtesy of being an academic, a long time ago, and then doing startups. This is startup number five for me. I was CTO and founder at XenSource. We built the Xen hypervisor. Also at Bromium, where we did micro-virtualization, and I'm privileged to be along for the ride with Chris. >> Excellent. So guys, the SWIM promise is edge AI. I like that, down and out. Tell us a little bit about it, Chris. >> So one of the key observations that we've made over the past half decade is there's a whole lot of compute cycles being showered on planet Earth. ARM is shipping five billion chips a quarter. And there's a tremendous amount of computing, generating a tremendous amount of data, and it's trapped at the edge. There are physics problems and economic problems with backhauling it all to the cloud, but there's tremendous value there; you're capturing the functionality of the world on these chips. >> We like to say that if software's going to eat the world, it's going to eat it at the edge. Is that kind of what you mean? >> Yes. >> That's right. >> And you start running into, when you decide you want to eat the edge, you run into problems very quickly with the traditional way of doing things. So one example is: where does your database live if you live on the edge? Which telephone pole are you going to put your database node in? >> Simon: How big does this need to be? >> There are a number of decisions that are very difficult to make. So SWIM's promise is, now, you have some advantages as well, in that billions of clock cycles go by on these chips in between network packets. And if you can figure out how to squeeze your software into these slack cycles between network packets, you actually have a super computer, a global super computer, on which you can do machine learning. You can try and predict the future of how physical systems are going to play out-- >> Hence, your background in distributed systems, because the goal is to try to ensure that the network packets are as productive as possible. >> Chris: Exactly. >> Here's another way of looking at the problem. If you come at it top down, it's reasonable to think of things in the future, all sorts of things, which have got compute and maybe some networking in them, presenting to you a digital twin of themselves.
Where does the thing come from? >> Now, describe digital twin. We've done a lot of research on this, but it's still a relatively novel concept. GE talks about it. IBM talks about it. When we say digital twin, we're talking about the simulacrum, the digital representation of an actual thing, right? >> Of an actual thing. There are a couple of ways you can get there. One way is if you give me the detailed design of a thing and exactly how it works, I can give you all of that detail, and maybe (mumbles) can help use that to find a problem. The other way is to try and construct it automatically. And that's exactly what SWIM does. >> So it takes the thing and builds models around it that are-- >> Well, so what do things do? Things give us data. So the problem, then, becomes how can I build a digital twin just given the data? Just given the observations of what this thing is seeing, what its sensors are bleating about, what things near it are saying. How can I build a digital twin which will analyze itself, tell you what its current state is, and predict the future, just from the data? >> All right, so the bottom line is that you're providing a facility to help model real world things that tend to operate in an analog way, and turning them into digital representations that then can be a full member, in fact, perhaps even a superior member, in a highly distributed system of how things work together. >> Yes. >> Got that right. >> A few key points: digital twins are in the loop with the real world, and they are in the loop with their neighbors. You start with digital twins that reflect the physical world, but they don't end there. You can have twins of physical things, and you can have digital twins of concepts as well, and other higher order notions. And from the masses of data that you get from physical devices, you can actually infer the existence of twins where you don't even have a sensor. >> It's making it real. So you could have a digital one. If you happen to be tracking all of the buses in downtown San Francisco, you can infer PM10 pollution as a virtual sensor on a bus. And then you can pretty quickly work out something which is of value to somebody who's trying to sell insurance, for example. That's not a real sensor on every bus, but you can then compose these things, given that you have these other digital twins which are manifesting themselves. >> So folks talk about the butterfly effect and things like chaos theory, which is a butterfly affecting the weather in China. But what we're talking about is that locality really matters. It matters in real systems. And it matters in computers. And if you have something that's generating data, more than likely, that thing is going to want its own data because of locality. But also, the things near it are going to want to be able to infer or understand the behavior of that thing, because it's going to have a consequential impact on them. >> Correct, so I'll give you two examples of that. We've been using an aircraft manufacturing facility as an example. The virtual twin here is some widget which has an RFID tag in it. We don't know what that is. We just know there's a tag, and we can place it in 3-space because it gets seen by multiple sensors and we triangulate. And then, as these tags come together, that makes an aircraft sub-assembly. The meaning of an aircraft sub-assembly is kind of another thing, but the nearness, the locality, is what gets you there. So I can say all these tags came together. Let's track that as a superior object.
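The locality idea Crosby describes, tags that keep showing up together getting promoted to a single higher-order twin, can be sketched in a few lines of Python. This is a minimal sketch under invented assumptions: the distance threshold, the confirmation count, and the record format are all illustrative, not SWIM's actual algorithm or API.

from itertools import combinations
from math import dist

NEAR = 0.5         # metres: tags closer than this count as "together" (assumed)
CONFIRMATIONS = 3  # sightings required before promoting a group (assumed)

together_counts = {}  # frozenset of tag ids -> co-location sighting count
assemblies = set()    # groups already promoted to assembly twins

def observe(positions):
    """positions: {tag_id: (x, y)} triangulated from multiple sensors."""
    # Naive single-link grouping: union tags that sit within NEAR of each other.
    parent = {t: t for t in positions}
    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t
    for a, b in combinations(positions, 2):
        if dist(positions[a], positions[b]) < NEAR:
            parent[find(a)] = find(b)
    groups = {}
    for t in positions:
        groups.setdefault(find(t), set()).add(t)
    # Repeated co-location promotes a group to a twin of the whole assembly.
    for members in groups.values():
        if len(members) < 2:
            continue
        key = frozenset(members)
        together_counts[key] = together_counts.get(key, 0) + 1
        if together_counts[key] >= CONFIRMATIONS and key not in assemblies:
            assemblies.add(key)
            print(f"new assembly twin: {sorted(key)}")

# Three tags converge over several scans; on the third co-located sighting
# the group becomes a trackable object in its own right.
observe({"t1": (0.0, 0.0), "t2": (5.0, 0.0), "t3": (0.1, 0.2)})
for _ in range(3):
    observe({"t1": (0.0, 0.0), "t2": (0.3, 0.1), "t3": (0.1, 0.2)})

The point of the sketch is the promotion step: once a group of tags stays together long enough, it becomes a superior object that can be tracked and linked like any other twin.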
There's a containment notion there. And suddenly, we're tracking whole assemblies instead of widgets. >> And this is where the AI comes in, because now the AI is the basis for recognizing the patterns of these tags and being able to infer from the characteristics of these patterns that it's a sub-assembly. Have I got that right? >> Right. There's a unique opportunity that is opened up in AI when you're watching things unfold live, in that you have this great unifying force to learn off of, which is causality. It's what everything has in common. It's that data that you've lost through time. And what do you do when you have billions of clock cycles to spare between network packets? Well, you can make a guess about what your particular digital twin might see next. So you can take a guess based on what your state is, what the sensors around you are saying, and just make a guess. Then you see what actually happens. You measure the error between what you predicted would happen and what actually happened. And you can correct for that. And you can do that ad infinitum. Just trillions of times over the course of a year, you make small corrections for how you think your particular system will evolve. Whether it's a street of traffic lights trying to predict when they're going to change, when cars are going to show up, when pedestrians are going to push buttons, or it's a machine, a conveyor belt or a motor in a factory, trying to predict when it might break down, you can learn, for these precise systems, very specific models of how they're going to evolve, and you can play reality forward. You learn a simulation. And you can predict your own future. >> And there's a very cool thing that shows up from that. So take a city and all of its lights. Instead of trying to gather all that data from the city and then go solve a big model, which is the cloud approach to doing this, the big data in cloud approach, essentially each one of these digital twins is solving its own problem of how do I predict my own future? So instead of solving one big model, you'll have 200 different intersections all predicting their own futures, which is totally cool, because it distributes well in this fabric of spare CPU cycles and can be very efficient to compute. >> And a consequence of that is, again, you can get these very rich patterns that these things can learn more from, each acting autonomously, individually and as groups. >> Even more than that. There's an even cooler thing. Imagine I set you down by an intersection and I said, "Write me a program for how this thing is going to behave." First of all, you wouldn't know how to do it. Second, there aren't enough humans on planet Earth to do this. What we're saying is that we can construct this program from the data, from this thing as it evolves through time. We'll construct the program, and it will be merely a learned model. And then you could ask it how it's going to behave in the future. You could say, "Well, what if I do this? "What if a pedestrian pushes this button? "What will the response be?" So effectively, you're learning a program. You're learning the digital twin just from the data. >> All right, so how does SWIM do this? So now we know what it is. And we know that it's stealing cycles from CPUs that are mainly set up to gather, to sense things, and package data up and send it off somewhere else. But how does it actually work?
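Sachs's guess-observe-correct loop is, at bottom, online learning. Here is a minimal sketch, assuming the simplest possible learner, a single value nudged by each prediction error; SWIM's actual models are not specified in the conversation.

class TwinPredictor:
    def __init__(self, lr=0.05):
        self.w = 0.0    # current guess about the next observation
        self.lr = lr    # how strongly each error corrects the guess

    def predict(self):
        return self.w

    def observe(self, actual):
        error = actual - self.predict()  # measure what the guess got wrong
        self.w += self.lr * error        # small correction, repeated ad infinitum
        return error

# A traffic-light twin learning the gap between its own green phases
# (numbers invented for illustration).
twin = TwinPredictor()
for gap in [30, 32, 29, 31, 30, 33, 30, 29, 31, 30] * 20:
    twin.observe(gap)
print(f"predicted next gap: {twin.predict():.1f}s")

The same loop runs independently in every twin, which is what makes the "200 intersections each predicting their own future" picture cheap to distribute: there is no shared model to synchronize, only per-twin state.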
What does the designer, the developer, the operator do with SWIM that they couldn't do before? >> So SWIM is a tiny, vertically integrated software stack that has all the capabilities you'd find in an open source cloud platform. You have persistence. You have message dispatch. You have peer-to-peer routing. You have analytics and a number of other capabilities. But SWIM hides that and takes care of it. Rather than thinking about where you place compute, you think, "What is my model? "What is my digital twin? "And what am I related to?" And SWIM dynamically maps these logical models to physical hardware at run time, and dynamically moves these encapsulated agents around as needed based on the loads and the demand in the network. And in the same way that-- >> On the events? >> Yes, on the events. And in the same way that, if you're using Microsoft Word, you don't really know what CPU core it's running on. Who knows and who cares? It's a solved problem. We look at it from the ground up, and the edge is just one big, massively multi-core computer. And there are similar principles to apply in terms of how you maintain consistency and how you efficiently route data, which you can abstract over and eliminate as a problem that you have to be concerned about as a developer or a user who just wants to ingest some data and get insights on how-- >> So let me make sure I got that. So if I look at the edge, which might have 200, might have 10 thousand sensors associated with it, we can imagine, for example, a level of complexity like what happens on a drilling platform in an oil field. There are probably 10 thousand sensors on that thing, all of these different things. Each of those sensors is doing something. And they're sending, dispatching information. But what you're doing is you're basically saying we can now look at those sensors that can do their own thing, but we can also look at them as a cluster of processing capability. We'll put a little bit of software on there that will provide a degree of coordinated control so that models can-- >> So two things. >> Build up out of that? >> So first off, SWIM itself builds a distributed fabric on whatever compute is available. And you can smear SWIM between an embedded environment and a VM in the cloud. We just don't care. >> But the point is anything you point it at becomes part of this cluster. >> Yes, but the second level of this is when you start to discover the entities in the real world. And you begin to discover the entities from that data. So I get all this gray stuff. I don't really know what it means, but I'm going to find these entities and what they're related to, and then, for each entity, instantiate one of these digital twins as an actor, essentially the thing's microservice. It's a stateful microservice, which is then just going to consume its own real world data and do its thing, and then present what it knows via an API or graphical UI components. >> So I'm an operator. I install. What do I do to install? >> You start a process on whatever devices you have available. SWIM is completely self-contained and has no external dependencies. So we can run as the (mumbles) analytics box or even without an operating system. >> So I basically target SWIM at the device and it installs? >> Chris: Correct. >> Once it's installed, how am I then accessing it through software development?
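The "discover entities from gray data, then instantiate a stateful microservice per entity" flow Crosby describes can also be sketched compactly. The record format and registry here are illustrative assumptions, not SWIM's wire format or API.

class DigitalTwin:
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.state = {}

    def ingest(self, record):
        self.state.update(record)   # stateful: no database round trip per event

    def api_get(self):
        # What an API call or UI component would see for this twin.
        return {"id": self.entity_id, **self.state}

registry = {}  # entity id -> twin, built from the data itself

def route(record):
    eid = record["id"]
    if eid not in registry:              # first sighting: a new entity discovered
        registry[eid] = DigitalTwin(eid)
    registry[eid].ingest({k: v for k, v in record.items() if k != "id"})

for rec in [{"id": "light-17", "phase": "green"},
            {"id": "loop-4", "occupied": True},
            {"id": "light-17", "phase": "red"}]:
    route(rec)
print(registry["light-17"].api_get())   # {'id': 'light-17', 'phase': 'red'}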
>> Ultimately, in this edge world, you've asked the key question, which is: how the hell do I get ahold of this stuff and how does it run? And I don't think the world knows the answer to all these questions. So, for example, in the traffic use case, the answer is this. We've published an API. It happens to be an (mumbles), but who cares? Where people like Uber and Lyft or UPS can show up and see what this traffic light is going to do in the future. And they just hit that. What they're doing is going for the insights of digital twins in real time, as a service. That's kind of an interesting thing to do, right? But you might find this embedded in a widget, because it's small enough to be able to do that. You might find that a customer installs it in a couple of boxes and it just runs. We don't really care. It will be there, and it's trivial to run. >> So you're going to be moving it into people who are building these embedded fixtures? >> Sure. >> Yes. >> Sure, but the key point here is that I know, particularly on the Cube, you're hearing all these wonderful stories about DevOps and (mumbles) and all this guff up in the cloud, fine. That's where you want those people to be. >> Don't call it guff (laughs). >> But at the edge, no (mumbles). There aren't enough humans to run this stuff, so it's got to be completely automatic. It's got to just wake up, run, find all the compute, run ceaselessly, distribute load, be resilient, be secure, all these things that just have to happen. >> So SWIM becomes a service that is shipped with an embedded system. >> Possibly, or there is a potential outcome where it's delivered as software which runs on a box close to some widget. >> Or rolled out as a software update with some existing manufacturers. >> In this particular case of traffic, we should be on 60 thousand intersections by the end of this year. The traffic infrastructure vendor, the vendor that delivers the traffic management system, just rolls out an upgrade, and suddenly a whole bunch of new intersections appear in a cloud API. And an Uber or a Lyft or whatever is just hitting that thing and finding out what they are. >> Great, and so, but as developers, am I going into a SWIM environment and doing anything? This is just the way that the data's being captured. >> Simon: So we take data. >> That the patterns are being identified. >> We take data, turn it into digital twins with intelligent things to say, and expose that as APIs or as UI components. >> So now the developers can go off and use whatever tools they want and just invoke the service through the API. >> Bingo, so that's right. So developers, if they're doing something, just hit digital twins. >> All right, so we've talked about a couple. We've talked a little bit about the traffic example and mentioned being in an oil field. What are some of the other big impacts? As this thing gets rolling, what kind of problems is this going to allow us to solve? Not just one, but there's definitely going to be a network effect here, right? >> Sure, so the interesting thing about the edge world is that it's massively diverse. So even one cookie factory's different from another cookie factory, in that they might have the same equipment, but they're in different places on planet Earth, may have different operators and everything else. So the data will be different and everything else.
So the challenge in general with the edge environment has been that we've been very professional-services-centric: people bring in (mumbles) people and try to solve a local problem, and it's very expensive. SWIM has this opportunity to basically just show up, consume this gray data, and tell you real stuff without enormous amounts of semantic knowledge a priori. So we have this ability to conquer this diversity problem, which is characteristic of the edge, and also come up with highly realistic and highly accurate models for this particular thing. I want to be very clear. The widget in chocolate factory A is exactly the same as the widget in chocolate factory B, but the models will be 100% different and totally (mumbles) at either place, because if the pipes go bang at 6 a.m. here, it's in the model. >> And SWIM has the opportunity to reach the 99.9% of data that currently is generated and immediately forgotten, because it's too expensive to store, it's too expensive to transport, and it's too expensive to build applications to use. >> We should talk about cost, because that's a great one. So if you wanted to solve the problem of predicting what the lights in Palo Alto are going to do for the next five minutes, that's heading towards 10 thousand dollars a month in AWS. SWIM will solve that problem for a tiny fraction, like less than a 100th of that, just on stranded CPU cycles lying around at the edge. And you save bandwidth and a whole bunch of other things. >> Yeah, and that's a very important point, because the edge has been around for a while: operational technology. People have been doing this for a while, but not in a way that's naturally, easily programmable. You're bringing the technology that makes it easy to self-discover, simply by utilizing whatever cycles and whatever data's there, and putting in persistence, making it really simple for that to be accessed through an API. And ultimately, it creates a lot of options on what you can do with your devices in the future. It makes existing assets more valuable, because you have options in what you can do with them. >> If you look at the traffic example, the AWS scenario is $50 per month per intersection. No one's going to do that. But if it's like a buck, I'm in. And then you can do things, 'cause then it's worthwhile for Uber to hit that API. >> All right, so we've got to wrap this up. So one way of thinking about it is, I'm thinking, and there are so many metaphors that one could invoke, but this is kind of like the teeth that are going to eat the real world. The software teeth that are going to eat the real world at the edge. >> So if I can leave with one thought: SWIM loosely stems from "software in motion." And the idea is that, at the edge, you need to move the software to where the data is. You can't move the data to where the software is. The data is huge. It's immobile. And the quantities of data are staggering. You essentially have a world of spambots out there. It's intractable. But if you move the software to where the data is, then the world's yours. >> One thing to note is that software's still data. It just happens to be extremely well organized data. So the choice is: do you move all the not-particularly-well-organized data somewhere where it can be operated on, or do you move the really well organized and compact thing? And information theory says move the most structured thing you possibly can, and that's the application, the software itself. All right. Chris Sachs, founder and lead architect of SWIM. Simon Crosby, CTO of SWIM.
Thank you very much for being on the Cube. Great conversation. >> Thanks for having us. >> Good luck. >> Enjoy. >> And once again, I'm Peter Burris. And thank you for participating in another Cube conversation with SWIM. Talk to you again soon.

Published Date : Apr 4 2018

Simon Crosby, Swim | CUBE on Cloud

>> Hi, I'm Stu Miniman, and welcome back to theCUBE on Cloud, talking about really important topics as to how developers are changing how they build their applications and where those applications live; of course, a long discussion we've had for a number of years. You know, how do things change in hybrid environments? We've been talking for years about public cloud and private cloud, and I'm really excited for this session. We're going to talk about how edge environments and AI impact that. So happy to welcome back one of our CUBE alumni, Simon Crosby, who is currently the Chief Technology Officer with Swim. He's got plenty of viewpoints on AI and the edge, and knows the developer world well. Simon, welcome back. Thanks so much for joining us. >> Thank you, Stu, for having me. >> All right, so let's start for a second. Let's talk about developers. You know, it used to be, for years we talked about, you know, what level of abstraction do we get. Do I put it on bare metal? Do I virtualize it? Do I containerize it? Do I make it serverless? A lot of those things, you know, the app developer doesn't want to even think about, but location matters a whole lot when we're talking about things like AI: where do I have all my data so that I can do my training? Where do I actually have to do the processing? And of course, edge just changes some of the things like latency and where data lives by orders of magnitude. So with that as a setup, I would love to get your framework as to what you're hearing from developers, and then we'll get into some of the solutions that you and your team are helping them with to do their jobs. >> Well, you're absolutely right, Stu. The data onslaught is very real. Companies that I deal with are facing more and more real-time data, from products, from their infrastructure, from their partners, whatever it happens to be, and they need to make decisions rapidly. And the problem that they're facing is that traditional ways of processing that data are too slow. So perhaps the big data approach, which by now is a bit old, a bit long in the tooth, where you store data and then you analyze it later, is problematic. First of all, data streams are boundless, so you don't really know when to analyze. But second, you can't store it all. And so the store-then-analyze approach has to change, and Swim is trying to do something about this by adopting a process of analyze on the fly. So as data is generated, as you receive events, you don't bother to store them. You analyze them, and then, if you have to, you store the data. But you need to analyze as you receive data and react immediately, to be able to generate reasonable insights or predictions that can drive commerce and decisions in the real world. >> Yeah, absolutely. I remember back in the early days of big data, you know, real time got thrown around a little, but it usually meant "I need to react fast enough to make sure we don't lose the customer," react to something; but the model was still "we gather all the data and let's move compute to the data." Today, as you talk about, you know, real-time streams are so important. We've been talking about observability for the last couple of years to really understand the systems and the outputs, more than looking back historically at where things were and waiting for alerts. So could you give us some examples, if you would, as to, you know, those streams: what is so important about being able to interact with and leverage that data when you need it?
And boy, it's great if we can use it then and not have to store it and think about it later; obviously there are some benefits there, because-- >> Well, every product nowadays has a CPU, right? And so there's more and more data. And just let me give you an example: Swim processes real-time data from more than a hundred million mobile devices in real time, for a mobile operator. And what we're doing there is optimizing connection quality between devices and the network. Now, that volume of data is more than four petabytes per day, okay? Now, there is simply no way you can ever store that and analyze it later. The interesting thing about this is that if you adopt an "analyze, and then store only if you really have to" architecture, you get to take advantage of Moore's Law. So you're running at CPU and memory speeds instead of at disk speed. And so that gives you a million-fold speed up, and it also means you don't have the latency problem of reaching out to remote storage, a database, or whatever. And so that reduces costs. So we can do it on about 10% of the infrastructure that they previously had for a Hadoop-style implementation. >> So maybe it would help if we just explain. When we say edge, people think of a lot of different things. Is it, you know, an IoT device sitting out at the edge? Are we talking about the telecom edge? We've been watching AWS for years, you know, spider out their services into various environments. So when you talk about the type of solutions you're doing and what your customers have, is it the telecom edge? Is it the actual device edge? You know, where does processing happen, and where do these, you know, services that work on it live? >> So I think the right way to think about edge is: where can you reasonably process the data? And it obviously makes sense to process data at the first opportunity you have, but much data is encrypted between the original device, say, and the application. And so edge as a place doesn't make as much sense as edge as an opportunity to decrypt and analyze data in the clear. So edge computing is not so much a place, in my view, as the first opportunity you have to process data in the clear and to make sense of it. And then edge makes sense in terms of latency: by locating compute as close as possible to the sources of data, to reduce latency and maximize your ability to get insights and return them to users, you know, quickly. So edge, for me, often is the cloud. >> Excellent. One of the other things I think about, back from, you know, the big data days or even earlier, is how long it took to get from the raw data to processing that data, to getting some insight, and then being able to take action. It sure sounds like we're trying to collapse that completely. You know, can we actually build the system so that we can, in that real-time, continuous model that you talk about, take care of it and move on? >> So one of the wonderful things about cloud computing is that two major abstractions have really served us. And those are REST, which is stateless computing, and databases. And REST means any old server can do the job for me, and then the database is just an API call away. The problem with that is that it's desperately slow. So when I say desperately slow, I mean it's probably thrown away the last 10 years of Moore's Law. Just think about it this way. Your CPU runs at gigahertz and the network runs at milliseconds.
So by definition, every time you reach out to a data store, you're going a million times slower than your CPU. That's terrible. It's absolutely tragic, okay? So a model which is much more effective is to have an in-memory compute architecture in which you engage in stateful computation. So instead of having to reach out to a database every time to update the database and, whatever, you know, store something, and then fetch it again a few moments later when the next event arrives, you keep state in memory and you compute on the fly as data arrives. And that way you get a million-times speed up. You also end up with a tremendous cost reduction, because you don't end up with as many instances having to compute, by comparison. So let me give you a quick example. If you go to traffic.swim.ai, you can see the real-time state of the traffic infrastructure in Palo Alto. And each one of those intersections is predicting its own future. Now, the volume of data from just a few hundred lights in Palo Alto is about four terabytes a day. And sure, you can deal with this in AWS Lambda. There are lots and lots of servers up there. But the problem is that the end-to-end, per-event latency is about 100 milliseconds. And, you know, if I'm dealing with 30,000 events a second, that's just too much. So solving that problem with a stateless architecture is extraordinarily expensive, more than $5,000 a month. Whereas the stateful architecture, which you could think of as an evolution of, you know, something reactive, or the actor model, gets you, you know, something like a tenth of the cost, okay? So cloud is fabulous for things that need to scale wide, but a stateful model is required for dealing with things which update you rapidly or regularly about their changes in state. >> Yeah, absolutely. You know, I think about, as I mentioned before, AI training models. Often, if you look at something like autonomous vehicles, the massive amounts of data that it needs to process, you know, has to happen in the public cloud. But then that gets pushed back down to the end device, in this case a car, because it needs to be able to react in real time, and it gets fed, at a regular update, the new training algorithms that it has there. What are you seeing-- >> I have strong views on this training approach, and on data science in general, and that is that there aren't enough data scientists or, you know, smart people to train these algorithms, deploy them to the edge, and so on. And so there is an alternative worldview, which is a much simpler one, and that is that relatively simple algorithms, deployed at scale to stateful representatives, let's call them digital twins of things, can deliver enormous improvements in behavior as things learn for themselves. So the way I think, at least, this edge world gets smarter is that relatively simple models of things will learn for themselves, create their own futures based on what they can see, and then react. And so this idea that we have lots and lots of data scientists dealing with vast amounts of information in the cloud is suitable for certain algorithms, but it doesn't work for the vast majority of applications. >> So where are we with the state of this? What do developers need to think about? You mentioned that there's compute in most devices. That's true, but, you know, do they need some special Nvidia chipset out there? Are there certain programming languages that you're seeing as more prevalent? Interoperability? Give us a little bit of, you know, some tips and tricks for those developing.
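Crosby's arithmetic is worth making concrete: a core at roughly 1 GHz retires on the order of one operation per nanosecond, while a networked read or write costs on the order of a millisecond, about a million nanoseconds. Here is a sketch of the two per-event patterns he contrasts, using rough assumed latency constants rather than measurements; the in-memory twin is an illustration, not Swim internals.

CPU_OP_NS = 1               # ~1 ns per operation on a ~1 GHz core (assumed)
ROUND_TRIP_NS = 1_000_000   # ~1 ms to reach a remote data store (assumed)

def handle_stateless(event, db):
    count = db.get("count", 0)   # remote fetch: ~1,000,000 ns
    db["count"] = count + event  # remote write: ~1,000,000 ns

class StatefulTwin:
    def __init__(self):
        self.count = 0           # state lives with the computation
    def handle(self, event):
        self.count += event      # ~1 ns; snapshot asynchronously if needed

per_event_stateless = 2 * ROUND_TRIP_NS
per_event_stateful = CPU_OP_NS
print(f"stateless: ~{per_event_stateless:,} ns per event")
print(f"stateful:  ~{per_event_stateful:,} ns per event, "
      f"~{per_event_stateless // per_event_stateful:,}x faster")

At 30,000 events per second, two round trips per event adds up to a minute of cumulative network wait per wall-clock second; keeping up requires massive fan-out, which is where the cost gap Crosby cites comes from.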
>> Super, so number one, a stateful architecture is fundamental, and sure, reactive is well known, and there's Akka, for example, and Erlang; Swim is another. So I'm going to use some language here, and I would encourage you to look at swimos.org to go play there. A stateful architecture, which allows actors, small concurrent objects, to statefully evolve their own state based on updates from the real world, is fundamental. By the way, in Swim we use data to build these models. So these little agents for things, we call them web agents because the object ID is a URI, statefully evolve by processing their own real-world data, statefully representing it. And then they do this wonderful thing, which is build a model on the fly. And they build a model by linking to things that they're related to. So an intersection would link to all of its sensors, but it would also link to all of its neighbors, because linking is like the sub in pub/sub, and it allows that web agent then to continually analyze, learn, and predict on the fly. And so every one of these concurrent objects is doing this job of analyzing its own raw data, then predicting from that, and streaming the result. So in Swim, you get streamed raw data in, and what streams out is predictions, predictions about the future state of the infrastructure. And that's a very powerful stateful approach which can run all in memory, no storage required. By the way, it's still persistent, so if you lose a node, you can just come back up and carry on, but there's no need to store huge amounts of raw data if you don't need it. And let me just be clear: the volumes of raw data from the real world are staggering, right? So four terabytes a day from Palo Alto, but Las Vegas is about 60 terabytes a day from the traffic lights. More than 100 million mobile devices is tens of petabytes per day, which is just too much to store. >> Well, Simon, you've mentioned that we have a shortage when it comes to data scientists and the people that can be involved in those things. How about from the developer side? Do most enterprises that you're talking to have the skillset? Is the ecosystem mature enough for companies to get involved? What do we need to do, looking forward, to help companies be able to take advantage of this opportunity? >> Yeah, so there is this huge challenge in terms of, I guess, just cloud-native skills. And this is exacerbated the more you get out to, I guess, what you could think of as traditional kinds of companies, all of whom have tons and tons of data sources. So we need to make it easy, and Swim tries to do this by effectively using skills that people already have, Java or JavaScript, and giving them easy ways to develop, deploy, and then run applications without thinking about them. So instead of binding developers to notions of place, and where databases are, and all that sort of stuff, if they can write simple object-oriented programs about things like intersections and push buttons and pedestrian lights and in-road loops and so on, and simply relate basic objects in the world to each other, then we let data build the model, by essentially creating these little concurrent objects for each thing, and they will then link to each other and solve the problem. We end up solving a huge problem for developers too, which is that they don't need to acquire complicated cloud-native skillsets to get to work. >> Well, absolutely, Simon. It's something we've been trying to do for a long time: to truly simplify things.
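A sketch of the web-agent linking Crosby outlines: each agent is addressed by a URI, owns its own state, and a link behaves like the "sub" in pub/sub, so a state change streams to every linked neighbor. The callback protocol below is invented for illustration; the real SwimOS API (swimos.org) differs.

class WebAgent:
    def __init__(self, uri):
        self.uri = uri       # agents are addressed by URI
        self.state = None
        self.links = []      # neighbors observing this agent's state

    def link(self, other):
        other.links.append(self)   # subscribe to a neighbor's state stream

    def on_neighbor(self, neighbor):
        print(f"{self.uri}: {neighbor.uri} -> {neighbor.state}; re-predicting")

    def set_state(self, value):
        self.state = value
        for observer in self.links:    # stream the change, pub/sub style
            observer.on_neighbor(self)

# An intersection links to its in-road loop; the next intersection links to it.
light = WebAgent("/intersection/5")
loop = WebAgent("/sensor/loop/12")
downstream = WebAgent("/intersection/6")
light.link(loop)
downstream.link(light)
loop.set_state({"occupied": True})
light.set_state({"phase": "green"})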
Want to let you have the final word. If you look out there at the opportunity, the challenges in the space, what final takeaways would you give to our audience? >> So, very simple. If you adopt a stateful computing architecture, like Swim, you get to go a million times faster. The applications always have an answer. They analyze, learn, and predict on the fly, and they go a million times faster. They use 10% less, no, sorry, 10% of the infrastructure of a store-then-analyze approach. And it's the way of the future. >> Simon Crosby, thanks so much for sharing. Great having you on the program. >> Thank you, Stu. >> And thank you for joining us. I'm Stu Miniman, and thank you, as always, for watching theCUBE.
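To ground the analyze-on-the-fly theme of this conversation: constant-size running state can replace a stored stream entirely. A minimal sketch using Welford's online mean and variance, with invented signal numbers and an invented anomaly rule; this is not Swim's code.

class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's algorithm: numerically stable, O(1) memory per stream.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for rssi in (-71, -70, -74, -90, -69):   # per-device signal samples (made up)
    # React before folding the sample in: flag readings far outside 3 sigma.
    if stats.n >= 3 and (rssi - stats.mean) ** 2 > 9 * max(stats.variance, 1.0):
        print(f"react immediately: anomalous reading {rssi} dBm")
    stats.update(rssi)
print(f"mean={stats.mean:.1f} dBm, var={stats.variance:.1f} (nothing stored)")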

Published Date : Jan 5 2021

December 8th Keynote Analysis | AWS re:Invent 2020

>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners. >> Hi everyone. Welcome back to theCUBE's virtual coverage of AWS re:Invent 2020 virtual. We are theCUBE Virtual. I'm John Furrier, your host, with my co-host, Dave Vellante, for keynote analysis of Swami's machine learning keynote, all things data, a huge set of announcements, the first-ever machine learning keynote at a re:Invent. Dave, great to see you. >> Thanks, John. >> And Dave's in Boston; I'm here in Palo Alto. We're doing the remote cube, theCUBE Virtual. Great to see you. >> Yeah, good to be here, John, as always. Wall-to-wall, love it. So, John, how about I give you my key highlights from the keynote today? I had four kind of curated takeaways. So the first is that AWS is really trying to simplify machine learning and infuse machine intelligence into all applications. And if you think about it, it's good news for organizations, because they don't have to become machine learning experts or invent machine learning. They can buy it from Amazon. I think the second is they're trying to simplify the data pipeline. The data pipeline today is characterized by a series of hyper-specialized individuals: IT engineers, data scientists, quality engineers, analysts, developers. These are folks that largely live in their own swim lanes, and while they collaborate, there's still a fairly linear and complicated data pipeline that a business person or a data product builder has to go through. Amazon is making some moves on that front to simplify it. They're expanding data access to the line of business; I think that's a key point. Increasingly, as people build data products and data services that they can monetize for their business, to either cut costs or generate revenue, they can expand that into the line of business, where there's domain context. And I think the last thing is this theme that we talked about the other day, John, of extending Amazon, AWS, to the edge. We saw that as well in a number of machine learning tools that Swami talked about.
The other thing that I saw Dave that's notable is you saw them clearly taking a three lane approach to this machine, learning the advanced builders, the advanced coders and the developers, and then database and data analysts, three swim lanes of personas of target audience. Clearly that is in line with SageMaker and the embedded stuff. So two big revelations, more horsepower required to process training and modeling. Okay. And to the expansion of the personas that are going to be using machine learning. So clearly this is a, to me, a big trend wave that we're seeing that validates some of the startups and I'll see their SageMaker and some of their products. >>Well, as I was saying at the top, I think Amazon's really trying, working hard on simplifying the whole process. And you mentioned training and, and a lot of times people are starting from scratch when they have to train models and retrain models. And so what they're doing is they're trying to create reusable components, uh, and allow people to, as you pointed out to automate and streamline some of that heavy lifting, uh, and as well, they talked a lot about, uh, doing, doing AI inferencing at the edge. And you're seeing, you know, they, they, uh, Swami talked about several foundational premises and the first being a foundation of frameworks. And you think about that at the, at the lowest level of their S their ML stack. They've got, you know, GPU's different processors, inferential, all these alternative processes, processors, not just the, the Xav six. And so these are very expensive resources and Swami talked a lot about, uh, and his colleagues talked a lot about, well, a lot of times the alternative processor is sitting there, you know, waiting, waiting, waiting. And so they're really trying to drive efficiency and speed. They talked a lot about compressing the time that it takes to, to run these, these models, uh, from, from sometimes weeks down to days, sometimes days down to hours and minutes. >>Yeah. Let's, let's unpack these four areas. Let's stay on the firm foundation because that's their core competency infrastructure as a service. Clearly they're laying that down. You put the processors, but what's interesting is the TensorFlow 92% of tensor flows on Amazon. The other thing is that pie torch surprisingly is back up there, um, with massive adoption and the numbers on pie torch literally is on fire. I was coming in and joke on Twitter. Um, we, a PI torch is telling because that means that TensorFlow is originally part of Google is getting, is getting a little bit diluted with other frameworks, and then you've got MX net, some other things out there. So the fact that you've got PI torch 91% and then TensorFlow 92% on 80 bucks is a huge validation. That means that the majority of most machine learning development and deep learning is happening on AWS. Um, >>Yeah, cloud-based, by the way, just to clarify, that's the 90% of cloud-based cloud, uh, TensorFlow runs on and 91% of cloud-based PI torch runs on ADM is amazingly massive numbers. >>Yeah. And I think that the, the processor has to show that it's not trivial to do the machine learning, but, you know, that's where the infrared internship came in. That's kind of where they want to go lay down that foundation. And they had Tanium, they had trainee, um, they had, um, infrared chow was the chip. And then, you know, just true, you know, distributed training training on SageMaker. 
So you've got the chips, and then you've got SageMaker, the middleware, almost like a machine learning stack. That's what they're putting out there. >> And Habana Gaudi, which is a processor also for training, which is an Intel-based chip. So that was kind of interesting. So a lot of new chips, and specialized ones. We've been talking about this for a while: particularly as you get to the edge and do AI inferencing, you need, you know, a different approach than we're used to with general-purpose microprocessors. >> So what's your take on tenet number two? So tenet number one, clearly infrastructure, a lot of announcements; we'll go through those and review them at the end. But tenet number two, that Swami put out there, was creating the shortest path to success for builders, or machine learning builders. And I think here he lays out the complexity, Dave, but it's mostly around methodology and, you know, the value activities required to execute. And again, this points to the complexity problem that they have. What's your take on this? >> Yeah. Well, you think about, again, I'm talking about the pipeline: you collect data, you ingest data, you prepare that data, you analyze that data. You make sure that it's high quality, and then you start the training, and then you're iterating. And so they're really trying to automate as much as possible and simplify as much as possible. What I really liked about that segment, of foundation number two if you will, is the customer example from the NFL. The speaker, you know, talked about the AWS stats that we see in the commercials, Next Gen Stats. And she talked about the ways in which they've, well, we all know they've rearchitected helmets, and it's really very much data-based. It was interesting to see they had the spectrum of the helmets, from the most safe to the least safe, and how they've migrated everybody in the NFL to those. She cited a 24%-- >> It was interesting, she cited a 24% reduction in reported concussions. You know, you've got to give the benefit of the doubt and assume some of that's through the data. But some of that could be, like, you know, Julian Edelman popping up off the ground when, you know, he's had a concussion; he doesn't want to come out of the game with the new protocol. But no doubt, they're collecting more data on this stuff, and it's not just head injuries. She talked about ankle injuries, knee injuries. So all this comes from training models and reducing the time it takes to actually go from raw data to insights. >> Yeah. I mean, I think the NFL is a great example. You and I both know how hard it is to get the NFL to come on and do an interview. They're very coy. They don't really put their name on much, because of the value of the NFL brand. This is a meaningful partnership. You had the person on stage, virtually, really going into some real detail around the depth of the partnership. So to me, it's real. First of all, I love Statcast; anything to do with what they do with the stats is phenomenal at this point. So the real-world example, Dave: you're starting to see sports as one metaphor, and healthcare and others are going to see this coming too. To me, it's totally a telltale sign that Amazon continues to lead. The thing that got my attention is that it is an IoT problem, and there's no reason why they shouldn't get to it.
I mean, some say that, oh, concussions, the NFL is just covering their butt. They don't have to; this is actually really working. So you've got the tech, why not use it? And they are. So to me, that's impressive. And I think that's, again, a digital transformation sign: the NFL is doing it, and it's real. Um, because it's just easier. >> I think, look, it's easy to criticize the NFL, but the reality is, in the old days it was like, hey, you get your bell rung and get back out there. That's just the way it was for football players. But Ted Johnson was one of the first, and, you know, Bill Belichick was the guy who sent him back out there with a concussion, and he was very much outspoken about it. You've got to give the NFL credit: it didn't just ignore the problem. Maybe it took a little while, but these things take some time, because it was generally accepted back in the day that, okay, hey, you'd get right back out there. But the NFL has made big investments there, and you've got to give them props for that. And especially given that they're collecting all this data. That to me is the most interesting angle here: letting the data inform the actions. >> And next up, after the NFL, they had this Data Wrangler news: they're now integrating Snowflake, Databricks, and MongoDB into SageMaker, which is a theme there, alongside Redshift, S3, and Lake Formation, of bringing the data in, not the other way around. So again, you've been following this pretty closely, specifically Snowflake's recent IPO and their success. This is an ecosystem play for Amazon. What does it mean?
And, and you do music today, but let's say you want to add, you know, movies, or you want to add podcasts and you want to start monetizing that you want to, you want to identify, who's watching what you want to create new metadata. Well, you need new data sources. So what you do as a business person that wants to create that new data product, let's say for podcasts, you have to knock on the door, get to the front of the data pipeline line and say, okay, Hey, can you please add this data source? >>And then everybody else down the line has to get in line and Hey, this becomes a new data source. And it's this linear process where very specialized individuals have to do their part. And then at the other end, you know, it comes to self-serve capability that somebody can use to either build dashboards or build a data product. In a lot of that middle part is our operational details around deploying infrastructure, deploying, you know, training machine learning models that a lot of Python coding. Yeah. There's SQL queries that have to be done. So a lot of very highly specialized activities, what Amazon is doing, my takeaway is they're really streamlining a lot of those activities, removing what they always call the non undifferentiated, heavy lifting abstracting away that it complexity to me, this is a real positive sign, because it's all about the technology serving the business, as opposed to historically, it's the business begging the technology department to please help me. The technology department obviously evolving from, you know, the, the glass house, if you will, to this new data, data pipeline data, life cycle. >>Yeah. I mean, it's classic agility to take down those. I mean, it's undifferentiated, I guess, but if it actually works, just create a differentiated product. So, but it's just log it's that it's, you can debate that kind of aspect of it, but I hear what you're saying, just get rid of it and make it simpler. Um, the impact of machine learning is Dave is one came out clear on this, uh, SageMaker clarify announcement, which is a bias decision algorithm. They had an expert, uh, nationally CFUs presented essentially how they're dealing with the, the, the bias piece of it. I thought that was very interesting. What'd you think? >>Well, so humans are biased and so humans build models or models are inherently biased. And so I thought it was, you know, this is a huge problem to big problems in artificial intelligence. One is the inherent bias in the models. And the second is the lack of transparency that, you know, they call it the black box problem, like, okay, I know there was an answer there, but how did it get to that answer and how do I trace it back? Uh, and so Amazon is really trying to attack those, uh, with, with, with clarify. I wasn't sure if it was clarity or clarified, I think it's clarity clarify, um, a lot of entirely certain how it works. So we really have to dig more into that, but it's essentially identifying situations where there is bias flagging those, and then, you know, I believe making recommendations as to how it can be stamped. >>Nope. Yeah. And also some other news deep profiling for debugger. So you could make a debugger, which is a deep profile on neural network training, um, which is very cool again on that same theme of profiling. The other thing that I found >>That remind me, John, if I may interrupt there reminded me of like grammar corrections and, you know, when you're typing, it's like, you know, bug code corrections and automated debugging, try this. 
>> It wasn't "a better debugger," come on. First of all, it should be bug-free code. But there are always biases, and the data is critical. The other news I thought was interesting, and Amazon is claiming a first here, is SageMaker Pipelines, purpose-built CI/CD for machine learning, bringing machine learning into a developer construct. And they started bringing in this idea of Edge Manager, with a model store, this idea of managing and monitoring machine learning models effectively at the edge and through the development process. It's interesting, and it's really targeting that developer, Dave. (A sketch of what such a pipeline might look like appears after this exchange.) >> Yeah, applying CI/CD to machine learning and machine intelligence has always been very challenging because, again, there are so many piece parts. As I said the other day, a lot of the innovations Amazon comes out with address problems that have come up because of the pace of innovation they're putting forth. The customers are drinking from a fire hose, we've talked about this at previous re:Invents, and they struggle to keep up with Amazon's pace. So I see this as Amazon trying to reduce friction across its entire stack. >> Let me lay it out. One slide laid out the personas: machine learning gurus, developers, and then database and data analysts. Clearly database developers and data analysts are on their radar. That's not the first time we've heard it, but it is the first time we're starting to see products materialize: machine learning for databases, data warehouses, and data lakes, and then for BI tools. So again, three segments, three areas of machine learning innovation where you're seeing product news. Your take on this natural evolution? >> Well, as I said up front, the good news for customers is that you don't have to be a Google or an Amazon or a Facebook to be a super expert at AI. Companies like Amazon are going to provide products that you can apply to your business, and that allows you to infuse AI across your entire application portfolio. Amazon Redshift ML was another example of them abstracting complexity. They're taking S3, Redshift, and SageMaker complexity, abstracting it, and presenting it to the data analyst, so that individual can focus on getting to the insights. It's injecting ML into the database, much the way, frankly, BigQuery has done, and that's a huge positive. When you talk to customers, they love it when ML can be embedded in the database and all that complexity is simplified, because they can focus on more important things. (A sketch of that pattern also appears below.) >> Clearly, and this was part of the keynote where they laid out all their announcements: ML insights out of the box and QuickSight Q, available in preview. Then they moved on to the fourth tenet of the day, solving real problems end to end, which kind of reminds me of the theme we heard at Dell Technologies World last year: end-to-end IT.
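To make the Pipelines idea concrete, here is a minimal sketch using the SageMaker Python SDK: a single training step wrapped in a pipeline object that can be upserted and started from an ordinary build system, which is what makes it a CI/CD artifact. The XGBoost estimator, the bucket paths, and the role are assumptions for illustration, not the keynote's demo.

```python
# A one-step SageMaker Pipeline; names and paths are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical

estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.2-1"
    ),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/model-artifacts",  # hypothetical bucket
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainChurnModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://example-bucket/train/", content_type="text/csv")},
)

# The pipeline definition itself is the versionable artifact: a CI job can
# upsert and start it on every merge, which is the CI/CD-for-ML point.
pipeline = Pipeline(name="ChurnPipeline", steps=[train_step], sagemaker_session=session)
pipeline.upsert(role_arn=role)
pipeline.start()
```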
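And here is the Redshift ML pattern described above, sketched as it might look driven from Python: the analyst submits plain SQL, Redshift hands the actual model training off to SageMaker behind the scenes, and the result is exposed as a SQL function. The cluster, table, columns, role, and bucket are all hypothetical.

```python
# Submitting a Redshift ML CREATE MODEL statement via the Redshift Data API.
# Every identifier below is a placeholder.
import boto3

sql = """
CREATE MODEL customer_churn
FROM (SELECT age, plan_type, monthly_spend, churned FROM customers)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'example-redshift-ml-bucket');
"""

client = boto3.client("redshift-data")
client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="analyst",
    Sql=sql,
)

# Once training finishes, scoring is just SQL, for example:
#   SELECT customer_id, predict_churn(age, plan_type, monthly_spend) FROM customers;
```

This is the abstraction being discussed: the analyst never touches S3 exports or SageMaker jobs directly, even though both are doing the work underneath.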
So we are starting to see the land grab, in my opinion, with Amazon really going after more than infrastructure and platform as a service. They talked about contact centers, Kendra, Lookout for Metrics, and predictive maintenance. Then Matt Wood came on and talked about the massive disruption across industries; he said, literally, machine learning will disrupt every industry. They spent a lot of time on that, and they went into computer vision at the edge, which I'm a big fan of. I just love that product. Clearly every vertical, Dave, is up for grabs. That's the key Dr. Matt Wood message. >> Yeah, I totally agree. I see machine intelligence as the top layer of the stack, and as I said, it's going to be infused into all areas. It's not some kind of separate thing, the way we think of Kubernetes as a separate thing. It's going to be embedded everywhere. And I really like Amazon's edge strategy. You were the first to write about it, and in your keynote preview interview, Andy Jassy said, we want to bring AWS to the edge, and we see the data center as just another edge node. So they're bringing SDKs, a package of sensors, and appliances. I've said many times that developers are going to be the linchpin of the edge, and Amazon is bringing its entire data plane, control plane, and APIs to the edge, giving builders, slash developers, the ability to innovate. I really like that strategy versus, "Hey, here's a box with an x86 processor inside; throw it over the edge, give it a cool name that has edge in it, and here you go." >> They might as well call it the hyper edge. You know, the thing that's real is the data aspect at the edge. Databases, data warehouses, and data lakes are involved in everything, and then some sort of BI or tooling lets the data analyst work with the data, with data feeds into machine learning, a critical piece of all this, Dave. Databases used to be a boring field. I have a degree in database design, one of my computer science degrees, and back then nobody really cared if you were a database person. Now data is everything. This is a whole new field, and an opportunity. But also, are there enough people out there to do all this? >> Well, it's a great point, and I think this is why Amazon is trying to abstract some of the complexity. I sat in on a private session around databases today and listened to a number of customers. Some of it, I think, was under NDA, so I can't say too much, but I will say this: Amazon's philosophy of the database, and you addressed this in your conversation with Andy Jassy, is to have really fine-grained access to the deep-level APIs across all of its services. He said this to you: we don't necessarily want to be the abstraction layer per se, because when the market changes, that's harder for us to change; we want to keep that fine-grained access. And so you're seeing that with databases, whether it's NoSQL or SQL: the different flavors of Aurora, DynamoDB, Redshift, RDS, on and on. There are just a number of data stores.
And you're seeing, for instance, Oracle take a completely different approach. Yes, they have MySQL, because they got that with the Sun acquisition, but they're really about putting as much capability into a single database as possible: you only need one database. A completely different philosophy. >> Yeah. And then, obviously, HealthLake, which was pretty much the last of the announcements, with a big impact on healthcare. Again, the theme of horizontal data and vertical specialization, with data science and software playing out in real time. >> Well, I have asked this question many times on theCUBE: when will machines be able to make better diagnoses than doctors? That day is coming, if it's not already here. I think HealthLake is really interesting, and I've got an interview later on with one of the practitioners in that space. Healthcare is an industry that's ripe for disruption and really hasn't been disrupted. It's obviously a very high-risk industry, but as we all know, healthcare is too expensive, too slow, and too cumbersome, and it sometimes takes too long to get a diagnosis or be seen. Amazon, with its partners, is trying to attack all of those problems. >> Well, Dave, let's summarize our take on Amazon's machine learning keynote. I'll say it was pretty historic, in the sense that there was so much content. In the first keynote last year, Andy Jassy, he told me, spent 75 minutes on machine learning; they had to kind of create their own category. Swami, who we've interviewed many times on theCUBE, was awesome, but there's still a lot more: 215 announcements this year, more machine learning capabilities than ever before, moving faster, solving real problems, targeting the builders, a broad platform of capabilities. That's the Amazon cadence. What's your analysis of the keynote? >> Well, a couple of things. One, we've said for a while now that the new innovation cocktail is cloud plus data plus AI, really machine intelligence applied to that data at cloud scale. Amazon has obviously nailed the cloud infrastructure, and it's got the data; that's why database is so important. So it has to be a leader in machine intelligence, and you're seeing this in the spending data. With our partner ETR, you see that AI and ML spending momentum is at or near the highest, along with automation and containers. Why is that? It's because everybody is trying to infuse AI into their application portfolios, automate as much as possible, and get insights that the systems can take action on. And actually, it's really augmented intelligence in a big way, driving insights and speeding that time to insight, and Amazon has to be a leader there, along with Google, Facebook, and obviously Microsoft. IBM is trying to get in there; they were kind of first with Watson, but I think they're far behind the hyperscale players. The key point, though, is that most companies are going to be buying this, not building it, and that's good news for organizations. >> Yeah. You get 80 percent of the way there with the product. Why not go that way?
The alternative is to try to find machine learning people to build it, and they're hard to find. So seeing machine learning expertise replicated at scale with SageMaker, then ultimately in databases and tools, and then ultimately built into applications: in my opinion, Amazon continues to move up the stack with its capabilities. And machine learning is interesting because it's kind of its own little monster of a building block. It's not just one thing, and it's going to be super important. I think it's going to have an impact on the startup scene and on innovation, and it's going to have an impact on incumbent companies that are currently leaders and are under threat from new entrants. >> So I think it's going to be a very entrepreneurial opportunity, and it's going to be interesting to see how machine learning plays that role. Is it a defining feature that's core to the intellectual property, or is it enabling new intellectual property? I just don't see how that's going to fall yet. I would bet that intellectual property will be built on top of Amazon's machine learning, while the genuinely new algorithms get built separately. If you compete head to head with that scale, you could be on the wrong side of history. Again, this is a bet that the startups and the venture capitalists will have to make: who ends up riding the right wave? Because if you make the wrong design choice, you can end up with a very complex environment, whether it's IoT or whatever your app is serving. If you can narrow it down and get a wedge into the marketplace as a company, I think that's an advantage. It will be great to see the impact this has on the ecosystem. >> Well, something you said just now gives a clue. You talked about the difficulty of finding the skills, and I think a big part of what Amazon and the others innovating in machine learning are trying to close is the gap between the demand and those qualified to actually do this stuff: the data scientists, the quality engineers, the data engineers, et cetera. Companies spent the last ten years trying to hire these people and couldn't find them, then tried to train them, and it's taking too long. Now I think they're looking toward machine intelligence to really solve that problem, because that scales. As we know, outsourcing the hardcore heavy lifting to services companies doesn't scale that well. >> Well, you know what, give me some machine learning, and give it to me faster. I'll take the 80 percent and let us build on top of it, certainly on the media cloud and theCUBE Virtual that we're doing. Again, every vertical is going to be impacted, Dave. Great to see you, great stuff. So far, week two. We're live, covering the keynotes; tomorrow we'll be covering the keynotes for the public sector day. That should be chock-full of action, since that environment has been impacted the most by COVID, with a lot of innovation. A lot of coverage ahead. I'm John Furrier, and with Dave Vellante, thanks for watching.

Published Date : Dec 9 2020


Steven Dietch, HPE - HPE Discover 2017


 

>> Announcer: Live from Las Vegas, it's theCUBE, covering HPE Discover 2017. Brought to you by Hewlett Packard Enterprise. >> Okay, welcome back, everyone. We are here live in Las Vegas for theCUBE's exclusive coverage, three days of Hewlett Packard Enterprise Discover 2017. I'm John Furrier, with my co-host Dave Vellante. Seven years of coverage, in our seventh year, and of course we've had many guests on over those years. Our next guest has been on every year: Steven Dietch, Vice President, Worldwide Service Provider Business. Great to see you. >> Good to see you. >> A seven-year Cube alumnus. You've been on every year. >> That's right. >> Great to see you. >> Good, just getting older. (laughing) >> And smarter. >> Co-Host: I think we started at VMworld. >> We did, way back. >> Yeah, and at Barcelona; I think you were on at Barcelona, back when we had no live broadcast. A lot's changed. So what's up with you right now? Before we get into some of the history, where we've been and where we're going, what's happening for you in the news here at HPE Discover? What's the big story? >> Well, the headline, and what Meg and Antonio and everybody else have been talking about, is HPE's strategy, core beliefs, and vision, which revolve around three elements: making hybrid IT simple, powering the edge, and the expertise that brings it all together. My focus is really the hybrid IT portion. Hybrid IT is pervasive: on-prem, off-prem, traditional IT, private cloud, public cloud. Customers are increasingly moving to that model given the value they see in optimizing their IT environment and running workloads, or sourcing applications, from the best execution venue. My personal focus right now is on the service providers that will deliver the off-premise element of HPE's hybrid strategy going forward, because we made some very clear decisions that we weren't going to do that ourselves anymore. We had a public cloud before that we decided to shut down, and with the spinoff of enterprise services, that leaves us dependent on, or actually embracing, partners to deliver all of that consumption-based, off-premise service element. >> A lot's changed. The elephant in the room is, obviously, the decline in people buying boxes, peddling hardware, but IT isn't declining; IT is shifting. The services model is interesting, and service provider roles are changing. Anyone in the SaaS business knows this: enterprises now have SaaS products they offer their customers. In essence, a traditional enterprise buying data center hardware and software from HPE is now providing a service to its own customers. >> Steve: That's right. >> With digital. >> Steve: That's right. >> This is the digital transformation. How does that shift things? How do you talk to customers now? The service provider definition has broadened: enterprises may keep a portion of traditional enterprise IT, but they now also have a service provider component. How do you talk to customers about this trend? Because this is truly where the business transformation hits the road. >> Well, let's start with what you mentioned: digital transformation. At the end of the day, in simple terms, it's entities using digital technologies to improve the experience of their constituents, partners, customers, and employees, and of their processes, systems, and so forth. Hybrid IT is ultimately one of the enablers behind that digital transformation.
We're extremely passionate about that, because you're right, that's where everybody's going, whether you're a small or mid-market company, a large enterprise, or a service provider. You're going through your own transformation in order to deliver against that digital transformation process. >> You said before that you're kind of reliant on, then you amended that to embracing, partners for the cloud. In fact, if you don't have a cloud strategy today, you're toast, and you are relying on your partners for a big part of that strategy. It's not just Azure; it's not a one-trick pony. Can you talk about what, beyond the big partners, you're doing to differentiate within that next tier, and how those providers differentiate from the big guys like AWS? >> Right, and you're absolutely right. We firmly believe the world is going to be multi-cloud. Certain workloads will stay in the data center, some as private cloud in the data center; others will move to managed private cloud off premise; and others will make a lot of sense on a hyperscale provider like Amazon, Azure, or Google. You want the best execution venue for that application or workload; it makes all the sense in the world. That's what we call, and you've heard us talk about this before, the right mix. As customers decide where to put those workloads, we're working with the service providers below those big hyperscale gorillas to deliver value that the hyperscale providers cannot. Everything is not going to go to Amazon. It's a fact. >> It's not a winner-take-all game. >> It's not winner-take-all. The world is way too diverse: diverse workloads, diverse geographies, diverse business requirements. The way we look at it, as we embrace the service providers below the gorillas, is that we want to collectively go after opportunities the hyperscale providers cannot deliver on. It really revolves around three things that those service providers should be able to do. One is to embrace customer complexity: go beyond simple services to full-stack offerings, drive digital transformations, and embrace customer intimacy. The big gorillas have a very broad, very rich set of services, but when it comes to intimacy and customization, you're not going to go there. And remember, 98 percent of the value in the market today is still traditional apps. Number two is geography. We all know the big boys have a physical presence in only about 15 or 16 percent of countries today; there are still close to 200 countries where they don't, and when you look at data residency, data privacy, and so forth, or even performance and latency, you still need that physical presence. Even in the countries where the big hyperscale providers are present, you still need the girth of resources, technical, sales, and so forth, and sometimes that's missing. Enterprise customers and mid-market customers want to embrace that. Finally, and you know this as well as everybody else, you made the point before: as customers evolve to hybrid, they have to manage that environment, the combination of on-prem, let's call them Tier 2/Tier 3 service providers, and then the AWSs, Azures, and Googles of the world. That's a challenge. That's a big challenge, being able to manage that hybrid environment.
The service providers we're working with, we want them to be that hybrid manager, that broker, in order to mitigate risk, determine the best execution venue, and deal with the challenges these customers face, including cost, time, and skill sets. >> If I could follow up on that in terms of the sustainability of those three differentiators. Complexity: I think you're okay there, since IT just keeps getting more and more complex. The geos: maybe that changes slowly over time, but your point is that local, belly-to-belly resources are probably not something the big guys will put in place any time soon. It was interesting to hear the CEO of Wipro talk about hyper-automating, but we're still decades away from eliminating all the people required. And managing multi-cloud: that seems to be a big white space right now that nobody has really cornered. >> John: Huge. >> It's not likely that Amazon is going to own multi-cloud management; that's really not even their interest. >> John: That's single cloud. >> To me, number three is a multi-hundred-billion-dollar opportunity for the market, and for HPE specifically. >> Absolutely. We go hard at all three of those, and some are more defensible than others. On geography, you're absolutely right, and the resources will continue to be a challenge for folks. Number one and number three are clearly areas where our service provider partners can take advantage of opportunities that the hyperscale providers cannot. >> John: Why HPE? >> Why HPE? At the end of the day, we bring best-in-class technology, best-in-class commercial models, and collaborative go-to-market. And, by the way, we don't compete with our partners. I challenge folks, particularly service provider partners, to look at their existing vendors and ask them: why are you competing against me? We are very, I'll use the word, clean. The strategy is very simple, very clean. We're not competing. >> John: No hair on those partnership deals. >> No hair. >> If you take out the big hyperscalers, the AWSs, Googles, and Facebooks of the world, there's a big mid-range torso of the market that you're going after, and you won't be without competition: all the usual competitors we know and talk about are going after that same space. Your differentiators are what you said, but how are they approaching it? They're going to try to create FUD around what you're doing, and this transformation market is confusing enough as it is. People are getting more educated on cloud, which is a good thing, but there's still no real definition of multi-cloud; multi-cloud is simply happening. How are you competing directly with those competitors, and how are you going to go in and win? >> I think it's the fact that we're partner-first, and that we already understand how partners function and what they require. It sounds a little simplistic, but at the end of the day, you have a number of pure-play service providers out there, and then you have thousands of other providers. And what have they done?
They've evolved from being a traditional reseller or solution provider to adding a third business model: being a consumption-oriented service provider. The fact that we understand the journey they've been on and the challenges they go through is key; I would challenge our competitors, who have not been channel-friendly at all, to show that depth of insight. >> Is that the big transformation? That third point you mentioned: is the big change in the service provider transformation that consumption focus? >> It is, because we all recognize the big service providers, whether you're a big cloud service provider, a consumer service provider like an Uber or a Spotify, or a telco. Think about all of the service providers out there; let's call them, for lack of a better word, hybrid partners. They have a resale business where they do transactions, they have a solutions business, and then they have a consumption business. Those are the ones that are actually capable of pulling off the differentiation. They can get intimate with the customer... >> John: They have specialism. >> They have specialism, they have professional services, they have industry insight, and they understand their customers much better. >> The channel is turning into the customer for you guys, in a way, with the partner-first message. >> It's a different type of partner. Absolutely. Those are the three swim lanes; we look at partners as being in one, two, or all three of them. >> Steve, thanks for coming on theCUBE again. Great seeing you. Big takeaway from the show here: the transformation is in full swing, and the market is going crazy with cloud and IoT. What's your big takeaway from the show this year? >> The clarity. The clarity and the focus that Hewlett Packard Enterprise has, and the fact that our partners and customers are really embracing it. That's the key message I've heard from everybody. Everybody's super excited, and there's a focus. Maybe in the past we were so big and so complex, but the fact that we're slimming down, going in the opposite direction from some of our competitors, means that clarity will lead to execution excellence, I believe. >> Awesome. Steve, thanks for taking the time. This is theCUBE's live coverage from HPE Discover 2017, our seventh year covering the transformation. More live coverage after this short break. Stay with us. We'll be right back.

Published Date : Jun 7 2017


Irfan Khan, SAP | SAP SapphireNow 2016


 

>> Voiceover: It's theCUBE, covering Sapphire Now. Headlines sponsored by SAP HANA Cloud, the leader in platform as a service. With support from Console Inc., the cloud internet company. Now, here are your hosts: John Furrier and Peter Burris. >> Okay, welcome back, everyone. We are here live in Orlando, Florida, for exclusive coverage of SAP Sapphire Now. This is theCUBE, SiliconANGLE's flagship program; we go out to the events and extract the signal from the noise. I'm John Furrier, with Peter Burris. I want to thank our sponsors for allowing us to get down here: SAP HANA Cloud Platform, Console Inc., Capgemini, and EMC. Thanks so much for supporting us. Our next guest is Irfan Khan, SVP and General Manager of Digital Enterprise Platforms, which includes HANA end to end. Welcome back to theCUBE. >> Thank you. >> John: Good to see you. >> Lovely to be back here again. >> John: So you know theCUBE's history. We did pretty much every Hadoop World up until 2013; now we run our own event the same week as Strata in New York. And we've been to every Sapphire since 2010 except 2014 and 2015, when we had a little conflict of events. But it's been great, and it's been all about big data. I remember Bill McDermott up there when HANA was announced, before Hadoop hit. So you had HANA coming out of the oven, then Hadoop hits the scene and gets all the press, and HANA is now rolling. Roll forward four more years and we're here. What's your take on this? It's been an interesting shift: some are saying Hadoop is hard to use, with a high total cost of ownership. Now HANA is rising and Hadoop is sliding. That's my opinion, but what's yours? >> Well, that's a well-summarized history lesson, so to speak. Firstly, great to be on theCUBE again; it's always lovely to see you gentlemen, you do a wonderful job. Let me highlight some of the key milestones I've observed over the last four or five years. Ironically, I arrived at SAP in 2010, when the entire trajectory of HANA started going in that direction, and Hadoop was there, but petering out a little bit because of the unknowns: the uncertainty of scale, and whether it would only ever be batch or would become real-time. So let me offer two or three milestones from the SAP side. HANA started off as a disruptive technology, conceived in part as a response to internal challenges we were running into with the systems of record of the previous era. They were incapable of dealing with SAP applications, incapable of giving us what we now refer to as a digital core, and incapable of giving our customers what they truly needed. As a response, HANA was introduced into the market, but its scope wasn't limited by the historical baggage of the relational era, or even the Hadoop era. It was a completely reimagined technology, built around in-memory computing and a columnar architecture, which gave us an opportunity to project what we could ultimately achieve with it as a foundation. So HANA came into the market focusing on analytics to start with, then went full circle into handling transactionality as well. And where are we today? I think Hadoop is now being recognized, I would say, as a de facto data operating system.
So HDFS has become a very significant extension to most IT organizations, but it's still lacking compute capabilities; that's what gave rise to Spark. And of course HANA is, within itself, a very significant computing engine. >> John: And Vora. >> Irfan: Of course, and Vora as well. Now you're finishing off my sentences. Thank you. >> (laughs) This is what theCUBE is all about; we've got a good cadence going here. Alright, but now the challenge. HANA, by the way, was super fast when it came out, but then, in my opinion, it didn't really find its swim lane. Now it seems so clear that the fruit is coming off the tree; you're seeing it blossom beautifully. You've got S/4HANA, you've got the core... Explain it, because people get confused. Am I buying HANA Cloud, or am I buying HANA Cloud Platform? How is all this segmented for the buyer, for the customer? >> Sure. Firstly, SAP applications need a system of record, and HANA is that system of record. It has database capability, but ultimately HANA is not just a database; it's an entire platform with integration services, application services, and, of course, data services. Now, when we talk about the HANA Cloud Platform, that is taking HANA as a core technology, as a platform, and embedding it inside a cloud deployment environment called the HANA Cloud Platform. It gives customers who are implementing S/4 on premise, or even in a public S/4 instance, an opportunity to extend those applications as their business requirements demand. So in layman's terms: you have a system-of-record requirement with SAP applications, and that is HANA; in the case of S/4, it is only HANA now. And to extend and customize those applications, there is one definitive extension venue, and that's called the HANA Cloud Platform. >> John: And that's mainly for developers, too. I call it the developer cloud, for lack of a better or more generic description. That's the Cloud Foundry piece: the platform as a service is essentially bolting on a developer on-ramp, if you will. Is that a safe way to look at it? >> Irfan: Yes. The developer interaction point with SAP now is certainly HCP, but it's also a significant ecosystem enabler. Just last week, or the week before last in fact, we announced the relationship with Apple, which is a phenomenal extension of what we do with business applications, and HCP is, in effect, the definitive venue for the Apple relationship. >> So tell us a little bit about building upon that. How should an executive think about digitalization? Is it a new set of channels, or the ability to reach new customers, or is something more fundamental going on here? Is it really about translating more of your business into data in a way that's accessible, so it can be put to use and put to work in more and different ways? >> Sure, it's a great question. So what is digitalization? Well, firstly, it's not new; SAP didn't invent digitalization. But I think we know a fair bit about where digitalization is going to take many businesses in the next three to five years. I would say there are five prevailing trends fueling the need to go digital. The first is hyperconnectivity.
Data and information are not only consumed but created in a variety of places, and geographically, just about anywhere is now connected. In fact, I read one statistic that 90 percent of the world's inhabitable land mass has either cellular or wireless reception, so we're truly hyperconnected. The second trend is the scale of the cloud. The cloud gives us compute not just on the desktop but anywhere, and by that definition of anywhere, a smart appliance at the edge is in effect supercomputing, because it gives you an extension to any compute device. And on top of that you have cybersecurity and a variety of other things like IoT. These trends are all fueling the need to become digitally aware enterprises, and what's ultimately happening is business transformation: somebody without premises, without assets, comes along and disrupts a business. In fact, one study from Capgemini and MIT back in 2013 projected that by the year 2020, approximately 40 percent of the S&P 500 would cease to exist, for the simple reason that the business transformations disrupting their classical business models are going to change the way they operate. So, to concatenate the answer to your question: digital transformation at the executive level is about not just surviving but thriving. It's about taking advantage of the digital trends, and making sure that as you reinvent your business, you're not just looking at what you do today; you treat that as a line that's being deprecated and ask what you're going to do in addition, because that's where your growth is going to come from. SAP is all about helping customers become digitally aware and transform their organizations. >> Peter: So you're having conversations with customers all the time about the evolution of data management technologies, your argument being that HANA is more advanced: a columnar database, in memory, with speed and richer I/O, all kinds of wonderful capabilities that can then be reflected in more complex, richer, value-creating applications. But the data itself is often undervalued. We haven't figured out how to look at that data and start treating it literally as capital. We talk about a business problem, how much money we want to put there, how many people we want to put there, but we don't yet talk about how much data is going to be required either to go there and make it work, or that we're going to capture out of it. How are you working with customers to think that problem through? Are they thinking it through differently, in your experience? >> Yeah, that's a great question. Firstly, to look at the value associated with data, we can borrow an analogy from the airline industry: data is very much like passengers. The businesses we typically operate on work with first- and business-class data, and they've made significant investments in how to securely store, access, process, and manage all of that business-class and first-class data.
But there's an economy class of data that is significant and very pervasive. From the airline's point of view, an individual economy-class passenger doesn't equate to an awful lot, but if you aggregate all the economy-class passengers, it's significant; it's actually more than your business- and first-class revenue, so to speak. Consequently, large organizations have to start looking at data and monetizing it, not ignoring all the noise signals that come out of the sensors and out of the various machinery, and making sure they can aggregate that data and build context around it. We have to start thinking along those lines. >> John: I love that analogy, it's so good. But let's take it one step further. I want to make sure I get on the right plane, right? So that's being data-aware. Digital assets are the data, so valuation techniques come into play. But having a horizontally traversable data plane, in real time, is the big thing, because going through security, shoes off, laptop out, that's just IT. The plane is where the action is; I want to be on the right plane. Making the data aware, the alchemy behind it, that's the trick. What are your thoughts? This is a cutting-edge area. You hear about AI ontologies now, and machine learning, certainly, though it surely hasn't advanced to the point where it's really working yet. It's getting there, but what do you think? >> Yeah. So whatever the vehicle you're referring to, a plane or any mode of transportation, at the metaphor level we have to understand that there is value in making decisions at the right time, when you have all the information you need. By definition, we have created a culture in IT where we segregate data, almost a two-swim-lane approach: here is my "now" data, my transactional data, and here is the data that feeds into some other environment, which I may look to analyze after the event. Getting back to the HANA philosophy from day one, it was about creating a simplified model where you can do live analytics on transactional data. That is a big, significant shift. Using your aircraft analogy: once I'm on board, I don't want to suddenly realize I didn't pick up my magazine from the newspaper stand. I've got no content, no internet, no connectivity, and I'm on the plane for the next nine hours with nothing to do. The idea is that you want all of the right information readily available so you can make real-time decisions. That calls for simplified architectures, which is what HANA is all about. (A rough sketch of that live-query pattern appears at the end of this segment.) >> We're getting the signal here, and I know you're super busy, so thanks so much for coming on theCUBE. I want to get one final question in. What's your vision, what are your plans? It's a cutting-edge area, and the ecosystem is developing nicely. What are your goals for the next year? What are you looking to do? What are your key KPIs? What are you trying to knock down this year? >> First and foremost, we've spent an awful lot of time talking about SAP transformations and SAP customer landscape transformations. S/4 is all about that; that is the digital core. But the digital core shouldn't be limited to customers who already have an SAP transaction or application foundation.
We want to take SAP to every platform use case out there, and most customers will have a need for HANA-like technology. So the top of my agenda is increasing the full-use adoption and actual value of HANA, and we're seeing an awful lot of traction there. The second thing is that we're now driving toward the cloud. HCP is the definitive venue not just for the ecosystem and the developer but also for traditional SAP customers, and we're going to be announcing a lot more exciting relationships. I'd love to be able to speak with you again in the future about how the evolution takes place. >> John: We wish we had more time. You're a super guest, great insight. Thank you for sharing the data here. >> Irfan: Thank you for having me. >> John: On theCUBE. We'll be right back with more live coverage here inside theCUBE at Sapphire Now. You're watching theCUBE.
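As a concrete illustration of the "live analytics on transactional data" point Irfan makes above, here is a minimal sketch using SAP's hdbcli Python driver. The aggregate runs directly against the same in-memory, columnar table the transactions land in, with no ETL hop to a separate analytical copy. The host, credentials, and ORDERS table are placeholders, not a real system.

```python
# Querying a transactional HANA table live, via the hdbcli driver.
# Connection details and the ORDERS schema are invented for illustration.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com",  # hypothetical host
    port=39015,
    user="ANALYST",
    password="example-password",
)

cursor = conn.cursor()
# The same table the OLTP workload writes to; no separate warehouse copy,
# which is the "two swim lanes collapsed into one" idea from the interview.
cursor.execute(
    """
    SELECT region, SUM(amount) AS revenue
    FROM ORDERS
    WHERE order_date = CURRENT_DATE
    GROUP BY region
    """
)
for region, revenue in cursor.fetchall():
    print(region, revenue)
conn.close()
```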

Published Date : May 19 2016
