Ajay Mungara, Intel | Red Hat Summit 2022
>>Welcome back to Boston. This is theCUBE's coverage of Red Hat Summit 2022, the first Red Hat Summit we've done face to face in at least two years; 2019 was our last one. We're kind of rounding the far turn, you know, coming up for the home stretch. My name is Dave Vellante, here with Paul Gillin. Ajay Mungara is here; he is a senior director in the IoT group for developer solutions and engineering at Intel. Ajay, thanks for coming on theCUBE. >>Thank you so much. >>We heard your colleague this morning in the keynote talking about the DevCloud. I feel like I need a DevCloud. What's it all about? >>So we've been working with developers and the ecosystem for a long time, trying to build edge solutions. A lot of the time, people think about edge solutions as just compute at the edge. But really, you've got to have some component of the cloud, there is a network, and there is the edge, and the edge is complicated because of the variety of devices that you need. When you're building a solution, you've got to figure out: where am I going to push the compute? How much of the compute am I going to run in the cloud? How much am I going to push to the network, and how much do I need to run at the edge? A lot of times, developers don't have one environment where all three come together. Today, the way it works is you have all these edge devices that customers buy, they install them, they set them up, and they try to do all of that. Then they have a cloud environment where they do their development. And they only figure out how all of this comes together when they are integrating it at the customer, in the solution space. So what we did is we took all of these edge devices, put them in the cloud, and gave you one environment, from cloud to edge, to build your complete solution. >>Essentially it simulates.
>>No, it's not simulating. >>So the cloud spans, the centralised cloud out to the edge? >>You know, what we did is we took all of these edge devices that would actually get deployed at the edge, this whole variety of devices, and put them in a cloud environment. These are non-rack-mountable devices that you can buy in the market today. We have about 500 devices in the cloud, everything from Atom to Core to Xeon, to FPGAs, to accelerator cards, to graphics. All of these devices are available to you. So in one environment you can connect to any of the clouds, the hyperscalers; you can connect to any of these network devices; you can define your network topology; you can bring in any of your sources sitting in a git repository, or Docker containers that may be sitting somewhere in a cloud environment or on Docker Hub. You can pull all of these things together, and we give you one place where you can build it, where you can test it, where you can performance-benchmark it, so that when you actually go to the field to deploy, you know what type of sizing you need. >>So let me make sure I understand. If I want to test an actual edge device using 100-gig Ethernet versus MPLS versus 5G, you can do all that without virtualizing? >>So all the edge devices are there today, and the network part of it we are building together with Red Hat, where we are putting everything on this environment. The network part is not quite solved yet, but that's what we want to solve. The goal here is: let's say you have five cameras, or 50 cameras with different types of resolutions, and you want to do some AI inference type of workloads at the edge. What type of compute do you need? What type of memory? How many devices do you need, and where do you want to push the data? Because security is very important at the edge.
So you've really got to figure out: I've got to secure the data in flight, I want to secure the data at rest, and how do you do the governance of it? How do you do service governance, so that all the services, the different containers running on the edge device, are behaving well? You don't have one container hogging all the memory or all the compute, or at certain points in the day you might have priority for certain containers. So all of these models, where do you run them? We have an environment where you can run all of that. >>Okay, so take that example of AI inferencing at the edge. I've got an edge device and I've developed an application, and I'm going to say: okay, I want you to do the AI inferencing in real time. There's some kind of streaming data coming in, and I want you to persist every hour on the hour, save that timestamp. Or if some event occurs, if a deer runs across the headlights, I want you to persist that data and send it back to the cloud. And you can develop that, test it, benchmark it? >>Right, and then you can say: okay, in this environment I have five cameras at different angles, and you want to try it out. And what we have is a product, Intel OpenVINO, which is an open source product that does all of the optimizations you need for edge inference. So you develop the model to recognise the deer, in your example. I develop the training model somewhere in the cloud: I've annotated the different video streams, and I know that I'm recognising a deer. Now you need to figure out: when the deer is coming, you want to immediately take an action. You don't want to send all of your video streams to the cloud; it's too expensive, bandwidth costs a lot. So you want to compute that inference at the edge.
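That kind of event-driven persistence policy is simple to sketch. Here is a minimal, hypothetical Python version; the function name, the "deer" label, and the hourly heartbeat are illustrative assumptions, not DevCloud or OpenVINO APIs:

```python
def should_persist(detections, last_sent, now, heartbeat_secs=3600):
    """Decide whether to send a frame's data back to the cloud.

    Persist immediately on an event of interest (a deer detection),
    otherwise only on the hourly heartbeat timestamp.
    """
    if "deer" in detections:
        return True  # event of interest: persist immediately
    return (now - last_sent) >= heartbeat_secs  # hourly heartbeat

# a deer detection persists immediately; quiet frames wait for the hour
event = should_persist(["deer"], last_sent=0, now=10)
quiet = should_persist([], last_sent=0, now=10)
hourly = should_persist([], last_sent=0, now=3600)
```

In a real deployment the detection labels would come from the inference model and the persistence target would be a cloud endpoint, but the decision logic at the edge reduces to a gate like this.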
In order to do that inference at the edge, you need some environment where you can do it. And to build that solution: what type of edge device do you really need? What type of compute? How many cameras are you computing? You're not only recognising a deer; you're probably recognising some other objects too. In fact, one of the things that happened was I took my nephew to the San Diego Zoo, and he was very disappointed that he couldn't see the chimpanzees, the gorillas and the other animals that were there. He was very sad. So I said, all right, there should be a better way. I saw there was a stream of the camera feed there. So what we did is edge inference, with some logic that says: at this time of the day the gorillas get fed, so the likelihood of you actually seeing the gorilla is very high. So you just go at that point, and you see it, you capture it. That's what you do, and you want to develop that entire solution. It's based on weather, based on other factors; you need to bring all of these services together and build a solution, and we offer an environment that allows you to do it. >>Will you customise the edge configuration for the developer? If they want 50 cameras, you don't have 50 cameras available, right? >>For cameras, what we do is we have a streaming capability that we support, so you can upload all your videos. And you can say: I want to simulate 50 streams now, or I want to simulate 30 streams, or just the two or three videos that you want to pull in. And you can do the inference simultaneously, running different algorithms at the edge. All of that is supported. And the bigger challenge at the edge is this: developing the solution is fine, but when you go to actual deployment, and post-deployment monitoring, maintenance, making sure that you're managing it, it's very complicated.
What we have seen is that over 50%, 51% to be precise, of developers have developed some kind of cloud-native application recently. So we believe that if you bring that type of cloud-native development model to the edge, then your scaling problem, your maintenance problem, your how-do-you-actually-deploy-it problem, all of these challenges can be better managed. And if you run all of that with an orchestration layer on Kubernetes, and we run everything on top of OpenShift, then you have a deployment-ready solution right there. Everything is containerised; you have it as Helm charts or Docker Compose; you have tested it all in this environment. Now you take that to the deployment, and if it is on any standard Kubernetes environment, or on OpenShift, you can straight away deploy your application. >>What does that edge architecture look like? What's Intel's and Red Hat's philosophy around it? I know you can run SAP in a data centre; you guys have got that covered. What does the edge look like? What's that architecture of silicon and middleware? Describe that for us. >>So at the edge, think about it, right: it can run traditional workloads. In an industrial PC you have a lot of Windows environments; you have a lot of Linux there now in an edge environment, quite a few of these devices. I'm not talking about the far edge, where there are tiny microcontrollers; I'm talking about the devices that connect to those far-edge devices, collect the data, do some analytics, do some compute, that type of thing. The far-edge device could be a camera, could be a temperature sensor, could be a weighing scale, could be anything. That's the far edge, and then there's all of that data, which you don't want to push entirely to the cloud.
In order for you to do the analysis, you're going to have some set of edge devices collecting all this data and making decisions close to the data. You're doing some analysis there, all of that, right? So you need some analysis tools, and certain other things. And let's say that you want to run, say, RHEL or any of these operating systems at the edge; then you have the ability to manage all of that using a control node. The control node can also sit at the edge. In some cases, like in a smart factory, you have a little data centre in the factory, or even in a retail >>store >>behind a closet, you have a bunch of devices sitting there. Correct. And those devices can all be managed and clustered in an environment. So now the question is: how do you deploy applications to that edge? How do you collect all the data that is coming through the camera and other sensors, process it close to where the data is being generated, and make immediate decisions? So the architecture would look like this: you have some cloud, which does some management of these edge devices and applications, some type of control. You have some network, because you need to connect to it. Then you have the whole plethora of edge, starting from a hybrid environment, where you have an entire mini data centre sitting at the edge, down to one or two devices that are just collecting data from sensors and processing it. That is the heart of the other challenge: the architecture varies across verticals, from smart cities to retail to healthcare to industrial. They have all these different variations, different environments they are going to operate under, different regulations they have to look into, and different security protocols they need to follow. So your solution?
Maybe it is just recognising people and identifying whether they are wearing a helmet in a coal mine, right, whether they are wearing safety gear or not. That solution, versus: you are riding a bike in traffic and, for safety reasons, we want to identify whether the person is wearing a helmet. Very different use cases, very different environments, different ways in which you are operating. But that is where the developer needs to be. Similar algorithms are used, by the way, but how you deploy them varies quite a bit. >>But the DevCloud, make sure I understand it: you talked about a retail store, a great example, but that's general-purpose infrastructure that's then customised through software for that retail environment. Same thing with telco, same thing with the smart factory. You said not the far edge, right? Is that coming in the future, or is it there now? >>It extends to the far edge. Putting everything in one cloud environment, we did it, right? In fact, I put some cameras on some iPads and laptops, and we could stream different videos; we did all of that. But a data centre is a boring environment, right? What are you going to see, a bunch of racks and servers? So putting far-edge devices there didn't make sense. What we did instead is give you an easy ability to stream, connect, or upload the far-edge data that gets generated at the far edge. Say time-series data: you can take some of the sensor data, but it's mostly camera data, videos. You upload those videos, and that is as good as streaming those videos, right? That means you are generating that data, and then you're developing your solution with the assumption that the camera is observing whatever is going on. Then you do your edge inference, you optimise it, you make sure that you size it, and then you have a complete solution. >>Are you supporting all manner of microprocessors at the edge, including non-Intel?
>>Today it is all Intel. But because we are really promoting the whole open ecosystem, in the future, yes, we want to be able to do that. Today we were trying to address the customers that we are serving now, and we needed an environment where they could do all of this. For example, under what circumstances would you use an i5 versus an i9, versus putting an algorithm on integrated graphics, versus running it on a CPU, or on a Neural Compute Stick? It's hard, right? You'd need to buy all those devices and experiment with your solutions on all of them. It's hard. Having everything available in one environment, you can compare and contrast to see what type of hardware makes the best sense. >>But it's not just x86? >>It's x86, plus a portfolio of FPGAs, graphics, all of what Intel supports today, and in the future we would want to open it up. >>So how do developers get access to this cloud? >>It is all free. You just have to go sign up and register, and you get access to it. It is devcloud.intel.com. You go there, and the container playground is all available for free for developers. You can bring in container workloads there, or even bare-metal workloads. All of it is available to you. >>Do you need to reserve the endpoint devices? >>Correct, and that is where there is an interesting technology to govern this. What we did was build a kind of queuing system, a scheduler. You develop your application on a control node, and you only need the edge device when you're scheduling that workload. So we have these scheduling systems; we use Kafka and other technologies to do the scheduling in the container-workload environment, with all the optimised operators that are available in an OpenShift environment.
So we brought those operators in and installed them. What happens is you take your workload and run it, let's say, on an i7 device. When you're running that workload on an i7 device, that device is dedicated to you. And we've instrumented each of these devices with telemetry, so we can see, while your workload is running on that particular device, what the memory looks like, what the power looks like, how hard the device is running, what the compute looks like. We capture all those metrics. Then you take the workload and run it on an i9, or on graphics, or on an FPGA, and you compare and contrast. And you say: huh, okay, for this particular workload, this device makes the best sense. In some cases, I'll tell you, developers have come back and told me: I don't need a bigger processor, I need bigger memory. >>Yeah, sure. >>Right. In some cases they've said: look, I want to prioritise accuracy over performance, because in a healthcare setting, accuracy is more important. In some cases they have optimised for the size of the device, because it needs to fit in the right environment, in the right place. So every use case you optimise for is up to the solution, up to the developer, and we give you the ability to do that. >>What kind of folks are you seeing? You've got hardware developers, you've got software developers, right? Who's coming in? >>We have a lot of system integrators; we have enterprises coming in. We are seeing a lot of software solution developers, independent software developers. We also have a lot of students coming in; it's a free environment for them to play with, instead of them having to buy all of these devices. We are pulling a lot of developers into this environment currently, and we're getting, of course, feedback from the developers. We are just getting started here.
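A toy sketch of that queue-and-dedicate scheduling model, in plain Python with illustrative names; the real system uses Kafka and OpenShift operators, which are not modelled here:

```python
from collections import deque

class DeviceScheduler:
    """Toy model of a DevCloud-style queue: each hardware target has a
    FIFO of pending jobs, and a device runs one job at a time, dedicated
    to that job while it runs."""

    def __init__(self, devices):
        self.queues = {d: deque() for d in devices}

    def submit(self, device, job):
        """Queue a job (a callable returning its telemetry/result)."""
        self.queues[device].append(job)

    def run_next(self, device):
        """Run the next queued job on this device, if any."""
        if self.queues[device]:
            job = self.queues[device].popleft()
            return job()
        return None

# submit the same workload to different targets, then compare telemetry
sched = DeviceScheduler(["i7", "i9", "fpga"])
sched.submit("i7", lambda: {"device": "i7", "latency_ms": 42})
result = sched.run_next("i7")
```

The comparison step the interview describes then amounts to running the same callable against each queue and inspecting the returned metrics side by side.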
We are continuing to improve our capabilities. We are adding virtualisation capabilities. We are working very closely with Red Hat to showcase all the goodness coming out of Red Hat, OpenShift, and other innovations. We heard, in one of the OpenShift sessions, they're talking about MicroShift, they're talking about HyperShift, a lot of these innovations, operators, everything that is coming together. But where do developers play with all of this? If you spend half your time trying to configure it, install it, buy the hardware, and figure it out, you lose patience, and you lose time. And it's complicated, right? How do you set it up, especially when it involves cloud, network, and edge? You need all of that set up right. So what we have done is set up everything for you. You just come in. And by the way, not only that: what we realised when we talked to customers is that they don't want to listen to all our processor optimizations and all that. They say: I am here to solve my retail problem. I want to count the people coming into my store, right? I want to see if there is a spill, recognise it, and go clean it up before a customer complains about it. Or I have brain tumour segmentation, where I want to identify whether the tumour is malignant or not; or I want telehealth solutions. They're really talking about these use cases. So what we did is we built many of these use cases by talking to customers, open-sourced them, and made them available on DevCloud for developers to use as starting points, so that they have this retail starting point, or this healthcare starting point, all these use cases, with all the code, and we have showed them how to containerise it.
The biggest problem is that developers still don't know, at the edge, how to take a legacy application and make it cloud-native. So they just wrap it all into one Docker container and say: okay, now I'm containerised. There's a lot more to do. So we tell them how to do it, right? We train these developers, and we give them an opportunity to experiment with all these use cases, so that they get closer and closer to what the customer solutions need to be. >>Yeah, we saw that a lot with the early cloud, where they wrapped their legacy apps in a container and shoved them into the cloud. It was really just hosting legacy apps; it didn't take advantage of the cloud. Now people have come around. It sounds like a great free developer resource. To take advantage of it, where do they go? >>It's devcloud.intel.com. >>devcloud.intel.com, check it out. It's a great freebie. Ajay, thanks very much. >>Thank you very much. I really appreciate your time. >>All right, keep it right there. This is Dave Vellante for Paul Gillin. We'll be right back, covering Red Hat Summit 2022 on theCUBE.
>>Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with the total energy given by the expression shown at the bottom left of this slide. Here, the sigma variables are binary spin values, the matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N, for worst-case instances at each N. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances. And it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions.
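The energy expression referred to on the slide is not reproduced in the transcript; as a concrete sketch, here is a direct implementation under one common sign convention (E = -Σ J_ij σ_i σ_j - Σ h_i σ_i, with σ_i in {-1, +1}; other authors flip the signs):

```python
def ising_energy(J, h, s):
    """Total Ising energy for spins s[i] in {-1, +1}.

    J: dict mapping pairs (i, j) with i < j to coupling strengths J_ij
    h: list of local fields h_i
    s: list of spin values
    Convention assumed: E = -sum J_ij s_i s_j - sum h_i s_i.
    """
    e = -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    e -= sum(hi * si for hi, si in zip(h, s))
    return e

# two ferromagnetically coupled spins, no field: aligned is lower energy
e_aligned = ising_energy({(0, 1): 1.0}, [0.0, 0.0], [+1, +1])
e_opposed = ising_energy({(0, 1): 1.0}, [0.0, 0.0], [+1, -1])
```

The ground-state problem described in the talk is then the search, over all 2^N spin assignments, for the one minimizing this function.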
Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms, for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous travelling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median run times, across the library of problem instances, that scaled as a very steep root-exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with N equal to 19,289, requiring approximately two days of run time on a single 2.4-gigahertz core.
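To make the heuristic-versus-exact distinction concrete, here is a minimal nearest-neighbour TSP heuristic. This is an illustration of a fast, non-optimal method, not one of the solvers benchmarked in the studies cited above:

```python
import math

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: always visit the closest unvisited city.
    Runs in O(n^2) but carries no optimality guarantee."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour (returning to the start)."""
    return sum(math.dist(points[tour[k]], points[tour[(k + 1) % len(tour)]])
               for k in range(len(tour)))

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]  # unit square: optimal tour = 4
tour = nearest_neighbour_tour(pts)
```

The exact solvers discussed in the talk prove a bound like "within 0.14% of optimal"; a greedy pass like this gives no such certificate, which is precisely the trade the speaker is describing.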
Now, if we simple-mindedly extrapolate the root-exponential scaling from the study up to N equal to 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the N equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances at much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900. This is an instance derived from 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single 2.4-gigahertz core. But the much larger so-called World TSP benchmark instance, with N equal to 1,904,711, has been solved approximately, within an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results on Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance.
Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance at lower cost on classes of problem instances that are underserved by existing approaches, and fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So, against that backdrop, I'd like to use my remaining time to introduce our work on the analysis of coherent Ising machine architectures and associated optimization algorithms. These machines, in general, are a novel class of information processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics, and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or opto-electronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right.
The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft, or perhaps mean-field, spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearised Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground-state problem.
This method of solving Ising problems seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory: namely, a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead both to improvements of the core CIM algorithm and to a preprocessing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, the OPO undergoes a sort of lasing transition, and the steady states of the OPO above this threshold are essentially coherent states.
There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by a mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that it is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or antiferromagnetic n-equals-two Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase.
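A quick way to see the threshold shift is to diagonalize the linearized loss matrix of the coupled pair. This is a sketch in units where the single-OPO threshold is 1; the specific matrix form is an assumption consistent with the mutual-injection picture above.

```python
import numpy as np

def collective_thresholds(alpha):
    """Linearized loss matrix of two OPOs with mutual injection alpha.
    Its eigenvalues are the collective oscillation thresholds, and the
    eigenvectors give the phase pattern of each collective mode."""
    L = np.array([[1.0, -alpha],
                  [-alpha, 1.0]])
    return np.linalg.eigh(L)  # eigenvalues returned in ascending order

vals, vecs = collective_thresholds(0.2)   # ferromagnetic-like coupling
# lowest threshold is 1 - alpha = 0.8, with an in-phase (equal-sign) mode
```

For negative alpha, the same calculation gives the out-of-phase mode the lower threshold, matching the antiferromagnetic case described above.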
Clearly, we can imagine generalizing this story to larger n. However, the story doesn't stay as clean and simple for all larger problem instances, and to find a more complicated example we only need to go to n equals four. For some choices of J_ij at n equals four, the story remains simple, like the n-equals-two case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated n-equals-four instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but suboptimal minimum at large pump power; the global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors seem to become more common at larger n. For the n-equals-20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-n examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp.
Of course, n equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-n limit, we can also analyze fully quantum mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of n equals 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger n. Our initial approach to characterizing CIM behavior in the large-n regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, etcetera. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So in closing, I should acknowledge the people who did the hard work on these things that I've shown.
My group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, have been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI Research Labs. I should also acknowledge funding support from the NSF via the Coherent Ising Machines Expedition in Computing, and also from NTT PHI Research Labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much. >>I'd like to thank NTT Research and the organizers for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi, and I'm from Caltech. Today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, how we have been using them as Ising machines, and how we're pushing them toward quantum photonics. I want to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics, where some of the biggest examples are metamaterials, which are arrays of small resonators. More recently there is the field of topological photonics, which is trying to implement a lot of the topological behaviors of models from condensed matter physics in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators.
So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: a simple summation over the spins, where spins can be either up or down, and the couplings are given by the J_ij. The Ising problem is: if you know the J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be an NP-hard problem. So it's computationally important because it's representative of NP problems, and NP problems are important because, first, they're hard on standard computers if you use brute-force algorithms, and second, they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems, and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is, is a resonator with nonlinearity in it: we pump these resonators and we generate a signal at half the frequency of the pump. One photon of pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. If you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendulums. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, the length of the string, and by that modulation, which acts as the pump, I'm going to make an oscillation, a signal, which is at half the frequency of the pump.
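As a concrete reference point, the Ising problem just stated can be solved by brute force for tiny n, which also shows why the exponential 2^n search motivates a physical machine. This is a sketch; the sign convention H = -1/2 Σ J_ij s_i s_j is assumed.

```python
import numpy as np
from itertools import product

def ising_ground_state(J):
    """Brute-force Ising ground-state search: minimize
    H(s) = -1/2 * sum_ij J_ij s_i s_j over all s in {-1, +1}^n.
    Exponential in n, which is what Ising machines aim to beat."""
    best_s, best_E = None, np.inf
    for bits in product([-1, 1], repeat=J.shape[0]):
        s = np.array(bits)
        E = -0.5 * s @ J @ s
        if E < best_E:
            best_s, best_E = s, E
    return best_s, best_E

# Antiferromagnetic triangle (frustrated): J_ij = -1 for every pair, so not
# all three coupling constraints can be satisfied at once.
J = -(np.ones((3, 3)) - np.eye(3))
s, E = ising_ground_state(J)
```

For the frustrated triangle the best achievable energy is -1.0, with exactly one spin disagreeing with the other two.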
And I have two of them, to show you that they can acquire these phase states. So they're still phase- and frequency-locked to the pump, but they can end up in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, up or down. And to implement the network of these resonators, we use a time-multiplexing scheme. The idea is that we put pulses in the cavity; these pulses are separated by the repetition period that you put in, T_R, and you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator 1 to 2, 2 to 3, and so on. If you look at the second delay, which is two times the repetition period, it couples 1 to 3, and so on. And if you have n minus 1 delay lines, then you can have any potential couplings among these synthetic resonators. And if I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right time, then I can have a programmable, all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is: having these OPOs, each of which can be either zero or pi, I can arbitrarily connect them to each other. Then I start by programming this machine to a given Ising problem, by just setting the couplings, setting the controllers in each of those delay lines. So now I have a network which represents an Ising problem, and the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints.
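The delay-line construction described above can be sketched as building a coupling matrix, where a delay of k repetition periods links pulse i to pulse i+k. The fixed strength per delay is illustrative; in the real machine the modulators vary the strength and phase pulse by pulse, which is how arbitrary J_ij are programmed.

```python
import numpy as np

def delay_line_couplings(n, delays):
    """Sketch of the time-multiplexed coupling scheme: n pulses separated by
    T_R circulate in one ring, and a delay line of length k*T_R couples
    pulse i to pulse i+k. `delays` maps delay length k -> coupling strength
    (hypothetical static modulator settings). With delays 1..n-1 present,
    any coupling pattern can in principle be realized."""
    J = np.zeros((n, n))
    for k, alpha in delays.items():
        for i in range(n - k):
            J[i, i + k] = J[i + k, i] = alpha  # pulse i <-> pulse i+k
    return J

# 4 pulses with only the nearest-neighbour delay (k = 1): a 1D chain.
J = delay_line_couplings(4, {1: 1.0})
```

Note how one physical ring plus a handful of delay lines replaces n separate resonators, which is why the system size scales linearly with the number of pulses.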
And the way it happens is that the Ising Hamiltonian maps to the linear loss of the network. If I start adding gain, by just putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transitions: what happened in the first implementation, which was using a free-space optical system; then the guided-wave implementation in 2016; and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. So I just want to make this distinction here that the first implementation was an all-optical interaction; we also had an n-equals-16 implementation. Then we transitioned to this measurement-feedback idea, and I'll tell you quickly what it is. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using the measurement feedback, but I'm going to mostly focus on the all-optical networks: how we're using all-optical networks to go beyond simulation of the Ising Hamiltonian, both on the linear and the nonlinear side, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine. We implemented a small n-equals-four MAX-CUT problem on the machine. So, one problem for one experiment: we ran the machine 1000 times, we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. So then the measurement-feedback idea was to replace those couplings and the controller with a simulator. We basically simulated all those coherent interactions on an FPGA, and we replicated the coherent pulse with respect to all those measurements. Then we injected it back into the cavity, and the nonlinearity still remains.
So it still is a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system preserves the important information or not, or whether it's going to behave better computation-wise, and that's still a lot of ongoing study. But nevertheless, the reason this implementation is very interesting is that you don't need the n minus 1 delay lines; you can just use one. Then you can implement a large machine, and then you can run several thousands of problems on the machine, and you can compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling. The optical loss of this network corresponds to the Ising Hamiltonian. To show you the example of the n-equals-four experiment, for all those phase states, and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides gain. Then you start bringing up the gain so that it hits the loss; then you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation, so you go to either the zero or the pi phase state. And the expectation is that the network oscillates in the lowest possible loss state.
There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. The difference between looking at topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian. One of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one spin on one side to the other side you get one phase, and if you go back you get a different phase. The other thing is that we're not just interested in finding the ground state; we're actually now interested in looking at all sorts of states, and looking at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a 1D chain of these resonators, which corresponds to the so-called SSH model. In the topological work we get a similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how reasonably well it actually follows the prediction and the theory. One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine, and that's something unique about this time-multiplexed implementation, so that we can actually look at the dynamics. One example that we have looked at is that we can actually go through the transition of going from the topological to the standard nontrivial behavior —
I'm sorry, to the trivial behavior of the network. You can then look at the edge states, and you can see the trivial end states and the topological edge states actually showing up in this network. We have just recently implemented a 2D network with the Harper-Hofstadter model, and we don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics. We can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic nonclassical and quantum nonlinear behaviors in these networks. Yeah. So I told you mostly about the linear side; let me just switch gears and talk about the nonlinear side of the network. The biggest thing that I talked about so far in the Ising machine is this phase transition at threshold. Below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. And this is basically the mechanism of the computation in these OPOs, which is through this phase transition from below to above threshold. One of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, and that basically corresponds to the intensity of the driving pump. So it's really hard to imagine that you can have this phase transition happen all in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and then its intensity goes really high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition.
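For reference, the band structure of the textbook SSH model mentioned a moment ago can be computed directly. This is the standard two-band dispersion, not the measured data from the talk; v and w denote the intracell and intercell hoppings.

```python
import numpy as np

def ssh_bands(v, w, num_k=201):
    """Textbook SSH chain band structure: E(k) = +/- |v + w*exp(ik)|.
    The band gap is 2*|v - w|; it closes at v = w, which separates the
    trivial (v > w) and topological (v < w) phases of the chain."""
    k = np.linspace(-np.pi, np.pi, num_k)
    E = np.abs(v + w * np.exp(1j * k))
    return k, E  # the two bands are +E and -E

k, E = ssh_bands(v=1.0, w=0.5)
gap = 2 * E.min()  # 2*|v - w|
```

In the photonic network the role of energy is played by loss, so the same dispersion shows up as a loss spectrum rather than an energy band.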
So the question is: can we look at other phase transitions, can we utilize them for computing, and can we also bring them to the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, you're going to have the phase locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences. This transition corresponds to a symmetry breaking: in the non-degenerate case, the signal can acquire any of the phases on the circle, so it has a U(1) symmetry, and if you go to the degenerate case, then that symmetry is broken and you only have the zero and pi phase states. So now the question is: can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions we're also thinking about. And this phase transition is not just important for computing; it's also interesting from the sensing perspective, and you can easily bring it below threshold and operate in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, now we can see all sorts of more complicated and more interesting phase transitions in the spectral domain.
One of them is a first-order phase transition, which you get by just coupling two OPOs, and that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore, both in the classical and quantum regimes. I should also mention that you can think about the couplings being nonlinear couplings as well, and that's another behavior that you can see, especially in the non-degenerate regime. So with that, I've basically told you about these OPO networks: how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. Now I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. Of course, the motivation is that if you look at electronics, what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements we have now, then where we are with optics is probably very similar to 70 years ago, which is a tabletop implementation. And the question is: how can we utilize nanophotonics? I'm going to just briefly show you the two directions that we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. So the work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard, and also Marty Fejer at Stanford. We could show that you can do periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in this nanophotonic, periodically poled thin-film lithium niobate, and now we're working on building OPOs based on that kind of thin-film lithium niobate photonics.
And these are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks. I also want to point out that the reason these nanophotonic platforms are actually exciting is not just that you can make large networks and make them compact in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO, which is: can we have the quantum superposition of the zero and pi phase states that I talked about? The nanophotonic thin-film lithium niobate provides some opportunities to actually get closer to that regime, because of the spatiotemporal confinement that you can get in these waveguides. So we're doing some theory on that, and we're confident that the ratio of nonlinearity to losses that you can get with these platforms is actually much higher than what you can get with the existing platforms. And to go even smaller, we have been asking the question of what is the smallest possible OPO that you can make. You can think about really wavelength-scale resonators, adding the chi-two nonlinearity, and seeing how and when you can get the OPO to operate. And recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if we can build the OPOs, we know that there is a path for implementing OPO networks at such a nanoscale. We have looked at these calculations and tried to estimate the threshold of the OPOs, say for a wavelength-scale resonator, and it turns out that it can actually be even lower than the kind of bulk PPLN OPOs that we have been building in the past 50 years or so.
So we're working on the experiments, and we're hoping that we can actually make larger and larger scale OPO networks. So let me summarize the talk: I told you about the OPO networks and our work on Ising machines and the measurement feedback; I told you about the ongoing work on the all-optical implementations, both on the linear side and on the nonlinear behaviors; and I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to thank you. >>I'm from the University of Tokyo. Before I start, I would like to thank the organizers and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI Lab. I'm happy to share with you today some of the recent work that has been done either by me or by colleagues in Kazuyuki Aihara's group. As the title of my talk indicates, it is about a neuromorphic in-silico simulator for the coherent Ising machine. Here is the outline: I would like to make the case that simulation in digital electronics of the CIM can be useful for better understanding or improving its function principles by introducing some ideas from neural networks. This is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, a projection of the performance that can be achieved using a very-large-scale simulator in the third part, and finally talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper, and this comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation.
So in red here are the limitations of each of these hardware platforms. Interestingly, the FPGA-based systems, such as the Fujitsu Digital Annealer, the Toshiba Simulated Bifurcation Machine, or a recently proposed restricted Boltzmann machine FPGA by a group in Berkeley, offer a good compromise between speed and scalability. And this is why, despite the unique advantages that some of the other hardware have, such as the quantum superposition in quantum annealers or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, and they are not particularly energy-efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck and the long-distance propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems rather than from the physics of electrons and photons. So, to put the performance of these various hardware in perspective, we can look at the computation performed by the brain: the brain computes using billions of neurons, using only 20 watts of power, and operating at very slow frequencies. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here,
by designing a large-scale simulator in silico, in the bottom here, that can be used for testing better self-organization principles for the CIM. In this talk I will discuss three neuro-inspired principles. The first is the asymmetry of connections, and the neural dynamics, often chaotic, that result from that asymmetry. The second is mesoscopic structure: neural networks are not composed of the repetition of always the same types of neurons, but there is a local structure that is repeated, and here is a schematic of the microcolumn in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the macaque cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in-silico simulation? First, about the two principles of asymmetry and mesoscopic structure. We know that the classical approximation of the coherent Ising machine is analogous to rate-based neural networks; in the case of the Ising machines, the classical approximation can be obtained using the truncated Wigner approximation, for example. The dynamics of both systems can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum over J_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the computed coupling term. In both cases, CIM and neural networks, these dynamics can be written as gradient descent of a potential function V, written here, and this potential function includes the Ising Hamiltonian.
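For reference, a commonly used form of these mean-field soft-spin equations and their potential, written as a sketch consistent with the description above (the talk's exact notation may differ, and the external-field term is omitted here):

```latex
\frac{dx_i}{dt} \;=\; (p - 1 - x_i^2)\,x_i \;+\; \sum_j J_{ij}\,x_j
             \;=\; -\,\frac{\partial V}{\partial x_i},
\qquad
V(\mathbf{x}) \;=\; \sum_i \left(\frac{x_i^4}{4} + \frac{(1-p)\,x_i^2}{2}\right)
             \;-\; \frac{1}{2}\sum_{i,j} J_{ij}\,x_i x_j ,
```

where p is the normalized pump rate; the gradient form, and hence V itself, requires a symmetric coupling matrix J_ij = J_ji.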
So this is why it is natural to use this type of dynamics to solve the Ising problem, in which the ω_ij are the Ising couplings and the h_i are the external fields of the Ising Hamiltonian that we want to minimize. Note that this potential function can only be defined if the ω_ij are symmetric. The well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and so one strategy is to gradually deform this landscape, using some annealing process. But there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce an asymmetric structure into the system, where one analog spin, one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this structure introduces asymmetry into the system, which in turn induces chaotic dynamics: a chaotic search, rather than an annealing process, for the ground state of the Ising Hamiltonian. Within this structure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the spins to become equal to a certain target amplitude a. This is done by modulating the strength of the Ising coupling: you can see the error variable e_i multiplying the Ising coupling here in the dynamics of each DOPO. The whole dynamics described by these coupled equations, because the e_i do not necessarily take the same value for different i, is asymmetric, which in turn creates chaotic dynamics, which I show here for solving a certain size of SK problem, in which the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plot.
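A minimal sketch of this amplitude-control idea (the parameter values, clamping, and small instance are my own illustrative choices; the talk's FPGA implementation differs): each spin x_i is paired with an error variable e_i that multiplies its coupling term and pushes x_i² toward the target a. Because the e_i differ across spins, the dynamics is asymmetric and searches chaotically, so we track the best Ising energy visited along the trajectory.

```python
import itertools
import random

def ising_energy(J, s):
    n = len(J)
    return -0.5 * sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def cac_search(J, p=0.7, beta=0.3, a=1.0, dt=0.02, steps=8000, seed=0):
    """Amplitude-control search (Euler sketch):
      dx_i/dt = (p - 1 - x_i^2) x_i + e_i * sum_j J_ij x_j
      de_i/dt = -beta * (x_i^2 - a) * e_i
    Returns the best Ising energy of the sign configuration along the run."""
    rng = random.Random(seed)
    n = len(J)
    x = [0.1 * rng.uniform(-1, 1) for _ in range(n)]
    e = [1.0] * n
    best = float("inf")
    for _ in range(steps):
        coup = [sum(J[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + dt * ((p - 1 - x[i] ** 2) * x[i] + e[i] * coup[i])
             for i in range(n)]
        # clamp e as a numerical safeguard for the explicit Euler scheme
        e = [min(max(e[i] + dt * (-beta * (x[i] ** 2 - a) * e[i]), 1e-3), 10.0)
             for i in range(n)]
        best = min(best, ising_energy(J, [1 if v > 0 else -1 for v in x]))
    return best

# Small frustrated +-1 instance (symmetric, zero diagonal).
J = [[0, 1, -1, 1, 1],
     [1, 0, 1, -1, 1],
     [-1, 1, 0, 1, -1],
     [1, -1, 1, 0, 1],
     [1, 1, -1, 1, 0]]
ground = min(ising_energy(J, s)
             for s in itertools.product([-1, 1], repeat=len(J)))
best = min(cac_search(J, seed=k) for k in range(5))
print(best, ground)
```

On this 5-spin instance the chaotic itinerancy visits the brute-force ground-state energy within a few restarts.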
You can see this chaotic search visiting various local minima of the Ising Hamiltonian and eventually finding the global minimum. It can be shown that a modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics does not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. We have proposed in the past two different modulations of the target amplitude. The first is a modulation that ensures the entropy production rate of the system becomes positive, and this forbids the creation of any non-trivial attractor. But in this work I will talk about another, restricted modulation, which is given here, that works as well as the first modulation but is easier to implement on an FPGA. These coupled equations, which represent the classical simulation of the coherent Ising machine with some error correction, can be implemented especially efficiently on an FPGA. Here I show the time that it takes to simulate this system: in red, you see the time that it takes to simulate the x_i term, the e_i term, the dot product, and the Ising Hamiltonian for a system with 500 spins and 500 error variables, equivalent to 500 DOPOs. On an FPGA, the nonlinear dynamics corresponding to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to roughly 0.1 microseconds. This is to be compared to what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a 1 GHz pulse repetition rate, we would require 0.5 microseconds to do this. So the simulation on FPGA can be at least as fast as a CIM built around a 1 GHz repetition rate.
Then the dot product that appears in these differential equations can be computed in 43 clock cycles, that is to say, about 0.14 microseconds. So for problem sizes larger than about 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on an FPGA by an adder tree, whose height scales logarithmically with the size of the system. But that is only in the case where we have an infinite amount of resources on the FPGA. For larger problems, of more than about 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that I denote U here, and then the scaling becomes linear in N/U for the nonlinear parts and (N/U) squared for the dot products. Typically, for a low-end FPGA, the block size U of this matrix is about 100. So clearly we want to make U as large as possible, in order to maintain the log N scaling for the number of clock cycles needed to compute the dot product, rather than the (N/U) squared scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-in and fan-out, and long-distance data paths within the FPGA.
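The block-size trade-off can be put into a toy cycle-count model (my own back-of-the-envelope, not the paper's exact figures): (N/U)² block reductions, each through an adder tree of depth about log₂ U, so U = N recovers the log N scaling and small U gives the quadratic behavior.

```python
from math import ceil, log2

def mvm_cycles(n, block):
    """Toy clock-cycle model for an N x N matrix-vector product on an FPGA:
    the matrix is tiled into ceil(N/block)^2 blocks; each block's dot
    products are reduced by an adder tree of depth ~log2(block)."""
    tiles = ceil(n / block) ** 2
    tree_depth = ceil(log2(block)) if block > 1 else 1
    return tiles * tree_depth

# Fully parallel (block = N): logarithmic in N.
full = [mvm_cycles(n, n) for n in (512, 1024, 2048)]
# Resource-limited (block = 100, the typical low-end figure from the talk):
tiled = [mvm_cycles(n, 100) for n in (512, 1024, 2048)]
print(full, tiled)
```

Doubling N adds one cycle in the fully parallel case but roughly quadruples the tiled cost, which is the scaling gap the hierarchical layout is meant to close.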
So the solution for getting higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of the adder tree. This can be done by organizing the electrical components within the FPGA hierarchically, in the way shown here in the right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I am not going to go into the details of how this is implemented on the FPGA; I just want to give you an idea of why the hierarchical organization of the system becomes extremely important for getting good performance when simulating Ising machines. So instead of the details of the FPGA implementation, I would like to give some benchmark results for this simulator, which was used as a proof of concept for this idea, and which can be found in the arXiv paper here. Here I show results for solving SK problems: fully connected, random plus-or-minus-one spin-glass problems. As a metric we use the number of matrix-vector products, since that is the bottleneck of the computation, needed to get the optimal solution of these SK problems with 99% success probability, against the problem size. In red here is the proposed FPGA implementation; in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems; and in green, for noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. And so you clearly see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches.
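The 99%-success metric used here converts directly into a compute budget; a small sketch of the standard definition (my own helper, not code from the talk): repeat a run until the overall success probability reaches 0.99.

```python
from math import log

def tts99(t_run, p_success):
    """Time-to-solution at a 99% target: repeat a run of duration t_run,
    each run succeeding independently with probability p_success, until
    the overall success probability reaches 0.99. The same formula works
    if t_run counts matrix-vector products instead of seconds."""
    if p_success >= 0.99:
        return t_run  # one run already suffices
    return t_run * log(1 - 0.99) / log(1 - p_success)

print(tts99(1.0, 0.99))  # a single unit-time run
print(tts99(1.0, 0.5))   # about 6.64 unit-time repetitions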
So So So that's interesting feature of the system and next we can see what is the real time to solution to solve this SK instances eso in the last six years, the time institution in seconds to find a grand state of risk. Instances remain answers probability for different state of the art hardware. So in red is the F B g. A presentation proposing this paper and then the other curve represent Ah, brick a local search in in orange and silver lining in purple, for example. And so you see that the scaring off this purpose simulator is is rather good, and that for larger plant sizes we can get orders of magnitude faster than the state of the art approaches. Moreover, the relatively good scanning off the time to search in respect to problem size uh, they indicate that the FPD implementation would be faster than risk. Other recently proposed izing machine, such as the hope you know, natural complimented on memories distance that is very fast for small problem size in blue here, which is very fast for small problem size. But which scanning is not good on the same thing for the restricted Bosman machine. Implementing a PGA proposed by some group in Broken Recently Again, which is very fast for small parliament sizes but which canning is bad so that a dis worse than the proposed approach so that we can expect that for programs size is larger than 1000 spins. The proposed, of course, would be the faster one. Let me jump toe this other slide and another confirmation that the scheme scales well that you can find the maximum cut values off benchmark sets. The G sets better candidates that have been previously found by any other algorithms, so they are the best known could values to best of our knowledge. 
And, um or so which is shown in this paper table here in particular, the instances, uh, 14 and 15 of this G set can be We can find better converse than previously known, and we can find this can vary is 100 times faster than the state of the art algorithm and CP to do this which is a very common Kasich. It s not that getting this a good result on the G sets, they do not require ah, particular hard tuning of the parameters. So the tuning issuing here is very simple. It it just depends on the degree off connectivity within each graph. And so this good results on the set indicate that the proposed approach would be a good not only at solving escape problems in this problems, but all the types off graph sizing problems on Mexican province in communities. So given that the performance off the design depends on the height of this other tree, we can try to maximize the height of this other tree on a large F p g a onda and carefully routing the components within the P G A and and we can draw some projections of what type of performance we can achieve in the near future based on the, uh, implementation that we are currently working. So here you see projection for the time to solution way, then next property for solving this escape programs respect to the prime assize. And here, compared to different with such publicizing machines, particularly the digital. And, you know, 42 is shown in the green here, the green line without that's and, uh and we should two different, uh, hypothesis for this productions either that the time to solution scales as exponential off n or that the time of social skills as expression of square root off. 
So it seems, according to the data, that time solution scares more as an expression of square root of and also we can be sure on this and this production show that we probably can solve prime escape problem of science 2000 spins, uh, to find the rial ground state of this problem with 99 success ability in about 10 seconds, which is much faster than all the other proposed approaches. So one of the future plans for this current is in machine simulator. So the first thing is that we would like to make dissimulation closer to the rial, uh, GOP oh, optical system in particular for a first step to get closer to the system of a measurement back. See, I am. And to do this what is, uh, simulate Herbal on the p a is this quantum, uh, condoms Goshen model that is proposed described in this paper and proposed by people in the in the Entity group. And so the idea of this model is that instead of having the very simple or these and have shown previously, it includes paired all these that take into account on me the mean off the awesome leverage off the, uh, European face component, but also their violence s so that we can take into account more quantum effects off the g o p. O, such as the squeezing. And then we plan toe, make the simulator open access for the members to run their instances on the system. There will be a first version in September that will be just based on the simple common line access for the simulator and in which will have just a classic or approximation of the system. We don't know Sturm, binary weights and museum in term, but then will propose a second version that would extend the current arising machine to Iraq off F p g. A, in which we will add the more refined models truncated, ignoring the bottom Goshen model they just talked about on the support in which he valued waits for the rising problems and support the cement. 
So we will announce later when this is available and and far right is working >>hard comes from Universal down today in physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. Also like to say that I look forward to collaborations with with a file lab and Yoshi and collaborators on the topics of this world. So today I'll briefly talk about our attempt to understand the fundamental limits off another continues time computing, at least from the point off you off bullion satisfy ability, problem solving, using ordinary differential equations. But I think the issues that we raise, um, during this occasion actually apply to other other approaches on a log approaches as well and into other problems as well. I think everyone here knows what Dorien satisfy ability. Problems are, um, you have boolean variables. You have em clauses. Each of disjunction of collaterals literally is a variable, or it's, uh, negation. And the goal is to find an assignment to the variable, such that order clauses are true. This is a decision type problem from the MP class, which means you can checking polynomial time for satisfy ability off any assignment. And the three set is empty, complete with K three a larger, which means an efficient trees. That's over, uh, implies an efficient source for all the problems in the empty class, because all the problems in the empty class can be reduced in Polian on real time to reset. As a matter of fact, you can reduce the NP complete problems into each other. You can go from three set to set backing or two maximum dependent set, which is a set packing in graph theoretic notions or terms toe the icing graphs. A problem decision version. This is useful, and you're comparing different approaches, working on different kinds of problems when not all the closest can be satisfied. You're looking at the accusation version offset, uh called Max Set. 
And the goal here is to find assignment that satisfies the maximum number of clauses. And this is from the NPR class. In terms of applications. If we had inefficient sets over or np complete problems over, it was literally, positively influenced. Thousands off problems and applications in industry and and science. I'm not going to read this, but this this, of course, gives a strong motivation toe work on this kind of problems. Now our approach to set solving involves embedding the problem in a continuous space, and you use all the east to do that. So instead of working zeros and ones, we work with minus one across once, and we allow the corresponding variables toe change continuously between the two bounds. We formulate the problem with the help of a close metrics. If if a if a close, uh, does not contain a variable or its negation. The corresponding matrix element is zero. If it contains the variable in positive, for which one contains the variable in a gated for Mitt's negative one, and then we use this to formulate this products caused quote, close violation functions one for every clause, Uh, which really, continuously between zero and one. And they're zero if and only if the clause itself is true. Uh, then we form the define in order to define a dynamic such dynamics in this and dimensional hyper cube where the search happens and if they exist, solutions. They're sitting in some of the corners of this hyper cube. So we define this, uh, energy potential or landscape function shown here in a way that this is zero if and only if all the clauses all the kmc zero or the clauses off satisfied keeping these auxiliary variables a EMS always positive. And therefore, what you do here is a dynamics that is a essentially ingredient descend on this potential energy landscape. If you were to keep all the M's constant that it would get stuck in some local minimum. 
However, what we do here is we couple it with the dynamics we cooperated the clothes violation functions as shown here. And if he didn't have this am here just just the chaos. For example, you have essentially what case you have positive feedback. You have increasing variable. Uh, but in that case, you still get stuck would still behave will still find. So she is better than the constant version but still would get stuck only when you put here this a m which makes the dynamics in in this variable exponential like uh, only then it keeps searching until he finds a solution on deer is a reason for that. I'm not going toe talk about here, but essentially boils down toe performing a Grady and descend on a globally time barren landscape. And this is what works. Now I'm gonna talk about good or bad and maybe the ugly. Uh, this is, uh, this is What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it or any socially than the number of trajectories in it decays exponentially quickly. And the decay rate is a characteristic in variant characteristic off the dynamics itself. Dynamical systems called the escape right the inverse off that is the time scale in which you find solutions by this by this dynamical system, and you can see here some song trajectories that are Kelty because it's it's no linear, but it's transient, chaotic. Give their sources, of course, because eventually knowledge to the solution. Now, in terms of performance here, what you show for a bunch off, um, constraint densities defined by M overran the ratio between closes toe variables for random, said Problems is random. Chris had problems, and they as its function off n And we look at money toward the wartime, the wall clock time and it behaves quite value behaves Azat party nominally until you actually he to reach the set on set transition where the hardest problems are found. 
But what's more interesting is if you monitor the continuous time t the performance in terms off the A narrow, continuous Time t because that seems to be a polynomial. And the way we show that is, we consider, uh, random case that random three set for a fixed constraint density Onda. We hear what you show here. Is that the right of the trash hold that it's really hard and, uh, the money through the fraction of problems that we have not been able to solve it. We select thousands of problems at that constraint ratio and resolve them without algorithm, and we monitor the fractional problems that have not yet been solved by continuous 90. And this, as you see these decays exponentially different. Educate rates for different system sizes, and in this spot shows that is dedicated behaves polynomial, or actually as a power law. So if you combine these two, you find that the time needed to solve all problems except maybe appear traction off them scales foreign or merely with the problem size. So you have paranormal, continuous time complexity. And this is also true for other types of very hard constraints and sexual problems such as exact cover, because you can always transform them into three set as we discussed before, Ramsey coloring and and on these problems, even algorithms like survey propagation will will fail. But this doesn't mean that P equals NP because what you have first of all, if you were toe implement these equations in a device whose behavior is described by these, uh, the keys. Then, of course, T the continue style variable becomes a physical work off. Time on that will be polynomial is scaling, but you have another other variables. Oxidative variables, which structured in an exponential manner. So if they represent currents or voltages in your realization and it would be an exponential cost Al Qaeda. But this is some kind of trade between time and energy, while I know how toe generate energy or I don't know how to generate time. 
But I know how to generate energy so it could use for it. But there's other issues as well, especially if you're trying toe do this son and digital machine but also happens. Problems happen appear. Other problems appear on in physical devices as well as we discuss later. So if you implement this in GPU, you can. Then you can get in order off to magnitude. Speed up. And you can also modify this to solve Max sad problems. Uh, quite efficiently. You are competitive with the best heuristic solvers. This is a weather problems. In 2016 Max set competition eso so this this is this is definitely this seems like a good approach, but there's off course interesting limitations, I would say interesting, because it kind of makes you think about what it means and how you can exploit this thes observations in understanding better on a low continues time complexity. If you monitored the discrete number the number of discrete steps. Don't buy the room, Dakota integrator. When you solve this on a digital machine, you're using some kind of integrator. Um and you're using the same approach. But now you measure the number off problems you haven't sold by given number of this kid, uh, steps taken by the integrator. You find out you have exponential, discrete time, complexity and, of course, thistles. A problem. And if you look closely, what happens even though the analog mathematical trajectory, that's the record here. If you monitor what happens in discrete time, uh, the integrator frustrates very little. So this is like, you know, third or for the disposition, but fluctuates like crazy. So it really is like the intervention frees us out. And this is because of the phenomenon of stiffness that are I'll talk a little bit a more about little bit layer eso. >>You know, it might look >>like an integration issue on digital machines that you could improve and could definitely improve. But actually issues bigger than that. 
It's It's deeper than that, because on a digital machine there is no time energy conversion. So the outside variables are efficiently representing a digital machine. So there's no exponential fluctuating current of wattage in your computer when you do this. Eso If it is not equal NP then the exponential time, complexity or exponential costs complexity has to hit you somewhere. And this is how um, but, you know, one would be tempted to think maybe this wouldn't be an issue in a analog device, and to some extent is true on our devices can be ordered to maintain faster, but they also suffer from their own problems because he not gonna be affect. That classes soldiers as well. So, indeed, if you look at other systems like Mirandizing machine measurement feedback, probably talk on the grass or selected networks. They're all hinge on some kind off our ability to control your variables in arbitrary, high precision and a certain networks you want toe read out across frequencies in case off CM's. You required identical and program because which is hard to keep, and they kind of fluctuate away from one another, shift away from one another. And if you control that, of course that you can control the performance. So actually one can ask if whether or not this is a universal bottleneck and it seems so aside, I will argue next. Um, we can recall a fundamental result by by showing harder in reaction Target from 1978. Who says that it's a purely computer science proof that if you are able toe, compute the addition multiplication division off riel variables with infinite precision, then you could solve any complete problems in polynomial time. It doesn't actually proposals all where he just chose mathematically that this would be the case. Now, of course, in Real warned, you have also precision. So the next question is, how does that affect the competition about problems? This is what you're after. Lots of precision means information also, or entropy production. 
Eso what you're really looking at the relationship between hardness and cost of computing off a problem. Uh, and according to Sean Hagar, there's this left branch which in principle could be polynomial time. But the question whether or not this is achievable that is not achievable, but something more cheerful. That's on the right hand side. There's always going to be some information loss, so mental degeneration that could keep you away from possibly from point normal time. So this is what we like to understand, and this information laws the source off. This is not just always I will argue, uh, in any physical system, but it's also off algorithm nature, so that is a questionable area or approach. But China gets results. Security theoretical. No, actual solar is proposed. So we can ask, you know, just theoretically get out off. Curiosity would in principle be such soldiers because it is not proposing a soldier with such properties. In principle, if if you want to look mathematically precisely what the solar does would have the right properties on, I argue. Yes, I don't have a mathematical proof, but I have some arguments that that would be the case. And this is the case for actually our city there solver that if you could calculate its trajectory in a loss this way, then it would be, uh, would solve epic complete problems in polynomial continuous time. Now, as a matter of fact, this a bit more difficult question, because time in all these can be re scared however you want. So what? Burns says that you actually have to measure the length of the trajectory, which is a new variant off the dynamical system or property dynamical system, not off its parameters ization. And we did that. So Suba Corral, my student did that first, improving on the stiffness off the problem off the integrations, using implicit solvers and some smart tricks such that you actually are closer to the actual trajectory and using the same approach. You know what fraction off problems you can solve? 
We did not give the length of the trajectory. You find that it is putting on nearly scaling the problem sites we have putting on your skin complexity. That means that our solar is both Polly length and, as it is, defined it also poorly time analog solver. But if you look at as a discreet algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is because off all these stiffness, every integrator has tow truck it digitizing truncate the equations, and what it has to do is to keep the integration between the so called stability region for for that scheme, and you have to keep this product within a grimace of Jacoby in and the step size read in this region. If you use explicit methods. You want to stay within this region? Uh, but what happens that some off the Eigen values grow fast for Steve problems, and then you're you're forced to reduce that t so the product stays in this bonded domain, which means that now you have to you're forced to take smaller and smaller times, So you're you're freezing out the integration and what I will show you. That's the case. Now you can move to increase its soldiers, which is which is a tree. In this case, you have to make domain is actually on the outside. But what happens in this case is some of the Eigen values of the Jacobean, also, for six systems, start to move to zero. As they're moving to zero, they're going to enter this instability region, so your soul is going to try to keep it out, so it's going to increase the data T. But if you increase that to increase the truncation hours, so you get randomized, uh, in the large search space, so it's it's really not, uh, not going to work out. Now, one can sort off introduce a theory or language to discuss computational and are computational complexity, using the language from dynamical systems theory. But basically I I don't have time to go into this, but you have for heart problems. Security object the chaotic satellite Ouch! 
In the middle of the search space somewhere, and that dictates how the dynamics happens and variant properties off the dynamics. Of course, off that saddle is what the targets performance and many things, so a new, important measure that we find that it's also helpful in describing thesis. Another complexity is the so called called Makarov, or metric entropy and basically what this does in an intuitive A eyes, uh, to describe the rate at which the uncertainty containing the insignificant digits off a trajectory in the back, the flow towards the significant ones as you lose information because off arrows being, uh grown or are developed in tow. Larger errors in an exponential at an exponential rate because you have positively up north spawning. But this is an in variant property. It's the property of the set of all. This is not how you compute them, and it's really the interesting create off accuracy philosopher dynamical system. A zay said that you have in such a high dimensional that I'm consistent were positive and negatively upon of exponents. Aziz Many The total is the dimension of space and user dimension, the number off unstable manifold dimensions and as Saddam was stable, manifold direction. And there's an interesting and I think, important passion, equality, equality called the passion, equality that connect the information theoretic aspect the rate off information loss with the geometric rate of which trajectory separate minus kappa, which is the escape rate that I already talked about. Now one can actually prove a simple theorems like back off the envelope calculation. The idea here is that you know the rate at which the largest rated, which closely started trajectory separate from one another. So now you can say that, uh, that is fine, as long as my trajectory finds the solution before the projective separate too quickly. 
In that case, I can have the hope that if I start from some region off the face base, several close early started trajectories, they kind of go into the same solution orphaned and and that's that's That's this upper bound of this limit, and it is really showing that it has to be. It's an exponentially small number. What? It depends on the end dependence off the exponents right here, which combines information loss rate and the social time performance. So these, if this exponents here or that has a large independence or river linear independence, then you then you really have to start, uh, trajectories exponentially closer to one another in orderto end up in the same order. So this is sort off like the direction that you're going in tow, and this formulation is applicable toe all dynamical systems, uh, deterministic dynamical systems. And I think we can We can expand this further because, uh, there is, ah, way off getting the expression for the escaped rate in terms off n the number of variables from cycle expansions that I don't have time to talk about. What? It's kind of like a program that you can try toe pursuit, and this is it. So the conclusions I think of self explanatory I think there is a lot of future in in, uh, in an allo. Continue start computing. Um, they can be efficient by orders of magnitude and digital ones in solving empty heart problems because, first of all, many of the systems you like the phone line and bottleneck. There's parallelism involved, and and you can also have a large spectrum or continues time, time dynamical algorithms than discrete ones. And you know. But we also have to be mindful off. What are the possibility of what are the limits? And 11 open question is very important. Open question is, you know, what are these limits? Is there some kind off no go theory? And that tells you that you can never perform better than this limit or that limit? And I think that's that's the exciting part toe to derive thes thes this levian 10.
Why Use IaaS When You Can Make Bare Metal Cloud-Native?
>>Hi, Oleg. So great of you to join us today. I'm really looking forward to our session. So let's get started. If I can get you to give a quick intro to yourself, and then if you can share with us what you're going to be discussing today. >>Hi, Jake. My name is Oleg Elbow. I'm a product architect on the Docker Enterprise Container Cloud team. Today I'm going to talk about running Kubernetes on bare metal with Container Cloud. My goal is to tell you about this exciting feature, why we think it's important, and what we actually did to make it possible. >>Brilliant. Thank you very much. So let's get started. From my understanding, Kubernetes clusters typically run in virtual machines in clouds: for example, a public cloud like AWS, or a private cloud, maybe OpenStack-based or VMware vSphere. So why would you go off and run it on bare metal? >>Well, Docker Enterprise Container Cloud can already run Kubernetes in the cloud, as you know, and the idea behind Container Cloud is to enable us to manage multiple Docker Enterprise clusters. But we want to bring innovation to Kubernetes, and instead of spending a lot of resources on the hypervisor and virtual machines, we just go all in for Kubernetes directly on bare metal. >>Fantastic. So it sounds like you're suggesting to run Kubernetes directly on bare metal. >>That's correct. >>Fantastic, and without a hypervisor layer. >>Yes. We all know the reasons to run Kubernetes in virtual machines: in the first place it's mutual isolation of workloads. But virtualization comes with a performance hit and additional complexity. And when you run Kubernetes directly on the hardware, it's a perfect opportunity for developers: they can see performance boosts of up to 30% for certain container workloads.
This is because the virtualization layer adds a lot of overhead, and even with enhanced placement-awareness technologies like NUMA or processor pinning, it's still overhead. By skipping the virtualization, we just remove this overhead and gain this boost. >>Excellent, a 30% performance boost sounds very appealing. Are there any other value points or positive points that you can pull out? >>Yes. Besides the hypervisor overhead, virtual machines also have a static resource footprint. They take up memory and CPU cycles, and overall reduce the density of containers per host. Without virtual machines, you can run up to 16% more containers on the same host. >>Excellent, really great numbers there. >>One more thing to point out: using bare metal directly makes it easier to use special-purpose hardware, like graphics processors, virtual network functions for network interfaces, or field-programmable gate arrays for custom circuits, and you can share them between containers more efficiently. >>Excellent. I mean, there are some really great value points you've pulled out there: a 30% performance boost, a 16% density boost, and easier support for specialized hardware. But let's talk now about the applications. What sort of applications do you think would benefit from this the most? >>Well, I'm thinking primarily high-performance computation and deep learning will benefit, which is more common than you might think, now that artificial intelligence is creeping into a lot of different applications. It really depends on memory capacity and performance, and they also use special devices like FPGAs for custom circuits, so all of it is applicable to machine learning. >>And that whole AI piece is really exciting, and we're seeing it become more commonplace across a whole host of sectors.
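The density argument can be made concrete with a toy capacity model. All numbers below are invented for illustration (they are not Intel's or Mirantis' measurements); the point is only that the static per-VM footprint is what eats into container density:

```python
def containers_per_host(total_gib, system_overhead_gib, vm_size_gib=None,
                        per_vm_overhead_gib=0.0, container_gib=1.0):
    """Toy capacity model: how many fixed-size containers fit on one host."""
    usable = total_gib - system_overhead_gib
    if vm_size_gib is None:
        # Bare metal: all remaining memory goes to containers.
        return int(usable // container_gib)
    # Virtualized: carve usable memory into VMs, each paying a static
    # footprint (guest kernel, hypervisor bookkeeping) before containers.
    n_vms = int(usable // vm_size_gib)
    per_vm = int((vm_size_gib - per_vm_overhead_gib) // container_gib)
    return n_vms * per_vm

bare = containers_per_host(256, system_overhead_gib=8)
virt = containers_per_host(256, system_overhead_gib=16,
                           vm_size_gib=32, per_vm_overhead_gib=4)
# With these made-up numbers: bare = 248 containers, virt = 196 containers.
```

The exact gain obviously depends on the assumed overheads; the structure of the calculation, fixed footprints multiplied by the number of VMs, is what drives the density difference the speaker quotes.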
So you're talking telcos, pharma, banking, etcetera, and not just IT today. >>Yeah, that's indeed very exciting. But creating Kubernetes clusters on bare metal, unfortunately, is not very easy. >>OK, so it sounds like there may be some challenges or complexities around it. And is this, I guess, the reason why there are not many products out there today for Kubernetes on bare metal? Could you talk to us then about some of the challenges that this might entail? >>Well, there are quite a few challenges. First and foremost, there is no one way to manage bare-metal infrastructure nowadays. Many vendors have their own solutions that are not always compatible with each other and don't necessarily cover all aspects of this. So we've worked on an open source project called Metal³ (Metal Kubed) and integrated it into Docker Enterprise Container Cloud to do this unified bare-metal management for us. >>And you mentioned, did I hear you say, that it's open source? >>The project is open source. We added a lot of our special sauce to it. What it does, basically, is enable us to manage hardware servers just like cloud server instances. >>That's very interesting, but could you go into a bit more detail? Specifically, what do you mean by "as cloud instances"? >>Of course. Generally, it means managing them through some sort of API, a programming interface. This interface has to cover all aspects of the server life cycle: hardware configuration, operating system management, network configuration, storage configuration. With the help of Metal³ we extend the Kubernetes API to enable it to manage bare-metal hosts and all these aspects of their life cycle. The Metal³ project uses OpenStack Ironic and drops it into the Kubernetes API, and Ironic does all the heavy lifting of provisioning. It does it in a very cloud-native way.
It configures servers using cloud-init, which is very familiar to anyone who deals with the cloud, and the power is managed transparently through the IPMI protocol. It does a lot to hide the differences between different hardware hosts from the user, and in Docker Enterprise Container Cloud we made everything so the user doesn't really feel the difference between a bare-metal server and a cloud VM. >>So, Oleg, are you saying that you can actually take a machine that's turned off and turn it on using these commands? >>That's correct. That's IPMI, the Intelligent Platform Management Interface. It gives you the ability to interact directly with the hardware. You can manage and monitor things like power consumption, temperature, voltage and so on. But what we use it for is to manage the boot source and the actual power state of the server. So we have a group of servers that are available, and we can turn them on when we need them, just as if we were spinning up VMs. >>Excellent. So that's how you get around the fact that, while all cloud VMs are the same, the hardware is all different. But I would assume you would have different server configurations in one environment, so how would you get around that? >>Yeah, that's an excellent question. Some elements of the bare-metal management API that we developed are there specifically to enable operators to handle a wider range of hardware configurations. For example, we make it possible to configure multiple network interfaces on the host. We support flexible partitioning of hard disks and other storage devices. We also make it possible to boot remotely using the Unified Extensible Firmware Interface for modern systems, or just good old BIOS for the legacy ones. >>Excellent, thanks for sharing that. Now let's take a look at the rest of the infrastructure.
So what about things like networking and storage? How are those managed? >>Ah, Jake, those are some important details. From the networking standpoint, the most important thing for Kubernetes is load balancing. We use some proven open source technologies, such as NGINX and MetalLB, to handle that for us. And for the storage, that's a bit more tricky. There are a lot of different storage solutions out there, so we decided to go with Ceph and a Ceph operator. Ceph is a very mature and stable distributed storage system with incredible scalability; we actually run pretty big clusters in production with Ceph. And Rook makes the life cycle management for Ceph very robust and cloud native, with health checking and self-correction, that kind of stuff. So any Kubernetes cluster that Docker Enterprise Container Cloud provisions on bare metal can potentially have Ceph installed in the cluster, providing storage that is accessible from any node in the cluster to any pod in the cluster. So that's the cloud-native storage component. >>Wonderful. But would that then mean that you'd have to have additional hardware, so more hardware for the storage cluster? >>Not at all. Actually, we use a converged storage architecture in Container Cloud: the workloads and Ceph share the same machines and are actually managed by the same Kubernetes cluster. At some point in the future we plan to add even more flexibility to this Ceph configuration and enable shared Ceph, where all Kubernetes clusters will use a single Ceph back end, and that's another way for us to optimize resource usage. >>Excellent. So thanks for covering the infrastructure part. What would be good is if we can get an understanding of the look and feel for the operators and the users of the system. So what do they see? >>Yeah, OK.
You know, Docker Enterprise Container Cloud provides a web-based user interface that enables you to manage clusters, and the bare-metal management is integrated into this interface and provides a very smooth user experience. As an operator, you add, or enroll, bare-metal hosts pretty much the same way you add cloud credentials for any other provider, for any other platform. >>Excellent. I mean, Oleg, it sounds really interesting. Would you be able to share some kind of demo with us? It would be great to see this in action. >>Of course. Let's see what we have here. So, first of all, you take a bunch of bare-metal servers and you prepare them, connect them to the network as described in the docs, and bootstrap Container Cloud on top of three of these bare-metal servers. Once that's done, you have Container Cloud up and running. You log into the UI; let's start here. I'm using the generic operator user for now; it's possible to integrate with your identity system, the customer's identity system, and get real users there. So first of all, let's create a project. It will hold all of our clusters, and once we've created it, we just switch to it. The first step for an operator is to add some bare-metal hosts to the project; as you see, it's empty. To add a bare-metal host, you just need a few parameters: a name that will allow you to identify the server later; then a user name and password to access the IPMI controls of the server; next, and it's very important, the hardware address of the first Ethernet port, which will be used to remotely boot the server over the network; then the IP address of the IPMI endpoint; and last, but not least, the bucket to assign the bare-metal host to, a label that is assigned to it.
Right now we offer just three default labels, or buckets: manager hosts, worker hosts and storage hosts. Depending on the hardware configuration of the server, you assign it to one of these three groups; you will see how that's used later on. Note that at least six servers are required to deploy a managed Kubernetes cluster, just as for the cloud providers. There is some information available now about the servers, the result of inspection, by the way; you can look it up. Now we move on to creating a cluster. You need to provide the name for the cluster and select the release of Docker Enterprise Engine. The next step is the provider-specific information: you specify the address of the cluster API endpoint here, and the range of addresses for services that will be installed in the cluster for the user workloads. The Kubernetes network parameters can be changed as well, but the defaults are usually okay. Now you can enable or disable StackLight, the monitoring system for the Kubernetes cluster, and provide some custom parameters to it. Finally, you click Create to create the cluster. It's an empty cluster that we need to add some machines to, so we need at least three manager nodes. The form is very simple: you just select the role of the Kubernetes node, either manager or worker, and you select the label bucket from which the bare-metal host will be picked. We go with the manager label for manager nodes and the worker label for the workers. While the cluster is deploying, let's check out some machine information. The storage data here, the names of the disks, is taken from the bare-metal host hardware inspection data that we checked before. Now we wait for the servers to be deployed; that includes the operating system and Kubernetes itself. So, yeah, that's our user interface.
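The parameters the demo form collects (name, IPMI credentials, boot MAC address, IPMI endpoint, bucket label) map naturally onto a bare-metal host custom resource. As a hedged sketch, here is roughly what such an object looks like using the upstream Metal³ `BareMetalHost` field names; the exact schema Container Cloud uses may differ, and all values below are invented placeholders:

```python
def bare_metal_host(name, boot_mac, ipmi_ip, credentials_secret, bucket):
    """Build a Metal3-style BareMetalHost manifest as a plain dict."""
    return {
        "apiVersion": "metal3.io/v1alpha1",
        "kind": "BareMetalHost",
        "metadata": {"name": name, "labels": {"role": bucket}},
        "spec": {
            "online": True,                      # power the host on via IPMI
            "bootMACAddress": boot_mac,          # first Ethernet port, network boot
            "bmc": {
                "address": f"ipmi://{ipmi_ip}",  # IPMI endpoint of the BMC
                # Kubernetes Secret holding the IPMI user name and password:
                "credentialsName": credentials_secret,
            },
        },
    }

host = bare_metal_host("worker-01", "52:54:00:aa:bb:cc", "10.0.0.5",
                       "worker-01-bmc-secret", "worker")
```

Submitting such an object through the Kubernetes API is the programmatic equivalent of filling in the enrollment form shown in the demo.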
If operators need to, they can actually use the Docker Enterprise Container Cloud API for more sophisticated configurations, or to integrate with an external system, for example a configuration database. All the bare-metal tasks can be executed through the Kubernetes API by changing the custom resources that describe the bare-metal hosts and objects. >>Brilliant. Well, thank you for bringing that to life; it's always good to see it in action. I guess, from my understanding, it looks like the operators can use the same tools as DevOps or developers, but for managing their infrastructure. >>Yes, exactly. For example, if you're a DevOps engineer and you use Lens to monitor and manage your cluster, the bare-metal resources are just another set of custom resources for you. It's possible to visualize and configure them through Lens or any other developer tool for Kubernetes. >>Excellent. So from what I can see, that really could bridge the gap between infrastructure operators and DevOps and developer teams, which is a big thing. >>Yes. One of our aspirations is to unify the user experience, because we've seen a lot of situations where the infrastructure is operated by one set of tools while the container platform is agnostic of it and offers end users a completely different set of tools. So as a DevOps engineer you have to be proficient in both, and that's not very sustainable for some developer teams. >>Sure. Okay, well, thanks for covering that, that's great. I mean, there are obviously other container platforms out there in the market today. It would be great if you could explain some of the differences, and how Docker Enterprise Container Cloud approaches bare metal. >>Yeah, that's an excellent question, Jake. Thank you.
So, in Container Cloud, unlike in other container platforms, bare-metal management is tightly integrated into the product. It's integrated at the UI and API level, and at the back-end implementation level. Other platforms typically rely on the user to provision the bare-metal hosts before they can deploy Kubernetes on them. That leaves the operating system management, hardware configuration and hardware management mostly with a dedicated infrastructure operators team. Docker Enterprise Container Cloud can help reduce this burden and this infrastructure management cost by automating it and effectively removing that part of the responsibility from the infrastructure operators. And that's because Container Cloud on bare metal is essentially a full-stack solution: it includes the hardware configuration and covers operating system life cycle management, especially security updates, the CVE updates. Right now, the only out-of-the-box operating system that we support is Ubuntu. We're looking to expand this, and, as you know, Docker Enterprise Engine makes it possible to run Kubernetes on many different platforms, including even Windows. We plan to leverage this flexibility in Docker Enterprise Container Cloud to its full extent, to expand the range of operating systems that we support. >>Excellent. Well, Oleg, we're running out of time, unfortunately. I've thoroughly enjoyed our conversation today. You've pulled out some excellent points: you talked about potentially up to a 30% performance boost and up to a 16% density boost; you've also talked about how it can help with specialized hardware and make that a lot easier. We also talked about some of the challenges that you can solve by using Docker Enterprise Container Cloud, such as persistent storage and load balancing.
There's obviously a lot here, but thank you so much for joining us today. It's been fantastic, and I hope that we've given people some food for thought to go out and try deploying Kubernetes on bare metal. So thanks, Oleg. >>Thank you, Jake.
Bob Ghaffari, Intel Corporation | VMworld 2019
>>Live from San Francisco, celebrating 10 years of high-tech coverage, it's theCube, covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >>Welcome back. We're here at VMworld 2019. You're watching theCube, in our 10th year of coverage at the event. I'm Stu, and my co-host this afternoon is Justin Warren. Happy to welcome back to the program Bob Ghaffari, who's the general manager of the enterprise and cloud networking division at Intel. Bob, welcome back. >>Great to be here. Thank you. >>So, you know, it's interesting: last year I felt like at every single show that I went to, there was an Intel executive up on the stage. We talked about how the tick-tock of the industry is something that drove things. So last year, a lot going on; this year we haven't seen Intel quite as much, but we know that doesn't mean that you and your team aren't really busy. There's a lot going on here at VMworld. Give us the update since last we spoke. >>Well, I think we have to go back a little bit in terms of how Intel has been involved in really driving this whole network transformation. I want to say it started about a decade ago, when we were really focused on trying to drive a lot of the capabilities onto more of a standard architecture. In the past, people were encumbered by challenging architectures, using proprietary kinds of network processors. We were able to bring this together on Intel architecture: we open-sourced DPDK, which is really this fast packet-processing library that we basically enabled the industry on. And with that, there's basically been, I want to say, a revolution in terms of how networking has come together.
And so what we've seen since last year is how VMware NSX itself has really grown up and been able to get to these newer, interesting usage models. And so, for us, what really gets us excited is being really involved with enabling hybrid cloud and multi-cloud from a network perspective. That's just what really gets me out of bed every day. >>Yeah, and SDN has, I think, gone from those early days, where it was all a bit scary and new and people weren't quite sure that they wanted it, to now, where it's the thing: people are quite happy and comfortable to use it. It's now a very accepted way of doing networking. What have you noticed about that change, where people have gone "actually, it's accepted now"? What is that enabling customers to do with SDN? >>You know, I think what SDN really does is give a lot of the enterprise customers and cloud customers, and a lot of others, the flexibility to do what they really need to do much better. And so, if you can imagine, in the first stage we had to get a lot of the functions virtualized, right? So we did that over the last 10 years: getting the functions virtualized, getting them optimized, and making sure that the performance is there as a virtual function. The next step here is really making sure that we can enable customers to do what they need with their microservices, or do this in a micro-segmented kind of view, and also be in a scenario where we don't have to trombone the traffic off somewhere to be inspected or load-balanced, bringing that capability in a distributed fashion to where the workloads need it to happen. >>Yeah, you mentioned micro-segmentation there, and that's something which has been spoken about for quite a while. What's the state of play with micro-segmentation?
Because some customers have been trying to use it and found it a little bit tricky, and so we're seeing lots of vendors come in and say, "we'll help you manage that." What's the state of play with micro-segmentation, from your perspective? >>You know, the way I would categorize it is that micro-segmentation has definitely become a very important usage model, in terms of how you really contain policies within certain segments. So, one, you're able to get to a better way of managing your environments, and you're also getting to a better way of containing any kind of threats. The fact that you can segment off areas means that if you get some kind of attack or some kind of exploit, it's not going to spread out of that segmented area. To some extent that simplifies how you look at your environment, but you want to be able to do it in a fashion that ultimately helps the enterprises manage what they've got in their environments. >>So, Bob, one of the things that really struck me last year was the messaging that VMware had around networking, specifically around multi-cloud. It really harkened back to what I had heard from the Nicira acquisition. And of course now VMware is extending that with VMware Cloud on AWS; the partnerships have also extended with Azure, with Google, and on-premises with Dell EMC and others. And a big piece of that message is: we're going to be able to have the same stack on both sides. Could you kind of explain, where does Intel fit in there? How does Intel's networking multi-cloud story dovetail with what we're hearing from VMware? >>Right. So I think the first thing is that Intel has been very involved in terms of being in any on-prem or public cloud; we get really involved there.
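The containment property described in the micro-segmentation answer above boils down to default-deny evaluation over segment pairs. A minimal sketch, illustrative only and not the NSX policy model; segment names and ports here are invented:

```python
# Default-deny policy: traffic is allowed only if an explicit rule matches
# the (source segment, destination segment, port) triple.
ALLOW_RULES = {
    ("web", "app", 8443),   # web tier may call the app tier's API
    ("app", "db", 5432),    # app tier may reach the database
}

def allowed(src_segment, dst_segment, port):
    """Return True only when an explicit allow rule covers this flow."""
    return (src_segment, dst_segment, port) in ALLOW_RULES

# A workload compromised in the web tier cannot reach the database directly,
# which is the containment property discussed above.
```

Distributing this check to where each workload runs, instead of tromboning traffic through a central appliance, is the shift the answer describes.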
What were you really trying to do on my team does is really focusing on the networking aspects. And so, for us is to not only make sure that if you're running something on prime, you get the best experience on from but also the consistency of having a lot of the key instruction sets and any cloud and be able to sort of, ah, you know, managed that ballistically, especially when you're looking at a hybrid cloud environment where you're basically trying to communicate between a certain cloud. It could be on Prem to another cloud that might be somewhere else. Having the consistent way of managing through encrypted tunnels and making sure you're getting the kind of performance that you need to be able to go address that I think these are the kind of things that we really focus on, and I think that for us, it's not only really bring this out and, um improving our instructions that architecture's so most recently What we did is, you know, we launched our second generations Aeon Scaleable processors that really came out in April, and so for us that really takes it to the next level. We get some really interesting new instruction, sets things like a V X 5 12 We get also other kind of, you know, you know more of, like inference, analytic inference capabilities with things like Deal Boost that really brings things together so you can be more effective and efficient in terms of how you look at your workloads and what you need to do with them, making sure they're secure but also giving you the insights that you need to be able to make that kind of decisions you want from a enterprise perspective >> steward. It always amuses me how much Intel is involved in all of his cloud stuff when it it would support. We don't care about hardware anymore. It's all terribly obstructed. And come >> on, Justin, there is no cloud. It's just someone tells his computer and there's a reasonable chance there's an Intel component or two Wednesday, right? 
>> Isn't it a credit to Intel that it comes out and continues to talk to customers, coming to these kinds of events and showing that it's still relevant, and showing how the technology it's creating ties into what's happening in cloud and in networking? I think it's an amazing credit to Intel's ability to adapt. >> You know, it's definitely been very exciting. And so not only have we really been focused on how we expand our processor franchise, really getting the key capabilities we need, so any time, anywhere you're doing any kind of compute, we want to make sure we're doing the best for our customers as possible. But in addition to that, what we've really done is round out our platform capabilities from a solution perspective, to bring out not only what has historically been a very strong franchise with what we call our foundational NICs, or network interface cards, but we've also been able to expand that to bring better capabilities no matter what you're trying to do. So let's say, for example, you know, um, you are a customer that wants to be able to do something unique, and you want to be able to sort of accelerate, you know, your own specific networking kind of functions or virtual switches. Well, we have the ability to do that. And so, with our Intel FPGA N3000 card as an example, you get that capability to expand what you would traditionally do from a platform-level perspective. >> I want to talk about the edge, but before we go there, there's a topic that's a hot conversation here, and one I've been talking to Intel about for a lot of years: containerization in general, and Kubernetes more specifically. You know, where does that fit into your group?
I mention it just because, you know, the last time Intel Developer Forum happened, a friend of mine who worked for Intel gave a presentation, you know, just talking about how much was going on in that space. And, you know, I made a comment back then, a few years ago: we just spent over a decade fixing all the networking and storage issues with virtualization, aren't we going to have to do that again with containerization? And of course, we know we are having to solve some of those things again. So, you >> know, for us, you know, as you guys probably know, Intel's been really involved. One of the biggest things that, you know, is sometimes kept as a secret, is that we're probably one of the bigger, um, employers of software engineers. And so Intel has been really, really involved. We have a lot of people that started off with, you know, open source with Linux and being involved there. And of course containers are sort of the evolution of that. And for us, really trying to be involved in making sure that we can bring the capabilities that are needed from our instruction set architectures, to be able to do containers, Kubernetes, and, you know, to do this efficiently and effectively, is definitely key to what we want to get done. >> All right, so that was the setup I wanted for edge computing, because there are a lot of different architectures we're going to be dealing with as we get to the edge. We're starting to hear a little bit of that at this show, but it's an overall piece of that multi-cloud architecture we're starting to build out. You know, where's your play? >> Well, so for us, I mean, the way that we look at it is, we think it starts, obviously, with the network. So when you're really trying to do things, oftentimes the edge is closest to where that data is being, you know, realized. And so for us, it's making sure that, you know, we have the right kind of platform-level capabilities that can take this data.
And then you have to do something with this data. So there's a compute aspect to it, and then you have to be able to really ship it somewhere else, right? And so it's basically going to another cloud, it might be to another micro data center somewhere else. And so for us, what really sets the foundation is having a scalable platform, sort of this thick-to-thin kind of concept, that says, depending on what you're trying to do, you need to have something that can morph into that. And so for us, having a scalable platform that can go from our biggest Xeons down to an Atom processor is really important. And then also what we've been doing is working with the ecosystem to make sure that the network functions and software-defined WAN, you know, set a foundation for how you want to live in this multi-cloud world. But starting off at the edge, you want to make sure that that is really effective and efficient. We can basically provide this very efficiently, because there are some areas where, you know, it's going to be very price sensitive. So we think we have this awesome capability here with our Atom processors. In fact, yesterday was really interesting: we had Tom Burns and Tom Gillis basically get on the stage and talk about how Dell and VMware are collaborating on this. Um, and this basically revolves around platforms based on the Atom processor, and that can scale up to our Xeon D processors and above. So it depends on what you're trying to do, and we've been working with our partners to make sure that these functions start off network-optimized, and you can do as much compute or as little compute as you want on that edge. >> For the customers who are starting to use edge, because it's kind of new, but it's also kind of not, it's been around for a while, we just used to call it other things, like robots. For the customers who are using edge at the moment,
what's the most surprising thing that you've seen them do with your technology? >> You know, what is interesting is, you know, we sometimes get surprised by this ourselves. And so one of the things that, you know, some customers say is, well, you know, we really need low cost, because all we really care about is just, you know, deploying this into a cafe, and we don't think you're going to be at the price point, because they automatically think that all Intel does is big Xeons, and we do a great job with that. But what is really interesting is that with the Atom processors, we get to these very interesting, you know, solutions that are cost effective and yet give you the scalability of what you might want to do. And so, for example, you know, we've seen customers that say, yeah, you know, we want to start off with this, networking is it. But you know what, we have this plan, and maybe it's a 90-day plan, or it could be up to a two-year plan, for how they want to bring more capabilities to that branch. They want to be able to do more, they want to be able to compute more, they want to make more decisions, they want to be able to give their customers at that place a much better experience. And we think we have a really good position here with our platforms, giving you this mix-and-match capability that easily scales up to do what our customers want. >> Great. Bob, you know, when I think about this space in general, we haven't talked about 5G yet, and, you know, 5G and Wi-Fi 6 are expected to have a significant impact on networking. We've talked a little bit about, you know, edge, and 5G is going to play in that environment. Uh, what are you hearing from your customers? How much is that involved with the activities you're working through? >> You know, it's definitely really interesting. So, uh, 5G is definitely getting a lot of hype. We're very, very involved.
We've been working on this for a while; Intel is, uh, at the forefront of enabling 5G, especially as it relates to network infrastructure, one of the key focus areas for us. And so the way that we look at this on the edge is that a lot of enterprises, some of them are going to be leading, especially for cases where latency is really important. You want to be able to make decisions, you know, really rather quickly, you want to be able to process it right there. 5G is going to be one of these interesting technologies, and we're already starting to see it enable these new usage models, and so we're definitely really excited about that. We're already starting to see this in the stadium experience, enabled by 5G and what we're doing on the edge. There are experiences like that that we really get excited about when we're part of them, where we're really able to provide this model of enabling, you know, these new usage models. So for us, you know, the connectivity aspects of 5G are important. Of course, you know, we're going to see a lot of workloads use 4G as basically the predominant option, and, of course, the standard wired connectivity of IP, MPLS and other things. >> I want to give you the final word. Obviously, Intel has a long partnership with VMware. As we know, you know, current CEO Pat Gelsinger, you know, spent a good part of the early part of his career at Intel. Give us the takeaway for Intel and VMware from VMworld 2019. >> You know, I mean, we've had a
long partnership here between Intel and VMware, and we definitely value the partnership. For us, it started off with virtualizing servers a while back, and now we've been working on networking. And so for us, the partnership has been incredible. You know, we continue to be able to work together. Of course, you know, we continue to see challenges as we go into hybrid cloud and multi-cloud, and we are very excited about how we can take this to the next level. And, you know, we're very happy to be great partners with them. >> All right. Well, Bob Ghaffari, thank you for giving us the Intel networking update. We go up the stack, down the stack, multi-cloud, out to the edge, IoT and all the applications. For Justin Warren, I'm Stu Miniman. We'll be back for our continuing coverage of VMworld 2019. Thanks for watching theCUBE.
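The thick-to-thin platform idea from the interview, matching a workload to the smallest tier that meets its compute and latency needs, from a thin Atom edge node up through Xeon D and the big Xeons, can be illustrated with a toy placement function. The tier names and numbers below are invented for illustration, not Intel specifications:

```python
# Tiers ordered thin to thick: (name, best latency it can offer in ms,
# relative compute capacity). All values are illustrative.
TIERS = [
    ("atom-edge", 5, 1),
    ("xeon-d-edge", 10, 4),
    ("xeon-cloud", 50, 16),
]

def place(workload_compute: int, max_latency_ms: int) -> str:
    """Pick the thinnest (cheapest) tier that satisfies both constraints."""
    for name, latency_ms, compute in TIERS:
        if compute >= workload_compute and latency_ms <= max_latency_ms:
            return name
    return "no-fit"

print(place(workload_compute=2, max_latency_ms=10))  # xeon-d-edge
print(place(workload_compute=1, max_latency_ms=5))   # atom-edge
print(place(workload_compute=8, max_latency_ms=5))   # no-fit: enough compute only exists too far away
```

The "no-fit" case is the interesting one: it is exactly the situation where the workload has to be split between edge and cloud rather than placed whole.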
Amber Hameed, Dollar Shave Club | Adobe Summit 2019
>> Live from Las Vegas, it's theCUBE, covering Adobe Summit 2019. Brought to you by Adobe. >> Hey, welcome back everyone, this is theCUBE's live coverage at Adobe Summit here in Las Vegas. I'm John Furrier, host of theCUBE, with Jeff Frick, co-host for the next two days' live coverage. Our next guest is Amber Hameed, vice president of Information Systems at the Dollar Shave Club. Welcome to theCUBE, thanks for coming on. >> It's great to be here. >> So I love your title, we were talking about it before the camera came on. It's not IT, it's Information Systems. Why is that different? Tell us about the title. >> I think, from a technology point of view, there's no such thing as a purist anymore. It's really important to understand every aspect of the business as a technologist, to really evolve with the technology itself. In the role that I play at Dollar Shave Club, I have the fortune of working very closely with all aspects of our business, from marketing, to fintech, to data, to technology. Our IT function is essentially embedded and ingrained within that entire holistic approach to technology, so it's not isolated anymore. And when we look at technologists, we look at how they interface with all aspects of our business processes first. That's how we understand what the needs of the business are, to then cater the innovation and the technology to it. >> So is there a VP of IT, Information Technology? 'Cause IT is kind of a word where people think of the data center, or cloud, or buying equipment. It's a different role, right? I mean, that's not you. >> It is. If you look at the Information Systems evolution, you will see that more and more systems are geared towards business needs, and less and less towards pure-play technology. So back in the day, you had a CTO role in an organization, which was focused on infrastructure, networks, technology, as DevOps is considered to be.
Information Systems is focused more on the business itself: how do we enable marketing, how do we enable finance, how do we enable digital technology as a platform? It's not so much how do we develop a technology platform in isolation; what the business solution proposes is what drives how the technology operates. >> So what's old is new again, Jeff. Remember MIS, Management Information Systems? >> You don't want to remember this, John. (laughing) >> Data Processing Systems Department. But if you think about it, we're doing Management Information Systems, and we're processing a lot of data, just differently, it's all with cloud now, so it's kind of important. >> That's exactly right. So technology is one aspect of bringing information together. Data is one aspect of it, business processes are another, and your resources, the way your teams are structured, are part and parcel of the strategy of any technology platform. >> Right, and what you're involved in, the topic of this show, is really not using that so much to support the business, but to be the business. And to take it to another level, to actually not support the product, but support the experience of the customer with your brand, which happens to be built around some products, some of which are used for shaving. So it's a really different way, and I would imagine, except for actually holding the products in their hands, 99% of the customer engagement with your Dollar Shave Club is electronic. >> Well, I mean, our customer experience is a very, very unique combination within Dollar Shave Club. And that makes it even more challenging as a technologist to cater to and bring that experience to what we call our members.
So when we talk about a 360-degree approach from a technology platform point of view, we're taking into account the interaction with the customer from the time we identify them, who they are, who our segmented market is, to the time they actually interact with us in any capacity, whether that's looking at our content, coming to our site, or looking at our app, and then how we service them once we acquire them. So there's a big arm of our customer strategy that's focused on the customer experience itself, once they are acquired, once they become part of the club. And it's that small-community experience that we want to give them that's integral to our brand. >> You guys have all the elements of what the CEO of Adobe said on stage: we moved from an old software model that was too slow, now we're fast; a new generation of users; reimagining the product experience. You guys did that, that was an innovation. How do you keep that innovation going? Because you're direct-to-consumer, but you've got a club and a member model, so you've got to constantly be raising the bar on capabilities and value to your members. What's the secret sauce, how do you guys do that? >> That's exactly right. As I mentioned, it's an evolving challenge. We have to keep our business very, very agile, obviously, 'cause our time to market is essential. Consumers change their minds quickly, you know, so we have to target them, and we have to be effective in that targeting. And how quickly we deliver personalized content that they can relate to is integral to it. When we look at our technology stack, we consider ourselves to be, you know, a cut above the others, because we want to be on the bleeding edge of the technology stack no matter what we do. We have an event-driven architecture. We invested quite a bit in our data infrastructure.
I happen to be overseeing our data systems platform, and when I started with the organization, that was our central focus. In fact, before we invested in Adobe as a stack, which is helping us tremendously to drive some of that 360-degree view of customer centralization, we actually built our entire data architecture first, in order to make the Adobe products a success. And it was that architecture and platform that then enabled a very successful implementation of Adobe Audience Manager going forward. >> How do you do that? Because this is one of the things that keeps coming up in the themes of every event we cover, in all the different conversations with experts: people are trying to crack the code on the data architecture. I've heard people say it's a moving train, it's really hard. It is hard, so how did you guys pull it off? Did you take kind of a slow approach? Was it targeted, was there a methodology to it? Can you explain? >> Yeah, so essentially, you know, as you can imagine, being a consumer-driven organization, we have data coming at us from all aspects. From all of our applications, what we call first-party data. We also have what we call second-party data, which is essentially information shared with our external marketers. They're using our information to channel, and we're using all of that channeled information back in, to make other strategic decisions. It was really, really important for us to set up an architecture that is the core foundation of any sort of data organization that you want to set up. The other big challenge is the resources. As you can imagine, this is a very competitive environment for data talent, so how do you keep them interested, how do you bring them to your brand to work on your data architecture? You make sure that you're providing them with the latest and greatest opportunities to take advantage of. So we're actually a big data organization; we run heavily on an AWS stack.
We have bleeding-edge technology stacks that resources are genuinely interested in getting their hands into, learning and building on their skill sets. So when you take that ingredient in, the biggest driver, once you have that architecture set up, is how do you get your organization to be a data-driven organization? And that is when you start the adoption process slowly. You start delivering the insights, you start bringing your business along and explaining what those insights look like.
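The event-driven architecture Amber mentions can be sketched with a minimal in-memory publish/subscribe bus. In production this would be a managed service (on an AWS stack, something like Kinesis or SNS, though the interview doesn't name one); the topic and event fields below are hypothetical:

```python
from collections import defaultdict

class EventBus:
    """Tiny in-memory stand-in for a real event bus."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Every subscriber reacts independently; producers don't know about them.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
audience_segments = []   # e.g. feeds audience segmentation
audit_log = []           # e.g. feeds analytics and insights

bus.subscribe("member.signup", lambda e: audience_segments.append(e["plan"]))
bus.subscribe("member.signup", lambda e: audit_log.append(e["member_id"]))
bus.publish("member.signup", {"member_id": 1, "plan": "razor-monthly"})

print(audience_segments)  # ['razor-monthly']
print(audit_log)          # [1]
```

The design point is decoupling: new consumers (a club-pro dashboard, a personalization model) can subscribe later without touching the producers.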
>> How much do you think in terms of the information that they consume to stay engaged with the company, is the actual, what percentage of the value, you know, is the actual razor blade, or the actual product and the use of that versus, all the kind of ancillary material, the content, the being part of a club, and there's other things. I would imagine it's a much higher percentage on the ladder, than most people think. >> Exactly right, so our members are, we get this feedback constantly. I mean once they get into, usually a large customer base, we have over three million subscribers of our mail magazine, which is independent content delivery, from our site. And when people come and read the magazine, they automatically, they don't know at first, that it's part of the Dollar Shave Club umbrella. But once they get interested and they find out that it is, they automatically are attracted to the site and they land on it, so that's one arm that essentially targets through original content. The other aspect of it is, once you are a member, every shipment that you receive, actually has an original content insert in it. So the idea is that when you're in the bathroom, you're enjoying your products, you're also enjoying something that refreshes, keeps your mind, just as healthy as your body. >> So original content's critical to your strategy? >> It is, yes. >> On engagement and then getting that data. So I got to ask you a question, this is really an earned media kind of conversation, in that it used the parlance of the industry. Earning that trust is hard, and I see people changing their strategies from the old way of thinking of communities, forum software, login be locked in to me being more open. Communities' a hard nut to crack these days. You got to earn it, you know, you can't buy community. How is the community equation changing? You guys are doing it really, well what's the formula, obviously content's one piece. 
How would you share, how someone should set up their community strategy? >> Well I think it's also a lot of personal interaction. You know, we have club pros, that are exclusively dedicated to our members, and meeting our member needs. And it's world-class customer service. And from a technology point of view we have to make sure that our club pros understand our customers holistically. They understand how they've previously interacted with us. They understand what they like. We also do member surveys and profile reviews, with our members on a regular basis. We do what we call social scraping, so we understand what they're talking about, when they're talking about in social media, about our brand. And all of that is part of the technology stack. So we gather all this information, synthesize it, and provide it to our club pros. So when a member calls in, that information needs to be available to them, to interact properly and adequately. >> So it's intimacy involved, I can get an alignment. >> Absolutely yeah, it's hard core customer service, like right information, at the right time, in the right hands of our club pros. >> So he's a trick question for you, share a best practice in the industry. >> I think the idea of best practices, is sort of kind of on its way out. I think it's what we call evolving practices. I think that the cornerstone of every team, every culture, every company, is how you're learning constantly from the experiences, that you're having with your customers. And you bring a notion and it quickly goes out the door, based on feedback that you've received from your customers, or an interaction you've had. So you have to constantly keep on evolving on what are true and tested best practices. 
>> And that begs a question then that, if there's best practices used to be a term, like boiler plate, standards, when you have personalization, that's at the micro targeted level, personalization, that's the best practice, but it's not a practice it's unique to everybody. >> That's true and I think it's sort of, kind of a standing ground, it's a foundation. It gives you somewhere to start, but I think it would be you'd be hard-pressed, to say that that, is going to be the continuation of your experience. I think it's going to change and evolve drastically, especially in a world that we live in, which is highly digitized. Customer experiences and their attention span is so limited, that you cannot give them stale best practices, you have to keep changing. >> So the other really key piece is the subscription piece. A, it's cool that it's a club right, it's not just a subscription you're part of the club, but subscriptions are such a powerful tool, to force you to continue to think about value, continue to deliver value, to continue to innovate, because you're taking money every month and there's an there's an option for them to opt out every month. I wonder, how hard is that, to kind of get into people's heads that have not worked in that way, you know, I've worked in a product, we ship a new product once a year, we send it out, you know, okay we're working on the next PRD and MRD. Versus, you guys are almost more like a video game. Let's talk about video games, because a competitor will come out with a feature, suddenly, tomorrow and you're like, ah stop everything. Now we need to, you know, we need to feature match that. So it's a very different kind of development cycle as you said, you've got to move. >> Yeah exactly right. So there's different things that we deliver with every interaction with our customers. So one of the key ingredients is, is obviously we have an evolving brand and the content, a physical product of our brand. 
We recently launched groundskeeper, which is our deodorant brand, and essentially we want to make sure, that the idea is that, our consumers never actually have to leave their house. So the idea is to provide cheaper products, right, that a good quality, effective and they are delivered to your door. The idea of convenience is never outdated, never goes out of practice. But to your point, it's important to continue to listen to what your customers are asking for. So if they're asking, if they're bald, you don't want to continue to market Boogie's products, which is our hair care product to them. But if they're shaving their heads we want to, you know, evolve on our razors to be able to give them that flexibility so they have a holistic approach. >> Get that data flywheel going, see if the feedback loop coming in, lot of touch points. I got to ask the question around your success in innovation, which is awesome, congratulations. >> Thank you. >> There are a lot of people out there trying to get to kind of where you're at, maybe at the beginning of their journey, let's just say you have an innovative marketer out there or an IT, I mean a Information Systems person who says, we have a lot of members but we don't have a membership. We have a network, we have people, we're different content, we're great at original content. They have the piece parts, but now everything's not pulled together. What's your advice to that person watching, because you start to see people start to develop original content as an earned media strategy, they have open network effective content flowing. They might have members, do they do a membership? What's the playbook? >> Well I think the concept at the base of it all is, how do we, we need to stay very true to our mission. And I think that's the focal point, that sort of brings everything together. We never diverge too far away from that, is for men to really be able to take care of their minds and their bodies. 
So the area where we focus in on a lot, is that we can't just bombard you, with products, after products, after products. We have to be able to cater to your needs specifically. So when we're listening in to people and what they're talking about in their own personal grooming, personal care needs, we're also going out there and finding information and content to constantly allow them, to hear in on what their questions are all about. What their needs are on a daily basis. How do men interact with grooming products in general, when they go into a retail brick-and-mortar environment, versus when they are online. So all of that is the core ingredient that when we are actually positioning our technology around it. When it comes to innovation, my personal approach to innovation is, the people that are working for you in your organization, whether they're marketers, whether they're technologists, it's very, very important to keep them intrigued. So I personally have introduced what we call an innovation plan. And what that does, is as part of our roadmap delivery for technology, I allow my team members to think about, what they would want to do in the next phase of what they want to to deliver, outside of what they do everyday as their main job. That gets their creativity going and it adds a lot of value to the brand itself. >> And it's great for retention, 'cause innovative people want to solve hard problems, they want to work with other innovative people. So you got to kind of keep that going, you know, so the company wins. >> Exactly, and the company is very approachable when it comes to lunch-and-learn opportunities and essentially learning days. So you keep your resources, and your team's really, really invigorated and working on core things, that are important to the business. >> Amber, thank you so much for coming on, and sharing these amazing insights. >> Thank you, I appreciate it. >> I'll give you the final word, just a final parting word. 
Share an experience of something that you've learned over your journey as VP of Information. Something that, maybe some scar tissue, something that was a bump in the road, a failure that you overcame and grew from. >> I think as a female technologist, I would say, and I would encourage most women out there, that it's really important to focus on your personal brand. It's really important to understand what you stand for, what your message is. And one of the things that I have learned is that it takes a village, it takes a community of people, to really help you grow. Stay strong and connected to your resources; whether they are working with you directly or reporting to you, you learn constantly from them. And just be open and approachable, be open to learning, and then evolve as you grow. >> Amber, thank you for sharing. >> Great advice. >> Thank you so much. >> This is theCUBE's live coverage here in Las Vegas for Adobe Summit 2019. I'm John Furrier with Jeff Frick, stay with us. After this short break we'll be right back. (upbeat music)
Alan Boehme, Procter & Gamble | Mayfield50
>> Sand Hill Road, the heart of Silicon Valley. It's theCUBE, presenting the People First Network: insights from entrepreneurs and tech leaders. >> I'm John Furrier with theCUBE, the co-host and also the founder of SiliconANGLE Media. We are here on Sand Hill Road at Mayfield for the People First conversations, with Alan Boehme, global CTO and head of IT innovation at Procter & Gamble, formerly in the same position at Coca-Cola. He has done a lot of innovations over the years, and was also a reference account back in the day for webMethods when Mayfield called on the financing of that company, one of the most famous IPOs, which set the groundwork for web services. He has a lot of history going back to the '80s; we were just talking about it. Welcome to this conversation on the People First Network. >> Thank you for inviting me. >> So the People First Network is all about people, and it's great to have these conversations. You're old school. You were doing some stuff back in the '80s; we were talking about doing RPA, 3270. You've been old school here. >> Yeah, I go back to APL as my first programming language, went through the third-generation languages, and of course the old 3270 emulation, which is what we know today as RPA. >> One of the cool things I was excited to hear is some of your background around your history with webMethods. You were a reference call for the venture financing of webMethods, which was financed on the credit card of the two founders, husband and wife. Probably one of the most successful IPOs, but more importantly, it was the beginning of the massive wave that we now see with web services. This is early days. >> This was very early days. When I was at DHL, we were looking at what we were going to do for the future, and in fact we built one of the first object-oriented frameworks in C++ at the time, because that was all that was available to us, or the best that was available. We had rejected CORBA, and we said, look, if we're going to go this direction... And one of my developers found webMethods, found Philip Merrick, who at the time was literally working out of his garage and had this technology that was going to allow us to start moving into this object-oriented approach. And I remember the day Robin Vasan from Mayfield called and said, hey, I'm thinking about investing in webMethods, what do you think about it? Not only was it one of the first startups that I ever worked with, it's actually the first time I met anybody in the venture community, way back in, I think, 1997.
>> That was a pivotal time in computer science, and the rest is history. XML became what it became, the lingua franca for the web. Web services, now Amazon Web Services; you see cloud computing, microservices, Kubernetes, service meshes. This is a new stack being developed in the cloud, and this is the new generation. You've seen many waves, and at Procter & Gamble, formerly Coca-Cola, in the same role, you have to navigate this. So what's different now versus, say, 15, 20 years ago? How are you looking at this market, and how are you implementing some of the IT and infrastructure and software development environments? >> I think what's changed is, when we got into the early 2000s, Nicholas Carr came out and said IT doesn't matter, and I think anybody that was in IT had a very objectionable response initially. But when you stepped back and looked at it, what you realized was that in many cases IT didn't matter: in those areas that were non-competitive, those things that could be commoditized, he was completely right. The reality is IT has always mattered; technology does give you a competitive advantage in certain markets and certain capabilities for a company. But back then we had to go out and purchase equipment, we had to configure the equipment. There was a lot of heavy lifting, and corporations just did not want to invest the capital, so they outsourced this stuff wholesale. I think General Motors was the first one that just outsourced everything, and it was followed by other companies, including Procter & Gamble. The decision at that time was probably right, but as we go forward and we see what's happened with corporations, the valuations of corporations, the return on equity based on the capital being invested, we can see that data is important, and we can see that agility and flexibility are key to competing in the future. Therefore what's changing is that we are now moving into an age away from ERP and away from these outsource providers on a wholesale basis, using them selectively to drive down costs and free up money to invest in the things that are most important to the company.
>> So you're saying that the folks, naturally with the server consolidation, bought all this gear and all this software over, you know, 18-month rollouts before they even saw the first implementation. Those were the glory days of gravy trains for the vendors, not good for the practitioners. But you're saying that the folks who reinvested, who are investing in IT as a core competency, are seeing a competitive advantage. >> They certainly are. I think I made this statement in front of a number of the vendors a few years ago, and people were not comfortable with it, but what I said was: gone are the eras of these 10, 20 million dollar deals; gone are the eras of the million, two million dollar deals. We're in the era of throwaway technology. I need to be able to use and invest in technology for a specific purpose, for a specific period of time, and be able to move on to the next one. It's the perfect time for startups, but startups shouldn't be looking at the big picture; they should be looking at the tail on these investments. Let me try things, let me get out in the market, let me have a competitive advantage in marketing, which is most important to me, or in supply chain. Those are the areas where I can make a difference with my consumers and my customers, and that's where the investments have to go.
>> So just on this concept of throwaway technology, and being more agile: it's interesting to look at the cloud SaaS business model. Amazon, I think, is the gold standard, where they actually lower prices on a per-unit basis and add more services and value, but in the aggregate you're still paying more, and you have more flexibility; that's kind of a good tell sign. So you're seeing that ability to reuse the infrastructure that's commoditized and to shift the value. Are people having a hard time understanding this? I want to get your reaction: how should IT leaders understand the wave of cloud, the wave of machine learning, what AI can bring to the table, these new trends? How should leaders figure this out? Is there a playbook? Are there things that you've learned that you could share? >> You know, there isn't really a playbook; it's still early on. Everyone's looking for one cloud that fits all. The reality is, whether it's Google, whether it's Amazon, whether it's Microsoft, whether it's IBM, all clouds are different. All clouds are special, purpose-built for different solutions. And I think as an IT leader you have to understand you're not going to take everything and lift and shift; that's what we used to do. We're now in a position where we have to deconstruct our business, understand the services and capabilities that we want to bring to market, and not lock ourselves in. It's building blocks, it's Legos. We're in the period of Legos: putting these things together in different manners in order to create new solutions. If we lock ourselves into the past, into how we've always financed things and how we've always built things, then we're not going to be any better off in the new world than we were in the old.
>> Alan, I want to get your reaction to two words: RPA and containers. >> Well, as I said earlier, RPA is 3270 emulation from the 1980s, and for those of us that are old enough to remember that, I still remember scraping the old green screens and putting a little process around it. What's nice, though, is that we have moved forward. Machine learning and AI and other capabilities are now present so that we can do this well. I actually played around with neural nets probably back in 1985 on an Apollo computer, so that tells you how far back I go. But technologies change, processing speeds change, everything changes, and the technology trends are now allowing us to do these things. The question that we have, and it's also a moral dilemma, is: are we trying to replace people, or are we trying to make improvements? I don't look at RPA as a way simply to replace work; it's a way to enhance what we're doing in order to create new value for the customer, or for the consumer in our case. In the area of containers, again, they've been around for a while. It's just another approach. We don't want lock-in, we don't want to be dependent on specific vendors; we want the portability, we want the flexibility. And I think as we start moving containers out to the edge, that's where we're going to start seeing more value, as the business processes and the capabilities are spread out. The idea of centralized cloud computing is very good; however, it also needs to be distributed.
>> What I find interesting about this conversation is that you mentioned a couple of things earlier: the vendors locking you in, saying, here's the ERP, buy this, and with it you have to have a certain process, because this is our technology and you have to use it this way. You were a slave to their tech, and your process served their tech. With containers and, say, orchestration, you now have the ability to manage workloads differently, and so it's an interesting time. Does that change the notion of rip and replace, of lift and shift? Because if I have a container, I could just put a container around it and not have to worry about killing the old to bring in the new. That's the fundamental debate going on: do you have to kill the old to bring in the new? >> Well, sometimes you need to kill the old just because it's old and it's time to go. Other times you need to repackage it, and other times, I hate to say it, you do need to lift and shift. If you're a legacy organization with a long history, such as most of the manufacturing companies in the world today, you can't get rid of old things that quickly; we can't afford to, and a lot of the processes are still valid. As we look to the future, we certainly are breaking these things down into services, we're looking to containerize them, and we're looking to move them into areas where we can compute where we want to, when we want to, at the right price. We're just at the beginning of that journey in the industry; I still think there's about five to seven years to go to get there.
>> Now let's talk about the role of the edge and the role of cloud computing as it increases the surface area of IT, combined with the fact that IT is a competitive advantage. Bring those two notions together: what's the role of the people? You used to have people that would just manage the rack and stack: I'm provisioning some storage, I'm doing this. As those stovepipes start to be broken down, and the surface area of IT gets bigger, how does that change the relationship of the people involved? >> You win with people at the end of the day; you don't win with technology at a company such as Procter & Gamble. If you look at it historically, the ERP vendors came out probably in '99, 2000, and, I remember these things, I'm old, to be honest with you, we used to have to worry about the amount of memory we were managing, we had to be able to tune databases, all of this. The vendors went ahead and started automating all those processes with the idea that they could do it better than a human, and a lot of the technology talent then started leaving the organizations. Organizations were left with people focused on process and the business, which is very good, because you need the subject matter experts. Going forward, we have to reinvest in people. Our people have the subject matter expertise; they have some technology skills that they've developed over the years and enhanced on their own. But we're in this huge change right now where we have to think differently, act differently, and behave differently, so doubling down on people is the best thing that you can do.
>> And the old model of outsourcing everything kind of reduced the core competency of the people. Now you've got to build it back up again. >> Exactly. At P&G 15 years ago, about 5,000 people left the organization when we outsourced the technology to our partner at that time. Now we're starting to bring it back in. We've brought the network team back in and stood up our own SOC and our own NOC for the first time in years, just this past year. We're doing the same thing as we move things out to the cloud; more and more is moving to the cloud, and we're setting up our own cloud operations and DevOps capabilities. I can tell you, having been on both sides of it, it's a lot harder to bring it back in than it is to take it out. >> Procter & Gamble is well known for being very intimate with the data, a very data-driven company. The data is valuable, and having the infrastructure and IT to support the data is important. What's your vision on data, the future of data in the world? >> I think data has value in itself, but when you tie it to products, to your customers and consumers, it's even more valuable. Things that we used to do completely internally, with our own technology or technology partners, we're now moving out into the cloud. And I must say cloud, it's clouds, plural, again going back to certain clouds being better for certain things. So you're seeing a dramatic shift. We have a number of cloud projects underway for customers and consumers, and a number of cloud projects underway for our own internal employees. It's all about collecting the data, processing the data, and protecting that data, because we take that very seriously, and being able to use it to make better decisions.
>> I want to get your reaction on two points, two lines of questioning, because I think it's very relevant. On the enterprise side, you're a big account for the big whales, the old ERP vendors and the big cloud providers, so people want to sell you stuff. At the same time, you're also running IT innovation, so you want to play with the shiny new toys and experiment with startups. If startups want to get your attention, and big vendors want to sell to you, the tables have kind of turned; it's a good buyer's market right now, in my opinion. So what are your thoughts? Start with the big companies: what do they have to do to win you over, how do they have to engage? And for startups, how do they get your attention? >> I think the biggest thing, for either a startup or a large company, is understanding the company you're dealing with, whether it's Procter & Gamble, whether it's Coca-Cola, whether it was DHL. If you understand how I operate, how decisions are made, and how I'm organized, that's going to give you a competitive advantage. The large corporations understand this, because they've been around through the entire journey of computing with these large corporations. The startups need to step back, take a look, and see: where do I add that competitive advantage? Many times when you're selling to a large corporate, you're not selling to the corporate, you're selling to divisions, you're selling to functions, and that's how you get in. I've been working with startups, as I said, back since webMethods, when it was just a two-person company, but we brought them in for a very specific capability. I then took webMethods with me when I left DHL; I took them to GE, and when I left GE I took them to ING, because I trusted them, and they matured along the way. I think finding that right individual that has the right need is the key, and working it slowly. Don't think you're going to close the deal fast if you're a startup; know it's going to take some time, and decide if that's in your best interest or not. Slow things down, focus, don't try to boil the ocean. >> Too many of them try to, you're right. >> Too many people try to boil the ocean. Get that win; one win will get you another one, which will get you another win, and that's the best way to succeed.
>> Get that beachhead. Alan, if you could go back, knowing what you know now, and you were breaking into the IT leadership position, looking forward, what would you do differently? Do you get a mulligan, what would you do differently? >> Well, I think one of the dangers of being an innovator in IT is that you really are a risk taker, and taking risks is counterculture to corporations. So I think I would probably try to get buy-in a little bit more. Someone once told me that you see the forest through the trees before anybody else does; your problem is that you don't bring people along with you. So I would probably slow down a little bit, not in the adoption of technology, but I'd take more time to build the case and bring people along a lot faster, so that they can see it, they can take credit for it, and they can move that needle as well. >> Yeah, sometimes early adopters and pioneers have the arrows in their backs, as they say. >> I've had my share. >> Thanks for sharing your experience. What's next for you, what's the next mountain you're going to climb? >> Well, I think that as we look forward, latency is still an issue. We have to find a way to defeat latency, and we're not going to do it through basic physics, so we're going to have to change our business models, change our technology distribution, change everything that we're doing. Consumers and customers are demanding instant access to enhanced information, through AI and ML, right at the point where they want it, and that means we're now dealing with milliseconds and nanoseconds in which to make decisions. So I'm very interested in how we're going to change consumer behavior and customer behavior by combining a lot of the new technology trends that are underway. And we have to do it with security in mind. Before, security was secondary; now, as we're seeing with all of the hacks and the malware and everything that's going on in the world, we have to think a little bit differently about how we're going to do that. So I'm very much engaged in working with a lot of startups. I live here in Silicon Valley, I commute to Cincinnati for Procter & Gamble, and I just flew in from Tel Aviv literally an hour ago. I'm in the middle of all the technology hotspots, trying to find that next big thing. >> And it's global; innovation happens everywhere and anywhere. >> The venture community, if you look at the amount of funds that used to be invested out of Silicon Valley versus the rest of the world, continues to be on a downward trend, not because the funding isn't here in Silicon Valley, but because everyone recognizes that innovation and technology are developed everywhere in the world. >> Alan Boehme, global CTO and IT innovator, in a CUBE Conversation here on Sand Hill Road. I'm John Furrier, thanks for watching.
Wolfgang Ulaga, ASU | PTC LiveWorx 2018
>> From Boston, Massachusetts, it's theCUBE. Covering LiveWorx 18, brought to you by PTC. >> Welcome back to Boston, everybody. This is theCUBE, the leader in live tech coverage, and we are here, day one of the PTC LiveWorx conference, IOT, blockchain, AI, all coming together in a confluence of innovation. I'm Dave Vellante with my co-host, Stu Miniman. Wolfgang Ulaga is here. He's the AT&T Professor of Services Leadership and Co-Executive Director, the Center for Services Leadership at Arizona State University. Wolfgang, welcome to theCUBE, thank you so much for coming on. >> Thank you. >> So services leadership, what should we know? Where do we start this conversation around services leadership? >> The Center of Services Leadership is a center that has been created 30 years ago around a simple idea, and that is putting services front and center of everything a company does. So this is all about service science, service business, service operations, people and culture. When you touch service, you immediately see that you have to be 360 in your approach. You have to look at all the aspects. You have to look at structures and people. You have to look at operations with a service-centric mindset. >> I mean, it sounds so obvious. Anytime we experience, as consumers, great service, we maybe fall in love with a company, we're loyal, we tell everybody. But so often, services fall down. I mean, it seems obvious. Why is it just not implemented in so many organizations? >> One of the problems is that companies tend to look at services as an afterthought. Think about the word after-sales service, which in my mind is already very telling about how it's from a cultural perspective perceived. It's something that you do after the sale has been done. That's why oftentimes, there is the risk that it falls back, it slips from the priority list. You do it once, you have done all the other things. But in reality, businesses are there to serve customers. 
Service should be at the center of what the company does, not at the periphery. >> Or even an embedded component of what the company does. I mean, is Amazon a good example of a company that has embraced that? Or is Netflix maybe an even better example? I don't even know what the service department looks like at Netflix, it's just there. Is that how we should envision modern-day service? >> That's what excites me at the LiveWorx conference. We see so many companies talking about technology and changes, and you really can sense and see how all of them are thinking about how they can actually grow the business from historic activities into new data-enabled activities. But the interesting challenge for many firms is that this is also going to be a journey of learning how to serve customers through data analytics. So data-enabled services are going to be a huge issue in the coming years. >> Wolfgang, you're speaking here at the conference. I believe you also wrote a book about advanced services. For those that aren't familiar with the term, maybe walk us through a little bit of what that is. >> Earlier this morning, I presented the book "Service Strategy in Action", which is a very managerial book that we wrote based on over 10 years of experience doing studies and working with companies on this journey from a product-centric company into a service- and solution-centric world and business. Today we see many of these companies picking up the pace, going in that direction, and I would say that with data analytics, this is going to be an even more important phenomenon in the years to come. >> A lot of companies struggle with service as well because they don't see it as a scale component of their business. It's harder to scale services than it is to scale software, for example. In thinking about embedding services into your core business, how do you deal as an organization with the scale problem? Is it a false problem? How are organizations dealing with that?
>> No, you're absolutely right. Many companies know and learn this when they are small and they control operations; it's easy to keep your eyes on service excellence. Once you scale up, you run into the issue of how to maintain service quality: how do you make sure that each and every time you replicate into different regions, different territories, different operations, you keep that quality up and running? One way to do it is to create a service culture among the people, because one way to control that quality level is to push responsibility as far down as possible, so that each and every frontline employee knows what he or she has to do, can take action if something goes wrong, and can maintain that service quality at the level we want. That's where you sometimes see challenges and issues popping up. >> What role do you see machines playing? You're seeing a lot of things like chatbots or voice response. What role will machines play in the services of the future? >> I think it's a fascinating movement that is now being put in place, where machines and artificial intelligence are there to actually enhance the value being created for customers. Sometimes you hear this framed as a threat or a danger, but I would rather see it as an opportunity to raise levels of service quality, to have this symbiosis between human and machine to actually provide better, outstanding service for customers. >> Could you share some examples of successes there, or things that you've studied or researched? >> Yes, for example, to take a consumer marketing example: in Europe I worked with a company, Nespresso. They do these coffee machines and capsules. In their boutiques, and they don't call it a store, by the way, they call it a boutique, they have injected a lot of new technology to help customers have different touchpoints, to get served the way they want to, at the time they want to, how they want to.
So this multi-channel, multi-experience offering for customers is actually a growing activity. When you look at it from a consumer perspective, I get more opportunities, I get more choices. I can pick and choose when, where, and how I want to be served. A similar example is Procter & Gamble here in the United States. P&G has recently rolled out a new service business, taking a brand, Tide, and creating Tide Dry Cleaners here in America. It's a fascinating example. They use technology, like apps on a smartphone, to give the customer a much better experience. I think there are many of these examples we'll see in the future.
>> When we talk about IoT, one of the things that caught our ear in the keynote this morning is that it's going to take 20 to 25 partners to put together a solution. Not only is there integration of software, but one of the big challenges there, I think, is how do you set up services and transform services to be able to live in this multi-vendor environment? I wonder if you could comment on that?
>> I agree, I agree. What I see, which makes me as a business professor very excited, is that of course there's technology, of course there's hardware and software, but one of the biggest challenges will be the business challenges. How do you implement all of these offers? How do you roll them out? One of my talk topics today was how do you commercialize it? How do you actually make money with it? How do you get paid for it? One of my research areas is what we call free to fee: how do you take the "r" out of "free" and make customers pay for the value you create? What I find, especially in the digital services space, is that there's so much value being created, but not every company is able to capture that value. Getting adequately paid for the value, that is a huge challenge. In sum, I would say it's really an issue about business challenges as much as it's a technological or technical challenge.
>> When I think about IoT, there are so many different transfer protocols, so much open source, that free-to-fee question. Any advice you can give to people out there as to how they capture that value and capture revenue?
>> I think you have to be super careful about where commoditization will kick in. Over time, something that was a differentiator yesterday, with open source and everything, will become not so much of a differentiator tomorrow. So where is your competitive edge? How do you stand out from the competition? I know these are very classic questions, but you know what? In the IoT and digital space, they resurface, they come back, and having the right answers to these questions will make the difference between you and the competition.
>> Last question, we've got to go. The trend toward self-service, is that a good thing, a bad thing, a depends thing?
>> I think everything that allows customers to have choices is a good thing. Customers today want to be in charge. They want to be in control. They, in fact, want all of it. They want to have self-service when they want it, but they want a non-self-service option if they feel like it. So I think the trick is knowing how to be nimble and give customers all of these choices, so that they are in charge and can pick and choose.
>> Wolfgang, thanks so much for coming to theCUBE.
>> I appreciate it.
>> It's a pleasure having you.
>> Thank you very much.
>> Good to see you. All right, keep it right there, everybody. Stu and I will be back with our next guest right after this short break. We're here at the PTC LiveWorx show, you're watching theCUBE. (electronic music)