>>Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. Let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with the total energy given by the expression shown at the bottom left of this slide. Here the spin variables sigma_i take binary values, the matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground state problem is to find an assignment of binary spin values that achieves the lowest possible value of the total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and the vector h. Although the Ising model originates in physics, we understand the ground state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it has been established that the Ising ground state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N, for worst-case instances at each N. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolutely optimal solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally find very good but not guaranteed-optimal solutions and run much faster than algorithms that are designed to find absolute optima.
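To make the problem statement concrete, here is a minimal Python sketch, not from the talk: the standard Ising energy with a symmetric coupling matrix J and field vector h, plus a brute-force ground-state search whose exponential cost is exactly why heuristics matter.

```python
import itertools
import numpy as np

def ising_energy(s, J, h):
    # E(s) = -1/2 s^T J s - h^T s, with J symmetric and zero on the diagonal
    return -0.5 * s @ J @ s - h @ s

def brute_force_ground_state(J, h):
    n = len(h)
    best_s, best_e = None, np.inf
    for bits in itertools.product([-1, 1], repeat=n):  # 2^n configurations
        s = np.array(bits)
        e = ising_energy(s, J, h)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e
```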
To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best-known TSP solver required median runtimes, across a library of problem instances, that scaled as a very steep root-exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of runtime on a 48-core 2-GHz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for the instance with N equal to 19,289, requiring approximately two days of runtime on a single 2.4-GHz core. Now, if we simple-mindedly extrapolate the root-exponential scaling observed in that study for N up to approximately 4,500, we might expect that an exact solver would require something more like a year of runtime on the 48-core cluster used for the N = 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has N = 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single 2.4-GHz core. But the much larger so-called World TSP benchmark instance, with N = 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results from Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization problems such as the Ising problem. So, against that backdrop, I'd like to use my remaining time to introduce our work on the analysis of coherent Ising machine architectures and associated optimization algorithms.
These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog physical or cyber-physical systems. In contrast both to more traditional engineering approaches that build Ising machines using conventional electronics, and to more radical proposals that would require large-scale quantum entanglement, the emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to the linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or perhaps mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the sync-pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground state problem. This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory, namely a study of bifurcations, the evolution of critical points, and the topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and we hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances.
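A common mean-field caricature of this pump-ramp-with-feedback dynamics can be sketched as below; this is an illustrative toy model under standard assumptions, not the talk's exact equations, and the coupling strength eps, the ramp schedule, and the step size are arbitrary choices.

```python
import numpy as np

def run_cim(J, eps=0.1, steps=4000, dt=0.01, seed=0):
    # dx_i/dt = (p - 1 - x_i^2) x_i + eps * sum_j J_ij x_j,
    # with the pump p ramped from below to above threshold (p = 1)
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-3 * rng.standard_normal(n)        # near-vacuum initial amplitudes
    for k in range(steps):
        p = 2.0 * k / steps                  # linear pump ramp, 0 -> 2
        x += dt * ((p - 1.0 - x**2) * x + eps * (J @ x))
    return np.sign(x)                        # binary spin readout at ramp end
```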
Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome the linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, the gain equals the dissipation, and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it essentially chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate by its own lasing process. As a result, it can easily be shown that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the collective oscillation mode in which the two OPO phases are the same; for alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground state problem of a ferromagnetic or antiferromagnetic N = 2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase.
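The threshold-lowering argument can be made concrete by linearizing the two coupled OPOs around the vacuum state. In the sketch below, the normalization (single-OPO threshold at p = 1), the coupling matrix C, and the value of alpha are illustrative assumptions, not the talk's parameters.

```python
import numpy as np

# Linearized around vacuum: dx/dt = (p - 1) x + alpha * C x, C = [[0,1],[1,0]].
# A collective mode with eigenvalue lam starts to lase when its net gain
# crosses zero, so its threshold is p_th = 1 - alpha * lam.
alpha = 0.2
C = np.array([[0.0, 1.0], [1.0, 0.0]])
lams, vecs = np.linalg.eigh(C)
for lam, v in zip(lams, vecs.T):
    print(f"mode {np.sign(v).astype(int)}: threshold p = {1 - alpha * lam:.2f}")
# alpha > 0: the in-phase (ferromagnetic) mode has the lower threshold;
# alpha < 0: the out-of-phase (antiferromagnetic) mode wins instead.
```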
Clearly, we can imagine generalizing this story to larger N; however, the story doesn't stay as clean and simple for all larger problem instances, and to find a more complicated example we only need to go to N = 4. For some choices of J_ij with N = 4, the story remains simple, like the N = 2 case. The figure on the upper left of this slide shows the energies of various critical points for a non-frustrated N = 4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but sub-optimal minimum at large pump power. The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin; the basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behavior seems to become more common at larger N. For the N = 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, seeking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. Of course, N = 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit, we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N = 10^4 to 10^5 or 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, et cetera. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So, in closing, I should acknowledge the people who did the hard work on the things that I've shown. My group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI research labs. I should also acknowledge funding support from the NSF through the Coherent Ising Machines Expeditions in Computing program, and also from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it, thanks very much.
>>I'd like to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators: how we have been using them as Ising machines, and how we're pushing them toward quantum photonics. I'd like to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and more recently the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter physics models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: the simple summation over the spins, where spins can be either up or down and the couplings are given by the J_ij. The Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be an NP-hard problem, so it's computationally important because it's representative of NP problems. NP problems are important because, first, they're hard on standard computers if you use brute-force algorithms, and second, they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems, and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is is a resonator with nonlinearity in it: we pump these resonators and we generate a signal at half the frequency of the pump; one photon of pump splits into two identical photons of signal. These oscillators have some very interesting phase- and frequency-locking behaviors, and if you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendulums. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, namely the length of the string; that modulation will act as the pump, and it will make an oscillation, the signal, at half the frequency of the pump. And I have two of them, to show you that they can acquire these phase states: they're still phase- and frequency-locked to the pump, but each can land in either the zero or the pi phase state. The idea is to use this binary phase to represent a binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, or up or down.
And to implement the network of these resonators, we use the time-multiplexing scheme. The idea is that we put N pulses in the cavity, separated by the repetition period T_R, and you can think of these N pulses in one resonator as N temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delay lines, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator 1 to 2, 2 to 3, and so on. If you look at the second delay, which is two times the repetition period, it couples 1 to 3, and so on. And if you have N-1 delay lines, then you can have all potential couplings among these N synthetic resonators. If I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I can have a programmable all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses.
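As a toy illustration of the delay-line picture just described (a sketch, not the experimental control code): with static modulator settings each delay line contributes one band of the coupling matrix, and it is the synchronized time-varying modulation that makes every entry individually programmable.

```python
import numpy as np

def coupling_from_delays(n, delay_strengths):
    """delay_strengths[d] = (static) modulator setting on the d*T_R delay line."""
    J = np.zeros((n, n))
    for d, a in delay_strengths.items():    # d in 1 .. n-1
        for i in range(n - d):
            J[i + d, i] = J[i, i + d] = a   # delay d couples pulse i and i+d
    return J

# a one-period delay line alone gives nearest-neighbour (chain) coupling:
print(coupling_from_delays(4, {1: 0.3}))
# all n-1 delays, with time-varying settings, give an arbitrary all-to-all J
```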
So the idea of the OPO-based Ising machine is having these OPOs, each of which can be either zero or pi, and being able to arbitrarily connect them to each other. I start by programming this machine to a given Ising problem, by just setting the couplings and the controllers in each of those delay lines, so that I have a network which represents an Ising problem. The Ising problem then maps to finding the phase state that satisfies the maximum number of coupling constraints, and the way this happens is that the Ising Hamiltonian maps to the linear loss of the network: if I start adding gain by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transitions: what happened in the first implementation, which was using a free-space optical system, then the guided-wave implementation in 2016, and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make this distinction here: the first implementation was an all-optical interaction; we also had an N = 16 implementation; and then we transitioned to this measurement-feedback idea, which I'll quickly explain. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to mostly focus on the all-optical networks: how we're using them to go beyond simulation of the Ising Hamiltonian, on both the linear and nonlinear sides, and also how we're working on miniaturization of these OPO networks. So the first experiment, the four-OPO machine, was a free-space implementation, and this is an actual picture of the machine. We implemented a small N = 4 Max-Cut problem on the machine, one problem for one experiment; we ran the machine 1,000 times, looked at the states, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulated all those coherent interactions on an FPGA, and we synthesized the coupling pulses from all those measurements. Then we injected that back into the cavity; the nonlinearity still remains, so it is still a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system preserves the important information or not, or whether it's going to behave better computation-wise, and those are still ongoing studies. Nevertheless, the reason this implementation was very interesting is that you don't need the N-1 delay lines; you can just use one, implement a large machine, run several thousands of problems on the machine, and then compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling; the optical loss of this network corresponds to the Ising Hamiltonian. If I just want to show you the example of the N = 4 experiment, with all those phase states and the histogram that we saw: you can actually calculate the loss of each of those states, because all the interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain. You start bringing up the gain so that it hits the loss; then you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation, so you go to either the zero or the pi phase state. And the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. The difference between looking at topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian; one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one site to another you pick up one phase, and if you go back you get a different phase. And the other thing is that we're not just interested in finding the ground state; we're now interested in looking at all sorts of states, and at the dynamics and behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a 1D chain of these resonators, corresponding to the so-called SSH model. In the topological world, we get the similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction of the theory.
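For reference, here is a small sketch of the SSH chain just mentioned; the couplings t1 (intra-cell) and t2 (inter-cell) and the chain length are illustrative, with |t1| < |t2| being the topological phase that exhibits near-zero-energy edge modes in a finite chain.

```python
import numpy as np

def ssh_open_chain(t1, t2, cells=20):
    # alternating couplings t1 (intra-cell) and t2 (inter-cell);
    # the bulk bands are E(k) = +/- |t1 + t2 * exp(i k)|
    n = 2 * cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = t1 if i % 2 == 0 else t2
    return np.linalg.eigvalsh(H)

E = ssh_open_chain(t1=0.5, t2=1.0)       # topological phase: |t1| < |t2|
print(np.sort(np.abs(E))[:2])            # two ~zero-energy edge modes
```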
One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine. That's something unique about this time-multiplexed implementation, so we can actually look at the dynamics, and one example that we have looked at is that we can actually go through the transition from topological to, I'm sorry, to the trivial behavior of the network. You can then look at the edge states, and you can see both the trivial end states and the topological edge states actually showing up in this network. We have also just recently implemented a 2D network with the Harper-Hofstadter model; I don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics. We can also think about adding nonlinearity, in both the classical and quantum regimes, which is going to give us a lot of exotic classical and quantum nonlinear behaviors in these networks. So I told you mostly about the linear side; let me just switch gears and talk about the nonlinear side of the network. The biggest thing that I talked about so far in the Ising machine is this phase transition at threshold: below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. This is basically the mechanism of the computation in these OPOs: this phase transition from below to above threshold. One of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, basically corresponding to the intensity of the driving pump. So it's really hard to imagine that you can have this phase transition happen entirely in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and then its intensity goes really high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is: can we look at other phase transitions, can we utilize them for computing, and can we bring them to the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, you have the phase locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case, the signal can acquire any of the phases on the circle, so it has a U(1) symmetry.
Okay, and if you go to the degenerate case, then that symmetry is broken and you only have the zero and pi phase states left. So now the question is: can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions that we are also thinking about. And this phase transition is not just important for computing; it's also interesting for its sensing potential, and you can easily bring this phase transition below threshold and just operate in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs now, we can see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs; that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are actually very interesting to explore in both the classical and quantum regimes. I should also mention that you can think about the couplings being nonlinear couplings as well, and that's another behavior that you can see, especially in the non-degenerate regime. So with that, I have basically told you about these OPO networks: how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. Now I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. Of course, the motivation is that if you look at electronics, and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements where we are now: with optics, we are probably very similar to 70 years ago, which is a tabletop implementation. So the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions we're working on: one is based on lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard, and also Marty Fejer at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of highly efficient nonlinear processes happening in this nanophotonic, periodically poled lithium niobate. Now we're working on building OPOs based on that kind of nanophotonic thin-film lithium niobate, and these are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about; but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks. I also want to point out that the reason these nanophotonic platforms are actually exciting is not just because you can make large networks compact, in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO: can we have the quantum superposition of the zero and pi states that I talked about?
The nanophotonic thin-film lithium niobate platform provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. So we're doing some theory on that, and we're confident that the nonlinearity-to-loss ratio that you can get with these platforms is actually much higher than what you can get with other existing platforms. And to go even smaller, we have been asking the question: what is the smallest possible OPO that you can make? You can think about really wavelength-scale resonators, adding the chi(2) nonlinearity, and seeing how and when you can get the OPO to operate. Recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if you can build the OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. We have looked at the calculations and tried to estimate the threshold of such OPOs, let's say for wavelength-scale resonators, and it turns out that it can actually be even lower than the type of bulk PPLN OPOs that we have been building in the past 50 years or so. So we're working on the experiments, and we're hoping that we can actually make larger and larger-scale OPO networks. So let me summarize the talk: I told you about the OPO networks and our work that has been going on with Ising machines and measurement feedback; I told you about the ongoing work on the all-optical implementations, both on the linear side and on the nonlinear behaviors; and I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to thank you. >>I am from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI Lab. I'm happy to share with you today some of the recent work that has been done either by me or by colleagues in our group. As indicated by the title, my talk is about a neuromorphic in silico simulator for the coherent Ising machine. Here is the outline: I would like to make the case that simulation in digital electronics of the CIM can be useful for better understanding or improving its functional principles, by introducing some ideas from neural networks. This is what I will discuss in the first part; then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and projections of the performance that can be achieved using a very-large-scale simulator in the third part, and finally I will talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper. This comparison shows that there is always a trade-off between energy efficiency, speed, and scalability, depending on the physical implementation. In red here are the limitations of each hardware. Interestingly, the FPGA-based systems, such as Fujitsu's Digital Annealer, the Toshiba bifurcation machine, or a recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability.
And this is why, despite the unique advantages that some of these other hardware platforms have, such as quantum superposition in qubit-based systems or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor that they are particularly energy-efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-ins and fan-outs, and long propagation delays of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, rather than from the physics of electrons and photons. To put the performance of these various hardware platforms in perspective, we can look at the computation performed by the brain: the brain computes using billions of neurons, using only about 20 watts of power, and operates at what are, theoretically, very slow frequencies. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to try to alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silico, shown in the bottom panel here, that can be used for testing better organization principles for the CIM. In this talk, I will discuss three neuro-inspired principles. First, the asymmetry of connections, and neural dynamics that are often chaotic because of this asymmetry. Second, mesoscopic structure: neural networks are not composed of the repetition of always the same types of neurons, but there is a local structure that is repeated; here is the schematic of the micro-column in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in silico simulation? First, about the two principles of asymmetry and mesoscopic structure. We know that the classical approximation of the coherent Ising machine, which is analogous to rate-based neural networks, can be obtained using the truncated Wigner approximation, for example. The dynamics of both systems can then be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase components of the DOPOs, the function f represents the nonlinear optical part, that is, the degenerate optical parametric amplification, and the sum over omega_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA and then injection of the coupling term. This dynamics, in both the case of the CIM and that of neural networks, can be written as gradient descent of a potential function V, written here, and this potential function includes the Ising Hamiltonian.
So this is why it's natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and h is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and so one strategy is to gradually deform this landscape using some annealing process; but there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a mesoscopic structure in the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correcting variable. The addition of this structure introduces an asymmetry in the system, which in turn induces chaotic dynamics: a chaotic search, rather than an annealing process, for the ground state of the Ising Hamiltonian. Within this mesoscopic structure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitudes of the x_i to become equal to a certain target amplitude a. This is done by modulating the strength of the Ising couplings: you see that the error variable e_i multiplies the Ising coupling term here in the dynamics of each DOPO, and the whole dynamics is described by these coupled equations. Because the e_i do not necessarily take the same value for different i, this introduces an asymmetry in the system, which in turn creates chaotic dynamics, which I show here for solving a certain size of SK problem, in which the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plots. You see this chaotic search that visits various local minima of the Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics do not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. We have proposed in the past two different modulations of the target amplitude: the first one is a modulation that ensures a certain expansion rate of the system remains positive, which forbids the creation of any nontrivial attractors; but in this work I will talk about another, simpler modulation, which is given here, that works as well as the first one but is easier to implement on FPGA.
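A minimal sketch of this amplitude-error-correction scheme, in simplified notation: the equation forms follow the description above, but the parameter values, the Euler integration, and the best-energy tracking are illustrative choices, not the talk's FPGA implementation.

```python
import numpy as np

def cim_error_correction(J, p=0.9, a=1.0, beta=0.3, eps=0.5,
                         dt=0.01, steps=20000, seed=0):
    # dx_i/dt = (p - 1 - x_i^2) x_i + eps * e_i * sum_j J_ij x_j
    # de_i/dt = -beta * (x_i^2 - a) * e_i
    # the e_i push every |x_i| toward the target amplitude a, which
    # destabilizes local minima and yields a chaotic search
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-2 * rng.standard_normal(n)
    e = np.ones(n)
    best_s, best_E = None, np.inf
    for _ in range(steps):
        x += dt * ((p - 1.0 - x**2) * x + eps * e * (J @ x))
        e += dt * (-beta * (x**2 - a) * e)
        s = np.sign(x)
        E = -0.5 * s @ J @ s
        if E < best_E:
            best_s, best_E = s, E      # track the best Ising energy visited
    return best_s, best_E
```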
These coupled equations that represent the simulation of the coherent Ising machine with error correction can be implemented especially efficiently on an FPGA. Here I show the time that it takes to simulate the system; in red you see the time that it takes to compute the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 spins and error variables, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics corresponding to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles, which corresponds to about 0.1 microseconds. This is to be compared to what can be achieved in the measurement-feedback CIM, in which, if we want to run 500 time-multiplexed DOPOs with a 1-GHz repetition rate, we would require 0.5 microseconds to do this; so the simulation on the FPGA can be at least as fast as a 1-GHz repetition-rate pulsed-laser CIM. Then, the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about one microsecond. So for problem sizes that are larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of FPGA resources to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on an FPGA by an adder tree, whose height scales logarithmically with the size of the system. But this is in the case where we have an infinite amount of resources on the FPGA; for dealing with larger problems of more than 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that I denote mu here, and then the scaling becomes, for the nonlinear part, linear in N over mu, and for the dot product, quadratic in N over mu. Typically, for a low-end FPGA chip, the block size mu of this matrix is about 100. So clearly we want to make mu as large as possible, in order to maintain the logarithmic scaling of the number of clock cycles needed to compute the product, rather than the quadratic scaling that occurs if we decompose the matrix into small blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing the electrical components within the FPGA hierarchically, in the way shown here in the right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but this should give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance when simulating Ising machines.
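To illustrate the adder-tree argument, here is a tiny sketch of the depth calculation; the point is just the ceil(log2(mu)) pipeline depth per block sum, independent of any particular FPGA toolchain.

```python
# depth of a binary adder tree that sums mu operands: ceil(log2(mu));
# with block size mu, an N-term dot product is split into N/mu block sums
def adder_tree_depth(mu):
    depth = 0
    while mu > 1:
        mu = (mu + 1) // 2   # one level of pairwise additions
        depth += 1
    return depth

print(adder_tree_depth(100))   # 7 levels for mu = 100
print(adder_tree_depth(1024))  # 10 levels for mu = 1024
```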
So instead of getting into the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper. Here I show results for solving SK problems: fully connected, random plus-or-minus-one spin-glass problems. As a metric we use the number of matrix-vector products, since it is the bottleneck of the computation, needed to get the optimal solution of the SK problem with 99% success probability, plotted against the problem size here. In red is this proposed FPGA implementation; in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems; and in green, noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. So clearly you see that the number of matrix-vector products necessary to solve this problem scales with a better exponent for the proposed scheme than for these other approaches. So that's an interesting feature of the system. Next, we can look at the real time-to-solution for solving these SK instances. On this axis is the time-to-solution in seconds to find the ground state of SK instances with 99% success probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. You see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristor crossbars, which is very fast for small problem sizes, in blue here, but whose scaling is not good; and the same thing holds for the restricted Boltzmann machine implemented on FPGA proposed by a group in Berkeley recently, which again is very fast for small problem sizes but whose scaling is bad, so that it is worse than the proposed approach. So we can expect that for problem sizes larger than 1,000 spins, the proposed approach would be the faster one. Let me jump to this other slide, and another confirmation that the scheme scales well: we can find maximum-cut values on the benchmark G-set that are better than the cut values that have been previously found by any other algorithms, so they are the best known cut values to the best of our knowledge, as shown in this table in the paper. In particular, for instances 14 and 15 of this G-set, we can find better cut values than previously known, and we can find these cut values about 100 times faster than the state-of-the-art algorithm on CPU. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters; the tuning used here is very simple, and just depends on the degree of connectivity within each graph. So these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems but also all types of graph Ising problems, such as the Max-Cut problems common in the community.
So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. Here you see projections for the time-to-solution, with 99% success probability, for solving SK problems with respect to the problem size, compared to different published Ising machines, in particular the Digital Annealer, shown by the green line without dots here. We show two different hypotheses for these projections: either that the time-to-solution scales as an exponential of N, or that the time-to-solution scales as an exponential of the square root of N. It seems, according to the data, that the time-to-solution scales more like an exponential of the square root of N, and these projections show that we could probably solve SK problems of size 2,000 spins, that is, find the real ground state of such problems with 99% success probability, in about 10 seconds, which would be much faster than all the other proposed approaches. So, about the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. To do this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper, proposed by people in the NTT group. The idea of this model is that, instead of the very simple ODEs that I have shown previously, it includes paired ODEs that take into account not only the mean of the ensemble average of the in-phase components, but also their variances, so that we can take into account more quantum effects of the DOPO, such as squeezing. Then we plan to make the simulator open access, for members to run their instances on the system. There will be a first version in September that will be just based on simple command-line access to the simulator, and which will have just the classical approximation of the system, with a noise term, binary weights, and a Zeeman term. Then we will propose a second version that will extend the current Ising machine to an array of FPGAs, in which we will add the more refined models, the truncated Wigner and quantum Gaussian models I just talked about, and which will support real-valued weights for the Ising problems as well as the Zeeman term. We will announce later when this is available. >>I come from the University of Notre Dame, in the physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I would also like to say that I look forward to collaborations with the PHI Lab, Yoshi, and collaborators on the topics of this workshop. So today I'll briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving using ordinary differential equations. But I think the issues that we raise on this occasion actually apply to other analog approaches as well, and to other problems as well.
I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables, you have M clauses, each a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem from the NP class, which means you can check in polynomial time the satisfiability of any assignment. And k-SAT is NP-complete for k equal to three or larger, which means an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in the NP class can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic notions or terms, or to the decision version of the Ising ground state problem. This is useful when you're comparing different approaches that work on different kinds of problems. When not all the clauses can be satisfied, you're looking at the optimization version of SAT, called Max-SAT, and the goal here is to find an assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete-problem solver, it would literally positively influence thousands of problems and applications in industry and in science. I'm not going to read this list, but it of course gives a strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus ones and plus ones, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it is plus one; if it contains the variable in negated form, it is minus one. We then use this to formulate these products, called clause violation functions, one for every clause, which vary continuously between zero and one, and which are zero if and only if the clause itself is true. Then, in order to define the dynamics in this N-dimensional hypercube where the search happens, and where solutions, if they exist, are sitting at some of the corners of the hypercube, we define this energy potential, or landscape function, shown here, in such a way that it is zero if and only if all the clause violation functions K_m are zero, that is, all the clauses are satisfied, keeping these auxiliary variables a_m always positive. Therefore, what we have here is a dynamics that is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum. However, what we do here is couple the a_m dynamics to the clause violation functions, as shown here. If you didn't have this a_m here, just the K_m, for example, you would essentially have positive feedback, an increasing variable, but in that case the dynamics would still get stuck.
So that is better than the constant version, but it would still get stuck. Only when you put in this a_m, which makes the dynamics in this variable exponential-like, only then does it keep searching until it finds a solution. And there is a reason for that, which I'm not going to talk about here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape. And this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself; in dynamical systems it's called the escape rate. The inverse of that is the time scale on which you find solutions with this dynamical system, and you can see here some sample trajectories that are chaotic, because the system is nonlinear, but it's transient chaos, of course, because eventually they converge to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables, for random SAT problems (random 3-SAT problems), as a function of N, is the monitored wall time, the wall-clock time, and it behaves polynomially until you actually reach the SAT/UNSAT transition, where the hardest problems are found. But what's more interesting is if you monitor the continuous time t, the performance in terms of the analog continuous time t, because that seems to be polynomial. And the way we show that is, we consider random k-SAT (random 3-SAT) for a fixed constraint density, right at the threshold, where it's really hard, and we monitor the fraction of problems that we have not been able to solve: we select thousands of problems at that constraint ratio and solve them with our algorithm, and we monitor the fraction of problems that have not yet been solved by continuous time t. And this, as you see, decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law. So if you combine these two, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint satisfaction problems, such as exact cover (because you can always transform them into 3-SAT, as we discussed before) and Ramsey coloring, and on these problems even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because of what you have here: first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous time variable, becomes physical wall-clock time, and that will be polynomially scaling; but you have the other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, it would be an exponential cost. (A minimal numerical sketch of these dynamics appears right below.)
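To make the dynamics concrete, here is a minimal numerical sketch in Python. It is a reconstruction from the published formulation, not the speakers' code; the function name, the parameters, and the deliberately naive explicit-Euler integrator (the very kind of scheme that runs into the stiffness issues discussed later in the talk) are all choices made for this sketch:

```python
import numpy as np

def ctds_sat(clauses, n, dt=0.05, max_steps=100_000, seed=0):
    """clauses: list of clauses, each a tuple of signed 1-based literals,
    e.g. (1, -2, 3) means (x1 OR not-x2 OR x3)."""
    rng = np.random.default_rng(seed)
    c = np.zeros((len(clauses), n))            # clause matrix c_mi in {-1, 0, +1}
    for m, clause in enumerate(clauses):
        for lit in clause:
            c[m, abs(lit) - 1] = np.sign(lit)
    k = np.count_nonzero(c, axis=1)            # number of literals per clause
    s = rng.uniform(-1.0, 1.0, n)              # continuous spins in [-1, 1]
    a = np.ones(len(clauses))                  # auxiliary variables, kept positive

    for _ in range(max_steps):
        # Stop as soon as the rounded assignment satisfies every clause.
        if all(any((lit > 0) == (s[abs(lit) - 1] > 0) for lit in cl) for cl in clauses):
            return s > 0
        f = 1.0 - c * s                        # factors (1 - c_mi s_i); equal 1 where c_mi = 0
        K = 2.0 ** (-k) * f.prod(axis=1)       # clause violation: 0 iff clause m is satisfied
        K_mi = K[:, None] / np.where(np.abs(f) < 1e-12, 1e-12, f)  # K_m with factor i removed
        # Explicit-Euler gradient-descent step on V = sum_m a_m K_m^2:
        s += dt * 2.0 * (a[:, None] * c * K_mi * K[:, None]).sum(axis=0)
        np.clip(s, -1.0, 1.0, out=s)
        a += dt * a * K                        # exponential growth on violated clauses
        # (for long runs one would store log(a) instead, to avoid overflow)
    return None                                # no solution within the step budget
```

If the update of `a` is frozen, this sketch reproduces the getting-stuck behavior described above; with it, the auxiliary variables grow exponentially on persistently violated clauses, which is exactly the time-energy trade-off the talk turns to next.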
So this is some kind of trade-off between time and energy: I don't know how to generate time, but I do know how to generate energy, so I could use energy for it. But there are other issues as well, especially if you're trying to do this on a digital machine, though problems appear in physical devices too, as we discuss later. So if you implement this on a GPU, you can get an order of magnitude, or two, of speed-up, and you can also modify this to solve MAX-SAT problems quite efficiently; you are competitive with the best heuristic solvers on the problems of the 2016 MAX-SAT competition. So this definitely seems like a good approach, but there are of course interesting limitations. I would say interesting, because they kind of make you think about what it means, and about how you can exploit these observations in understanding analog continuous-time complexity better. If you monitor the number of discrete steps done by the Runge-Kutta integrator when you solve this on a digital machine (you're using some kind of integrator), using the same approach, but now measuring the number of problems you haven't solved by a given number of discrete steps taken by the integrator, you find out you have exponential discrete-time complexity, and of course this is a problem. And if you look closely at what happens: even though the analog mathematical trajectory (that's the red curve here) behaves well, if you monitor what happens in discrete time, the integrator's step size becomes very small (it's down in, like, the third or fourth decimal position) and fluctuates like crazy, so it is really as if the integration freezes out. And this is because of the phenomenon of stiffness, which I'll talk a little bit more about a little later. It might look like an integration issue on digital machines that you could improve, and you could definitely improve it, but actually the issue is bigger than that; it's deeper than that, because on a digital machine there is no time-energy conversion. The auxiliary variables are efficiently represented on a digital machine; there's no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. Now, one would be tempted to think maybe this wouldn't be an issue in an analog device, and to some extent that's true; analog devices can be orders of magnitude faster, but they also suffer from their own problems, because they're not going to be perfect, and that plagues those solvers as well. So, indeed, if you look at other systems, like Ising machines with measurement feedback, which we have heard talks about here, or oscillator networks, they all hinge on some kind of ability to control your variables with arbitrarily high precision. In oscillator networks you want to read out phases or frequencies; in the case of CIMs you require identical and programmable pulses, which is hard to maintain, and they kind of fluctuate away from one another, shift away from one another, and if you cannot control that, then of course you cannot control the performance. So actually one can ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978,
who showed (it's a purely computer-science proof) that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you could solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have only finite precision. So the next question is: how does that affect the computation of these problems? This is what we're after. Loss of precision means information loss, or entropy production. So what we're really looking at is the relationship between the hardness of a problem and the cost of computing it. And following Schönhage, there's this left branch, which in principle could be polynomial time; but the question is whether or not that is achievable. That is not achievable, but something more realistic is on the right-hand side: there's always going to be some information loss, some entropy generation, that could keep you away, possibly, from polynomial time. So this is what we would like to understand; and this information loss, I will argue, comes not just from noise, as in any physical system, but is also of algorithmic nature, so it is a question for either approach. Now, Schönhage's result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: would our solver, in principle, be such a solver (since he is not proposing a solver with such properties)? If you look mathematically, precisely, at what the solver does, would it have the right properties? And I argue yes. I don't have a mathematical proof, but I have some arguments that that would be the case. And this is the case for our CTDS solver: if you could calculate its trajectory in a lossless way, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a bit more difficult question, because time in ODEs can be rescaled however you want. So what Bournez says is that you actually have to measure the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system, not of its parameterization. And we did that: my student did that, first improving on the stiffness of the integration, using implicit solvers and some smart tricks, such that you are actually closer to the actual trajectory; and then, using the same approach (what fraction of problems can you solve within a given length of trajectory?), you find that it is polynomially scaling with the problem size. So we have polynomial-length complexity. That means that our solver is both a poly-length solver and, as it is defined, also a poly-time analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is this stiffness: every integrator has to discretize, to truncate the equations, and what it has to do is keep the integration within the so-called stability region for that scheme; you have to keep the product of the eigenvalues of the Jacobian and the step size Δt within this region. If you use explicit methods, you want to stay within this region, as the toy example below illustrates.
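A toy illustration of that stability constraint (an invented example with arbitrary numbers, not taken from the talk): for explicit Euler applied to the test equation y' = λy, the update is stable only while |1 + λΔt| ≤ 1, so one stiff eigenvalue caps the usable step size, while implicit (backward) Euler stays bounded for any step:

```python
# Toy demonstration of explicit vs. implicit Euler on a stiff eigenvalue.
lam = -1000.0                                  # a stiff eigenvalue of the Jacobian
for dt in (0.0019, 0.0021):                    # just below and just above 2/|lam|
    y_explicit, y_implicit = 1.0, 1.0
    for _ in range(200):
        y_explicit *= (1.0 + lam * dt)         # explicit Euler amplification factor
        y_implicit /= (1.0 - lam * dt)         # backward Euler amplification factor
    print(f"dt={dt}: explicit -> {y_explicit:.3e}, implicit -> {y_implicit:.3e}")
# Once |1 + lam*dt| > 1 the explicit iteration diverges, which is why stiff
# systems force the step size down and "freeze out" the integration, while the
# implicit update stays bounded at the price of larger truncation error per step.
```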
But what happens is that for stiff problems some of the eigenvalues grow fast, and then you're forced to reduce Δt so that the product stays in this bounded domain, which means that you're forced to take smaller and smaller time steps, so you're freezing out the integration; and what I showed you is that this is the case. Now, you can move to implicit solvers, which is a trick: in this case the domain to stay clear of is actually on the outside. But what happens in this case is that some of the eigenvalues of the Jacobian (again, for stiff systems) start to move toward zero. As they're moving toward zero, they're going to enter this instability region, so your solver is going to try to keep them out, so it's going to increase the Δt. But if you increase the Δt, you increase the truncation errors, so you get randomized in the large search space, so it's really not going to work out. Now, one can sort of introduce a theory, or a language, to discuss analog computational complexity using the language of dynamical systems theory. I don't have time to go into this, but basically, for hard problems you have an invariant object, the chaotic saddle, somewhere in the middle of the search space, and that dictates how the dynamics happens; and the invariant properties of the dynamics, of that saddle, are what dictate the performance and many other things. So a new, important measure that we find is also helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Basically, what this does, in an intuitive way, is describe the rate at which the uncertainty contained in the insignificant digits of a trajectory flows towards the significant ones, as you lose information because errors grow, or are developed into larger errors, at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property: it's a property of the saddle; it's not about how you compute it; and it's really the interesting rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional dynamical system you have both positive and negative Lyapunov exponents, as many in total as the dimension of the space, with U the number of unstable manifold dimensions and S the number of stable manifold directions. And there's an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, which is the escape rate that I already talked about. Now one can actually do a simple, theorem-like, back-of-the-envelope calculation: the idea here is that you know the largest rate at which closely started trajectories separate from one another. So now you can say that that is fine, as long as my trajectory finds the solution before the trajectories separate too much. In that case, I can have the hope that if I start several closely spaced trajectories from some region of the phase space, they kind of go into the same solution often; and that's this upper bound, this limit, and it really shows that it has to be an exponentially small number. How small depends on the N-dependence of the exponent right here, which combines the information-loss rate and the solution-time performance. (Both relations are written out below.)
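For reference, the two relations invoked above can be written as follows; these are standard statements from the transient-chaos literature in assumed notation, not copied from the speaker's slides:

```latex
h_{\mathrm{KS}} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa ,
\qquad\qquad
\delta(0) \;\lesssim\; \delta_{\max}\, e^{-\lambda_{\max}\, t_{\mathrm{sol}}} ,
```

where h_KS is the Kolmogorov-Sinai (metric) entropy of the saddle, the λ_i are the Lyapunov exponents, κ is the escape rate, and the second relation is the back-of-the-envelope bound: two trajectories a distance δ(0) apart can still end in the same solution only if that distance is exponentially small in λ_max times the solution time.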
So if this exponent here has a large N-dependence, or even a linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. So this is sort of the direction that we're going in, and this formulation is applicable to all deterministic dynamical systems. And I think we can expand this further, because there is a way of getting the expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about; it's kind of like a program that you can try to pursue. And this is it. So, the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. These systems can be more efficient, by orders of magnitude, than digital ones in solving NP-hard problems, because, first of all, many of these systems don't have the von Neumann bottleneck, there's parallelism involved, and you can also have a larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of what the possibilities and what the limits are. And one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? And I think that's the exciting part: to derive these limits.
SUMMARY :
the bifurcation critical point, that is, the one that bifurcates at the lowest pump value; the chi-2 nonlinearity, and see how and when you can get the OPO; note that the classical approximation of the coherent Ising machine, which is the ground-state search, compared to the state-of-the-art algorithm on a CPU to do this, which is a very common comparison; the inverse of that is the time scale in which you find solutions; first of all, many of the systems don't have the von Neumann bottleneck.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Exxon Mobil | ORGANIZATION | 0.99+ |
Andy | PERSON | 0.99+ |
Sean Hagar | PERSON | 0.99+ |
Daniel Wennberg | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
USC | ORGANIZATION | 0.99+ |
Caltech | ORGANIZATION | 0.99+ |
2016 | DATE | 0.99+ |
100 times | QUANTITY | 0.99+ |
Berkeley | LOCATION | 0.99+ |
Tatsuya Nagamoto | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
1978 | DATE | 0.99+ |
Fox | ORGANIZATION | 0.99+ |
six systems | QUANTITY | 0.99+ |
Harvard | ORGANIZATION | 0.99+ |
Al Qaeda | ORGANIZATION | 0.99+ |
September | DATE | 0.99+ |
second version | QUANTITY | 0.99+ |
CIA | ORGANIZATION | 0.99+ |
India | LOCATION | 0.99+ |
300 yards | QUANTITY | 0.99+ |
University of Tokyo | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Burns | PERSON | 0.99+ |
Atsushi Yamamura | PERSON | 0.99+ |
0.14% | QUANTITY | 0.99+ |
48 core | QUANTITY | 0.99+ |
0.5 microseconds | QUANTITY | 0.99+ |
NSF | ORGANIZATION | 0.99+ |
15 years | QUANTITY | 0.99+ |
CBS | ORGANIZATION | 0.99+ |
NTT | ORGANIZATION | 0.99+ |
first implementation | QUANTITY | 0.99+ |
first experiment | QUANTITY | 0.99+ |
123 | QUANTITY | 0.99+ |
Army Research Office | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
1,904,711 | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
six | QUANTITY | 0.99+ |
first version | QUANTITY | 0.99+ |
Steve | PERSON | 0.99+ |
2000 spins | QUANTITY | 0.99+ |
five researcher | QUANTITY | 0.99+ |
Creole | ORGANIZATION | 0.99+ |
three set | QUANTITY | 0.99+ |
second part | QUANTITY | 0.99+ |
third part | QUANTITY | 0.99+ |
Department of Applied Physics | ORGANIZATION | 0.99+ |
10 | QUANTITY | 0.99+ |
each | QUANTITY | 0.99+ |
85,900 | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
one problem | QUANTITY | 0.99+ |
136 CPU | QUANTITY | 0.99+ |
Toshiba | ORGANIZATION | 0.99+ |
Scott | PERSON | 0.99+ |
2.4 gigahertz | QUANTITY | 0.99+ |
1000 times | QUANTITY | 0.99+ |
two times | QUANTITY | 0.99+ |
two parts | QUANTITY | 0.99+ |
131 | QUANTITY | 0.99+ |
14,233 | QUANTITY | 0.99+ |
more than 100 spins | QUANTITY | 0.99+ |
two possible phases | QUANTITY | 0.99+ |
13,580 | QUANTITY | 0.99+ |
5 | QUANTITY | 0.99+ |
4 | QUANTITY | 0.99+ |
one microseconds | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
first part | QUANTITY | 0.99+ |
500 spins | QUANTITY | 0.99+ |
two identical photons | QUANTITY | 0.99+ |
3 | QUANTITY | 0.99+ |
70 years ago | DATE | 0.99+ |
Iraq | LOCATION | 0.99+ |
one experiment | QUANTITY | 0.99+ |
zero | QUANTITY | 0.99+ |
Amir Safarini Nini | PERSON | 0.99+ |
Saddam | PERSON | 0.99+ |
Tom Gillis, VMware & Tom Burns, Dell EMC | VMworld 2019
>> Live from San Francisco, celebrating 10 years of high-tech coverage, it's theCUBE, covering VMworld 2019, brought to you by VMware and its ecosystem partners. >>Welcome back. I'm Stu Miniman, here with John Troyer. We have three days of wall-to-wall coverage here at VMworld 2019 in Moscone North, and I'm happy to welcome to the program, to my right, Tom Burns, who is the senior vice president and general manager of networking and solutions at Dell EMC, and sitting to his right, another Tom: we have Tom Gillis, who's the SVP and general manager of networking and security inside VMware. So I'm super excited to go back to my roots of networking. Tom and Tom, thanks so much for joining us. >>Thanks for having us. Thanks for all... >>Right. So, you know, Tom, you and I have talked for years now about, you know, not just SDN but the changes in the environment; of course, you know, networking and compute smashing together, and how the role of software in this whole environment has changed. So, you know, let's start with the news; let's cover the hard news first. VMware has networking pieces, Dell has some software networking pieces also, and there's some more co-mingling of those. So maybe walk us through that. >>Absolutely. I think the story this week is about the collaboration that's happening between Tom's team and my team in innovating and disrupting in the traditional networking world. You know, Tom's had NSX around micro-segmentation and network virtualization; a lot going on with analytics and the capability to really see what's going on in the network from core out to edge to cloud; the acquisition of Avi, which is outstanding; other things that are going on in VMware. And Dell EMC is disrupting around the disaggregation of hardware and software, giving customers the capability to run the NOS they need for the connectivity they need, depending upon where the network is sitting. So this week we've got two announcements. One is that we've got worldwide shipment of the Dell EMC SD-WAN solution powered by VMware: great, you know, best-in-class software combined with best-in-class hardware coming from Dell EMC, on a global basis, worldwide, you know, a secure supply chain, plus professional services worldwide is a parameter there, right? >>And Tom, maybe bring us in. You know, we'd watched VeloCloud before the acquisition, SD-WAN and so on. You know, there are a lot of solutions that fit in a couple of different markets; it's not a homogeneous market there. Maybe give us kind of the standpoint from VeloCloud on SD-WAN. >>SD-WAN is a white-hot market, and because it has the classic combination of better, faster, cheaper: it delivers a better end-user experience, it is so easy to deploy, and it saves money on MPLS circuits. And backhauling traffic, that was a 1990s idea; it was a good idea back then, but it's time for a different approach. >>And just, when I've talked to some customers about their multi-cloud environment, SD-WAN is one of those enabling technologies that, you know, they will bring up as what allowed them to actually do that. >>It was really the movement to Office 365 and SaaS applications that drove the SD-WAN revolution, and that backhauling of all this traffic to headquarters and then going out to Office 365, when a user might be in, you know, Des Moines, that doesn't make any sense. And so, with SD-WAN, we intelligently route the traffic where it needs to go; it delivers a better end-user experience, and it saves a bunch of money. It's not hard to imagine that cheap broadband links are an order of magnitude lower in cost than these dedicated MPLS circuits. And the interesting math is that you can take two or three low-cost links and deliver a better experience than with a single dedicated circuit. >>I'm kind of interested in the balance between hardware and software, right? The family trees of networking and compute were kind of different, because they had specialized needs in silicon. So where are we now? It's 2019: where are we now with line speeds and x86, and then the hardware story? >>I'll let Tom join the discussion in a moment, but speeds and feeds is not dead; it should be dying quickly, though, right? You know, it's about virtual network functions and everything really moving to the software layer, sitting on top of commoditized x86-based, you know, hardware, and the combination of these two factors helps our customers a lot more with flexibility, agility, time to deploy, return on investment, all these types of things. But, I mean, that's my view. >>A recurring theme you're going to hear, and I think you're alluding to this, is that in networking you needed these dedicated, kind of magical black boxes that had custom hardware in order to do some pretty basic processing, whether it be switching, routing, advanced security; you had to run things like, you know, hardware regular-expression matching, et cetera. It was about three years ago that Intel introduced a technology called DPDK, which is an acceleration that allowed VMware to deliver, in software, on a single CPU, you know, traffic pushed at line rate. And so that was, you know, sort of, the champagne didn't go off and the ball didn't drop in Times Square, but it's a really important milestone, because all of a sudden it doesn't make any sense to build these dedicated black boxes with custom hardware. Now general-purpose hardware, when you have a global supply-chain and logistics partner like Dell, coupled with distributed software, can not only replace these network functions, but we can do things completely differently. And that's really, you know, we're just beginning this journey, because it's only recently that we've been able to do that. But I think you're going to see a lot more of that in the future. >>So we talked about SD-WAN; there was a second announcement >>that goes back into the core. You know, the creation of a fabric inside the data center is still a bit difficult. I mean, I've heard quotes saying it's something like 120 lines of CLI, you know, per switch; so, let's say four to six leaf switches and two spine switches, it could take days to set up a fabric. What we've announced is the SmartFabric Director, which is a joint collaboration and development between VMware and Dell EMC that creates the capability to tightly integrate NSX and vCenter into the Dell EMC PowerSwitch family of data center switches, really eliminating several of those use cases and, in fact, setting up that same fabric in less than two minutes. And we're really happy about not just the initial release; Tom and I have a lot of plans for this particular product and its roadmap for, you know, quarters and years to come, about really simplifying the network again, automating it. And then, really, our version of intent-based networking is the network operating the way you configured it, you know, when you set it up, and not just on day one, but day two, you know, and day N. >>And, you know, you hit the nail on the head: networking has changed. It's no longer about speeds and feeds; it's about availability and simplicity. And so, you know, Dell and VMware, I think, are uniquely positioned to deliver a level of automation where this stuff just works, right? I don't need to go and configure these magic boxes individually; I want to just write, you know, a line of code where my infrastructure is built into the CI/CD pipeline, and then when I deploy a workload, it just works. I don't need an army of people to go figure that out, right? And I think that's the power of what we're working together to unleash. >>So when some technology comes up, like SD-WAN, sometimes there's a lot of confusion in the marketplace: vendors going out with one-size-fits-all, this will do everything. Of course. Where are we in the development of SD-WAN, what is the solution, and who should be looking at taking a look at the solution now? >>The SD-WAN market, as I said, is growing, depending on whose estimate you look at, between 50 and 100% a year. And the reason is better, faster, cheaper, right? So everyone has figured out, you know, maybe it's time to think differently about architecture and save some money. So, we just announced, on the VMware side, an important milestone: we have more than 13,000 network-virtualization customers; that includes our data center as well as SD-WAN, and we don't report them separately, but 13,000 is, you know, almost double where it was a year ago. So, significant customer growth. We also announced we're deployed, together with our partner Dell, in 130,000 branches around the world. So by many metrics, I think VMware is the number-one vendor in this space. To your point, it is a crowded, noisy space; everybody's throwing their hat in the ring. >>We do it too. >>But I think the thing that is driving the adoption and the sales of our product is that when you put this thing in, it fundamentally changes the experience for the end user. There are not a lot of networking products that do that. Like, I meet customers who say this thing is magic: you plug it in and all the streaming just works; you know, Google Hangouts or WebEx, they just work, and they work seamlessly, all the time. There's something there that I think is still unique to the VMware product, and I think it's going to continue to drive sales in the future. >>So I think the other strong differentiation, when it comes to Dell Technologies, VMware and Dell EMC combined, is that we have this vision around the cloud: you know, edge, core, cloud, and this hybrid multi-cloud approach. And obviously SD-WAN plays a critical part as one of the stepping stones to, you know, creating the environment for this multi-cloud world. So, you know, fantastic market opportunity, huge growth; as Tom said, the market is probably doubling in size each year. I don't know what the TAM numbers are, I hate to quote them, but, you know, we really feel that now, having this product and this capability inside Dell EMC, again combining our two assets, it could be the next VxRail. We really do believe in SD-WAN, and it's going to be a gigantic market. >>And I think what's interesting about our partnership is that we can reach different segments of the market. At VMware, we tend to focus on the very high-end, large enterprise customers, technically very sophisticated; Dell can reach customers we don't even know, we don't even talk to. And the product is simple enough that it works in all segments: we win the very, very biggest, and we win these, you know, smaller accounts, where the simplicity of a one-click deployment really, really matters. >>Tom, one of the things that excited me a year ago at this show was the networking vision for a multi-cloud world; it reminded me of Nicira, right? You know, when we look at networking today, for most network admins, a lot of the network they need to manage, they don't touch the gear, they don't know where it lives, but they're still responsible to keep it up and running, and if something goes wrong, it's theirs. What's the update as to where we stand with that? >>Our customers are asking the question, right? So our mantra is infrastructure as code, and so no one should ever have to log in to a switch, no one should have to look into a queue, and, you know, no one should have to be, like, trying to move packets from here to there by hand; it's very, very difficult and not really feasible. And so, as networking becomes software, those general-purpose processors I talked about are giving us the ability to think about not just the configuration of the network but the operation of the network, in ways that were never before possible. So, for example, we announced at the show today, with our monitoring product vRealize Network Insight (we call it vRNI): vRNI gives us the ability to measure application response time from the data center all the way out to the edge. So, in a single pane of glass, we can show you, here's where it's broken, whether it's in the network, whether it's in the server, whether it's the database that's not responding. And we do this all without agents, right? So when the infrastructure gets smart enough to be able to provide that insight, it changes the way the customer operates, and that translates into real savings and real adoption. And that's what's behind all of this momentum, right? That 7,500 going to more than 13,000 customers: something has to be behind that, and I think it's the simplicity of automation. >>CLI has come up a couple of times here, and so that's kind of a dirty word, maybe, even, these days; it kind of depends on who you're talking with. I think VMware and Dell both spent a lot of time and effort educating the network-engineering market, and also educating the kind of data center, you know, the rest of the data center crew, about each other's worlds. Where, again, where are we at now? It sounds like, with Director and with the NSX whole stack, yes, the role of a network engineer is changing. But again, where are we in that evolution? >>I think, you know, we're early on, but it's moving quite rapidly. I think the traditional networking engineer and networking admin is going to need to evolve, you know, more toward this DevOps model: how do I bring applications, how do I manage the infrastructure more like a platform? I mean, Tom and I truly believe that the difference between compute and network infrastructure is really going to start to dissolve over time. And why shouldn't it? I mean, based upon what's happening with the commoditization and speeds of the CPU versus the NPUs coming from merchant silicon, it's really beginning to blur. So I think, you know, we're in the early stages. I mean, certainly from a Dell EMC perspective, we still, at times, you know, have those discussions and challenges with traditional networking people. But let's face it, they have a tough job: when something's not working, the network administrator usually gets blamed. And so I think it's a journey, and things such as the Dell Technologies Cloud, open networking, NSX, and now SD-WAN will continue to drive that. And I think we're going to see a rapid change in networking over the next 12, 18 to 24 months. >>I've talked to a number of customers who have said, you know, this journey that Tom was talking about, this is a challenge, because the skill set is different: my developers need to learn software. And so what we're working on with VMware is trying to make that software easier and easier to use, to actually approach, like, English language. So the latest versions of NSX have these very simple, declarative APIs where you can say, oh, server A can talk to server B but not server C; click, done, deploy. And now, in our partnership with Dell, we can take that policy and push it right down into the metal, right down into the silicon. And so, simplification and automation are the name of the game, but it is definitely a fundamental change in the skill set necessary to do networking. Networking is becoming more like software, as opposed to, you know, speeds and feeds and packet sniffers and the old traditional approaches. >>Tom, I want to give you the final word as to, you know, what people should take away from Dell and VMware in the networking space. >>Well, I think across Dell EMC and VMware there's a great amount of collaboration, whether it's the Dell Technologies Cloud with VMware, really taking the leadership, from that perspective, with this hybrid multi-cloud; but in the area of networking, you know, truly, five years ago, when we announced the disaggregation of hardware and software, I came into this to disrupt the networking business and to make networking very different tomorrow, and in the future, than it has been in the past for our customers, around ease of deployment, automation, and management. And I think that's a shared vision with Tom and his team and the rest of VMware. >>Tom Gillis, Tom Burns, thank you so much. We'll be back with more coverage here from VMworld 2019. For John Troyer, I'm Stu Miniman. As always, thanks for watching theCUBE.
SUMMARY :
brought to you by VMware and its ecosystem partners. and solutions at Dell EMC, and sitting to his right. Thanks for having us. it was not just SDN, but, you know, the changes in the environment. of the Dell EMC SD-WAN solutions powered by VMware, great, you know, best-in-class. And Tom, maybe bring us in. It is so easy to deploy, and SD-WAN is one of those enabling technologies that, you know, they will bring up as what allowed them to actually. And the interesting math is that you can take two or three low-cost links and deliver a better experience. I'm kind of interested in the balance between hardware and software, right? And that's really, you know. So we talked about SD-WAN. And so, you know, Dell and VMware. Who should be looking at taking a look at the solution now? So everyone has figured out, you know, maybe it's time to think differently. I hate to quote them, but, you know, we really feel that now, having this product. What is the update as to where we stand with that? And, you know, no one should have to be trying to move packets from here by hand. also educating the kind of data center, you know, the rest of the data center crew. I think, you know, we're early on, but it's moving quite rapidly. Networking is becoming more like software, as opposed to, you know, speeds and feeds and packet sniffers and the. Tom, I want to give you the final word as to, you know, what people should take away from Dell. But in the area of networking, you know, truly. Tom Gillis, Tom Burns, thank you so much.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tom | PERSON | 0.99+ |
Tom Burns | PERSON | 0.99+ |
Tom Gillis | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
Vienna | LOCATION | 0.99+ |
120 lines | QUANTITY | 0.99+ |
John Troyer | PERSON | 0.99+ |
Stew Minuteman | PERSON | 0.99+ |
10 years | QUANTITY | 0.99+ |
Veum Wear | ORGANIZATION | 0.99+ |
NSX | ORGANIZATION | 0.99+ |
Del Technologies | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
yesterday | DATE | 0.99+ |
less than two minutes | QUANTITY | 0.99+ |
Delhi | LOCATION | 0.99+ |
more than 13,000 customers | QUANTITY | 0.99+ |
a year ago | DATE | 0.99+ |
Walter Wall | PERSON | 0.99+ |
two assets | QUANTITY | 0.99+ |
Five years ago | DATE | 0.99+ |
One | QUANTITY | 0.99+ |
eight | QUANTITY | 0.99+ |
Times Square | LOCATION | 0.99+ |
today | DATE | 0.99+ |
Rendell | PERSON | 0.99+ |
second announcement | QUANTITY | 0.99+ |
VM World 2019 | EVENT | 0.99+ |
Veum | ORGANIZATION | 0.98+ |
three days | QUANTITY | 0.98+ |
M wear | ORGANIZATION | 0.98+ |
two announcements | QUANTITY | 0.98+ |
each year | QUANTITY | 0.98+ |
4 | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
7 500 | QUANTITY | 0.98+ |
VM wear | ORGANIZATION | 0.97+ |
one rates | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
two factors | QUANTITY | 0.97+ |
6 Leafs | QUANTITY | 0.97+ |
tomorrow | DATE | 0.97+ |
Vernon | PERSON | 0.96+ |
Avella Clubs | ORGANIZATION | 0.96+ |
Bernie | PERSON | 0.95+ |
day one | QUANTITY | 0.95+ |
Veum Way | ORGANIZATION | 0.95+ |
Veum World 2019 | EVENT | 0.95+ |
Mosconi North | ORGANIZATION | 0.95+ |
one | QUANTITY | 0.93+ |
Delhi | ORGANIZATION | 0.93+ |
Tom Sad | PERSON | 0.93+ |
English | OTHER | 0.93+ |
Dell EMC | ORGANIZATION | 0.93+ |
Del 130,000 | ORGANIZATION | 0.93+ |
Eso | ORGANIZATION | 0.92+ |
two spine switches | QUANTITY | 0.91+ |
del Technology Cloud | ORGANIZATION | 0.91+ |
double | QUANTITY | 0.89+ |
Fellow Cloud | ORGANIZATION | 0.89+ |
GM | ORGANIZATION | 0.88+ |
Cord | ORGANIZATION | 0.87+ |
office 3 65 | TITLE | 0.86+ |
Intel | ORGANIZATION | 0.85+ |
about three years ago | DATE | 0.85+ |
more than 13,000 network virtualization | QUANTITY | 0.85+ |
single pane | QUANTITY | 0.85+ |
Radhesh Balakrishnan, Red Hat | OpenStack Summit 2018
(upbeat music) >> Narrator: Live from Vancouver, Canada, it's theCUBE, covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >>Welcome back to theCUBE's coverage of OpenStack Summit 2018, here in Vancouver: three days of wall-to-wall coverage. I'm Stu Miniman with my cohost, John Troyer. Happy to welcome back to the program Radhesh Balakrishnan, who is the general manager of OpenStack with Red Hat. Radhesh, great to see you. It's been a week since John talked to you, and it's always good to have you on at the show. >>Great to be on. Good to be here talking about OpenStack at OpenStack Summit. >>Yeah, so look, OpenStack is in the title of your job. I believe, did we have a birthday cake and a party celebrating a certain milestone? >>That is indeed true; it's the fifth anniversary of the fact that we've had a product, Red Hat OpenStack Platform, on the market. And so we've been doing a little bit of a look back at how far we have come in the last five years, as well as looking ahead at, you know, how the next three to five years shape up as well. >>Yeah, Radhesh, I'm going to date myself: gosh, it was 18 years ago, I was working with Linux, and there were kernels all over the place and things like that. And then I worked for an enterprise storage company, and it was like, ugh, keeping up with kernel.org was a pain in the neck. Then there came out this thing called Red Hat Advanced Server, and it was like, oh wait, we can glom onto this, we can support this with our customers, and that eventually turned into RHEL, which, of course, kind of became the main standard for how to do Linux. I feel like we have a lot of similarities >>Absolutely, absolutely. >>in how we did... RHOSP, I believe, is the acronym, so. >>That's exactly right, and we like to have long names. >>Which are very descriptive. >>But Red Hat OpenStack Platform fundamentally brings, to your point, the same value proposition that RHEL brought to Linux, to OpenStack, with the twist that it's not just curated OpenStack, but a co-engineered solution of Linux and KVM and OpenStack. And along the way we learned that, look, it's not just OpenStack as the infrastructure solution: it's done in conjunction with a software-defined storage solution, or in conjunction with software-defined networking, or, fast-forward all the way to now, it's being done in conjunction with cloud-native applications running on top of it, right? But regardless, in five years we've been able to grow to address these different demands being placed at the infrastructure level, and at the same time evolve to address new use cases as well; telco is an example of that. >>Radhesh, let's spend a couple of minutes, though, on the OpenStack Platform itself from Red Hat. Some of the things, guys, that you were bringing to market: I know we talked about, here at the show, fast-forward upgrades, for instance, which you were just introducing, and maybe some other things in the Queens release that you are bringing forward and have engineered. >>Yeah, thanks for that question; it's very topical, in the sense that yesterday we launched OSP 13, which is the latest and greatest version, based on the Queens release. If you look at the innovation packed into it, it fundamentally falls into three buckets. One is the thread that you talked about: anybody who is standing on OSP 10, which was the prior long-lifecycle release, moving over to 13; how you get over there in a graceful manner is the first area that we have addressed. The second area is around security: how do you make sure that OpenStack-based clouds are secure by default, from the day you roll them out all the way until you retire them, right? I don't know if there's going to be a retirement, but that's the intent of all the security enablements that we have in the product as well. And the third one is how we make sure that containers and OpenStack come together in a nice manner. >>Yeah, the container piece is something else, so, a lot of effort here at the show. They announced Kata Containers, which is trying to give the security of a VM in a lightweight VM. How does Red Hat look at Kata Containers? I know Red Hat, you know, Linux containers, you know, a very strong position; fill us in on that. >>Yeah, maybe to pull back a little bit and look at the larger picture: there is the notion of infrastructure, or the open infrastructure, that you need, and OpenStack is a good starting point for that. And then you overlay on top of that an application deployment, management, configuration, lifecycle-management solution; that's the container platform called OpenShift, right. These are the two centers of gravity for the stack. Now, aspects such as Kata Containers, or KubeVirt, which is, again, a similar concept addressing how you use virtualization in addition to containers to bring some of the value around security, et cetera, right: we are continuing to engage in all these upstream projects, but we'll be careful and methodical in bringing those technologies into our products as we go along. >>Okay, how about Edge? That's the other kind of major topic we're having here. I know I've interviewed some Red Hat customers looking at NFV solutions, some of the big telcos, you know, specifically, that use various pieces. What do you hear from your customers? And help us kind of draw that line between NFV and the Edge. >>Yeah, so 'Edge has become the center' is kind of the new joke, in the sense that, from an NFV perspective, customers have already effectively addressed the core data centers and the challenges there; now it's about how you scale that and deploy it on a massive scale, right? That's a good problem to have. Now the goodness of virtualization can be brought all the way down to the radio edge, so that a programmable network becomes a reality that a telco or a carrier can get into. So in that context, Edge becomes a series of use cases; you know, it's not just one destination. Another way to say it is, there is Edge as an adjective and there is Edge as a noun. Edge as the adjective covers the set of technologies that are enabling the edge: edge networking, right, edge management, for example. And then there is Edge as a destination, where you have a series of edge locations, starting from the core data center going all the way to the radio. Now, the technology answers for all of these are just being figured out right now. So you could say, put crudely, KVM, OpenStack, containers, and Ansible will all be elements that come into the picture when it comes to a solution for all these footprints. >>Nice. Radhesh, maybe let's switch over to talk about the summit here, and the people here: it's filled with people being productive with OpenStack, right? Either looking at it, upgrading it, or inheriting it. We talked to people in a bunch of different scenarios. Red Hat has a huge installed base, and you are good at helping, supporting, uplifting, and upskilling a set of operators who started with Linux and now have to be responsible for an entire cloud infrastructure. Plus, now, at this conference, we've been talking about containers, we've been talking about OpenDev, right: that's again broadening the scope of what an operator might have to deal with. How does Red Hat look at that? How are you and your team helping upskill and enhance the role of the operator? >>Yeah, so I think it comes down to how we make sure that we are understanding the journey that the operator himself or herself is taking from a career perspective, right: the skill set evolving from Linux and core automation-related skills, to being able to understand what it means to live with a cloud implementation on a day-to-day basis, to what it means to live with network function virtualization as the way in which new services are going to be deployed. So, our course curriculum has evolved to be able to address all these needs today. That's one dimension. The other dimension is how we make sure that the product itself is so easy that the journey gets to a point where the infrastructure is invisible and the focus is on the application platform on top. So I think we have multiple areas of focus to get to the point where it's so relevant that it's invisible, if that paradox makes sense. That's what we're trying to make happen with OpenStack. >>Radhesh, Red Hat has a very large presence at the show here; we were noting in the keynote that the underlying infrastructure didn't get a lot of discussion, because it is more mature, and therefore we can talk about everything like vGPUs and containers and everything like that. But Red Hat has a lot in the portfolio that helps in some of those underlying pieces. So maybe you can give us some of the highlights there. >>Absolutely. So we aren't looking at OpenStack as the be-all, end-all destination for customers, but rather as an essential ingredient in the journey to a hybrid cloud. So when you have that lens, it becomes natural to you that a portfolio of our offerings, which are either first-party or in conjunction with our partners (we have over 400 partners with whom we have joint solutions as well), leads you naturally to take a holistic view and then say, how do you optimize the experience of Ceph plus OpenStack, for example. So, we were talking about Edge recently, right: in the context of Edge we realize that there is a particular use case for hyperconverged infrastructure, whereby you need to co-locate compute and storage in a way that the footprint is small and easy to manage, plus you want to have one lifecycle for both OpenStack and Ceph, right. So to address that we announced, right at the summit, hyperconverged infrastructure for cloud, as an offering that is co-engineered between the Ceph team, our storage team, and the OpenStack team. Right, that's just an example of how, by bringing in the rest of the portfolio, we're able to address needs being expressed by our customers today. Or, if you look forward in terms of use cases, one thing that we are hearing from all our large customers, such as the Amadeuses of the world, is: make the experience of OpenShift on OpenStack easy to deploy and manage, and reduce the penalty of running containers on VMs. Because we understand the benefits of security and all of that, but we want to be able to get that without any penalty of using a virtual infrastructure. So that's why we're heavily focused on OpenShift on OpenStack as the form factor for delivering that, while continuing to work on things such as Kata Containers, as well as, you know, Kuryr, as the technology evolves, to make Kubernetes much richer, as well as to make the infrastructure management at the OpenStack level richer. >>You brought up an interesting point: we spoke a little bit yesterday with John Allessio and Margaret Dawson about really that kind of multi-cloud world out there, because pieces like Kubernetes and Ansible aren't just in the data center with this one stack; they span across multiple environments, and when we talk to customers, they do cloud, and cloud is multiple things in multiple places, changing all the time. So I'd love to get your viewpoint on what you hear from customers and how Red Hat's helping them across all those environments. >>Absolutely. So the key differentiation we see in being able to provide to our customers is that, unlike some of the other providers out there, who are stitching you to a particular private cloud and a particular public cloud, and then saying, hey, this is sort of the equivalent of the AOL walled gardens, if you will, right, that's being created for a particular private and public cloud; what we're saying is fundamentally three things. First is, the foundation of Linux skills from RHEL that you have is going to be what you can build on to innovate for today and tomorrow; that's number one. Secondly, you can invest in infrastructure that is 100% open, using OpenStack, so that you can use commodity hardware, bring in multiple use cases which are bleeding-edge, such as data lakes, big data, Apache Spark, or going all the way to cloud-native application development on top of OpenStack. And then, last but not least, when you are embarking on a multi-cloud journey, it is important that you're not tied to the innovation speed of one particular public cloud provider, or even a private cloud provider, for that matter; so being able to get to a container platform, which is OpenShift, that can run pretty much everywhere, either on-prem or on a public cloud, and give you that single pane of consistency for your application, which is where business and IT alignment is the focus right now, then I think you've got the best of all the worlds. You know, freedom from vendor lock-in, and a future-proof infrastructure and application platform that can take you to where you need to go, right. So we're pretty excited to be able to deliver on that consistently, as of today, as well as in the coming years. >>All right, I just want to give you the final word: for people out there, you know, often they form their opinion based on when they first heard of something. OpenStack's been around for a number of years, five years now for your platform. Give us the takeaway for 2018 here from OpenStack Summit as to how they should be thinking about OpenStack in that larger picture. >>The key takeaway is that OpenStack is rock-solid, something you can bring into your environment, not just to power your virtual-machine infrastructure but also bare-metal infrastructure on which you can bring in containers as well. So if you're thinking about an infrastructure fabric, either to power your telco network or to power your private cloud in its entirety, OpenStack is the only place that you need to be looking at, and our OpenStack Platform, from end to end, delivers that value proposition. Now, the second aspect to think about is that OpenStack is a step in the journey to a hybrid future destination that you can get to. Red Hat not only has the set of surrounding products and technologies to round out the solution, but also has the largest partner ecosystem to offer you choice. So what's your excuse for not getting to a hybrid cloud today, if not tomorrow? >>Well, Radhesh Balakrishnan, thank you for all the updates; we appreciate catching up with you once again. For John Troyer, I'm Stu Miniman; we're getting near the end of three days of wall-to-wall coverage here in Vancouver. Thank you so much for watching theCUBE. (upbeat music)
SUMMARY :
Brought to you by Red Hat, the OpenStack Foundation, and always good to have you on at the show. Great to be on. Yeah, so look, OpenStack is in the title of your job. how the next three to five years shape up as well. the main standard for how to do Linux. RHOSP, I believe, is the acronym, so. and at the same time evolve to address. in the Queens release that you are. all the way until you retire it, right? Yeah, the container piece is something else that. or the open infrastructure that you need. and the challenges; now it's about how you scale that. That's again broadening the scope. that the journey gets to a point where. at the show here; we were noting in the keynote. that the footprint is small and easy to manage. Kubernetes and Ansible aren't just in the data center. of the AOL walled gardens, if you will, right. All right, I just want to give you the final word. OpenStack is the only place that you need to be looking at. getting near the end of three days of wall-to-wall coverage
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Radhesh Balakrishnan | PERSON | 0.99+ |
John Troyer | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Vancouver | LOCATION | 0.99+ |
Radhesh | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
Margaret Dawson | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Stu Minimam | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
OpenStack Foundation | ORGANIZATION | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
RHEL | TITLE | 0.99+ |
John Allessio | PERSON | 0.99+ |
First | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
AOL | ORGANIZATION | 0.99+ |
third one | QUANTITY | 0.99+ |
second aspect | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
fifth anniversary | QUANTITY | 0.99+ |
Three days | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
second area | QUANTITY | 0.99+ |
over 400 partners | QUANTITY | 0.99+ |
OpenShift | TITLE | 0.98+ |
Edge | TITLE | 0.98+ |
yesterday | DATE | 0.98+ |
Vancouver, Canada | LOCATION | 0.98+ |
first | QUANTITY | 0.98+ |
OpenStack | TITLE | 0.98+ |
OpenStack Summit 2018 | EVENT | 0.98+ |
OpenStack Summit | EVENT | 0.98+ |
North America | LOCATION | 0.97+ |
first area | QUANTITY | 0.97+ |
OSP 10 | TITLE | 0.97+ |
18 years ago | DATE | 0.97+ |
single | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
Secondly | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
Amadeus | ORGANIZATION | 0.96+ |
Ansible | ORGANIZATION | 0.95+ |
one | QUANTITY | 0.95+ |
OpenStack | ORGANIZATION | 0.95+ |
today | DATE | 0.94+ |
three days | QUANTITY | 0.94+ |
13 | TITLE | 0.94+ |
OSP 13 | TITLE | 0.94+ |
Robert Walsh, ZeniMax | PentahoWorld 2017
>> Announcer: Live from Orlando, Florida it's theCUBE covering Pentaho World 2017. Brought to you by Hitachi Vantara. (upbeat techno music) (coughs) >> Welcome to Day Two of theCUBE's live coverage of Pentaho World, brought to you by Hitachi Vantara. I'm your host Rebecca Knight along with my co-host Dave Vellante. We're joined by Robert Walsh. He is the Technical Director Enterprise Business Intelligence at ZeniMax. Thanks so much for coming on the show. >> Thank you, good morning. >> Good to see ya. >> I should say congratulations is in order (laughs) because you're company, ZeniMax, has been awarded the Pentaho Excellence Award for the Big Data category. I want to talk about the award, but first tell us a little bit about ZeniMax. >> Sure, so the company itself, so most people know us by the games versus the company corporate name. We make a lot of games. We're the third biggest company for gaming in America. And we make a lot of games such as Quake, Fallout, Skyrim, Doom. We have game launching this week called Wolfenstein. And so, most people know us by the games versus the corporate entity which is ZeniMax Media. >> Okay, okay. And as you said, you're the third largest gaming company in the country. So, tell us what you do there. >> So, myself and my team, we are primarily responsible for the ingestion and the evaluation of all the data from the organization. That includes really two main buckets. So, very simplistically we have the business world. So, the traditional money, users, then the graphics, people, sales. And on the other side we have the game. That's where a lot of people see the fun in what we do, such as what people are doing in the game, where in the game they're doing it, and why they're doing it. So, get a lot of data on gameplay behavior based on our playerbase. And we try and fuse those two together for the single viewer or customer. >> And that data comes from is it the console? Does it come from the ... What's the data flow? >> Yeah, so we actually support many different platforms. So, we have games on the console. So, Microsoft, Sony, PlayStation, Xbox, as well as the PC platform. Mac's for example, Android, and iOS. We support all platforms. So, the big challenge that we have is trying to unify that ingestion of data across all these different platforms in a unified way to facilitate downstream the reporting that we do as a company. >> Okay, so who ... When it says you're playing the game on a Microsoft console, whose data is that? Is it the user's data? Is it Microsoft's data? Is it ZeniMax's data? >> I see. So, many games that we actually release have a service act component. Most of our games are actually an online world. So, if you disconnect today people are still playing in that world. It never ends. So, in that situation, we have all the servers that people connect to from their desktop, from their console. Not all but most data we generate for the game comes from the servers that people connect to. We own those. >> Dave: Oh, okay. >> Which simplifies greatly getting that data from the people. >> Dave: So, it's your data? >> Exactly. >> What is the data telling you these days? >> Oh, wow, depends on the game. I think people realize what people do in games, what games have become. So, we have one game right now called Elder Scrolls Online, and this year we released the ability to buy in-game homes. And you can buy furniture for your in-game homes. So, you can furnish them. People can come and visit. And you can buy items, and weapons, and pets, and skins. 
>> That is fascinating, that Americans and Europeans are buying different furniture for their online homes. So, just give us some examples of the differences that you're seeing between these two groups. >> So, it's not just the homes, it applies to everything that they purchase as well. It's quite interesting. So, when it comes to the Americans versus the Europeans, for example, what we find is that Europeans prefer much more cosmetic, passive experiences. Whereas the Americans much prefer things that stand out, things that are ... I'm trying to avoid stereotypes right now. >> Right exactly. >> It is what it is. >> Americans like ostentatious stuff. >> Robert: Exactly. >> We get it. >> Europeans are a bit more passive in that regard. And so, we do see that. >> Rebecca: Understated, maybe. >> Thank you, that's a much better way of putting it. But games often have to be tweaked based on the environment. A different way of looking at it: a lot of companies in Korea, in Asia, take these games from the West, and they will have to tweak the game completely before it releases in those environments. Because players will behave differently and expect different things. And these games have become global. We have people playing all over the world, all at the same time. So, how do you facilitate that? How do you support these different users, with different needs, in this one environment? Again, that's why BI has grown substantially in the gaming industry in the past five, ten years. >> Can you talk about the evolution of how you've been able to interact and essentially affect the user behavior, or respond to that behavior. You mentioned BI. So, you know, go back ten years, it was very reactive. Not a lot of real time stuff going on. Are you now in a position to affect the behavior in real time, in a positive way? >> We're very close to that. We're not quite there yet. So yes, that's a very good point. So, five, ten years ago, most games were traditional boxes. You make a game, you get a box to Walmart or Gamestop, and then you're finished. The relationship with the customer ends. Now, we have this concept that's used often, games as a service. We provide an online environment, a service around a game, and people will play those games for weeks, months, if not years. And so, the shift as well from a BI tech standpoint is one item where we've been able to streamline the ingest process. So, we're not real time, but we can be hourly. Which is pretty responsive. But also, the fact that these games have become these online environments has enabled us to get this information. Five years ago, when the game was in a box, on the shelf, there was no connective tissue between us and them to interact and facilitate. With the games now being online, we can leverage BI. We can be more real time. We can respond quicker. But it's also due to the fact that now games themselves have changed to facilitate that interaction. >> Can you, Robert, paint a picture of the data pipeline? We started there with sort of the different devices. And you're bringing those in as sort of a blender.
But take us through the data pipeline and how you're ultimately embedding or operationalizing those analytics. >> Sure. So, there's the game data and the business information, and the game data is most likely 90, 95% of our total data footprint. We generate a lot more game information than we do business information. It's just due to how much we can track, and we do. And so, a lot of these games will generate various game events, game logs, that we can ingest into a single data lake. We use Amazon S3 for that. But it's not just the game data. So, we have databases for financial information, accounts, users, and so we will ingest the game events as well as the databases into one single location. At that point, however, it's still very raw. It's still very basic. We enable the analysts to actually interact with that. And they can go in there and get their feet wet, but it's still very raw. The next step is really taking that raw information that is disjointed and separated, and unifying it into a single model that they can use in a much more performant way. In that first step, the analysts have the burden of a lot of the ETL work, to manipulate the data, to transform it, to make it useful. Which they can do, but they should be doing the analysis, not the ingesting of the data. And so, the progression from there into our warehouse is the next step of that pipeline. And so in there, we create these models and structures. And they're often born out of what the analysts are seeing and using in that initial data lake stage. So, if they're repeating analysis, if they're doing this on a regular basis, and the company wants something that's automated and auditable and productionized, then that's a great use case for promotion into our warehouse. You've got this initial staging layer. We have a warehouse where it's structured information. And we allow the analysts into both of those environments. So, they can pick their poison, in some respects. Structured data over here, raw and vast over here, based on their use case. >> And what are the roles ... Just one more follow up, >> Yeah. >> if I may? Who are the people that are actually doing this work? Building the models, cleaning the data, and storing the data. You've got data scientists. You've got quality engineers. You've got data engineers. You've got application developers. Can you describe the collaboration between those roles? >> Sure. Yeah, so as a BI organization we have two main groups. We have our engineering team. That's the one I drive. Then we have reporting, and that's a team. Now, we are really one single unit. We work as a team, but we separate those two functions. And so, in my organization we have two main groups. We have our big data team, which is doing that initial ingestion. Now, we ingest billions of rows of data a day. Terabytes of data a day. And so, we have a team just dedicated to ingestion, standardization, and exposing that first stage. Then we have our second team, who are the warehouse engineers, who are actually here today somewhere. And they're the ones who are doing the modeling, the structuring. I mean the data modeling, making the data usable and promoting it into the warehouse. On the reporting team, basically we are there to support them. We provide the tool sets to engage and let them do their work. And so, in that team they have a split of people who do report development, visualization, and data science. A lot of the individuals there will do all three, two of the three, or one of the three. But they also have segmentation across the day-to-day reporting, which has to function, as well as the deeper analysis for data science or predictive analysis.
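To make the two-stage pipeline Walsh outlines concrete, raw game events landed in an S3 data lake and then promoted into a modeled warehouse, here is a minimal Python sketch. The bucket, table, cluster, and IAM role names are hypothetical placeholders, and in the setup described, the transform-and-promote step would be orchestrated by Pentaho rather than hand-written code like this:

```python
import json
import boto3          # AWS SDK; pip install boto3
import psycopg2       # Postgres/Redshift driver; pip install psycopg2-binary

s3 = boto3.client("s3")

def land_raw_events(events: list, batch_id: str) -> str:
    """Stage 1: drop a newline-delimited JSON batch into the raw data lake."""
    key = f"raw/game-events/{batch_id}.json"
    body = "\n".join(json.dumps(e) for e in events).encode("utf-8")
    s3.put_object(Bucket="example-game-data-lake", Key=key, Body=body)
    return key

def promote_to_warehouse() -> None:
    """Stage 2: bulk-load the staged files into a modeled warehouse table."""
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="loader", password="...")
    with conn, conn.cursor() as cur:
        # Redshift's COPY reads the staged JSON directly out of S3.
        cur.execute("""
            COPY staging.game_events
            FROM 's3://example-game-data-lake/raw/game-events/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
            FORMAT AS JSON 'auto';
        """)
```

The point of the split is the one Walsh makes: analysts can query the raw staging layer immediately, while only repeated, production-worthy analysis earns promotion into the structured warehouse.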
>> And that data warehouse is on-prem? Is it in the cloud? >> Good question. Everything that I talked about is all in the cloud. About a year and a half, two years ago, we made the leap into the cloud. We drank the Kool-Aid. As of Q2 next year at the very latest, we'll be 100% cloud. >> And the database infrastructure is Amazon? >> Correct. We use Amazon for all the BI platforms. >> Redshift, or is it ... >> Robert: Yes. >> Yeah, okay. >> That's where actually I want to go, because you were talking about the architecture. So, I know you've mentioned Amazon Redshift. Cloudera is another one of your solution providers. And of course, we're here at Pentaho World: Pentaho. You've described Pentaho as the glue. Can you expand on that a little bit? >> Absolutely. So, I've been talking about these two environments, these two worlds, data lake to data warehouse. They're both different in how they're developed, but it's really a single pipeline, as you said. And so, how do we get data from this raw form into this modeled structure? And that's where Pentaho comes into play. That's the glue. It's the glue between these two environments; while they're conceptually very different, they serve a singular purpose. But we need a way to unify that pipeline. And so, we use Pentaho very heavily to take this raw information, to transform it, ingest it, and model it into Redshift. And we can automate, we can schedule, we can provide error handling. And so it gives us the framework. And it's self-documenting, to be able to track and understand, from A to B, from raw to structured, how we do that. And again, Pentaho is allowing us to make that transition. >> Pentaho 8.0 just came out yesterday. >> Hmm, it did? >> What are you most excited about there? Do you see any changes? We keep hearing a lot about the ability to scale here at Pentaho World. >> Exactly. So, there are three things that really appeal to me on 8.0, things that we were missing that they've actually filled in with this release. So firstly, on the streaming component, the real-time piece we were missing from earlier: we're looking at using Kafka and queuing for a lot of our ingestion purposes. And Pentaho, in releasing this new version, added the mechanism to connect to that environment. That was good timing. We need that. Also, to get into more technical detail, for the logs that we ingest, the data that we handle, we use Avro and Parquet when we can. We use JSON, Avro, and Parquet. Pentaho can handle JSON today. Avro and Parquet are coming in 8.0. And then lastly, to the point you made as well, it's where they're going with their system. They want to go into streaming, into all this information. It's very large, and it has to go big. And so, they're adding, again, the ability to add worker nodes and scale their environment horizontally. And that's really a requirement before these other things can come into play. So, those are the things we're looking for. Our data lake can scale on demand. Our Redshift environment can scale on demand. Pentaho has not been able to, but with this release they should be able to. And that was something that we've been hoping for for quite some time.
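The streaming piece Walsh is waiting on, Kafka ingestion plus native Avro and Parquet handling, can be sketched in a few lines to show the pattern Pentaho 8.0 is adding as a managed step. This is a rough stand-in under stated assumptions, not the product's implementation; the topic, broker, batch size, and file names are hypothetical, and it uses the kafka-python and pyarrow libraries:

```python
import json
from kafka import KafkaConsumer   # pip install kafka-python
import pyarrow as pa
import pyarrow.parquet as pq

consumer = KafkaConsumer(
    "game-events",                                # hypothetical topic name
    bootstrap_servers=["broker-1.example:9092"],  # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

batch, BATCH_SIZE = [], 10_000
for message in consumer:
    batch.append(message.value)
    if len(batch) >= BATCH_SIZE:
        # Columnar rewrite: infer a schema from the JSON dicts and emit a
        # Parquet file that both the data lake and the warehouse can read.
        table = pa.Table.from_pylist(batch)
        pq.write_table(table, f"game-events-{message.offset}.parquet")
        batch.clear()
```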
>> I wonder if I can get your opinion on something. A little futures-oriented. You have a choice as an organization. You could just roll your own opensource, best of breed opensource tools, and slog through that. And if you're an internet giant or a huge bank, you can do that. >> Robert: Right. >> Or you can take tooling like Pentaho, which is an end-to-end data pipeline, and that dramatically simplifies things. A lot of the cloud guys, Amazon, Microsoft, I guess to a certain extent Google, they're sort of picking off pieces of the value chain. And they're trying to come up with an as-a-service, fully-integrated pipeline. Maybe not best of breed, but convenient. How do you see that shaking out generally? And then specifically, is that a challenge for Pentaho from your standpoint? >> So, you're right. That's why they're trying to fill these gaps in their environment. Compared to what Pentaho does and what they're offering, there's no comparison right now. They're not there yet. They're a long way away. >> Dave: You're saying the cloud guys are not there. >> No way. >> Pentaho is just so much more functional. >> Robert: They're not close. >> Okay. >> So, that's the first step. However, what I've been finding in the cloud is that there are lots of benefits, from the ease of deployment, the scaling, needing a lot less dev ops support and DBA support. But the tools that they offer right now feel pretty bare bones. They're very generic. They have a place, but they're not designed for a singular purpose. Redshift is the only real piece of the pipeline that is a true Amazon product, but that came from a company called ParAccel ten years ago. They licensed that from a separate company. >> Dave: What a deal that was for Amazon! (Rebecca and Dave laugh) >> Exactly. And so, we like it because of the functionality ParAccel put in many years ago. Now, they've developed upon that and made it easier to deploy. But that's the core reason behind it. Now, for our big data environment, we use Databricks. Databricks is a cloud solution. They deploy into Amazon. And so, what I've been finding more and more is that companies that are specialized in an application or function, who have their product support cloud deployment, are to me the sweet middle ground. So, Pentaho is also talking about, next year, looking at Amazon deployment solutioning for their tool set. So, to me it's not really about going all Amazon. Oh, let's use all Amazon products, they're cheap and cheerful, we can make it work, we can hire ten engineers and hack out a solution. I think what's more applicable is people like Pentaho, whoever in the industry has the expertise and is specialized in that function, who can allow their products to be deployed in that environment and leverage the Amazon advantages: the Elastic Compute, the storage model, the deployment methodology. That is where I see the sweet spot. So, if Pentaho can get to that point, for me that's much more appealing than looking at Amazon trying to build out some things to replace Pentaho x years down the line. >> So, their challenge, if I can summarize: they've got to stay functionally ahead, which they are, way ahead, now. They've got to maintain that lead. They have to curate best of breed, like Spark, for example, from Databricks. >> Right. >> Whatever's next, and curate that in a way that is easy to integrate. And then leverage the cloud's infrastructure. >> Right. Over the years, these companies have been looking at ways to deploy into a data center easily and efficiently. Now, the cloud is the next option. How do they support and implement into the cloud in a way where we can leverage their tool set, but also in a way where we can leverage the cloud ecosystem? And that's the gap.
And I think that's what we look for in companies today. And Pentaho is moving towards that. >> And so, that's a lot of good advice for Pentaho? >> I think so. I hope so. Yeah. If they do that, we'll be happy. So, we'll definitely take that. >> Is it Pen-ta-ho or Pent-a-ho? >> You've been saying Pent-a-ho with your British accent! But it is Pen-ta-ho. (laughter) Thank you. >> Dave: Cheap and cheerful, I love it. >> Rebecca: I know -- >> Bless your cotton socks! >> Yes. >> I've had it-- >> Dave: Gordon Bennett! >> Rebecca: Man, okay. Well, thank you so much, Robert. It's been a lot of fun talking to you. >> You're very welcome. >> We will have more from Pen-ta-ho World (laughter) brought to you by Hitachi Vantara just after this. (upbeat techno music)