Usman Nasir, Verizon | AIOps Virtual Forum 2020
>> From around the globe, it's theCUBE, with digital coverage of the AIOps Virtual Forum, brought to you by Broadcom.

Welcome back to the Broadcom AIOps Virtual Forum. Lisa Martin here, talking with Usman Nasir, Global Product Management at Verizon. Usman, welcome back.

>> Hello, good to see you.

>> So, 2020: the year that needs no explanation, the year of massive challenges. I wanted to get your take on the challenges that organizations are facing this year, as the demand to deliver digital products and services has never been higher.

>> Yeah, I think this is something that is close to all of our hearts, right? It's something that's impacted the whole world equally, and regardless of which industry you're in, you have been impacted by this in one form or another. And the ICT industry, the information and communication technology industry, with Verizon being a really massive player in that whole arena, has been struck with this massive transformation that we have talked about for a long time. We have talked about remote surgery capabilities, whereby you've got patients in Kenya being treated by experts sitting in London or New York, and also this whole consciousness about our carbon footprint and being environmentally conscious. This pandemic has taught us all of that and brought it to the forefront of organizational priorities. The demand, I think, is the natural consequence of everybody sitting at home, and the only thing that keeps things going is data communication. That is what sits at the heart of all of this. Just imagine if we are to realize any of the targets that world leadership is setting: "we have to be carbon neutral by year X" as a country, as a geography, et cetera. All of these things require you to have this remote working capability, this remote interaction, not just between humans but machine-to-machine interaction as well. And a unique value chain is now being created, where you've got people communicating with other people or with machines, but the communication is much more than what we used to call real time for voice and video. We're talking low-latency, microsecond decision-making that can either cut down somebody's trees or actually go in and remove a tumor, that kind of thing. So that has become a reality, and everybody's asking for it. Remote learning has become an extremely massive requirement, where we've had to enable these virtual classrooms while ensuring the type of connectivity and the type of privacy that is just so, so critical. You can't just have everybody go on the internet and access the data source; you have to be sure about the integrity and security of that data. So with all of these things, no, we have not been caught off guard. We were pretty forward-looking in our plans and our evolution, but this has fast-tracked a journey that we would probably have taken over three years; it has brought that down to two quarters in which we had to execute.

>> Right, a massive acceleration. So you've articulated the challenges really well, and a lot of the realities that many of our viewers are facing.
Let's talk now about motivations: AIOps as a tool, as a catalyst, for helping organizations overcome those challenges.

>> So, with all that I said, you can imagine it requires microsecond decision-making, and which human being on this planet can do microsecond decision-making on complex network infrastructure that is impacting end-user applications with multitudes of real-life effects? I used the example of a remote surgeon: just imagine, if you lose the signal or the quality of that communication for even a microsecond, it could be the difference between killing somebody and saving somebody's life. That's how critical this is. We talk about autonomous vehicles, we talk about the transition to electric vehicles, smart motorways, et cetera, all in a federated environment. How is all of that going to work? You have so many different components coming in. You don't just have network and security anymore: you have software-defined networking becoming part of this, you have mobile edge computing that is needed for the technologies 5G enables, we're talking augmented reality, we're talking virtual reality. All of these things require resources, and while we are carbon conscious, we don't just want to build a billion data centers on the planet, right? We have to make sure that resources are given on demand, and the best way resources can be given on demand, and most efficiently, is if decisions are being made in microseconds and the resources are distributed accordingly. If we're relying on people sipping their coffee, talking to somebody else, or being away on holiday, I don't think we're going to be able to handle the world that we have already stepped into. Verizon's 5G has already started businesses on a transformational journey where they're talking about end-user experience personalization. You're going to have events where people go and have three-dimensional experiences that are purely customized for them. How does all of that happen without this intelligence being there in the network, across all of these multiple layers of the stack? It doesn't just need to be intuitive about "this is my private IP traffic, this is public traffic" or "this is an application to prioritize over another"; it has to be intuitive to the criticality and the context of those transactions. Again, that surgeon's surgery is much more important than somebody sitting and playing a video game.

>> Yeah, I'm glad you brought that up; that's excellent. Let's go into some specific use cases and dig deeper into some of the examples that you gave. What do you think are the lowest-hanging fruit for organizations, kind of pan-industry, to go after here?

>> Excellent. I think there are different ways to look at the lowest-hanging fruit. For somebody like Verizon, which is a managed services provider with a very comprehensive managed services portfolio, the fruit obviously hangs much lower than for some of our customers who want to go on that journey themselves. For them, just going and trying to harness the power of AIOps, the fruit might be a bit higher hanging. But for somebody like us, the immediate win would be to reduce the number of alarms that are being generated by these overlay services. You've got your basic network.
Then you've got your software-defined networking on top of that, you have your hybrid clouds, you have your edge computing coming on top of that. So all of this means that if there is an outage on one device on the network, and let me make this very real for everybody, that one device does not stop all of those multiple applications and monitoring tools from raising havoc and generating thousands of alarms. And if people are attending to those thousands of alarms, it's like having a police force where there's a burglary in one bank and the alarm goes off in fifty. How are you going to make the best use of your police force? Are you going to investigate fifty banks, or do you want to investigate the one where the problem is? It's as real as that, and I think that's the first win, where people can save so much of the cost that is currently being wasted on resources running around trying to figure things out. Immediately tied to this is network and security. Network and security is something which has eluded even the most amazing engineering organizations: we have had network experts as separate people and security experts as separate people, looking for different things. But there are security events that can impact the performance of the network and the end-user application, which could be falsely attributed to the network. And if you've got multiple parties, multiple stakeholders, you can imagine the blame game that goes on: pointing fingers, naming names, not taking responsibility. AIOps is the only way to bring it all together, to say, okay, if an event has happened, what takes priority? What is its correlation to the other downstream systems, devices, components, and end-user applications? And then subsequently isolating it to the right cause, where you can most effectively resolve the problem. Thirdly, I would say, on-demand virtualized resources. Virtualized resources are the heart and soul of this shared infrastructure: you can spin them up and down, so you can automate the allocation of these resources based on customers' consumption, their peaks, their troughs. All of that comes in: you see that, typically on a Wednesday, their traffic goes up significantly from this particular application going to this particular data center, and you could have this automated with AIOps, which just provides those resources on demand. That allows us to have a much better commercial engagement with customers and a much better service assurance model. And then one more thing on top of that, which is very critical, is, as I was saying, giving the network the intelligence to have context about the criticality of a transaction. That doesn't exist today, and you can't have it without this multi-layer data: you need the multiple systems that are monitoring and controlling different aspects of your overall end-user application value chain to be communicating with each other. That's the only way to achieve that goal, and that only happens with AIOps. It's not possible manually; you can't parse that complexity by hand.

>> So, Usman, you clearly articulated some obvious low-hanging-fruit use cases that organizations can go after. Let's talk now about some of the considerations. You talked about the importance of the network in AIOps.
The approach, I assume, needs to be modular, and support needs to be heterogeneous. Talk to us about some of the key considerations that you would recommend.

>> Absolutely. So again, it starts with the network, because if the network sitting in the middle of all of this is not working, then things can't communicate with each other, the cloud doesn't work, nothing works; the pandemic has hit that point the hardest. But then subsequently, when you talk about machine-to-machine communication, or IoT, which is the biggest transformation that every company is prioritizing now to drive cost efficiencies and experience enhancements, the integrity of the data becomes paramount, the security and integrity of that data. How do you maintain the integrity of your data beyond just the secure network components that it traverses? That's where you get into the whole arena of blockchain technology, where you have digital signatures or barcodes, so that an intelligent system is automatically able to validate and verify the integrity of the data and the commands that are being executed at those end units, the terminals, the IoT machines. That is paramount, and if anybody is not keeping that in their equation, then any AIOps system that is there must also maintain the integrity of the commands and the code that sit on those machines. Second, you have your network, and you need an AIOps platform which is able to rationalize all of that network information and couple it with the integrity piece, because management ultimately needs a cohesive view of the analytics; they need to know where the problems are. Say there's a problem with the integrity of the commands being executed by a machine: that's a much bigger problem than not being able to communicate with that machine in the first place, because you'd rather not talk to the machine at all, or have it do nothing, than have it start doing the wrong thing. So I think that's where it's just very intuitive, very natural. Subsequently, and let me use that autonomous-cars use case again, I think in the next five years we're going to see much of the motorways and related infrastructure set up for autonomy, because it's much more efficient, much more safe, et cetera. Within that equation, you're going to have systems which are specialists in looking at aspects and transactions related to their own subsystems. For example, an autonomous moving vehicle's brakes are much more important than its wipers. So for this kind of intelligence there will be multiple systems, and no one person has to go and run all of these systems. I think these systems should be open enough that you are able to integrate them: if something is sitting in the cloud, you should be able to integrate with it, obviously with due regard for the security and integrity of the data that has to traverse from one system to the other.

>> So I'm going to borrow that integrity theme for a second as we go into our last question, and that is to take a macro look at the overall business impact that AIOps can help customers make. I'm thinking of, you know, the integrity of teams aligning business and IT, which we probably can't talk about enough.
And helping organizations really effectively measure KPIs that deliver the digital experience that all of us demanding consumers expect. What's the overall impact? What would you say, in summary?

>> So I think the overall impact is significant. The kinds of costs that customers and businesses were previously treating as inevitable are, for the first time, going to come to light, and that is going to start driving cost efficiencies, consciousness, and awareness within their own business, which is obviously going to have a domino kind of effect. One example being the problem isolation I talked about: network and security in this multi-layered architecture that enables the new world of 5G. At the heart of all of it is identifying the problem at its source, not being bogged down by the fifteen different things that are going wrong but finding what is causing those fifteen things to go wrong. That speed to isolation, in its own right, can save millions and millions of dollars for every organization. The next one is obviously the overall impact on customer experience. With 5G, you will have customers expecting experiences from you even if you're not planning to deliver them in 2021 or 2022; you'll have customers asking for those experiences, or walking away if you do not provide them. So it's almost as if a business can choose to do nothing, to not reinvest every year, only if it wants to die on the vine. Businesses want to remain relevant; businesses want to adopt the latest and greatest in technology, which enables them to have that superiority and continuity. And from that continuity perspective, we're already at the point where there are intelligent systems sitting there, rationalizing information and making these decisions, supervised, of course, by the people who were previously making some of those decisions themselves.

>> That was a great summary, because you're right: with how demanding consumers are, if we don't get what we want quickly, we turn right around, we go somewhere else, and we can find somebody who can meet those expectations. So, Usman, thanks for doing a great job of clarifying the impact and the value that AIOps can bring to organizations. That's really sound, now that we're in this even higher demand for digital products and services, which is not going away; it's probably only going to increase. It's table stakes for any organization. Thank you so much for joining me today and giving us your thoughts.

>> Pleasure. Thank you.

>> We'll be right back with our next segment.
>> Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with the total energy given by the expression shown at the bottom left of this slide. Here the sigma variables take binary values, the matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and the vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the run time required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N, for worst-case instances at each N. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally find very good but not guaranteed-optimum solutions and run much faster than algorithms designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best-known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root exponential for N up to approximately 4,500. This gives some indication of the change in run-time scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core two-gigahertz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means.
Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with N equal to 19,289, requiring approximately two days of run time on a single core at 2.4 gigahertz. Now, if we simple-mindedly extrapolate that root-exponential scaling beyond N of about 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the N equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances at much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 gigahertz. But the much larger so-called World TSP benchmark instance, with N equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results from Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm, and this has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized, special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on the analysis of coherent Ising machine architectures and associated optimization algorithms.
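For reference, the Ising ground-state problem described at the start of this talk can be written compactly as follows. This is a standard textbook form with assumed sign conventions, since the slide itself is not reproduced in this transcript.

```latex
% Ising energy for spins \sigma_i \in \{-1,+1\}, couplings J_{ij}, local fields h_i
E(\sigma) = -\sum_{1 \le i < j \le N} J_{ij}\,\sigma_i \sigma_j \;-\; \sum_{i=1}^{N} h_i\,\sigma_i ,
\qquad
\sigma^{\star} = \underset{\sigma \in \{-1,+1\}^{N}}{\arg\min}\; E(\sigma) .
```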
These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog physical or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics, and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or opto-electronic platforms to enable near-term construction of large-scale prototypes that exploit post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or, perhaps, mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground-state problem. This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory: namely, a study of bifurcations, the evolution of critical points, and the topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and we hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described.
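As a rough numerical illustration of the pump-ramp procedure just described, here is a minimal mean-field sketch. This is not the group's actual FPGA code; the cubic saturation form, ramp schedule, noise level, and parameter values are assumptions made for illustration.

```python
import numpy as np

def cim_pump_ramp(J, h, steps=2000, dt=0.01, eps=0.05, seed=0):
    """Minimal mean-field sketch of a measurement-feedback CIM pump ramp.

    J : (N, N) symmetric Ising coupling matrix, h : (N,) local fields.
    The pulse amplitudes x_i start near vacuum (small noise); the pump p is
    ramped from below to above threshold while feedback injects eps*(J x + h).
    Returns the sign pattern of the final amplitudes as a proposed spin state.
    """
    rng = np.random.default_rng(seed)
    N = len(h)
    x = 1e-3 * rng.standard_normal(N)            # near-vacuum initial amplitudes
    for t in range(steps):
        p = -0.5 + 2.0 * t / steps               # linear pump ramp through threshold (p = 1)
        # saturable gain minus loss, cubic saturation, plus measurement feedback
        dx = (p - 1.0) * x - x**3 + eps * (J @ x + h)
        x += dt * dx + np.sqrt(dt) * 1e-3 * rng.standard_normal(N)  # weak noise
    return np.where(x >= 0, 1, -1)

# usage: a small random symmetric instance
N = 8
rng = np.random.default_rng(1)
J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1)
J = J + J.T
h = np.zeros(N)
spins = cim_pump_ramp(J, h)
energy = -0.5 * spins @ J @ spins - h @ spins
print(spins, energy)
```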
We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome the linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold each OPO will independently choose a phase, and thus two random bits are generated; for any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or anti-ferromagnetic N equals 2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. Clearly, we can imagine generalizing this story to larger N; however, the story doesn't stay this clean and simple for all larger problem instances. To find a more complicated example, we only need to go to N equals 4. For some choices of J at N equals 4, the story remains simple, like the N equals 2 case: the figure on the upper left of this slide shows the energy of various critical points for a non-frustrated N equals 4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but sub-optimal local minimum at large pump power.
The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin; the basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors become more common at larger N, as in the N equals 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot. It can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter of around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. Of course, N equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we are able to determine their global minima reliably and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N equals 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, et cetera. At present we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So, in closing, I should acknowledge the people who did the hard work on the things that I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at the NTT PHI research labs. I should also acknowledge funding support from the NSF via the Coherent Ising Machines Expedition in Computing, and from the NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much.
>> I'd like to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi, I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. I'd like to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics, where some of the biggest examples are metamaterials, which are arrays of small resonators, and, more recently, the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: a simple summation over the spins, where the spins can be either up or down and the couplings are given by the J_ij. The Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? And this problem is shown to be NP-hard. So it's computationally important because it's representative of the NP problems, and NP problems are important because, first, they're hard for standard computers if you use a brute-force algorithm, and, second, they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems and hopefully provide some meaningful computational benefit compared to standard digital computers. So we've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is, is a resonator with nonlinearity in it: we pump these resonators and we generate a signal at half the frequency of the pump. One photon of the pump splits into two identical photons of the signal, and they have some very interesting phase- and frequency-locking behaviors. If you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendulums. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, namely the length of the string, and that modulation acts as the pump: it will produce an oscillation, a signal, at half the frequency of the pump. And I have two of them, to show you that they can acquire these phase states: they're still phase- and frequency-locked to the pump, but they can land in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, up or down.
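A minimal way to write down the phase bistability just described is the following normalized mean-field sketch; the normalization and the cubic saturation form are assumptions for illustration, not taken from the talk.

```latex
% Normalized in-phase amplitude x of a degenerate OPO with pump parameter p:
\dot{x} = (p - 1)\,x - x^{3}
\;\;\Longrightarrow\;\;
x_{\mathrm{ss}} =
\begin{cases}
0, & p < 1 \\[2pt]
\pm\sqrt{p-1}, & p > 1
\end{cases}
% Above threshold the two steady states differ by a \pi phase shift of the
% signal field, and this binary choice encodes one Ising spin \sigma = \pm 1.
```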
And to implement a network of these resonators, we use the time-multiplexing scheme. The idea is that we put pulses in the cavity, and these pulses are separated by the repetition period, T_R, so you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. The shortest delay couples resonator 1 to 2, 2 to 3, and so on; the second delay, which is two times the repetition period, couples 1 to 3, 2 to 4, and so on. And if you have N minus 1 delay lines, then you can have any potential coupling among these synthetic resonators. If I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I can have a programmable, all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is that, having these OPOs, each of which can be either zero or pi, I can arbitrarily connect them to each other. I then start by programming this machine to a given Ising problem, by just setting the couplings and setting the controllers in each of those delay lines. So now I have a network which represents an Ising problem. The Ising problem then maps to finding the phase state that satisfies the maximum number of coupling constraints, and the way it happens is that the Ising Hamiltonian maps to the linear loss of the network. If I start adding gain, by just putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transitions, especially what happened in the first implementation, which used a free-space optical system, then the guided-wave implementation in 2016, and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make the distinction here that the first implementation was an all-optical interaction; we also had an N equals 16 implementation, and then we transitioned to this measurement-feedback idea, which I'll describe quickly. There is still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to focus mostly on the all-optical networks, how we're using all-optical networks to go beyond simulation of the Ising Hamiltonian, both on the linear and the nonlinear side, and also how we're working on miniaturization of these OPO networks. So the first experiment, the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine. We implemented a small N equals 4 Max-Cut problem on the machine, one problem for one experiment, and we ran the machine 1,000 times, we looked at the states, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. So then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulate all those coherent interactions on an FPGA, and we replicate the coherent pulse with respect to all those measurements.
And then we inject it back into the cavity, and only the nonlinearity remains in the optics, so it is still a nonlinear dynamical system, but the linear side is all simulated. There are lots of questions about whether this system preserves the important information or not, or whether it behaves better computationally, and that is still the subject of a lot of ongoing study. Nevertheless, the reason this implementation is very interesting is that you don't need the N minus 1 delay lines: you can just use one. Then you can implement a large machine, run several thousands of problems on the machine, and compare the performance from a computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling, because the optical loss of this network corresponds to the Ising Hamiltonian. If I just show you the example of the N equals 4 experiment, with all those phase states and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines give you different losses, and then you see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain; you then start bringing up the gain so that it hits the loss, and you go through gain saturation, or threshold, which gives you this phase bifurcation. So you go to either the zero or the pi phase state, and the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. The difference between looking at topological behaviors and at the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian; one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one site to another you acquire one phase, and if you go back you acquire a different phase. The other difference is that we're not just interested in finding the ground state; we're now interested in looking at all sorts of states, and at the dynamics and behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a 1D chain of these resonators, corresponding to the so-called SSH model. In the topological work we get a similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction and the theory.
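For context, the SSH model mentioned here is usually written as follows; this is the standard textbook form, not the talk's own notation or normalization.

```latex
% Su-Schrieffer-Heeger chain with alternating hoppings t_1 (intra-cell) and t_2 (inter-cell):
H = \sum_{n} \left( t_1\, a_n^{\dagger} b_n + t_2\, a_{n+1}^{\dagger} b_n + \mathrm{h.c.} \right),
\qquad
E_{\pm}(k) = \pm \left| \, t_1 + t_2\, e^{-ik} \, \right| ,
% with a topologically nontrivial phase (edge states on an open chain) for t_2 > t_1.
```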
One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine, which is something unique about this implementation, so that we can actually look at the dynamics. One example we have looked at is going through the transition from the topologically nontrivial to the trivial behavior of the network: you can then look at the edge states, and you can see both the trivial end states and the topological edge states actually showing up in this network. We have also just recently implemented a 2D network with the Harper-Hofstadter model, and although I don't have the results here, one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics, and we can also think about adding nonlinearity, both in the classical and quantum regimes, which will give us a lot of exotic nonclassical and quantum nonlinear behaviors in these networks. So I told you about the linear side; let me now switch gears and talk about the nonlinear side of the network. The biggest thing I've talked about so far in the Ising machine is the phase transition at threshold: below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and we get the phase states above threshold. This is basically the mechanism of the computation in these OPOs, through this phase transition from below to above threshold. One of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical, or coherent, states, and that basically corresponds to the intensity of the driving pump. So it's really hard to imagine going above threshold, or having this phase transition happen, entirely in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes very high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is: can we look at other phase transitions, can we utilize them for computing, and can we bring them into the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain, the transition from the so-called degenerate regime, which is what I've mostly talked about, to the non-degenerate regime, which happens just by tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I discussed. In the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case the signal can acquire any of the phases on the circle, so it has a U(1) symmetry.
Okay, and if you go to the degenerate case, then that symmetry is broken and you only have the zero and pi phase states to land in. So now the question is: can we utilize this transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions we're also thinking about. And this phase transition is not just important for computing; it's also interesting from the sensing perspective, and you can easily bring it below threshold and operate it in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, we can now see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs, and that is a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore both in the classical and quantum regimes. I should also mention that you can think about the couplings being nonlinear couplings as well, and that's another behavior that you can see, especially in the non-degenerate regime. So with that, I've basically told you about these OPO networks, how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I now want to switch gears and tell you a little bit about the miniaturization of these OPO networks. Of course, the motivation is that if you look at electronics, and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements we have now, then where we are with optics is probably very similar to 70 years ago: it's a tabletop implementation. So the question is, how can we utilize nanophotonics? I'm going to briefly show you the two directions we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard, and also Marty Fejer at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in these nanophotonic, periodically poled lithium niobate devices. Now we're working on building OPOs based on that kind of thin-film lithium niobate photonics, and these are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only benefit of making large networks this way; I also want to point out that the reason these nanophotonic platforms are actually exciting is not just because you can make large networks and make them compact in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO: can we have the quantum superposition of the zero and pi phase states that I talked about? And the thin-film lithium niobate platform
I've It provides some opportunities to actually get closer to that regime because of the spatial temporal confinement that you can get in these wave guides. So we're doing some theory on that. We're confident that the type of non linearity two losses that it can get with these platforms are actually much higher than what you can get with other platform their existing platforms and to go even smaller. We have been asking the question off. What is the smallest possible Opio that you can make? Then you can think about really wavelength scale type, resonate er's and adding the chi to non linearity and see how and when you can get the Opio to operate. And recently, in collaboration with us see, we have been actually USC and Creole. We have demonstrated that you can use nano lasers and get some spin Hamilton and implementations on those networks. So if you can build the a P. O s, we know that there is a path for implementing Opio Networks on on such a nano scale. So we have looked at these calculations and we try to estimate the threshold of a pos. Let's say for me resonator and it turns out that it can actually be even lower than the type of bulk Pip Llano Pos that we have been building in the past 50 years or so. So we're working on the experiments and we're hoping that we can actually make even larger and larger scale Opio networks. So let me summarize the talk I told you about the opium networks and our work that has been going on on icing machines and the measurement feedback. And I told you about the ongoing work on the all optical implementations both on the linear side and also on the nonlinear behaviors. And I also told you a little bit about the efforts on miniaturization and going to the to the Nano scale. So with that, I would like Thio >>three from the University of Tokyo. Before I thought that would like to thank you showing all the stuff of entity for the invitation and the organization of this online meeting and also would like to say that it has been very exciting to see the growth of this new film lab. And I'm happy to share with you today of some of the recent works that have been done either by me or by character of Hong Kong. Honest Group indicates the title of my talk is a neuro more fic in silica simulator for the communities in machine. And here is the outline I would like to make the case that the simulation in digital Tektronix of the CME can be useful for the better understanding or improving its function principles by new job introducing some ideas from neural networks. This is what I will discuss in the first part and then it will show some proof of concept of the game and performance that can be obtained using dissimulation in the second part and the protection of the performance that can be achieved using a very large chaos simulator in the third part and finally talk about future plans. So first, let me start by comparing recently proposed izing machines using this table there is elected from recent natural tronics paper from the village Park hard people, and this comparison shows that there's always a trade off between energy efficiency, speed and scalability that depends on the physical implementation. So in red, here are the limitation of each of the servers hardware on, interestingly, the F p G, a based systems such as a producer, digital, another uh Toshiba beautification machine or a recently proposed restricted Bozeman machine, FPD A by a group in Berkeley. They offer a good compromise between speed and scalability. 
And this is why, despite the unique advantage that some of these older hardware have trust as the currency proposition in Fox, CBS or the energy efficiency off memory Sisters uh P. J. O are still an attractive platform for building large organizing machines in the near future. The reason for the good performance of Refugee A is not so much that they operate at the high frequency. No, there are particular in use, efficient, but rather that the physical wiring off its elements can be reconfigured in a way that limits the funding human bottleneck, larger, funny and phenols and the long propagation video information within the system. In this respect, the LPGA is They are interesting from the perspective off the physics off complex systems, but then the physics of the actions on the photos. So to put the performance of these various hardware and perspective, we can look at the competition of bringing the brain the brain complete, using billions of neurons using only 20 watts of power and operates. It's a very theoretically slow, if we can see and so this impressive characteristic, they motivate us to try to investigate. What kind of new inspired principles be useful for designing better izing machines? The idea of this research project in the future collaboration it's to temporary alleviates the limitations that are intrinsic to the realization of an optical cortex in machine shown in the top panel here. By designing a large care simulator in silicone in the bottom here that can be used for digesting the better organization principles of the CIA and this talk, I will talk about three neuro inspired principles that are the symmetry of connections, neural dynamics orphan chaotic because of symmetry, is interconnectivity the infrastructure? No. Next talks are not composed of the reputation of always the same types of non environments of the neurons, but there is a local structure that is repeated. So here's the schematic of the micro column in the cortex. And lastly, the Iraqi co organization of connectivity connectivity is organizing a tree structure in the brain. So here you see a representation of the Iraqi and organization of the monkey cerebral cortex. So how can these principles we used to improve the performance of the icing machines? And it's in sequence stimulation. So, first about the two of principles of the estimate Trian Rico structure. We know that the classical approximation of the car testing machine, which is the ground toe, the rate based on your networks. So in the case of the icing machines, uh, the okay, Scott approximation can be obtained using the trump active in your position, for example, so the times of both of the system they are, they can be described by the following ordinary differential equations on in which, in case of see, I am the X, I represent the in phase component of one GOP Oh, Theo f represents the monitor optical parts, the district optical Parametric amplification and some of the good I JoJo extra represent the coupling, which is done in the case of the measure of feedback coupling cm using oh, more than detection and refugee A and then injection off the cooking time and eso this dynamics in both cases of CNN in your networks, they can be written as the grand set of a potential function V, and this written here, and this potential functionally includes the rising Maccagnan. 
So this is why it is natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and the h_i are the external fields of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process; but there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a microscopic structure in the system, in which one analog spin, one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this structure introduces an asymmetry in the system, which in turn induces chaotic dynamics: a chaotic search, rather than an annealing process, for the ground state of the Ising Hamiltonian. Within this structure, the role of the error variable is to control the amplitude of the analog spins, forcing the amplitude of each spin to become equal to a certain target amplitude a; and this is done by modulating the strength of the Ising couplings, that is, the error variable e_i multiplies the Ising coupling term in the dynamics of its DOPO. The whole dynamics is then described by these coupled equations, and because the e_i do not necessarily take the same value for different i, this introduces an asymmetry in the system, which in turn creates chaotic dynamics, which I show here for a certain size of SK problem: the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plot. You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics does not get stuck in any of them; moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. We have proposed in the past two different modulations of the target amplitude. The first is a modulation that ensures the entropy production rate of the system remains positive, which forbids the creation of any nontrivial attractors; but in this work I will talk about another, heuristic modulation, given here, which works as well as the first one but is easier to implement on FPGA. These coupled equations that represent the simulation of the coherent Ising machine with error correction can be implemented especially efficiently on an FPGA, and here I show the time that it takes to simulate the system, and in red the time that it takes to compute the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 spins and error variables, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics, which corresponds to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles, which corresponds to about 0.1 microseconds.
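As a rough illustration of the error-variable scheme just described, here is a minimal sketch in one commonly used form of such amplitude-control dynamics. The exact equations, parameter values, and modulation schedule used in the talk's simulator may differ, so treat the names and numbers below as illustrative assumptions.

```python
import numpy as np

def simulate_cim_amplitude_control(J, pump=0.9, target=1.0, beta=0.3,
                                   dt=0.01, steps=20000, seed=0):
    """Sketch of an error-variable (amplitude-control) CIM dynamics in one
    commonly used form (illustrative parameters):
        dx_i/dt = (p - 1 - x_i^2) x_i + e_i * sum_j J_ij x_j
        de_i/dt = -beta * e_i * (x_i^2 - a)
    The e_i rescale each spin's coupling so that all |x_i| are pushed toward
    the same target amplitude a, destabilizing local minima of the Ising energy."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-3 * rng.standard_normal(n)
    e = np.ones(n)
    best_s, best_E = None, np.inf
    for _ in range(steps):
        x += dt * ((pump - 1.0 - x**2) * x + e * (J @ x))
        e += dt * (-beta * e * (x**2 - target))
        s = np.sign(x)
        E = -0.5 * s @ J @ s
        if E < best_E:
            best_s, best_E = s, E
    return best_s, best_E

# reuse the 4-spin antiferromagnetic ring from the previous sketch
J = np.array([[0, -1, 0, -1], [-1, 0, -1, 0], [0, -1, 0, -1], [-1, 0, -1, 0]], float)
print(simulate_cim_amplitude_control(J))
```

The e_i grow or shrink whenever a spin's squared amplitude deviates from the target a, which rescales that spin's effective coupling and kicks the system out of amplitude-heterogeneous local minima.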
This FPGA figure is to be compared with what can be achieved in the measurement-feedback CIM itself, in which, if we want to get 500 time-multiplexed DOPO pulses with a one-gigahertz repetition rate through the optical nonlinearity, we would require 0.5 microseconds to do this; so the simulation on FPGA can be at least as fast as a one-gigahertz-repetition-rate pulsed-laser CIM. Then, the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about one microsecond. So, at least for problem sizes larger than about 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in constant time, and the matrix-vector product could be done in a time that scales as the logarithm of N, because computing the dot product involves summing all the terms in the product, which is done in the FPGA by an adder tree whose height scales logarithmically with the size of the system. But this holds only with infinite FPGA resources; for larger problems of more than about 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that we denote n_u here, and then the scaling becomes linear in N/n_u for the nonlinear parts and quadratic in N/n_u for the dot products. Typically, for off-the-shelf FPGAs, the block size of this matrix is about 100. So clearly we want to make n_u as large as possible in order to maintain the logarithmic scaling of the number of clock cycles needed to compute the dot product, rather than the quadratic scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty with these larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck on the dot product by increasing the size of this adder tree, and this can be done by organizing the logic components within the FPGA hierarchically, in the way shown in the right panel here, in order to minimize the fan-in and fan-out of the system and the long-distance data paths in the FPGA. I am not going into the details of how this is implemented on the FPGA; the point is just to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance for such a simulator of an Ising machine. So, instead of getting into the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper. Here I show results for solving SK problems, that is, fully connected spin-glass problems with randomly chosen plus-or-minus-one couplings.
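Before turning to the benchmarks, here is a tiny illustration of why the adder-tree height is what matters for the dot product: a pairwise (tree) reduction needs only about log2(N) sequential stages, which is the logarithmic scaling mentioned above. This is a generic sketch, not the FPGA implementation itself.

```python
import math

def adder_tree_sum(values, depth=0):
    """Pairwise (binary-tree) reduction, the way an FPGA adder tree sums a
    dot product: the number of sequential stages grows like ceil(log2(n))
    rather than n, which is why a wide tree keeps the dot product fast."""
    if len(values) == 1:
        return values[0], depth
    paired = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
    if len(values) % 2:                      # carry the odd element forward
        paired.append(values[-1])
    return adder_tree_sum(paired, depth + 1)

n = 500
total, stages = adder_tree_sum(list(range(n)))
print(total == n * (n - 1) // 2, stages, math.ceil(math.log2(n)))  # True 9 9
```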
As a metric we use the number of matrix-vector products, since it is the bottleneck of the computation, needed to reach the optimal solution of these SK problems with 99% success probability, plotted against the problem size. In red is the proposed FPGA implementation; in blue is the number of matrix-vector products necessary for the CIM without error correction to solve these SK problems; and in green, the same for noisy mean-field annealing, which is an algorithm whose behavior is similar to the coherent Ising machine. You can clearly see that the number of matrix-vector products necessary to solve these problems scales with a better exponent for this scheme than for the other approaches. That is an interesting feature of the system, and next we can look at the real time-to-solution for these SK instances. On this axis is the time-to-solution in seconds to find the ground state of SK instances with 99% success probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent, for example, breakout local search, in orange, and simulated annealing, in purple. You see that the scaling of the proposed simulator is rather good, and that for larger problem sizes we can be orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristors, shown in blue here, which is very fast for small problem sizes but whose scaling is not good, and the same for the restricted Boltzmann machine implemented on an FPGA recently proposed by a group in Berkeley, which again is very fast for small problem sizes but scales badly, so that it ends up worse than the proposed approach; we can therefore expect that for problem sizes larger than about 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide, and another confirmation that the scheme scales well: we can find maximum-cut values for the G-set benchmark instances that are better than the cut values previously found by any other algorithm, that is, the best known cut values to the best of our knowledge, which are shown in this table in the paper. In particular, for instances 14 and 15 of the G-set we can find better cuts than previously known, and we can find these cuts 100 times faster than the state-of-the-art CPU algorithm used to obtain them. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters: the tuning used here is very simple and just depends on the degree of connectivity within each graph. So these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems but at all types of graph Ising problems, such as the Max-Cut problems encountered in many applications.
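For readers unfamiliar with the 99%-success time-to-solution metric used in these comparisons, the formula below is the convention usually behind such benchmark curves; whether the talk uses exactly this convention is an assumption of this write-up.

```python
import math

def time_to_solution_99(t_run, p_success):
    """Standard 99%-success time-to-solution metric often used for Ising
    machine benchmarks: the expected wall-clock cost of repeating a run of
    duration t_run, each succeeding with probability p_success, until the
    ground state has been found with 99% confidence."""
    if p_success >= 0.99:
        return t_run
    return t_run * math.log(1.0 - 0.99) / math.log(1.0 - p_success)

print(time_to_solution_99(t_run=1.0, p_success=0.2))  # about 20.6 runs' worth of time
```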
So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of the adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. Here you see projections for the time-to-solution with 99% success probability for solving these SK problems with respect to the problem size, compared to different published Ising machines, in particular the digital annealer, shown by the green line without dots. We show two different hypotheses for these projections: either that the time-to-solution scales as an exponential of N, or that it scales as an exponential of the square root of N. According to the data, the time-to-solution seems to scale more like an exponential of the square root of N, and these projections show that we can probably solve SK problems of size 2000 spins, finding the true ground state with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. Now, some of the future plans for this coherent Ising machine simulator. The first is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. To do this, what can be simulated on the FPGA is the truncated quantum Gaussian model described in this paper and proposed by people in the NTT group. The idea of this model is that, instead of having the very simple ODEs I showed previously, it includes paired ODEs that take into account not only the mean of the in-phase component of each pulse but also its variance, so that we can capture more of the quantum effects of the DOPO, such as squeezing. Then we plan to make the simulator open-access for the members to run their instances on the system. There will be a first version in September that will be based on simple command-line access to the simulator and in which we will have just the classical approximation of the system, with binary weights and no Zeeman term; then we will propose a second version that will extend the current Ising machine to a rack of eight FPGAs, in which we will add the more refined models, the truncated quantum Gaussian model I just talked about, together with support for real-valued weights for the Ising problems and support for the Zeeman term. We will announce later when this is available, and Farah is working hard to get the first version available sometime in September. Thank you.
I think everyone here knows what Dorien satisfy ability. Problems are, um, you have boolean variables. You have em clauses. Each of disjunction of collaterals literally is a variable, or it's, uh, negation. And the goal is to find an assignment to the variable, such that order clauses are true. This is a decision type problem from the MP class, which means you can checking polynomial time for satisfy ability off any assignment. And the three set is empty, complete with K three a larger, which means an efficient trees. That's over, uh, implies an efficient source for all the problems in the empty class, because all the problems in the empty class can be reduced in Polian on real time to reset. As a matter of fact, you can reduce the NP complete problems into each other. You can go from three set to set backing or two maximum dependent set, which is a set packing in graph theoretic notions or terms toe the icing graphs. A problem decision version. This is useful, and you're comparing different approaches, working on different kinds of problems when not all the closest can be satisfied. You're looking at the accusation version offset, uh called Max Set. And the goal here is to find assignment that satisfies the maximum number of clauses. And this is from the NPR class. In terms of applications. If we had inefficient sets over or np complete problems over, it was literally, positively influenced. Thousands off problems and applications in industry and and science. I'm not going to read this, but this this, of course, gives a strong motivation toe work on this kind of problems. Now our approach to set solving involves embedding the problem in a continuous space, and you use all the east to do that. So instead of working zeros and ones, we work with minus one across once, and we allow the corresponding variables toe change continuously between the two bounds. We formulate the problem with the help of a close metrics. If if a if a close, uh, does not contain a variable or its negation. The corresponding matrix element is zero. If it contains the variable in positive, for which one contains the variable in a gated for Mitt's negative one, and then we use this to formulate this products caused quote, close violation functions one for every clause, Uh, which really, continuously between zero and one. And they're zero if and only if the clause itself is true. Uh, then we form the define in order to define a dynamic such dynamics in this and dimensional hyper cube where the search happens and if they exist, solutions. They're sitting in some of the corners of this hyper cube. So we define this, uh, energy potential or landscape function shown here in a way that this is zero if and only if all the clauses all the kmc zero or the clauses off satisfied keeping these auxiliary variables a EMS always positive. And therefore, what you do here is a dynamics that is a essentially ingredient descend on this potential energy landscape. If you were to keep all the M's constant that it would get stuck in some local minimum. However, what we do here is we couple it with the dynamics we cooperated the clothes violation functions as shown here. And if he didn't have this am here just just the chaos. For example, you have essentially what case you have positive feedback. You have increasing variable. Uh, but in that case, you still get stuck would still behave will still find. 
That is better than the constant version, but it would still get stuck. Only when you put in this a_m, which makes the dynamics of this variable exponential-like, does the system keep searching until it finds a solution, and there is a reason for that which I am not going to discuss here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape; and this is what works. Now I am going to talk about the good, the bad, and maybe the ugly. What is good is that this is a hyperbolic dynamical system, which means that if you take any domain in the search space that does not contain a solution, then the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself; in dynamical systems it is called the escape rate. The inverse of that is the time scale on which you find solutions with this dynamical system. You can see here some sample trajectories that are chaotic, because the system is nonlinear, but it is transient chaos, of course, because eventually they all reach the solution. Now, in terms of performance: what is shown here, for a bunch of constraint densities defined by M over N, the ratio between clauses and variables for random 3-SAT problems, is the monitored wall-clock time as a function of N, and it behaves quite badly: it behaves polynomially only until you actually reach the SAT/UNSAT transition, where the hardest problems are found. But what is more interesting is if you monitor the continuous time t, the performance in terms of the analog continuous time t, because that seems to be polynomial. The way we show that is the following: we consider random 3-SAT for a fixed constraint density, to the right of the threshold, where it is really hard, and we monitor the fraction of problems that have not been solved. We select thousands of problems at that constraint ratio, solve them with our algorithm, and monitor the fraction of problems that have not yet been solved by continuous time t. As you see, this decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law. So if you combine these two, you find that the time needed to solve all problems, except maybe an epsilon fraction of them, scales polynomially, nearly linearly, with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover, because you can always transform them into 3-SAT as we discussed before, or Ramsey coloring, and on these problems even algorithms like survey propagation will fail. But this does not mean that P equals NP, and here is why: first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes a physical wall-clock time, and that would be polynomially scaling; but you have the other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, it would be an exponential cost altogether.
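To make the preceding description concrete, here is a rough sketch of a continuous-time SAT dynamics of this general kind: bounded spin variables, product-form clause-violation functions built from a plus-one/zero/minus-one clause matrix, gradient-descent-like motion on V = sum_m a_m K_m^2, and exponentially growing auxiliary variables. The exact prefactors, normalization, and boundary handling of the published equations may differ from what is written below, and the clipping step is a numerical convenience of this sketch.

```python
import numpy as np

def analog_sat_solve(C, t_max=200.0, dt=0.01, seed=1):
    """Sketch of a continuous-time SAT dynamics (illustrative normalization):
    spins s_i in [-1, 1]; clause matrix C[m, i] in {-1, 0, +1};
    clause violation K_m = prod over literals of (1 - C[m, i] * s_i) / 2;
    ds_i/dt = sum_m 2 * a_m * C[m, i] * K_mi * K_m,
      with K_mi = K_m / (1 - C[m, i] * s_i);
    da_m/dt = a_m * K_m  (auxiliary variables grow while a clause is violated)."""
    rng = np.random.default_rng(seed)
    m, n = C.shape
    s = rng.uniform(-0.1, 0.1, n)            # start near the centre of the hypercube
    a = np.ones(m)
    for _ in range(int(t_max / dt)):
        lit = 1.0 - C * s                     # equals 1 wherever C[m, i] == 0
        K = np.where(C != 0, 0.5 * lit, 1.0).prod(axis=1)
        if np.all(K < 1e-9):                  # every clause satisfied
            break
        K_mi = np.where(C != 0, K[:, None] / np.maximum(lit, 1e-12), 0.0)
        s += dt * 2.0 * np.sum(a[:, None] * C * K_mi * K[:, None], axis=0)
        s = np.clip(s, -1.0, 1.0)             # keep the search inside the hypercube
        a += dt * a * K
    return s > 0                              # Boolean assignment read from the signs

# toy 3-SAT instance: (x1 v x2 v !x3) & (!x1 v x2 v x3) & (x1 v !x2 v x3)
C = np.array([[ 1,  1, -1],
              [-1,  1,  1],
              [ 1, -1,  1]])
assignment = analog_sat_solve(C)
signs = np.where(assignment, 1, -1)
print("assignment:", assignment)
print("all clauses satisfied:", all(((row != 0) & (row * signs > 0)).any() for row in C))
```

On the toy three-clause instance at the bottom, the dynamics settles on a satisfying assignment well within the allotted integration time.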
But this is some kind of trade between time and energy: I do not know how to generate time, but I do know how to generate energy, so energy could be traded for it. There are other issues as well, especially if you are trying to do this on a digital machine, and problems also appear in physical devices, as we will discuss later. If you implement this on a GPU, you can get an order of magnitude or two of speedup, and you can also modify this approach to solve MaxSAT problems quite efficiently; we are competitive with the best heuristic solvers on the problems of the 2016 MaxSAT competition. So this definitely seems like a good approach, but there are of course interesting limitations. I would say interesting, because they make you think about what this means and about how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator when you solve this on a digital machine, using the same approach but now measuring the number of problems you have not solved within a given number of discrete integrator steps, you find that you have exponential discrete-time complexity, and of course that is a problem. If you look closely at what happens, even though the analog mathematical trajectory, the red curve here, changes very little, something like the third or fourth decimal position, the integrator's step size fluctuates like crazy, so the integration effectively freezes out. And this is because of the phenomenon of stiffness, which I will talk a bit more about later. It might look like an integration issue on digital machines that you could improve, and you definitely can improve it, but actually the issue is bigger than that; it is deeper than that, because on a digital machine there is no time-to-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think that maybe this would not be an issue in an analog device, and to some extent that is true: analog devices can be orders of magnitude faster, but they also suffer from their own problems, because that class of solvers is not going to be perfect either. Indeed, if you look at other systems, like the measurement-feedback Ising machines we heard about in the previous talks, or other analog networks, they all hinge on some ability to control your variables with arbitrarily high precision: in certain networks you want to read out across frequencies; in the case of CIMs you require identical pulses, which are hard to keep identical, and they fluctuate and shift away from one another, and if you could control that, of course, you could control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result from 1978.
This result says, via a purely computer-science proof, that if you are able to compute addition, multiplication, and division of real variables with infinite precision, then you can solve NP-complete problems in polynomial time. It does not actually propose a solver; it just shows mathematically that this would be the case. Now, of course, in the real world you have limited precision, so the next question is how that affects the computation of such problems; that is what you are after. Loss of precision means information loss, or entropy production, so what you are really looking at is the relationship between the hardness of a problem and the cost of computing it. According to this result, there is this left branch, which in principle could be polynomial time, but the question is whether or not this is achievable; if it is not achievable, then something more typical happens, and that is the right-hand side: there is always going to be some information loss, some entropy generation, that can keep you away from polynomial time. So this is what we would like to understand, and this information loss, the source of it, is not just noise, I will argue: in any physical system it is also of algorithmic nature, so it is a question about the algorithmic approach itself. The 1978 result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity, whether in principle there could be such solvers, since that work does not propose one with such properties. In principle, if you look mathematically and precisely at what our solver does, would it have the right properties? I argue yes. I do not have a mathematical proof, but I have some arguments that this would be the case, and this is the case for our SAT solver: if you could calculate its trajectory losslessly, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a slightly more delicate question, because time in ODEs can be rescaled however you want, so what you actually have to measure is the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system and not of its parametrization. And we did that: my student did it first, improving on the stiffness of the integration by using implicit solvers and some smart tricks, such that you actually stay closer to the true trajectory, and, using the same approach, monitoring what fraction of problems you can solve within a given trajectory length, you find that this length scales polynomially with the problem size. So we have polynomial-length complexity, which means that our solver is both polynomial-length and, as time is defined through it, a polynomial-time analog solver. But if you look at it as a discrete algorithm, if you count the discrete steps on a digital machine, it is an exponential solver, and the reason is stiffness: every integrator has to truncate, since digitizing truncates the equations, and what it has to do is keep the integration within the so-called stability region for the chosen scheme, that is, keep the product of the Jacobian's eigenvalues and the step size within this region. If you use explicit methods, you want to stay within this bounded stability region.
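As a generic illustration of that stability constraint, not of the authors' equations or solver, consider forward Euler on the scalar test problem dy/dt = lambda * y, which is stable only while |1 + h*lambda| <= 1, so a single fast-decaying eigenvalue caps the usable step size.

```python
# Generic illustration of stiffness (not the SAT equations themselves): for the
# test problem dy/dt = lam * y, explicit (forward) Euler is stable only when
# |1 + h*lam| <= 1, so one fast-decaying eigenvalue forces a tiny step h even
# though the interesting slow dynamics evolves on a much longer timescale.
def forward_euler_stable(lam, h):
    return abs(1.0 + h * lam) <= 1.0

for lam in (-1.0, -1e4):
    h_max = 2.0 / abs(lam)   # largest stable step shrinks like 1/|lam|
    print(f"lambda={lam:9.1f}  stable at h=0.1: {forward_euler_stable(lam, 0.1)}  "
          f"largest stable h ~ {h_max:g}")
```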
But what happens for stiff problems is that some of the eigenvalues grow fast, and then you are forced to reduce the step size so that the product stays in this bounded domain, which means you are forced to take smaller and smaller time steps, so you are freezing out the integration, and what I showed you earlier is exactly this case. Now you can move to implicit solvers, which is a trick: in that case the domain to avoid is actually on the outside. But what happens then is that some of the eigenvalues of the Jacobian, also for stiff systems, start to move toward zero, and as they move toward zero they enter this instability region, so your solver tries to keep them out by increasing the step size; but if you increase the step size, you increase the truncation errors, so you get randomized in the large search space, and it is really not going to work out either. Now, one can introduce a theory, or a language, to discuss this computational complexity using the language of dynamical systems theory. I do not have time to go into this, but basically, for hard problems you have a chaotic saddle, a chaotic repeller, somewhere in the middle of the search space, and that dictates how the dynamics happens, and invariant properties of that saddle are what dictate the performance, among other things. So an important measure that we find, and that is also helpful in describing this analog complexity, is the so-called Kolmogorov, or metric, entropy. Intuitively, what it describes is the rate at which the uncertainty contained in the insignificant digits of a trajectory flows toward the significant ones: you lose information because errors are grown, developed into larger errors, at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property: it is a property of the set of trajectories, not of how you compute them, and it is really the intrinsic rate of accuracy loss of the dynamical system. As I said, in such a high-dimensional system there are as many positive and negative Lyapunov exponents as the dimension of the space; K_u denotes the number of unstable-manifold dimensions and K_s the number of stable-manifold directions. And there is an interesting and, I think, important relation, an equality called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, the escape rate that I already talked about. Now, one can actually prove simple theorems, back-of-the-envelope calculations. The idea is that you know the rate at which closely started trajectories separate from one another, so you can say that things are fine as long as my trajectory finds the solution before nearby trajectories separate too much. In that case I can hope that if I start several closely spaced trajectories from some region of phase space, they will often go into the same solution, and that is this upper bound, this limit, and it really shows that it has to be an exponentially small number. What it depends on is the N-dependence of the exponent right here, which combines the information-loss rate and the time-to-solution performance.
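For readers who want a feel for how such exponents are estimated numerically, here is a standard recipe applied to a textbook chaotic map rather than to the SAT dynamics; the choice of map and parameters is purely illustrative.

```python
import math

def largest_lyapunov_logistic(r=4.0, x0=0.2, n=100000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging the log of the local stretching factor
    |f'(x)| = |r*(1 - 2x)| along a long trajectory. At r = 4 the exact
    value is ln 2, which provides a check on the estimator."""
    x, acc = x0, 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)
        x = r * x * (1.0 - x)
    return acc / n

print(largest_lyapunov_logistic(), math.log(2.0))  # both near 0.693
```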
So if this exponent has a strong N-dependence, say a linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. So this is the direction this is going in, and this formulation is applicable to all deterministic dynamical systems. I think we can expand this further, because there is a way of getting an expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I do not have time to talk about; it is the kind of program one can try to pursue, and that is it. So the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing; it can be more efficient, by orders of magnitude, than digital computing in solving NP-hard problems, because, first of all, many of these systems lack the von Neumann bottleneck, there is parallelism involved, and you can also have a larger spectrum of continuous-time dynamical algorithms than of discrete ones. But we also have to be mindful of what the limits are, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? I think that is the exciting part, to derive these limits.
Neuromorphic in Silico Simulator For the Coherent Ising Machine
>>Hi everyone, This system A fellow from the University of Tokyo before I thought that would like to thank you she and all the stuff of entity for the invitation and the organization of this online meeting and also would like to say that it has been very exciting to see the growth of this new film lab. And I'm happy to share with you today or some of the recent works that have been done either by me or by character of Hong Kong Noise Group indicating the title of my talk is a neuro more fic in silica simulator for the commenters in machine. And here is the outline I would like to make the case that the simulation in digital Tektronix of the CME can be useful for the better understanding or improving its function principles by new job introducing some ideas from neural networks. This is what I will discuss in the first part and then I will show some proof of concept of the game in performance that can be obtained using dissimulation in the second part and the production of the performance that can be achieved using a very large chaos simulator in the third part and finally talk about future plans. So first, let me start by comparing recently proposed izing machines using this table there is adapted from a recent natural tronics paper from the Village Back hard People. And this comparison shows that there's always a trade off between energy efficiency, speed and scalability that depends on the physical implementation. So in red, here are the limitation of each of the servers hardware on, Interestingly, the F p G, a based systems such as a producer, digital, another uh Toshiba purification machine, or a recently proposed restricted Bozeman machine, FPD eight, by a group in Berkeley. They offer a good compromise between speed and scalability. And this is why, despite the unique advantage that some of these older hardware have trust as the currency proposition influx you beat or the energy efficiency off memory sisters uh P. J. O are still an attractive platform for building large theorizing machines in the near future. The reason for the good performance of Refugee A is not so much that they operate at the high frequency. No, there are particle in use, efficient, but rather that the physical wiring off its elements can be reconfigured in a way that limits the funding human bottleneck, larger, funny and phenols and the long propagation video information within the system in this respect, the f. D. A s. They are interesting from the perspective, off the physics off complex systems, but then the physics of the actions on the photos. So to put the performance of these various hardware and perspective, we can look at the competition of bringing the brain the brain complete, using billions of neurons using only 20 watts of power and operates. It's a very theoretically slow, if we can see. And so this impressive characteristic, they motivate us to try to investigate. What kind of new inspired principles be useful for designing better izing machines? The idea of this research project in the future collaboration it's to temporary alleviates the limitations that are intrinsic to the realization of an optical cortex in machine shown in the top panel here. By designing a large care simulator in silicone in the bottom here that can be used for suggesting the better organization principles of the CIA and this talk, I will talk about three neuro inspired principles that are the symmetry of connections, neural dynamics. Orphan, chaotic because of symmetry, is interconnectivity. The infrastructure. 
No neck talks are not composed of the reputation of always the same types of non environments of the neurons, but there is a local structure that is repeated. So here's a schematic of the micro column in the cortex. And lastly, the Iraqi co organization of connectivity connectivity is organizing a tree structure in the brain. So here you see a representation of the Iraqi and organization of the monkey cerebral cortex. So how can these principles we used to improve the performance of the icing machines? And it's in sequence stimulation. So, first about the two of principles of the estimate Trian Rico structure. We know that the classical approximation of the Cortes in machine, which is a growing toe the rate based on your networks. So in the case of the icing machines, uh, the okay, Scott approximation can be obtained using the trump active in your position, for example, so the times of both of the system they are, they can be described by the following ordinary differential equations on in which, in case of see, I am the X, I represent the in phase component of one GOP Oh, Theo F represents the monitor optical parts, the district optical parametric amplification and some of the good I JoJo extra represent the coupling, which is done in the case of the measure of feedback cooking cm using oh, more than detection and refugee A then injection off the cooking time and eso this dynamics in both cases of CME in your networks, they can be written as the grand set of a potential function V, and this written here, and this potential functionally includes the rising Maccagnan. So this is why it's natural to use this type of, uh, dynamics to solve the icing problem in which the Omega I J or the Eyes in coping and the H is the extension of the rising and attorney in India and expect so. >>Not that this potential function can only be defined if the Omega I j. R. A. Symmetric. So the well known problem of >>this approach is that this potential function V that we obtain is very non convicts at low temperature, and also one strategy is to gradually deformed this landscape, using so many in process. But there is no theorem. Unfortunately, that granted convergence to the global minimum of there's even 20 and using this approach. And so this is >>why we propose toe introduce a macro structure the system or where one analog spin or one D o. P. O is replaced by a pair off one and knock spin and one error on cutting. Viable. And the addition of this chemical structure introduces a symmetry in the system, which in terms induces chaotic dynamics, a chaotic search rather than a >>learning process for searching for the ground state of the icing. Every 20 >>within this massacre structure the role of the ER variable eyes to control the amplitude off the analog spins to force the amplitude of the expense toe, become equal to certain target amplitude. A Andi. This is known by moderating the strength off the icing complaints or see the the error variable e I multiply the icing complain here in the dynamics off UH, D o p o on Then the dynamics. The whole dynamics described by this coupled equations because the e I do not necessarily take away the same value for the different, I think introduces a >>symmetry in the system, which in turn creates chaotic dynamics, which I'm showing here for solving certain current size off, um, escape problem, Uh, in which the exiled from here in the i r. From here and the value of the icing energy is shown in the bottom plots. 
And you see this Celtics search that visit various local minima of the as Newtonian and eventually finds the local minima Um, >>it can be shown that this modulation off the target opportunity can be used to destabilize all the local minima off the icing hamiltonian so that we're gonna do not get stuck in any of them. On more over the other types of attractors, I can eventually appear, such as the limits of contractors or quality contractors. They can also be destabilized using a moderation of the target amplitude. And so we have proposed in the past two different motivation of the target constitute the first one is a moderation that ensure the 100 >>reproduction rate of the system to become positive on this forbids the creation of any non tree retractors. And but in this work I will talk about another modulation or Uresti moderation, which is given here that works, uh, as well as this first, uh, moderation, but is easy to be implemented on refugee. >>So this couple of the question that represent the current the stimulation of the cortex in machine with some error correction, they can be implemented especially efficiently on an F B G. And here I show the time that it takes to simulate three system and eso in red. You see, at the time that it takes to simulate the X, I term the EI term, the dot product and the rising everything. Yet for a system with 500 spins analog Spain's equivalent to 500 g. O. P. S. So in f b d a. The nonlinear dynamics which, according to the digital optical Parametric amplification that the Opa off the CME can be computed in only 13 clock cycles at 300 yards. So which corresponds to about 0.1 microseconds. And this is Toby, uh, compared to what can be achieved in the measurements tobacco cm in which, if we want to get 500 timer chip Xia Pios with the one she got repetition rate through the obstacle nine narrative. Uh, then way would require 0.5 microseconds toe do this so the submission in F B J can be at least as fast as, ah one gear repression to replicate the post phaser CIA. Um, then the DOT product that appears in this differential equation can be completed in 43 clock cycles. That's to say, one microseconds at 15 years. So I pieced for pouring sizes that are larger than 500 speeds. The dot product becomes clearly the bottleneck, and this can be seen by looking at the the skating off the time the numbers of clock cycles a text to compute either the non in your optical parts, all the dog products, respect to the problem size. And and if we had a new infinite amount of resources and PGA to simulate the dynamics, then the non in optical post can could be done in the old one. On the mattress Vector product could be done in the low carrot off, located off scales as a low carrot off end and while the kite off end. Because computing the dot product involves the summing, all the terms in the products, which is done by a nephew, Jay by another tree, which heights scares a logarithmic any with the size of the system. But this is in the case if we had an infinite amount of resources on the LPGA food but for dealing for larger problems off more than 100 spins, usually we need to decompose the metrics into ah smaller blocks with the block side that are not you here. And then the scaling becomes funny non inner parts linear in the and over you and for the products in the end of you square eso typically for low NF pdf cheap P a. You know you the block size off this matrix is typically about 100. 
So clearly way want to make you as large as possible in order to maintain this scanning in a log event for the numbers of clock cycles needed to compute the product rather than this and square that occurs if we decompose the metrics into smaller blocks. But the difficulty in, uh, having this larger blocks eyes that having another tree very large Haider tree introduces a large finding and finance and long distance started path within the refugee. So the solution to get higher performance for a simulator of the contest in machine eyes to get rid of this bottleneck for the dot product. By increasing the size of this at the tree and this can be done by organizing Yeah, click the extra co components within the F p G A in order which is shown here in this right panel here in order to minimize the finding finance of the system and to minimize the long distance that the path in the in the fpt So I'm not going to the details of how this is implemented the PGA. But just to give you a new idea off why the Iraqi Yahiko organization off the system becomes extremely important toe get good performance for simulator organizing mission. So instead of instead of getting into the details of the mpg implementation, I would like to give some few benchmark results off this simulator, uh, off the that that was used as a proof of concept for this idea which is can be found in this archive paper here and here. I should result for solving escape problems, free connected person, randomly person minus one, spin last problems and we sure, as we use as a metric the numbers >>of the mattress Victor products since it's the bottleneck of the computation, uh, to get the optimal solution of this escape problem with Nina successful BT against the problem size here and and in red here there's propose F B J implementation and in ah blue is the numbers of retrospective product that are necessary for the C. I am without error correction to solve this escape programs and in green here for noisy means in an evening which is, uh, behavior. It's similar to the car testing machine >>and security. You see that the scaling off the numbers of metrics victor product necessary to solve this problem scales with a better exponents than this other approaches. So so So that's interesting feature of the system and next we can see what is the real time to solution. To solve this, SK instances eso in the last six years, the time institution in seconds >>to find a grand state of risk. Instances remain answers is possibility for different state of the art hardware. So in red is the F B G. A presentation proposing this paper and then the other curve represent ah, brick, a local search in in orange and center dining in purple, for example, and So you see that the scaring off this purpose simulator is is rather good and that for larger politicizes, we can get orders of magnitude faster than the state of the other approaches. >>Moreover, the relatively good scanning off the time to search in respect to problem size uh, they indicate that the FBT implementation would be faster than risk Other recently proposed izing machine, such as the Hope you know network implemented on Memory Sisters. That is very fast for small problem size in blue here, which is very fast for small problem size. But which scanning is not good on the same thing for the >>restricted Bosman machine implemented a PGA proposed by some group in Brooklyn recently again, which is very fast for small promise sizes. 
But which canning is bad So that, uh, this worse than the purpose approach so that we can expect that for promise sizes larger than, let's say, 1000 spins. The purpose, of course, would be the faster one. >>Let me jump toe this other slide and another confirmation that the scheme scales well that you can find the maximum cut values off benchmark sets. The G sets better cut values that have been previously found by any other >>algorithms. So they are the best known could values to best of our knowledge. And, um, or so which is shown in this paper table here in particular, the instances, Uh, 14 and 15 of this G set can be We can find better converse than previously >>known, and we can find this can vary is 100 times >>faster than the state of the art algorithm and cp to do this which is a recount. Kasich, it s not that getting this a good result on the G sets, they do not require ah, particular hard tuning of the parameters. So the tuning issuing here is very simple. It it just depends on the degree off connectivity within each graph. And so this good results on the set indicate that the proposed approach would be a good not only at solving escape problems in this problems, but all the types off graph sizing problems on Mexican province in communities. >>So given that the performance off the design depends on the height of this other tree, we can try to maximize the height of this other tree on a large F p g A onda and carefully routing the trickle components within the P G A. And and we can draw some projections of what type of performance we can achieve in >>the near future based on the, uh, implementation that we are currently working. So here you see projection for the time to solution way, then next property for solving this escape problems respect to the prime assize. And here, compared to different with such publicizing machines, particularly the digital and, you know, free to is shown in the green here, the green >>line without that's and, uh and we should two different, uh, prosthesis for this productions either that the time to solution scales as exponential off n or that >>the time of social skills as expression of square root off. So it seems according to the data, that time solution scares more as an expression of square root of and also we can be sure >>on this and this production showed that we probably can solve Prime Escape Program of Science 2000 spins to find the rial ground state of this problem with 99 success ability in about 10 seconds, which is much faster than all the other proposed approaches. So one of the future plans for this current is in machine simulator. So the first thing is that we would like to make dissimulation closer to the rial, uh, GOP or optical system in particular for a first step to get closer to the system of a measurement back. See, I am. And to do this, what is, uh, simulate Herbal on the p a is this quantum, uh, condoms Goshen model that is proposed described in this paper and proposed by people in the in the Entity group. And so the idea of this model is that instead of having the very simple or these and have shown previously, it includes paired all these that take into account out on me the mean off the awesome leverage off the, uh, European face component, but also their violence s so that we can take into account more quantum effects off the g o p. O, such as the squeezing. And then we plan toe, make the simulator open access for the members to run their instances on the system. 
There will be a first version in September that will >>be just based on the simple common line access for the simulator and in which will have just a classical approximation of the system. We don't know Sturm, binary weights and Museum in >>term, but then will propose a second version that would extend the current arising machine to Iraq off eight f p g. A. In which we will add the more refined models truncated bigger in the bottom question model that just talked about on the supports in which he valued waits for the rising problems and support the cement. So we will announce >>later when this is available, and Farah is working hard to get the first version available sometime in September. Thank you all, and we'll be happy to answer any questions that you have.
SUMMARY :
The talk presents a simulator of the coherent Ising machine implemented on FPGA, in which chaotic dynamics introduced by controlling the amplitudes of the analog spins drive the search for the ground state of Ising problems, with the matrix-vector product as the main computational bottleneck. The design scales well: it reaches or improves on the best known cut values for G-set Max-Cut benchmarks, including instances 14 and 15, and finds them about 100 times faster than the state-of-the-art CPU algorithm, without hard parameter tuning. Projections in which time to solution grows as an exponential of the square root of the problem size suggest that 2,000-spin SK problems could be solved with 99% success probability in about 10 seconds. Future plans include a quantum Gaussian model that is closer to the real optical system, an open-access command-line version of the simulator in September, and a second version running on a rack of eight FPGAs with support for real-valued weights.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brooklyn | LOCATION | 0.99+ |
September | DATE | 0.99+ |
100 times | QUANTITY | 0.99+ |
Berkeley | LOCATION | 0.99+ |
Hong Kong Noise Group | ORGANIZATION | 0.99+ |
CIA | ORGANIZATION | 0.99+ |
300 yards | QUANTITY | 0.99+ |
1000 spins | QUANTITY | 0.99+ |
India | LOCATION | 0.99+ |
15 years | QUANTITY | 0.99+ |
second version | QUANTITY | 0.99+ |
first version | QUANTITY | 0.99+ |
Farah | PERSON | 0.99+ |
second part | QUANTITY | 0.99+ |
first part | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
500 spins | QUANTITY | 0.99+ |
Toshiba | ORGANIZATION | 0.99+ |
first step | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
more than 100 spins | QUANTITY | 0.99+ |
Scott | PERSON | 0.99+ |
University of Tokyo | ORGANIZATION | 0.99+ |
500 g. | QUANTITY | 0.98+ |
Mexican | LOCATION | 0.98+ |
both | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Kasich | PERSON | 0.98+ |
first version | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Iraq | LOCATION | 0.98+ |
third part | QUANTITY | 0.98+ |
13 clock cycles | QUANTITY | 0.98+ |
43 clock cycles | QUANTITY | 0.98+ |
first thing | QUANTITY | 0.98+ |
0.5 microseconds | QUANTITY | 0.97+ |
Jay | PERSON | 0.97+ |
Haider | LOCATION | 0.97+ |
15 | QUANTITY | 0.97+ |
one microseconds | QUANTITY | 0.97+ |
Spain | LOCATION | 0.97+ |
about 10 seconds | QUANTITY | 0.97+ |
LPGA | ORGANIZATION | 0.96+ |
each | QUANTITY | 0.96+ |
500 timer | QUANTITY | 0.96+ |
one strategy | QUANTITY | 0.96+ |
both cases | QUANTITY | 0.95+ |
one error | QUANTITY | 0.95+ |
20 watts | QUANTITY | 0.95+ |
Nina | PERSON | 0.95+ |
about 0.1 microseconds | QUANTITY | 0.95+ |
nine | QUANTITY | 0.95+ |
each graph | QUANTITY | 0.93+ |
14 | QUANTITY | 0.92+ |
CME | ORGANIZATION | 0.91+ |
Iraqi | OTHER | 0.91+ |
billions of neurons | QUANTITY | 0.91+ |
99 success | QUANTITY | 0.9+ |
about 100 | QUANTITY | 0.9+ |
larger than 500 speeds | QUANTITY | 0.9+ |
Vector | ORGANIZATION | 0.89+ |
spins | QUANTITY | 0.89+ |
Victor | ORGANIZATION | 0.89+ |
last six years | DATE | 0.86+ |
one | QUANTITY | 0.85+ |
one analog | QUANTITY | 0.82+ |
hamiltonian | OTHER | 0.82+ |
Simulator | TITLE | 0.8+ |
European | OTHER | 0.79+ |
three neuro inspired principles | QUANTITY | 0.78+ |
Bosman | PERSON | 0.75+ |
three system | QUANTITY | 0.75+ |
trump | PERSON | 0.74+ |
Xia Pios | COMMERCIAL_ITEM | 0.72+ |
100 | QUANTITY | 0.7+ |
one gear | QUANTITY | 0.7+ |
P. | QUANTITY | 0.68+ |
FPD eight | COMMERCIAL_ITEM | 0.66+ |
first one | QUANTITY | 0.64+ |
Escape Program of Science 2000 | TITLE | 0.6+ |
Celtics | OTHER | 0.58+ |
Toby | PERSON | 0.56+ |
Machine | TITLE | 0.54+ |
Refugee A | TITLE | 0.54+ |
couple | QUANTITY | 0.53+ |
Tektronix | ORGANIZATION | 0.51+ |
Opa | OTHER | 0.51+ |
P. J. O | ORGANIZATION | 0.51+ |
Bozeman | ORGANIZATION | 0.48+ |
Scott Hunter, AstraZeneca | Commvault GO 2019
>>Live from Denver, Colorado, it's theCUBE, covering Commvault GO 2019. Brought to you by Commvault. >>Welcome to theCUBE. Lisa Martin with Stu Miniman, covering Day One of Commvault GO '19 from Colorado. Stu and I are pleased to welcome to theCUBE one of Commvault's longtime customers from AstraZeneca: Scott Hunter, Global Infrastructure Services Director. Hey, Scott. >>Good afternoon. >>Good afternoon, and welcome to theCUBE. AstraZeneca is a name that a lot of folks probably know in the biopharmaceutical space, but for those that don't, give us an overview of AstraZeneca, who you are and what you do. >>We're a biopharmaceutical company with a global presence. We make primary care and specialty medicines that are sold throughout the world, everything from coronary care to oncology, plus a massive diabetes franchise, as well as other core therapies used by our patients. >>All right, so Scott, maybe bring us inside: what does data mean to your organization? >>It means lots of things, and it cuts through the whole organization, from helping frame the next molecules to discover and the bleeding-edge medicines for our patients, all the way to how our sales and commercial people use data to identify the patients to treat, and of course the back-office IT enabling functions like HR and finance. Data really is fundamental to the business. >>You've got Global Infrastructure Services; could you lay out a little of what that entails, how data fits into the picture, what's in your purview and where you have to work with other groups? >>My area looks after architecture, design, governance and cybersecurity for infrastructure services at AstraZeneca. We look after data wherever it lives, whether on premise within our own data centers or in the public cloud. As you can imagine, data movement, and an environment you can rely on, is pivotal to the company being successful going forward. >>Every time we talk about data being the lifeblood of an organization, or the new oil, and we're talking about patient information and information that could be used to find the next cure for a particular disease, this is literally life-and-death data. The ability to have access to it, but also to make sure it's protected and secure, is table stakes. So talk to us about when you came on board, around six years ago you said. Knowing how critical data is to AstraZeneca's business, what was the data strategy like a few years ago? >>It was pretty convoluted six years ago, when IT at AstraZeneca was largely outsourced to various companies. The strategy, basically, was that we didn't really have much of a strategy: we were looking after our data with five or six different backup products, and a similar sprawl of storage products. Over the last five or six years we have been distilling that down to one key data storage provider and one product for backup and restore, which is Commvault. We do still have some legacy in a very few environments, but they are being decommissioned and moved over to Commvault. >>From an IT initiative perspective: a few years ago you didn't have a data strategy. What was the push from the top down, from the C-suite or maybe from the board, saying, hey, guys, we have to get our hands around this data?
I mean, this was before GDPR, but in terms of the opportunities it offered the company, where did that initiative come from and how did it come about? You went a couple of different routes; talk to us a little bit about that initiative and the initial directions that led to where you are now. >>Our CIO obviously had a vision for how the company was going to progress, and early in his tenure a massive pillar of that was understanding what our data was, how it was used and, most importantly, how it was protected. That is what drove the insourcing from the likes of HCL and the other outsourcing partners into looking after our own environment and setting our own IT direction and strategies, so that the company can grow organically based on the best ways of using the data we hold, whether through collaboration with other biopharma companies for the greater good, or finding that next medical molecule to help patients. >>Scott, have you been to the Commvault GO shows before? >>This is my second time. >>Tell us a little bit about what brings you to the show. There are a lot of announcements here; anything jump out so far? >>Yeah, it's interesting to see some of the new collaborations and partnerships Commvault has been making over the last little while. The Hedvig acquisition looks really interesting, as does the Metallic venture aimed at public SaaS; it looks like it expands the environments that Commvault can play in. So I think those are two very good moves. >>So you're leveraging public cloud. How does Commvault fit into that? Where do you use it? >>We use Commvault for backing up and restoring our public cloud environments, whether the workloads are in AWS or in the Azure stack, and we're in the process of bringing production environments online in Google Cloud Platform as well. Having that one backup-and-restore strategy is pivotal, as is being able to move our data between them using the same solution, which is very powerful. >>One of the things I noticed in the video Commvault made with you, and they shared a quote from you during the keynote, is that you said this constant evolution Commvault is delivering is one of the things you really like. From a business perspective, Commvault has done a lot of evolving in the last nine months with the new leadership team, and you were talking about some of the new technology and announcements. From that evolutionary perspective, what do you like about it? Are you seeing that they're really listening, looking at use cases like yours and learning from them, not only to make the technology better but to expand their portfolio? >>A lot of it is based on the constant evolution of the APIs that Commvault exposes, and on the new parts of technology they keep adding, whether that's backing up VMs, backing up Kubernetes containers, or using it in serverless environments. It's valuable to all of us to make sure that whatever comes out of these new computing environments, we understand what they do and we can either protect that data or use it in a different manner. So for us, that's great.
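Scott's point about everything being API-driven can be sketched generically. The snippet below shows how a pipeline step might trigger a backup job and check its status over REST; to be clear, the endpoint paths, payload fields, and token handling are hypothetical and invented for illustration only, and they are not Commvault's actual API.

```python
import requests  # assumed to be available in the pipeline environment

BASE_URL = "https://backup.example.internal/api"   # hypothetical endpoint
TOKEN = "..."                                      # injected by the pipeline, not hard-coded

def trigger_backup(client_name: str) -> str:
    """Kick off a backup job for one client and return its job id."""
    resp = requests.post(
        f"{BASE_URL}/jobs",
        json={"client": client_name, "type": "incremental"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["jobId"]

def job_succeeded(job_id: str) -> bool:
    """Poll the job status once; a real pipeline step would retry until the job finishes."""
    resp = requests.get(
        f"{BASE_URL}/jobs/{job_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("status") == "Completed"
```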
Because our own CI/CD pipelines are all API-driven, being able to use the Commvault products in the same kind of fashion is great. >>So, Scott, do you keep up with the quarterly cadence Commvault is on, and is there anything on the roadmap you're asking for that would make your environment even better? >>We use the 90-day cadences for ourselves as well, to make sure our own strategies are kept in check and that we can take advantage of new releases coming not only from Commvault but from other parts of our infrastructure, whether that's our storage or the various other providers we use, to ensure our data is accessible and used in a proper fashion. >>I want to get into the use case a little bit. I know you had a number of different competing backup solutions in place. Did you start from a data-strategy perspective within one division or one part of the company, maybe as a pilot? You ended up with a whole bunch of different software solutions in there, and now you've standardized on Commvault; walk us through that process, those decisions, and what you're getting by having this single pane of glass. >>Some of the backup-and-restore sprawl was caused by individual parts of the business being able to do their own thing with their own IT budget. Some parts of the business wanted to use backup from Veritas or the EMC products that were in play at the time, and when we were sourced between IBM and HCL, each had its preferred products for our data centers, which added yet another backup-and-restore product to the mix. It became untenable when we started insourcing: building a support team and organization to look after that many technologies was pretty difficult, hence the move to a one-stop strategy. >>You said in the video that Commvault had a significantly higher success rate compared to some of the other solutions, so that must have made it a no-brainer. >>Backup of our critical applications is 99.8% successful, day in and day out, and that's what Commvault gives us. That was a great comfort, and as more and more of our applications move onto the Commvault platform we get a more rounded approach: not just backup success but restore success as well, plus the ability to use the analytics in a more timely fashion, again for drug discovery and manufacturing research. >>I know you spoke with a number of Commvault customers before you made this decision, and now here you are on the other side of the coin, talking to a lot of Commvault customers. What advice would you give companies in any industry who, in almost 2020, may not have a really robust data strategy? What are your recommendations? >>Look at the whole platform, not just the backup-and-restore solution. The code base Commvault has put together is very powerful: from the data index, with the information going through the product, you can use it for things like DR and HA, and also for migration of workloads to different data centers or different parts of the public cloud. The new vision they have for analytics is very powerful as well; I forget the name of the tool announced today, but it's something we've started to use ourselves in a big way. We've got a little data science team within my operation that is mining our data so we can operate in a more efficient manner, and feeding that into our data architecture so the scientists can take advantage of what we already hold within our own confines and match it with what they need for new discoveries. >>Scott, thank you for joining Stu and me on theCUBE today and sharing what you're doing at AstraZeneca. We look forward to hearing that the next molecule you discover is some great breakthrough. >>Thank you. >>For Stu Miniman, I'm Lisa Martin. You're watching theCUBE from Commvault GO '19.
SUMMARY :
Scott Hunter of AstraZeneca joins theCUBE at Commvault GO 2019 to describe how the company moved from a convoluted, outsourced data protection estate, with five or six different backup products and a similar sprawl of storage tools, to a single backup-and-restore platform on Commvault alongside one key storage provider. Backups of critical applications now succeed 99.8% of the time, Commvault protects workloads in AWS, Azure, and soon Google Cloud Platform, and its API-driven design plugs into AstraZeneca's CI/CD pipelines. Hunter also highlights the Hedvig acquisition and the Metallic SaaS offering, the value of Commvault's 90-day release cadence, and advises other organizations to look beyond backup to disaster recovery, migration, and analytics.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Scott | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
AstraZeneca | ORGANIZATION | 0.99+ |
Colorado | LOCATION | 0.99+ |
FBI | ORGANIZATION | 0.99+ |
99.8% | QUANTITY | 0.99+ |
Veritas | ORGANIZATION | 0.99+ |
90 day | QUANTITY | 0.99+ |
Stew | PERSON | 0.99+ |
Isabella | PERSON | 0.99+ |
Denver, Colorado | LOCATION | 0.99+ |
Second | QUANTITY | 0.99+ |
one part | QUANTITY | 0.99+ |
six years ago | DATE | 0.99+ |
one division | QUANTITY | 0.98+ |
second time | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
11 stop | QUANTITY | 0.97+ |
HDL | ORGANIZATION | 0.97+ |
Secret Service | ORGANIZATION | 0.95+ |
HCL Congress | ORGANIZATION | 0.94+ |
one | QUANTITY | 0.94+ |
few years ago | DATE | 0.94+ |
First | QUANTITY | 0.93+ |
each pdp | QUANTITY | 0.92+ |
six different backup products | QUANTITY | 0.91+ |
Danica | ORGANIZATION | 0.91+ |
Scott Hunter | PERSON | 0.91+ |
Lourdes | LOCATION | 0.91+ |
100 | QUANTITY | 0.91+ |
2020 | DATE | 0.89+ |
Khan Vault | PERSON | 0.89+ |
Two different defense centers | QUANTITY | 0.88+ |
O. R. Xia Old Smalley | PERSON | 0.88+ |
Day one | QUANTITY | 0.85+ |
one key data storage | QUANTITY | 0.84+ |
2019 | DATE | 0.84+ |
last nine months | DATE | 0.8+ |
Hedvig | PERSON | 0.76+ |
single pane | QUANTITY | 0.76+ |
19 | OTHER | 0.75+ |
Google Cloud Platform | TITLE | 0.72+ |
each HR | QUANTITY | 0.67+ |
Deron | ORGANIZATION | 0.66+ |
Cube | ORGANIZATION | 0.66+ |
last 56 years | DATE | 0.65+ |
Val to Tolliver | TITLE | 0.63+ |
Scott | ORGANIZATION | 0.61+ |
Commvault | TITLE | 0.6+ |
Around | DATE | 0.59+ |
R H | TITLE | 0.57+ |
Cho | PERSON | 0.46+ |
go 19 | COMMERCIAL_ITEM | 0.45+ |
GO | COMMERCIAL_ITEM | 0.42+ |
Bella | PERSON | 0.42+ |
M | TITLE | 0.27+ |
two | OTHER | 0.26+ |
Recep Ozdag, Keysight | CUBEConversation
>> From our studios in the heart of Silicon Valley, Palo Alto, California, it's a CUBE Conversation. >> Hey, welcome back everybody, Jeff here with theCUBE. We're in the studio for a CUBE Conversation. It's the middle of the summer, the conference season has slowed down a little bit, so we get a chance to do more CUBE Conversations, which is always great. Excited to have our next guest: he's Recep Ozdag, VP and GM at Keysight. Recep, great to see you. >> Thank you for hosting us. >> Yeah. So we've had Marie on a couple of times, and we had Bethany on a long time ago, before the acquisition. But for people that aren't familiar with Keysight, give us a quick overview. >> Sure, sure. I'm within the Ixia Solutions Group. Ixia was founded back in '97 and IPO'd around 2000. It really started as a test and measurement company, and quickly after the IPO it became the number one vendor in the space. It grew quickly, and around 2012 and 2013 it acquired two companies, Net Optics and Anue. Net Optics and Anue were in the visibility, or monitoring, space, selling taps, bypass switches, and network packet brokers, so that formed the Visibility Group within Ixia. Then around 2017 Keysight acquired Ixia and we became ISG, the Ixia Solutions Group. Now, Keysight is also a very large test and measurement company; it is actually the original HP startup that started in Palo Alto many years ago. HP, of course, grew; it also started as a test and measurement company and later moved into printers and servers. HP spun off Agilent, and Agilent became the test and measurement company. Then around 2014 or '15, Agilent spun off its test and measurement portion, which became Keysight, while Agilent continued as a life sciences organization. So Keysight really got its name around 2014 after spinning off, and it acquired Ixia in 2017. The majority of the business is test and measurement, but we do have that visibility and monitoring organization too. >> Okay, so you do the test and measurement really on devices, kind of pre-production, to make sure these things are up to speed, and then you're actually doing the monitoring in live production systems? >> Mostly. The only thing I would add is that now we are getting into live network testing too. We see that mostly in the service provider space: before you turn on a service, you need to make sure that all the devices and the service itself come up correctly. But we're also seeing it in enterprises, particularly with security assessments, breach and attack assessments: is your IT organization really protecting the network? So we're seeing test pulled in there as well, particularly for security. As you say, it's mostly device testing, but it's extending into network infrastructure and security. >> Right. So you've been in the industry for a while, you've been through a couple of acquisitions, and you've seen a lot of trends. There are a lot of big macro things happening right now in the industry, exciting times, and one of them, which you actually just talked about at Cisco Live a couple of weeks ago, is edge computing. There's a lot of talk about edge: edge is the new cloud, how much compute can move to the edge, what do you do in a crazy oil field with hot temperatures and no power? I wonder if you can share some of your observations about edge,
your point of view as to where we're heading, and what people should be thinking about when they consider what edge means to their business. >> Absolutely. When I say edge computing, I typically include IoT and edge networks along with remote and branch offices. We can all see the impact of IoT: security cameras, thermostats, smart homes, factory automation, hospital automation; even planes have sensors on their engines right now for monitoring and diagnostics. So that's one group. But then, in our everyday lives, enterprises are growing very quickly and they have remote and branch offices; more people are working remotely, more people are working from home. That means more data is being generated at the edge, whether it's IoT sensors or the edge computing we see with oil and gas companies, and it doesn't really make sense to ship all that data somewhere else before acting on it. Just imagine a self-driving car: you need to capture a lot of data and process it, and you can't just send it to the cloud, wait for a decision to be made, and have it come back before you turn left or right. You need to process that data at the edge, where the source of the data is, and that means pushing more of the compute infrastructure closer to the source. It also means running business-critical applications closer to the source, so it's a much more massively distributed compute architecture. What happens then is that you have to reliably connect all these devices, so connectivity becomes important; but as you distribute compute as well as applications, your attack surface increases, because all of these devices are very vulnerable. We're probably adding about five million IoT devices every day to our networks, and many of these devices are never properly tested. You probably know from your own home that you can just buy something and easily connect it to your WiFi; similarly, people buy something, go to work, and connect it to the WiFi there, and now that device is connected to the entire network. A vulnerability in any of these devices exposes the entire network to that same vulnerability. So our attack surface is increasing, and connection reliability as well as security for all these devices is a challenge. We enjoy edge computing, IoT, and branch and remote offices, but they do pose those challenges, and that's what we're here to do with our technology partners: solve these issues. >> Right. It's interesting to me on the edge, because you still have the three big compute elements. You've got networking, which is going to be addressed by 5G and a lot better bandwidth and connectivity; but you still have storage and you still have compute, and you've got to get those things power. So as you think about the distribution of that compute and storage at the edge versus in the cloud, you've got the latency issue. It seems like a pretty delicate balancing act: people are going to have to tune these systems to figure out how much to allocate where, and you'll have physical limitations, like the power plant with a sensor out in the middle of nowhere. >> It's a great point. You typically get agility at the edge, but obviously not a lot of power, because these devices are small.
Even if you take a remote or branch office with 50 to 100 employees, there's only so much compute that you have there; but you need to be able to make decisions quickly, so the agility is there. Obviously, the vast amounts of compute and storage are still in your centralized data center, whether that's your private cloud or the public cloud. So how do you make the compromise? When do you run applications at the edge, and when do you run applications in the cloud, private or public? It is, in fact, a compromise: you have to balance it, and it might change all the time. If you look at the traditional history of compute, we had mainframes, which were centralized, and then it became distributed, then centralized, then distributed again. This changes all the time and you have to make decisions, which brings up the issue of, I would say, hybrid IT. Enterprises have the same issue with a hybrid IT or multi-cloud strategy: where do you run the applications? Even if you forget about the edge, do you run on-prem, do you run in the public cloud, do you move workloads between cloud service providers? Even that is a hard optimization problem, and it becomes even bigger with edge computing. >> Right. So the other thing that we've seen time and time again is a huge trend: software-defined. We've seen it in the networking space and in compute; software-defined is such a big deal now. So when you look at it from a test and measurement point of view, as people are building out these devices, obviously a ton of great functional capability is suddenly available to people. But in terms of challenges, and in terms of what you're thinking about in software-defined, because you're testing and measuring all this stuff, what's the good and the bad? How should people think about the challenges of software-defined so they can take advantage of the tremendous opportunity? >> That's a really good point. With software-defined networking, what we're really seeing is disaggregation. You used to have these monolithic devices that you would purchase from one vendor, and that one vendor would guarantee that everything just works perfectly. What software-defined networking allows, or has created, is a disaggregated model. You can take that monolithic application apart: a server or hardware infrastructure, then maybe a hypervisor or software layer, hardware abstraction layers, and many, many layers. If you're trying to get all of that to work reliably, it means that now, in a way, the responsibility is on you to test every one of these and make sure everything works together, because now you have choice. Which software package should I install, and from which vendor? There are always slight differences. Which NIC vendor should I use, an FPGA SmartNIC or a regular NIC? Go up a layer: what kind of acceleration should I use, DPDK? There are so many options, and you are responsible for them. So with SDN you do get the advantage of choice, just like on our servers and PCs, but it means you have to test everything and make sure it all works. That means more testing at the device level and more testing as the service is being brought up; that's the pre-deployment stage. And once you deploy the service,
now you have to continually monitor it to make sure it's working as you expected. So you get more choice and more diversity, and of course with disaggregation you can take advantage of improvements at the hardware layer or the software layer; that's the disaggregation advantage. But it means more work on test as well as monitoring, so there is always a compromise. >> A trade-off, yeah. So, different topic: security. We were at RSA this year in the Forescout booth and had a great chat with Michael DeCesare there. He talked about what you touched on a little bit, the increasing surface area for attack, and we all know the statistics on how long it takes people to even know that they've been breached. But Mike was funny: they have a very simple sales pitch. They basically put their sniffer on your network and tell you that you've got eight times more devices on the network than you thought, because people are connecting all types of things. So when you look at monitoring and test, especially with this increased surface area of all these IoT devices, and with bring-your-own-device, and it's funny, HVAC seems to be a really great place for bad guys to get in, and I heard the other day that at a casino a connected thermometer in a fish tank in the lobby was the access point: how is this changing your world? How do you think about security? Because it seems like, in the end, everyone gets breached at some point, so it's almost more about how fast you can catch it, how you minimize the damage, and how you take care of it, versus the assumption that you can stop the breaches. >> That was a really good point you mentioned at the end: it's just better to assume that you will be breached at some point and ask how quickly you can detect it. On average, according to research, it takes an enterprise about six months, and of course there are enterprises where it takes a couple of years before they realize it. We hear about it on the news: millions of records exposed, billions of dollars of market cap lost. Forescout is a very close technology partner, and we typically deploy solutions together with technology partners like them, whether it's APM, NPM, but very importantly security. If you think about it, there are terabytes of data in the network, and many of these tools look at the packet data; but you can't just take those terabytes of data and throw them at all the tools. It becomes financially impossible to provide security and deploy such tools in a very large network. So this is where we come in: with the taps we access the data, and with the network packet brokers we essentially groom it, filtering it down to maybe the tens or hundreds of gigs that are really, really important, and then we feed that to our technology partners such as Forescout and many others. That way they can focus on providing security by looking at the packets that really matter. For example, some solutions only need to look at the packet header; you don't really need to see the payload. So if somebody is streaming Netflix or YouTube, maybe you just need to send the first megabyte of data, not the whole hundreds of gigs of that video, and that helps increase the efficiency of the tool.
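As a rough illustration of the grooming Recep describes, forwarding only headers or only the first chunk of each flow to a downstream tool, here is a toy Python sketch. It is not Ixia or Keysight packet-broker code; the flow key, the header size, and the per-flow byte cap are assumptions chosen only to show the idea.

```python
from collections import defaultdict

MAX_BYTES_PER_FLOW = 1_000_000  # assumption: forward roughly the first megabyte per flow
HEADER_BYTES = 64               # assumption: enough bytes to cover L2-L4 headers

def flow_key(pkt):
    """5-tuple flow key; pkt is a plain dict here purely for illustration."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"], pkt["proto"])

def groom(packets, header_only=False):
    """Yield only the subset of traffic a downstream monitoring tool actually needs."""
    sent = defaultdict(int)  # bytes already forwarded per flow
    for pkt in packets:
        key = flow_key(pkt)
        if header_only:
            # Header-only mode: truncate every packet to its headers.
            yield {**pkt, "payload": pkt["payload"][:HEADER_BYTES]}
            continue
        if sent[key] >= MAX_BYTES_PER_FLOW:
            continue  # drop the rest of a long flow, e.g. a video stream
        sent[key] += len(pkt["payload"])
        yield pkt
```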
So the end customer can actually get a good ROI on that investment, and it allows Forescout, or any of the tech partners, to look at what's really important and do a better job of investigating: hey, have I been hacked? And of course it has to be stateful, meaning it's not just looking at one data flow on one side; it's looking at the whole communication, so you can understand: is this a malicious application that has now finished downloading other malicious applications and is infiltrating my system? Is it a DDoS attack? Is it a hack? There's a whole ecosystem of attacks, and that's why we have so many companies and startups in this space. >> It's interesting. We had Tom Siebel on a little while ago, actually at an AWS event, and his explanation of what big data means is that there's no sampling anymore. Prior to the big data days we would take a sample of data after the fact and then try to do some analysis; now we have real-time streaming engines, so we're getting all the data basically instantaneously and making decisions. But what you just brought up is that you don't necessarily want all the data all the time, because it can overwhelm and stress the system; there needs to be a much better management approach to that. And as I look at some of the notes, you guys are now deploying 400 gigabit, which is bananas, because it seems like only yesterday that 100 gigabit Ethernet was a big deal. Talk a little bit about the hard-core technology changes that are impacting data centers and deployments, and, as this bandwidth goes through the ceiling, what people are physically having to do to handle it. >> Sure, sure. It's amazing how it took some time to go from 1 to 10 gig and then to 40 gig, but that time frame is getting shorter and shorter, from 40 to 100 to 400. I don't even know how we're going to get to the next phase, because the demand is there, and the demand is coming from a number of trends. One is really 5G, or the preparation for 5G. A lot of service providers have started doing trials and are upgrading their infrastructure, because 5G is going to make it easier to access vast amounts of data quickly at the edge, and whenever you make something easy for the consumer, they will consume more of it. So that's one aspect: the preparation for 5G is increasing the need for bandwidth and an infrastructure overhaul. The other piece is that, with virtualization, we're generating more east-west traffic, and because of distributed edge computing that east-west traffic can still traverse data centers and geographies. It's not just contained within a server or within a rack; it actually goes to different locations. That also means your data center interconnect has to support 400 gig. So a lot of network equipment manufacturers, NEMs as we typically call them, are releasing or are about to release 400 gig devices. On the test side, they use our solutions to test those devices, because they obviously want to release them to the standards and make sure they work. So that's the pre-deployment phase.
But once these 400 gig devices are deployed, typically by service providers, though we're slowly starting to see large enterprises deploy them as well because of virtualization and edge computing, then the question is: how do you make sure your 400 gig infrastructure is operating at the capacity you want, for NPM and APM, while you're also providing security? So there's a pre-deployment phase that we help with on the test side, and then a post-deployment monitoring phase. 5G is a big one: even though we haven't actually turned on 5G services yet, there's tremendous investment going on. In fact, Keysight, the larger organization, is helping with a lot of this device testing too, so it's not just Ixia but Keysight, and it's consuming a lot of our time because we're having a lot of engagements on the cell phone side, on the endpoint side. It's a very interesting time we're living in, because the changes are becoming more and more frequent, and you have to adapt and make sure you're leading that wave. >> In preparing for this I saw you in another video, I can't remember which one, but your quote was, "they didn't create electricity by improving candles." I love that line; I'm going to steal it, and I'll give you credit. But as you look back, I don't think most people have really grasped the step function that 5G is. They talk about 5G and their phone, but it's not about your phone: this is the first network built for machines. That's right: machine data, the speed of machine data, and the quantity of machine data. As you sit back and reflect, you've been in this business for a while and you look at 5G; when you're sitting around talking to your friends at a party, maybe some family members who aren't in the business, how do you tell them what this means? What are people not really seeing when they think it's just going to be a handset upgrade and completely miss the boat? >> Yeah, I think the regular consumer just thinks it's another handset: I went from 3G to 4G, I saw a bump in speed, and some handset manufacturers are already advertising 5G-capable handsets, so I'll just go out and buy another cell phone. But behind the curtain, under the hood, there's a massive infrastructure overhaul that a lot of service providers are going through, and it's scary, because I would say a lot of them are not necessarily prepared. The investment pouring in is staggering, and the help they need is one area we're trying to accommodate, because cell towers are being replaced, end devices are being replaced, data centers are being upgraded, small cell sites are going in: how do you provide coverage? And what is the killer use case? Most likely it's going to be manufacturing, precisely because, as you said, it's machine-to-machine communication; that's where connected hospitals and connected manufacturing come into play. It's all this machine-to-machine communication generating vast amounts of data, and that ties back to edge computing, where the edge generates the data; you then send some of that data, not all of it but some, to a centralized cloud, where you develop machine learning algorithms that you then push back to the edge.
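A minimal sketch of the loop just described, with data summarized at the edge, a model trained centrally, and the result pushed back out for local decisions, might look like the following. The functions and the anomaly threshold are assumptions for illustration and do not represent any specific Keysight or operator workflow.

```python
import statistics

def summarize_at_edge(readings):
    """Edge node reduces raw sensor data to a compact summary before uploading it."""
    return {"mean": statistics.fmean(readings),
            "maximum": max(readings),
            "count": len(readings)}

def train_centrally(summaries):
    """Cloud side: fit a trivial anomaly threshold from many edge summaries."""
    means = [s["mean"] for s in summaries]
    return {"threshold": statistics.fmean(means) + 3 * statistics.pstdev(means)}

def detect_at_edge(model, reading):
    """The pushed-back model lets the edge decide locally, with no round trip."""
    return reading > model["threshold"]

# Toy end-to-end pass over three edge batches.
edge_batches = [[1.0, 1.2, 0.9], [1.1, 1.3, 1.0], [0.8, 1.0, 1.1]]
model = train_centrally([summarize_at_edge(b) for b in edge_batches])
print(detect_at_edge(model, 5.0))  # True: the anomaly is flagged locally at the edge
```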
The edge then becomes more intelligent and we get better productivity. But it's all machine-to-machine communication; I would say most of the 5G traffic is going to be machine-to-machine. Some small portion will be consumers FaceTiming, messaging, and streaming, but that's exactly what's going to change. Of course, we'll see other changes in our day-to-day lives too. A couple of companies attempted live gaming from the cloud in the past, and it didn't really work out, just because the network latency wasn't there. We'll see that again, and we're seeing some of the products coming out from the likes of Google and other companies trying to push gaming into the cloud; it's something that wasn't really successful in the past. Those are the kinds of things consumers will see more of in their day-to-day lives, but the bigger impact is going to be for the enterprise. >> Recep, thanks for taking some time and sharing your insight. You guys get to see a lot of stuff: you've been in the industry for a while, and you get to test all the new equipment that everyone is building, so you have a really interesting vantage point to watch these developments. Really exciting times. >> Thank you for inviting us. Great to be here. >> All right, he's Recep, I'm Jeff. You're watching theCUBE from our Palo Alto studios. Thanks for watching, and we'll see you next time.
SUMMARY :
Recep Ozdag, VP and GM at Keysight, traces Ixia's path from test and measurement into visibility and monitoring and its 2017 acquisition by Keysight, then walks through the trends reshaping the industry: edge computing and IoT pushing compute, storage, and business-critical applications closer to where data is generated while expanding the attack surface; the disaggregation brought by software-defined networking creating more choice but far more testing and monitoring work; taps and network packet brokers grooming terabytes of traffic down to the packets that security partners like Forescout actually need; and the jump to 400 gig infrastructure driven by 5G preparation and growing east-west traffic. He argues that 5G is really the first network built for machine-to-machine communication, with edge-generated data feeding centrally trained machine learning models that are pushed back out to the edge.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
2017 | DATE | 0.99+ |
1 | QUANTITY | 0.99+ |
Tom Siebel | PERSON | 0.99+ |
Recep Ozdag | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
400 gig | QUANTITY | 0.99+ |
40 gig | QUANTITY | 0.99+ |
400 gig | QUANTITY | 0.99+ |
Iraq | LOCATION | 0.99+ |
Jeff | PERSON | 0.99+ |
400 devices | QUANTITY | 0.99+ |
tens | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
2013 | DATE | 0.99+ |
Geoffrey | PERSON | 0.99+ |
Marie | PERSON | 0.99+ |
two companies | QUANTITY | 0.99+ |
five year | QUANTITY | 0.99+ |
40 year | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
97 | DATE | 0.99+ |
10 gig | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
Four Scout | ORGANIZATION | 0.99+ |
400 | QUANTITY | 0.99+ |
about six months | QUANTITY | 0.99+ |
Scott | PERSON | 0.98+ |
Exhale | ORGANIZATION | 0.98+ |
billions of dollars | QUANTITY | 0.98+ |
eight times | QUANTITY | 0.98+ |
Xia | ORGANIZATION | 0.98+ |
I S G | ORGANIZATION | 0.98+ |
This year | DATE | 0.98+ |
Bethany | PERSON | 0.97+ |
Leighton | ORGANIZATION | 0.97+ |
agile | TITLE | 0.97+ |
one aspect | QUANTITY | 0.97+ |
Cube | ORGANIZATION | 0.96+ |
52 2 100 employees | QUANTITY | 0.96+ |
Sheen | PERSON | 0.96+ |
YouTube | ORGANIZATION | 0.96+ |
EJ | ORGANIZATION | 0.96+ |
2012 | DATE | 0.96+ |
hundreds of gigs | QUANTITY | 0.96+ |
one | QUANTITY | 0.95+ |
two days | QUANTITY | 0.95+ |
one vendor | QUANTITY | 0.95+ |
one area | QUANTITY | 0.95+ |
Syria | LOCATION | 0.94+ |
400 gigabit | QUANTITY | 0.94+ |
100 gigabyte | QUANTITY | 0.94+ |
five senior | QUANTITY | 0.93+ |
48 | QUANTITY | 0.93+ |
2014 | DATE | 0.92+ |
Five g | ORGANIZATION | 0.92+ |
one group | QUANTITY | 0.91+ |
Trans | ORGANIZATION | 0.91+ |
Palo Alto, California | LOCATION | 0.9+ |
first mega byte | QUANTITY | 0.9+ |
Bender | PERSON | 0.9+ |
four scout booth | QUANTITY | 0.89+ |
Visibility Group | ORGANIZATION | 0.89+ |
four machines | QUANTITY | 0.89+ |
each computing | QUANTITY | 0.88+ |
five communication | QUANTITY | 0.88+ |
Silicon Valley, | LOCATION | 0.87+ |
five G. | ORGANIZATION | 0.87+ |
Four | QUANTITY | 0.86+ |
three G | ORGANIZATION | 0.86+ |
100 | QUANTITY | 0.86+ |
couple weeks ago | DATE | 0.86+ |
15 | QUANTITY | 0.85+ |
one side | QUANTITY | 0.84+ |
Net optics | ORGANIZATION | 0.84+ |
about millions of records | QUANTITY | 0.83+ |
108 | QUANTITY | 0.82+ |
five G. | TITLE | 0.81+ |
H v A c | COMMERCIAL_ITEM | 0.81+ |
Michael the | PERSON | 0.8+ |
about 5,000,000 I ot | QUANTITY | 0.8+ |
a couple of years | QUANTITY | 0.79+ |
three | QUANTITY | 0.79+ |
Matt | PERSON | 0.79+ |
many years ago | DATE | 0.78+ |
Shia Liu, Scalyr | Scalyr Innovation Day 2019
>> From San Mateo, it's theCUBE, covering Scalyr Innovation Day. Brought to you by Scalyr. >> I'm John Furrier with theCUBE. We are here in San Mateo, California, for a special Innovation Day with Scalyr at their headquarters, their new headquarters. I'm here with Shia Liu, who is on the software engineering team. Good to see you, thanks for joining. >> Thank you. >> So tell us, what do you do here? What kind of programming, what kind of engineering? >> Sure. I'm a backend software engineer at Scalyr. What I work on day to day is building our highly scalable distributed systems and serving our customers fast queries. >> What's the future that you're building? >> Yeah. One of the projects I'm working on right now will help our infrastructure move towards a more stateless architecture. The project itself is a metadata storage component and a series of APIs that tell our backend servers where to find a log file. That might sound really simple, but at our massive scale it is actually a significant challenge to do it fast and reliably. >> Getting data right is a big challenge. Everyone knows data is the new oil, data is the gold, whatever people are saying, data is super important. You have a unique architecture around data ingest; what's so unique about it, do you mind sharing? >> Of course. There are a lot of things that we do, or deliberately don't do, uniquely. I'd like to start with the ingestion side and what we don't do there. We don't do keyword indexing, which most other existing solutions do. By not doing that, by not keeping index files up to date with every single log message that comes in, we save a lot of time and resources. From the moment a customer's application generates a log line to that log line becoming available for search in the Scalyr UI takes just a couple of seconds, whereas on other existing solutions that can take hours. >> So that's the ingest side. What about the query side? Because you've got ingest, now query; what's that all about? >> Yeah, of course. Do you mind if we go to the whiteboard a little bit? >> Take a look. >> Okay, I'll grab a chart real quick. So we have a lot of servers around here. We have queue
servers, let's see,
these are our queue servers, and a lot of backend servers. Just to reiterate the ingest side a little bit: when logs come in, they hit one of these queue servers, any one of them, and the queue server batches the log messages together, then picks one of the backend servers at random and sends the batch of logs to it. Any queue server can reach any backend server, and that's how we're able to handle however much log data you give us; we ingest dozens of terabytes of data on a daily basis. And it is this same farm of backend servers that helps us on the query front. Our goal is that when a query comes in, we summon all of these backend servers at once. We get all of their computation power, all of their CPU cores, to serve this one query, and that is a massively scalable, multi-tenant model; in my mind it is really economies of scale at its best. >> So scale is huge here. You've got the decoupled backend and queue system, but they're still talking to each other. So what's the impact for the customer? What order of magnitude of scale are we talking about here? >> Absolutely. On the log side, we talked about seconds of response time from logs being generated to customers seeing them show up. On the query side, the median response time of our queries is under 100 milliseconds, and we define that response time from the moment the customer hits the return button on their laptop to the moment they see results show up; more than 90% of our queries return results in under one second. >> So what's the deployment model for customers? Say I'm a customer: that sounds great, latency is a huge issue, and lag is really the legacy problem for data. Do I buy it as a service? Am I deploying boxes? What does this look like? >> Nope, we are 100% cloud native. All of this runs in our cloud infrastructure, and as a customer you just start using us as software-as-a-service. When you submit a query, all of our backend servers are at your service. What's best about this model is that as Scalyr's business grows, we will add more backend servers and more computation power, and you as a customer still get all of that, without paying any extra for the faster queries. >> What's the customer use case for this? Give an example of who would benefit. >> Imagine you're an e-commerce platform having a huge Black Friday sale. Seconds of time might mean millions in revenue to you, and you don't want to waste any time on the logging front when you need to debug your system or look at your monitoring to see where a problem is. We give you a query response time on the order of seconds, versus other existing solutions where you might wait for minutes, anxiously, in front of your computer. >> So what's the unique thing here? This looks like a really good architecture, and decoupling things makes sense, but what's the secret sauce? What's the big magic here? >> Yeah, absolutely. Anyone can build a huge server farm and take a brute-force query approach. But the first 80% of a brute-force algorithm is easy; it's really the last 20% that's more difficult and challenging, and that really differentiates us from the rest of the solutions. To start with, we make every effort to identify and skip the work that we don't have to do. Maybe we can come back to our seats? >> Cut. >> Okay, so this is exciting. >> Yeah. There are a couple of things we do here to skip the work we don't have to do. As we always say, the fastest queries are those we don't even have to run, which is very true. We have a columnar database that we built in-house, highly performant for our use case, that lets us scan only the columns the customer cares about and skip all the rest. And we also build a data structure called Bloom filters: if a query term does not occur in those Bloom filters, we can just skip the whole data set they represent. >> So that speed helps the performance. >> Absolutely. If we don't even have to look at that data set, we save all of that work. >> You know, I love talking to software engineers, people on the cutting edge, because you're a startup, attracting talent is a big thing, and people love to work on hard problems. What's the hard problem you're solving here? >> Yeah, absolutely. We have this huge server farm at our disposal. However, as we always say, the key to brute-force algorithms is to recruit as much force as possible, as fast as we can. If you have hundreds of thousands of cores lying around but no effective way to summon them when you need them, there's no point having them around. One of the most interesting things my team does is a customized scatter-gather algorithm that assigns the work in a way where faster backend servers dynamically compensate for slower servers, without any prior knowledge. I just love that. >> How fast is it going to get? >> Well, I have no doubt we will one day reach light speed. >> Physics is a good thing, but it's also a bottleneck. So what's your story? How did you get into this? >> Yeah, I joined Scalyr about eight months ago, on the API side, actually. During my time there I used the Scalyr product very heavily, and I became increasingly fascinated by the speed at which our queries run. I really wanted to get behind the scenes and see what's going on in the backend that gives us such fast queries. So here I am: two months ago I switched to the backend team. >> Well, congratulations, and thanks for sharing that insight. >> Thank you, John. >> John Furrier here with theCUBE at Scalyr Innovation Day in San Mateo. Thanks for watching.
SUMMARY :
Shia Liu, a backend software engineer at Scalyr, explains the company's approach to log management: no keyword indexing at ingest, so log lines become searchable within seconds; queue servers batch incoming logs and hand them to backend servers at random, ingesting dozens of terabytes a day; and every query fans out across the whole backend farm in a multi-tenant, cloud-native model, giving a median query response under 100 milliseconds with more than 90% of queries returning in under a second. The speed comes from skipping unnecessary work, through an in-house columnar database that scans only the needed columns and Bloom filters that rule out whole data sets, plus a customized scatter-gather algorithm in which faster servers dynamically compensate for slower ones.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
San Mateo | LOCATION | 0.99+ |
FBI | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Xia Liu | PERSON | 0.99+ |
Comptel | ORGANIZATION | 0.99+ |
Colin | PERSON | 0.99+ |
Two months ago | DATE | 0.99+ |
Lou | PERSON | 0.99+ |
San Mateo, California | LOCATION | 0.99+ |
more than 90% | QUANTITY | 0.99+ |
Keller | PERSON | 0.98+ |
millions | QUANTITY | 0.98+ |
Cuban Sites Day | EVENT | 0.98+ |
black Friday | EVENT | 0.98+ |
under 100 milli second | QUANTITY | 0.97+ |
1st 80% | QUANTITY | 0.97+ |
Shia Liu | PERSON | 0.97+ |
Dozens of terabytes of data | QUANTITY | 0.96+ |
hundreds thousands | QUANTITY | 0.96+ |
under one second | QUANTITY | 0.95+ |
Innovation Day | EVENT | 0.94+ |
one | QUANTITY | 0.94+ |
Innovation Day | EVENT | 0.92+ |
around 11 | QUANTITY | 0.88+ |
San Matteo | ORGANIZATION | 0.87+ |
Seconds | QUANTITY | 0.85+ |
20% | QUANTITY | 0.83+ |
one day | QUANTITY | 0.83+ |
eight months ago | DATE | 0.81+ |
Scalyr | PERSON | 0.78+ |
Leighton | PERSON | 0.77+ |
Route Fours | OTHER | 0.75+ |
single log message | QUANTITY | 0.75+ |
100 | QUANTITY | 0.74+ |
Scalyr Innovation Day 2019 | EVENT | 0.73+ |
couple of seconds | QUANTITY | 0.73+ |
about | DATE | 0.61+ |
Cube | ORGANIZATION | 0.57+ |
seconds | QUANTITY | 0.56+ |
plan | ORGANIZATION | 0.51+ |
minutes | QUANTITY | 0.49+ |
Scaler | ORGANIZATION | 0.49+ |
scaler | TITLE | 0.38+ |