Machine Learning Applied to Computationally Difficult Problems in Quantum Physics
>> My name is Franco Nori. It is a great pleasure to be here; I thank you for attending this meeting, and I'll be talking about some of the work we are doing within the NTT-PHI group. I would like to thank the organizers for putting together this very interesting event. The topics studied by NTT-PHI are very exciting, and I'm glad to be part of this great team. Let me first start with a brief overview of just a few interactions between our team and other groups within NTT-PHI. After this brief overview of these interactions, I'm going to start talking about machine learning and neural networks applied to computationally difficult problems in quantum physics. The first question I would like to raise is the following: is it possible to have decoherence-free interaction between qubits? The solution proposed by a postdoc, a visitor, and myself some years ago was to study decoherence-free interaction between giant atoms made of superconducting qubits in the context of waveguide quantum electrodynamics. The theoretical prediction was confirmed by a very nice experiment performed by Will Oliver's group at MIT, published a few months ago in Nature, and it's called "Waveguide quantum electrodynamics with superconducting artificial giant atoms." This is the first joint MIT-Michigan Nature paper during this NTT-PHI grant period, and we're very pleased with it. I look forward to having additional collaborations like this one, also with other NTT-PHI groups. Another collaboration inside NTT-PHI regards the quantum Hall effect in rapidly rotating polariton condensates. This work is mainly driven by two people, Michael Fraser and Yoshihisa Yamamoto; they are the main driving forces of this project, and it has been great fun. 
We're also interacting inside the NTT-PHI environment with the groups of Marandi at Caltech, McMahon at Cornell, Oliver at MIT, and, as I mentioned before, Fraser and Yamamoto at NTT; others at NTT-PHI are also very welcome to interact with us. NTT-PHI is interested in various topics, including how to use neural networks to solve computationally difficult and important problems. Let us now look at one example of using neural networks to study computationally hard problems. Everything we'll be talking about today is mostly work in progress, to be extended and improved in the future. The first example I would like to discuss is topological quantum phase transitions retrieved through manifold learning, which is a variant of machine learning. This work was done in collaboration with Che, Gneiting, and Liu, all members of the group; a preprint is available on the arXiv. Some groups are studying quantum-enhanced machine learning, where machine learning is supposed to run on actual quantum computers, exploiting exponential speed-ups and using quantum error correction. We're not working on those kinds of things; we're doing something different. We're studying how to apply machine learning to quantum problems. For example: how to identify quantum phases and phase transitions, which we shall be talking about right now; how to perform quantum state tomography in a more efficient manner, which is another work of ours that I'll be showing later on; and how to assist experimental data analysis, which is a separate project we recently published but which I will not discuss today. Experiments can produce massive amounts of data, and machine learning can help us to understand the huge tsunami of data these experiments provide. Machine learning can be either supervised or unsupervised. Supervised learning requires human-labeled data: here, the blue dots have one label and the red dots have a different label. 
And the question is whether new data corresponds to the blue category or the red category. Many presentations of machine learning use the example of identifying cats and dogs; it's the typical example. However, there are also cases where no labels are provided. You then look at the cluster structure, and you need to define a metric, a distance between the different points, to be able to correlate them and form these clusters. Manifold learning is ideally suited to problems that are nonlinear and unsupervised. If you use principal component analysis along the green axes here, which are the principal axes, you can identify a simple structure with a linear projection: projecting onto the axis here, you get the red dots in one area and the blue dots down here. But in general you could get red, green, yellow, and blue dots arranged in a complicated manner, and the correlations are better seen when you do a nonlinear embedding. In unsupervised learning the colors represent similarities, not labels, because there are no prior labels here. So we are interested in using machine learning to identify topological quantum phases. This requires looking at the actual phases and their boundaries, starting from a set of Hamiltonians or wave functions. Recall that this is difficult to do because there is no symmetry breaking and there are no local order parameters, and in complicated cases you cannot compute the topological properties analytically, while computing them numerically is very hard. So machine learning is enriching the toolbox for studying topological quantum phase transitions. Before our work, quite a few groups were looking at supervised machine learning. The shortcomings are that you need prior knowledge of the system and the data must be labeled for each phase; this is needed in order to train the neural networks. 
More recently, in the past few years, there has been an increased push toward unsupervised learning and nonlinear embeddings. One of the shortcomings we have seen is that they all use the Euclidean distance, which is a natural way to construct the similarity matrix; but we have shown that it is suboptimal, not the optimal way to measure distance: the Chebyshev distance provides better performance. The difficulty is that detecting topological quantum phase transitions is a challenge because there are no local order parameters. Three or so years ago we thought that machine learning might provide effective methods for identifying topological features, and in the past two years several groups have been moving in this direction. We have shown that one type of machine learning, called manifold learning, can successfully retrieve topological quantum phase transitions in momentum and real space. We have also shown that if you use the Chebyshev distance between data points, as opposed to the Euclidean distance, you sharpen the characteristic features of these topological quantum phases in momentum space; afterwards, a so-called diffusion map or isometric map (Isomap) can be applied to implement the dimensionality reduction and to learn about these phases and phase transitions in an unsupervised manner. So this is a summary of this work on how to characterize and study topological phases. As examples we used canonical, famous models like the SSH model, the QWZ model, and the quenched SSH model. We looked at momentum space and real space, and we found that the method works very well in all of these models. Moreover, it provides implications and demonstrations for learning in real space as well, where the topological invariants may be either unknown or hard to compute. 
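As a rough sketch of the similarity-matrix construction just described, the Chebyshev (L-infinity) distance between feature vectors can be computed as follows. The two-cluster toy data stands in for samples drawn from two phases and is purely illustrative; in the actual workflow a nonlinear embedding such as Isomap or a diffusion map would then be run on this matrix.

```python
import numpy as np

def chebyshev_distance_matrix(samples):
    """Pairwise Chebyshev (L-infinity) distance between data points.

    Each row of `samples` is one data point, e.g. a discretized
    wave function or a vector of Hamiltonian features.
    """
    diff = samples[:, None, :] - samples[None, :, :]
    return np.max(np.abs(diff), axis=-1)

# Toy data: two clusters standing in for samples from two phases.
rng = np.random.default_rng(0)
phase_a = rng.normal(0.0, 0.1, size=(20, 5))
phase_b = rng.normal(1.0, 0.1, size=(20, 5))
data = np.vstack([phase_a, phase_b])

D = chebyshev_distance_matrix(data)
# Points within a phase are closer (in the Chebyshev sense)
# than points across the phase boundary.
```

The same matrix can be fed to any off-the-shelf manifold-learning routine that accepts a precomputed metric; the choice of metric is the only ingredient changed relative to the usual Euclidean construction.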
So it provides insight in both momentum space and real space, and the capability of manifold learning is very good, especially when you have a suitable metric, for exploring topological quantum phase transitions. This is one area we would like to keep working on: topological phases and how to detect them. Of course, there are other problems where neural networks can be useful for solving computationally hard and important problems in quantum physics. One of them is quantum state tomography, which is important for evaluating the quality of state-production experiments. The problem is that quantum state tomography scales really badly: it becomes impossible to perform for systems of around 20 qubits, and beyond that, forget it, it's not going to work. So here we have a very important procedure, quantum state tomography, which cannot be done because there is a computationally hard bottleneck. Machine learning is designed to efficiently handle big data, so the question we were asking a few years ago is: can machine learning help us to solve this bottleneck in quantum state tomography? This is a project called eigenstate extraction with neural-network tomography, with a student, Melkani, and a research scientist of the group, Clemens Gneiting; I'll be brief in summarizing it now. The specific machine learning paradigm is standard artificial neural networks, which in the past couple of years have been shown to be successful for the tomography of pure states. Our approach is to carry this over to mixed states, by successively reconstructing the eigenstates of the mixed states. So it is an iterative procedure where you slowly approach the desired target state. If you wish to see more details, this has been recently published in Physical Review A and was selected as an Editors' Suggestion; it seems some of the referees liked it. 
So tomography is very hard to do, but it's important, and machine learning with neural networks can help us achieve mixed-state tomography using an iterative eigenstate reconstruction. Why is it so challenging? Because you're trying to reconstruct quantum states from measurements. For a single qubit you have a few Pauli matrices, so there are very few measurements to make; when you have N qubits, the N appears in the exponent, so the number of measurements grows exponentially, and this exponential scaling makes the computation very difficult, prohibitively expensive for large system sizes. The bottleneck is this exponential dependence on the number of qubits: by the time you get to 20 or 24 qubits, it is impossible. It gets even worse: experimental data is noisy, so you need maximum-likelihood estimation to reconstruct the quantum state that fits the measurements best, and again this is expensive. There was a seminal work some time ago on ion traps where the post-processing for eight qubits took an entire week. Different ideas have been proposed, compressed sensing to reduce the number of measurements, linear regression, et cetera, but they all have problems and you quickly hit a wall; there's no way to avoid it. Indeed, an initial estimate was that tomography of a 14-qubit state would take centuries, and you cannot support a graduate student for a century, because you need to pay their retirement benefits and it is simply complicated. So a team here was looking, some time ago, at the question of how to do a full reconstruction of 14-qubit states within four hours; actually it was 3.3 hours. Many experimental groups told us that this was a very popular paper to read and study, because they wanted to do fast quantum state tomography; they could not support a student for one or two centuries. 
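The exponential scaling just described can be made concrete by counting the Pauli expectation values a full tomography must estimate; the enumeration below is only meant to illustrate the 4^N growth.

```python
from itertools import product

def pauli_settings(n_qubits):
    """Enumerate the tensor products of Pauli operators {I, X, Y, Z}
    on n qubits; each string (minus the trivial all-identity one)
    corresponds to one expectation value a full tomography estimates."""
    return ["".join(p) for p in product("IXYZ", repeat=n_qubits)]

for n in (1, 2, 8, 14):
    total = 4 ** n - 1  # non-trivial Pauli strings
    print(f"{n:2d} qubits -> {total} expectation values")

# 14 qubits already require 4**14 - 1 = 268435455 expectation values,
# which is why the naive post-processing estimates ran to centuries.
```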
They wanted to get the results quickly. We need these density matrices, and then we need to do these measurements; with N qubits the number of expectation values grows like four to the N, since the number of Pauli strings becomes much bigger, and maximum likelihood makes it even more time consuming. There is the paper by the group in Innsbruck, with the one-week post-processing, the speed-ups done by different groups down to hours, and the work on 14-qubit tomography in four hours using linear regression. But the next question is: can machine learning help with quantum state tomography? Can it give us the tools to take the next step and improve things even further? The standard architecture is this one here: for neural networks there are some inputs, x1, x2, x3, there are some weighting factors, and you get an output function phi, called a nonlinear activation function, which could be Heaviside, sigmoid, piecewise linear, logistic, or hyperbolic. This creates a decision boundary in input space where you get, let's say, the red dots on the left and the blue dots on the right, some separation between them. You could have two layers, three layers, or any number of layers, either shallow or deep, and this allows you to approximate any continuous function. You train on data via some cost-function minimization. There are different varieties of neural nets; we're looking at a so-called restricted Boltzmann machine. "Restricted" means that the input-layer spins are not talking to each other, and the output-layer spins are not talking to each other. We got reasonably good results with just an input layer and an output layer, no hidden layer, with the probability of finding a spin configuration given by the Boltzmann factor. So we try to leverage pure-state tomography for mixed-state tomography, by doing an iterative process where you start here. 
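The restricted structure just mentioned, no couplings within a layer, is what makes the Boltzmann factor tractable: tracing out one layer can be done in closed form. The sketch below shows the general visible-plus-hidden variant with arbitrary random parameters; the variant used in the talk drops the hidden layer entirely, so take this only as an illustration of the Boltzmann-factor structure.

```python
import numpy as np

def rbm_unnormalized_prob(v, a, b, W):
    """Unnormalized probability of a visible spin configuration v
    in a restricted Boltzmann machine, after tracing out the hidden
    spins h_j in {-1, +1} analytically:
        p(v) ∝ exp(a·v) * prod_j 2*cosh(b_j + (W^T v)_j)
    The "restricted" topology (no intra-layer couplings) is exactly
    what makes this hidden-layer trace a simple product."""
    return np.exp(a @ v) * np.prod(2 * np.cosh(b + W.T @ v))

rng = np.random.default_rng(1)
n_visible, n_hidden = 4, 3
a = rng.normal(size=n_visible)       # visible biases
b = rng.normal(size=n_hidden)        # hidden biases
W = rng.normal(size=(n_visible, n_hidden))  # inter-layer couplings

v = np.array([1, -1, 1, 1])
weight = rbm_unnormalized_prob(v, a, b, W)
# `weight` is the Boltzmann factor of this spin configuration,
# up to the (intractable) partition-function normalization.
```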
So here are the mixed states in the blue area, and the pure-state boundary here. The initial state is here, and with the iterative process you get closer and closer to the actual mixed state; eventually, once you get here, you do the final jump inside. You look at the dominant eigenstate, which is the closest pure state, compute some measurements, and run an iterative algorithm that makes you approach the desired state. After you do that, you can compare the results with data. We got data for four to eight trapped-ion qubits, where approximate W states were produced, and the dominant eigenstate is reliably recovered for N equal to four, five, six, seven, and eight. For the eigenvalues we're still working, because we're getting some results which are not as accurate as we would like; so that is still work in progress, but for the eigenstates it is working really well. There is a cost scaling which is beneficial: it goes like N times r, as opposed to N squared, and the most relevant information on the quality of the state production is retrieved directly. This works for flexible rank. So it is possible to extract the eigenstates with neural-network tomography; it is cost-effective and scalable, delivers the most relevant information about state generation, and is an interesting and viable use case for machine learning in quantum physics. More recently we have also been working on quantum state tomography using conditional generative adversarial networks, together with a master's student, a PhD student, and two former postdocs. CGAN refers to these conditional generative adversarial networks. In this framework you have two neural networks which are essentially dueling, competing with each other; one of them is called the generator and the other the discriminator, and together they learn multi-modal models from the data. 
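The idea of iteratively homing in on the dominant eigenstate can be illustrated, in its simplest linear-algebra form, by power iteration on the density matrix. This is only a stand-in for the measurement-driven neural-network procedure described in the talk, not the procedure itself, since here we assume access to the full matrix rather than to measurement data.

```python
import numpy as np

def dominant_eigenstate(rho, n_iter=200):
    """Power iteration: approximate the dominant eigenstate of a
    density matrix rho, i.e. the pure state closest to it in the
    sense used in the talk."""
    psi = np.random.default_rng(2).normal(size=rho.shape[0]) + 0j
    psi /= np.linalg.norm(psi)
    for _ in range(n_iter):
        psi = rho @ psi            # amplify the dominant component
        psi /= np.linalg.norm(psi) # renormalize to a valid state
    return psi

# Rank-2 mixed state: 0.8 |0><0| + 0.2 |1><1|
rho = np.diag([0.8, 0.2]).astype(complex)
psi = dominant_eigenstate(rho)
# psi converges to |0>, the eigenstate with the largest eigenvalue.
```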
We then improved this by adding custom neural-network layers that enable the conversion of the outputs of any standard neural network into a physical density matrix. So, to reconstruct the density matrix, the two networks, the generator and the discriminator, must train each other on data using standard gradient-based methods. We demonstrate that our quantum state tomography with the conditional adversarial network can reconstruct optical quantum states with very high fidelity, orders of magnitude faster and from less data than standard maximum-likelihood methods, so we're excited about this. We also show that this quantum state tomography with adversarial networks can reconstruct a quantum state in a single evaluation of the generator network if it has been pre-trained on similar quantum states, so it requires some prior training. All of this is still work in progress, with some preliminary results written up, but we're continuing. I would like to thank all of you for attending this talk, and thanks again for the invitation.
Amudha Nadesan, Applied Materials | Splunk .conf18
>> Announcer: Live from Orlando, Florida, it's theCUBE. Covering .conf18. Brought to you by Splunk. >> Hi everybody, welcome back to Orlando. You're watching theCUBE, the leader in live tech coverage. We go out to the events, we extract the signal from the noise. My name is Dave Vellante, I'm here with my co-host Stu Miniman. This is day one of .conf18, Splunk's big user conference. You know, we're talking a lot about AI at these conferences, talking a lot about data, and one of the enablers is semiconductors: the power of semiconductors and cheap storage have enabled people to ingest a lot of data. And when you look into the supply chain, beneath the semiconductors, there are companies who provide semiconductor equipment. One of those companies is Applied Materials, and Amudha Nadesan is here; he's a senior manager at Applied Materials, ticker symbol AMAT. Welcome Amudha, thanks for coming on theCUBE. >> Yeah, thank you, thank you for inviting me. >> You're welcome. So as I say, there's a semiconductor boom going on right now, which is obviously a great tailwind for your business. You're on the data side, obviously. >> Right. >> Dave: Getting your hands dirty. Give us a sense of your role and we'll get into it. >> Yeah, so I'm a senior manager in the software group of Applied Materials. Applied's core business has always been the hardware, semiconductor and display equipment manufacturing: every new chip that is manufactured, or any new display coming out, is manufactured using Applied tools. We are the software side that interfaces with the Applied tools, so we get all the data from the Applied tools and non-Applied tools, and we do all the analytics using our software. I'm the technology group leader within the automation products group, so we are responsible for bringing new technologies into our products. 
And now we are trying to align our products with Industry 4.0 principles, so we are trying to bring all the new technologies, mobility, virtualization, IoT, and predictive monitoring and predictive analytics, into our products right now. >> So I know that, certainly, the tolerances in the semiconductor business are so tight, and given that you're manufacturing semiconductor equipment and providing software associated with that, is it your job to try to analyze the performance and the efficacy of the equipment and feed that back to your engineers and your customers in a collaborative mode? What's the outcome that your team is trying to drive? >> So, my team's main responsibility is to maintain high availability for all the data that is coming from the tools into our products. Our products need to be up and running all the time: if our product stops, the production line will stop, and if the production line stops, there's going to be a big business impact. That's where we are trying to leverage all these new technologies, so we can really run our software with high availability. >> You mentioned three things: mobility, virtualization, prediction. There may be others. >> Right. >> So the mobility, presumably, is a productivity aspect, so people can work at home on the weekends, or wherever they are, teasing of course. Virtualization, getting more out of assets, that's an asset-utilization play. And prediction, that's using machine intelligence to predict failures and optimize the equipment. Maybe you could describe what's behind each of those. >> Yeah, I'll go one by one. All of our products are at least twenty or thirty years old. They have all been thick clients, running on desktops and laptops. 
So now we are trying to improve the user experience, so that the end users who use the UI of our products get a good experience, and that can improve productivity. That's what the mobility is: we are trying to adopt the latest technologies like Angular and HTML5 for our product UI. With respect to virtualization, we have been running our software on physical servers in an enterprise fashion, and that takes up a lot of cost, so we are getting into the virtualization world, where we can reduce the TCO of the assets running all this software. >> Help connect the dots with us as to how Splunk fits into your environment. >> Oh, okay, so we just got into Splunk two years back. We have close to 25 to 30 software products that completely automate the manufacturing line. All these products generate a lot of logs on a daily basis; over a year, they generate about 100 gigs of just log files, and those log files hold a lot of critical information. Before we had Splunk, two years back, whenever there was a problem in a customer's production line, we would ask them to FTP those logs to us. Then we had to manually scan through all those logs and identify the issue. Sometimes even identifying the issue takes about a week, and after we identify it, we have to come up with a resolution to fix the problem, which can take months; I worked on a problem where it took even six months to bring a resolution, and the customers were very upset. 
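The manual log triage described here, FTP the logs over and then scan them by hand for the problem, is essentially a search task, which is what a platform like Splunk automates and indexes at scale. A naive hand-rolled version might look like the following sketch; the file layout and error patterns are hypothetical, not from the interview.

```python
import re
from pathlib import Path

# Hypothetical error signatures; real products would have their own.
ERROR_PATTERNS = [
    re.compile(r"ERROR|FATAL"),
    re.compile(r"connection (refused|timed out)", re.IGNORECASE),
]

def scan_logs(log_dir):
    """Walk a directory of *.log files and collect the lines matching
    any known error pattern, tagged with file name and line number."""
    hits = []
    for path in sorted(Path(log_dir).glob("*.log")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if any(p.search(line) for p in ERROR_PATTERNS):
                hits.append((path.name, lineno, line.strip()))
    return hits
```

At 100 GB of logs per year this kind of linear scan stops being practical, which is where an indexed log platform earns its keep.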
>> Yeah, it's interesting, going back to your earlier statements: we've talked for years, decades, our whole careers, about how important uptime is, and then you talk about your people, and there are a lot more efficient things they could be doing if they're not looking after all these manual tasks. You've been there 22 years; with something like Splunk, how do you measure the success of the outcome of using a tool like that? >> Yeah, so right now we can see the success immediately, because we have implemented Splunk and we are remotely monitoring our production lines; at least five customers, right now, we are monitoring remotely. Every customer has downtime at least once or twice a year. When they have downtime, if it's a small customer, they take a loss of about 10 K per hour; for a medium one it's probably 100 K, and if it's a large one, it's 1 million per hour. In my experience over the last 22 years, a customer has at least one to two downtimes a year, sometimes even more than that. After we implemented Splunk, in the last two years, one of the customers we are remotely monitoring never had a downtime, so that itself is a big success. But we are not done with it yet; we are continuing to innovate with Splunk on the log monitoring. >> Make sure I understood what you said. So, rough rules of thumb, these things vary, we always understand that, but you say in small customers the downtime cost is $10,000 an hour, medium $100,000 an hour, a large customer's a million dollars, and probably up from there with huge companies. >> Yes, yeah, it really depends: when I say a small customer, they have a smaller number of tools, which means they have fewer operators. 
So fewer people are impacted when the production line stops; but when you go to a medium-size customer, they have more tools, and more people working with those tools who then cannot work, which means a bigger disruption of the production line. And in a large fab there are even more operators working on the production line; that's how we calculate the loss. >> When they have a downtime like that, right, the math is pretty simple to calculate, but do they try and make it up on the weekends? Or can they not do that because people have lives, or are they already running 24/7? >> It's already running 24 by 7. >> And you can't get more time in a day. >> Yeah, they can't make it up over the weekend; it's already running 24 by 7, and when the production line stops, that means it's a revenue loss for them, and their operators are sitting idle. >> Dave: These are companies with a fab, right? >> These are companies with a fab. >> Which is a multi-billion dollar investment oftentimes, right? >> Yes, yes. Name any semiconductor company, like Intel or Samsung: they're all using Applied tools to run their manufacturing. >> And when they're down, it hits right in the bottom line. >> Yes, that's right, and they all use our software to completely automate their factories end to end. >> Can you directly attribute the lack of downtime, the reduction in that downtime, to Splunk? >> That's right, yeah. At least one of the customers we are remotely monitoring right now is monitored using Splunk, and we are now scaling up with more and more customers for the remote monitoring. >> The other thing you said is you're starting to innovate even more with Splunk; maybe you can elaborate a little on that. 
>> Yeah, right now we are just using the basic machine learning algorithms available from Splunk for anomaly detection, outlier detection, and trend analysis. We are expecting to introduce more machine learning algorithms that can accurately predict servers going down and give us more lead time to proactively address issues before the user sees an impact. Currently it is mostly reactive: we see the issue and then we react to it. We want to be more proactive, and that is where Splunk is playing a big role. >> Your role is customer facing, is that right? Your software is customer facing? Or are you guys using this internally as well? >> We are using it both ways. Right now it is customer facing, but our IT organization, after seeing the success of how we are monitoring our customers, is also adopting it, and there are other business units that receive a lot of data from these tools, like the sensor data, and they are also trying to use Splunk to see how they can predict issues in the tools more proactively and accurately. >> Splunk is not a new company. I'm just curious, and Applied Materials is obviously a huge company, you know, a $35 billion market cap: why did it take you so long to find out about Splunk and adopt it? Was it just organizational, or are your processes so delicate and hardened? I wonder if you could explain. >> Yeah, so that's a very good question. Only in the last two years have we started investing more in R&D on the software products. Previously most of the investment was in the hardware products, where they wanted to improve productivity, improve the testing methodology, all those things. 
Most of the investment was going to the hardware components, so we were not even looking at all the software innovations that were happening. In the last two years we have been investing more in the software groups, to take our products to that Industry 4.0 revolution. That's when we started looking at many technologies, and one of the first technologies we adopted was Splunk. Then we came up with this remote monitoring concept: most of our small customers do not have their own IT organization, so whenever they had a down, they had to literally log a call and wait for us to come in and fix the problem, and it took days; they took a big impact because of that. So they said: we don't have our own IT organization, why don't you take on the IT responsibilities and make sure that software is up and running all the time? That's when we went to Splunk; we got it, implemented it, tested it, and we are seeing good success with it. >> And do you guys buy this as a subscription, or is it a perpetual license? How do you do that? >> It is a perpetual license; we run it on-prem. That's another concern of our customers: they want to make sure their IP does not go out, so they don't want to put anything on the cloud. This is true for all the semiconductor companies; they are not on the cloud yet. That's why we have to host Splunk on-prem: we transfer all the data from our customers through a secure FTP, bring it to our on-prem Splunk servers, and do all the analytics there. >> We've heard Splunk and many other companies, this year and for the last couple of years, talking about AI and ML. 
Does that resonate with you, those sorts of solutions that you think you'll be looking for, that kind of functionality, how does that play into your environment? >> That's right, actually. So we are trying to kind of get into that. We have to a certain extent, we are kind of already into the machine learning algorithms, actually, but we kind of want to go deeper into that, actually, so that currently our prediction, whatever we have built up in house, actually, our prediction algorithms can predict only 60%, actually. So that's the accuracy we could get, but we want to get somewhere in the 90% or 93% accuracy, which means we have to improve on the accuracy part, actually, right, we have to get more accurate machine learning algorithms developed actually, so that is where we are trying to kind of see if the platform can kind of provide more of these machine learning algorithms, which can predict the problem more accurately, actually. >> So that's data, the modeling, iterations, just time, right? You'll eventually get there. Amudha, thanks very much for coming to theCUBE, it was great to hear your story. Last question is, we hear this story of Splunk, I call it land and expand. >> Right. >> We have, you know, one use case, and then there are other use cases, is that your situation? You've only been a customer for a couple years now, do you see using Splunk potentially in other areas? 
>> Yes, we are trying to kind of expand to other areas, right now we started with remote monitoring, we are going to use it for IT, our IT is going to use it, and then we want to kind of go to the predictive analytics actually, that means we want to kind of look at the tool data like the data that is coming from the sensors from the tool, we want to kind of do the analytics and then make sure that we can predict the problems, we can predict the maintenance that we need to do, actually, so all those things we want to do, actually, that's the area we want to kind of more expand with, we will really kind of add value to our customers, actually. >> Amudha Nadesan from Applied Materials, thanks so much for coming on theCUBE, appreciate your time. >> Yeah, thank you. >> Alright, keep it right there, everybody, we'll be back with our next guest, I'm Dave Vellante, he's Stu Miniman, we'll be right back, you're watching theCUBE from Splunk .conf18. (techno music)
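The anomaly detection Amudha describes — flagging unusual server behavior early enough to act before users see an impact — can be sketched with a simple rolling z-score detector. This is a minimal illustration, not Splunk's Machine Learning Toolkit implementation; the metric values, window, and threshold are hypothetical:

```python
# Minimal rolling z-score outlier detector -- a stand-in for the kind of
# anomaly detection on server metrics described above, NOT Splunk MLTK's
# actual implementation; values and thresholds are hypothetical.
from statistics import mean, stdev

def detect_outliers(values, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    outliers = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            outliers.append(i)
    return outliers

# Hypothetical CPU-load samples: steady around 40, one spike at index 8.
cpu_load = [40, 41, 39, 40, 42, 41, 40, 39, 95, 41]
print(detect_outliers(cpu_load))  # -> [8]
```

Tuning the window and threshold is exactly the accuracy trade-off mentioned above: a looser threshold gives more lead time but more false alarms, which is why the team wants to move from 60% toward 90%+ prediction accuracy.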
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Amudha | PERSON | 0.99+ |
Amudha Nadesan | PERSON | 0.99+ |
six months | QUANTITY | 0.99+ |
93% | QUANTITY | 0.99+ |
90% | QUANTITY | 0.99+ |
Applied Materials | ORGANIZATION | 0.99+ |
100 K | QUANTITY | 0.99+ |
1 million | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
$35 billion | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
22 years | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
Orlando | LOCATION | 0.99+ |
24 | QUANTITY | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
two years back | DATE | 0.99+ |
Applied | ORGANIZATION | 0.99+ |
this year | DATE | 0.98+ |
.conf18 | EVENT | 0.98+ |
One | QUANTITY | 0.98+ |
multi-billion dollar | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
a year | QUANTITY | 0.96+ |
about 100 gigs | QUANTITY | 0.96+ |
Angular | TITLE | 0.96+ |
$100,000 an hour | QUANTITY | 0.96+ |
$10,000 an hour | QUANTITY | 0.95+ |
60% | QUANTITY | 0.94+ |
about 10 K per hour | QUANTITY | 0.93+ |
seven | QUANTITY | 0.93+ |
twice a year | QUANTITY | 0.93+ |
first technologies | QUANTITY | 0.91+ |
30 software products | QUANTITY | 0.91+ |
last two years | DATE | 0.91+ |
last couple of years | DATE | 0.89+ |
Covering | EVENT | 0.88+ |
each | QUANTITY | 0.86+ |
last 22 years | DATE | 0.79+ |
thirty years old | QUANTITY | 0.79+ |
a day | QUANTITY | 0.76+ |
25 | QUANTITY | 0.75+ |
a million dollars | QUANTITY | 0.75+ |
least twenty | QUANTITY | 0.74+ |
couple years | QUANTITY | 0.71+ |
least five customers | QUANTITY | 0.7+ |
a week | QUANTITY | 0.69+ |
two down times | QUANTITY | 0.69+ |
three things | QUANTITY | 0.67+ |
STML | TITLE | 0.65+ |
close | QUANTITY | 0.61+ |
Daniel Dines, UiPath | UiPath FORWARD IV
>> Announcer: From the Bellagio Hotel in Las Vegas, it's theCUBE, covering UiPath FORWARD IV, brought to you by UiPath. >> Live from Las Vegas, it's theCUBE. We are wrapping up day two of our coverage of UiPath FORWARD IV. Lisa Martin here with Dave Vellante. We've had an amazing event talking with customers, partners, and users, and UiPath folks themselves. And who better to wrap up the show with than Daniel Dines, the founder and CEO of UiPath. Welcome, Daniel, great to have you back on theCUBE. >> Oh, thank you so much for having me. I'm becoming a regular at theCUBE. >> Yeah, it's good to see you again. >> You are, this is your fifth... >> Fifth time on theCUBE. >> Fifth time, yes. >> Fifth time, but as you said before we went live, first time since the IPO. Congratulations. >> Thank you. >> UiPath has been a rocket ship for a very long time. I'm sure a tremendous amount of acceleration has occurred since the IPO. We can all see the numbers. You're a public company now, ARR of 726 million. You've got over 9,000 customers. We got the chance to speak with a few of them here today. We know how important the voice of the customer is to UiPath and how very symbiotic it is. But I want to talk about the culture of the company. How is that going? How is it being maintained, especially since the big splashy IPO just about six months ago? >> Well, I always believe that in order to build a durable company, culture is maybe the most important thing. I think long-lasting companies have a very foundational culture. So we've built it, and we invested a lot in the last 5-6 years, because in the beginning, when it's just a bunch of people, they don't have a culture. It's maybe like a vibe of a group of friends. But then when you go and try to dial in your culture, I think it's important that you look at your roots: who are you? What defines you? So we ended up with these really core values, the first of which is to be humble. To me, it's one of the quintessential values of every human being. 
And all of us want to work with humble people, who are much more inclined to listen, to change their mind. And then we say, you have to be humble, but you have to be bold at the same time. This rocket ship needs a bold crew onboard. So you need to be fast, because the fastest company will always win. And you need to be immersed, because my theory with life and jobs is that in whatever you do, you have to be immersed. I don't believe necessarily in life-work balance. I believe in life-work cycles, in life-work immersion. So when you are with family, you are immersed. When you work, you are immersed. That will bring out the best of you and the best of productivity. So we try so much to keep our culture alive, to hire people that add to the culture, that nicely fit into the culture. And recently we took a veteran of UiPath and we appointed her as Chief Culture Officer. So I'm very happy about this move. I think we are one of the few companies that really have a Chief Culture Officer reporting directly to the CEO. So we're really serious about building our culture along the way. And as I said yesterday in my keynote, I think our values are universal values. I think they are the values of the new way of working. All of us would like to work in a company, in an environment, that fosters these values. >> I certainly think the events of the last 18 months have forced many more people to be humble and embrace humility. Because everybody is on video conferencing, your dog walks in, your kids walk in, you're exposed. They have to be more humble because that's just how they were getting work done. I've seen and heard a lot of humility from your folks and a lot of bold statements from customers as well. We had the CIO of Coca-Cola on talking about how UiPath is fundamental in their transformation. I think the fact that you are doing an event here in person, whereas, as Dave was saying earlier this week, your competitors are on webcams, is a great example of the boldness of this company and its culture. 
>> Well, thank you. I think that we've made a really good decision to do this event in person. Maybe on Zoom over the last 18 months, we kind of lost a bit how important it is to connect with people. It's not only about the message, it's about the trust. And I think we are deeply embedded into the critical systems of our customers. They need to trust us. They need to work with a company that they can look in the eyes and say, "Yes, we are here for you." And you cannot do it over Zoom. Even though I really like Zoom, and Eric Yuan is a friend of mine, a combination, I think, going into this hybrid world, is actually extremely beneficial for all of us. Meeting in person a few times a year, then continuing the relationship over Zoom in time, I think it's awesome. >> Yeah, and the fact that you were able to get so many customers here, I think that's, Lisa, why a lot of companies don't have physical events, 'cause they can't get their customers here. You got 2000 customers here, customers and partners, but a lot of customers. I've spoken to dozens and they're easy to find. So I think that's one point I want the audience to know. You've always been on the culture train. And enduring companies, CEOs of great enduring companies, always come back to culture. So that's important. And of course, product. You said today, you're a product guy. That's when you get excited. You've changed the industry. And I think, I've never bought into the narrative about replacing jobs. I've never been a fan of protecting the past from the future. It's inevitable, but I think the way you've changed the market, I wonder if you could comment... You had legacy RPA tools that were expensive and cumbersome. And so people had to get the ROI and it took a long time. So the obvious way to get it was to reduce headcount. You came in and said, for short money you can actually try it, even a free version. 
You compressed that ROI and the light bulb went off, and so people then said, "Oh, wow, this isn't about replacing jobs, but making my life better." And you've always said that. And that's, I think, one way in which you've changed the market quite dramatically, and now you have a lot of people following that path. >> That was always kind of our biggest competitive advantage. We showed our customers and our partners, this is a technology that gives you a faster time to value, and actually faster time to value translates into much higher return on investment. In a typical automation project, the license cost is maybe 5% of the project cost. So the moment you shrink the development time, the implementation time, you increase exponentially the return on investment. So this is why, speaking about our roadmap, we always start at this high level: how can we reduce the development time? How can we reduce the friction? How can we expand the use cases? Because these are essential themes for us, always thinking customer first, customer value, and that serves us pretty well, really. We win a lot in all the contests where we go side by side with other competitors. It was a very simple strategy for us. Asking customers, "Just go and test it side-by-side and see," and they see. We implement the same process in half the time, with half the resources involved. It's easy math, multiplied by a thousand processes, and it's done. >> When theCUBE started, Daniel, in 2010, it was our first year, and it coincided with the big data movement. And we said at the time that the companies who can figure out how to apply big data are going to make a lot of money, more than the big data vendors. And I think, in a way, the problem with big data was it was too complicated, right? There were only a few big internet giants who could figure out Hadoop and all that stuff. Automation, I think, is even bigger in a way, 'cause it involves data. It involves AI, it's transformative. 
And so we're saying the same thing here. The companies that are applying automation, and we've seen a lot of them here, Coca-Cola, Merck, Applied Materials, on and on and on, are actually the ones that are going to not only survive but thrive, incumbents that don't have to invent AI necessarily or invent their own automation. But coming back to you, 'cause I think your company can make a lot of money. You've set the TAM at 60 billion. I think it actually could be well over 100 billion, but we don't have to have that conversation here. It's just the convergence of all these markets that guys like IDC and Gartner count in stovepipes. So anyway, big, no shortage of opportunity. My question to you is, it feels like you have the potential to build the next great software company, with the founder as the CEO, and there aren't a lot of them left. Michael Dell's company is not a software company, but he's still there; Larry Ellison is still there; Marc Benioff. How do you think about the endurance, the enduring UiPath? Are you envisioning building the next great software company, even if it may take 20 years? >> People were asking me for a long time, did you envision that you'd get here from the beginning? And I always tell them, no. Otherwise I would have been considered mad. (Lisa and Dave laughing) So you build the vision over time. I don't believe in people that start a small SaaS company and say, "We are going to change the world." This is not how the world works. Really, you build and you understand the customer and you build more. But at some point I realized we change so much how people work, we get the best out of them. It's something major here. And if you look in history, we are in this trap that started with agriculture. This is the trap of manual, repetitive, low-value tasks that we have to do. And it took the humanity out of us. And I talked with Tom Montag about this book "Sapiens". 
It's interesting, and that book comes with the theory that our biggest quality is our ability to collaborate. Well, our technology gives people the ability to collaborate more. So, in this way, I think it's truly transformative. And yes, I believe now that we can build the next generation of software company. >> How do you like... That's the wrong question. How are you doing with the 90-day shot clock, as Michael Dell calls it? It's a new world for you, right? You've never been a CEO of a public company, the street's getting to know you, like, "Who is this guy?" I'll give you another cute story. There were three companies in the early CUBE days, Tableau, Splunk, and ServiceNow, that had the kind of customer passion that you have. I think ServiceNow could be one of the next great software companies. Tableau is now part of Salesforce. I think Splunk was undercapitalized, but we see the same kind of passion here. So now you're the CEO of a public company, except the street's getting to know you a little bit. They're like, "Hmm, how do we read the guy?" All that stuff. That'll sort itself out. But so what's life like on the public markets? >> Well, I don't think anyone prepares you for the life of a public company. (Dave laughing) I thought it was going to be easier, but it's not, because we were used to dealing with private investors, and that's much easier, because I think private investors have access to a lot more data. They look into your books. So they understand your business model. With public investors, they have access only to, like, a spreadsheet of numbers. So they need to figure out a business model, the trajectory, from just a spreadsheet. It's way more difficult. I've come to appreciate their job. It's much more difficult. So they have to get all the cues from how I dress, how do I say this word? They watch the FED announcements. What do they mean to say by this? And Ashim and I, we are first-time in the job as a public company CEO, public company CFO. 
So of course it's a lot of learning for us, and like in any learning environment, the initial learning curve is tough, but you progress quite a lot. So I believe that over the next few quarters, we will be in the position to build trust with the street, and they will understand our business model better, and they will see that we are building everything for creating durable growth. >> It's a marathon, it's not a sprint. I know it's a cliche, but it really does apply here. >> You've certainly built a tremendous amount of trust within your 9,000-strong customer base. I think I was reading that 70% of your revenue comes from existing customers. I think this is a great use case for how to do land and expand really well. So, the DNA I think is there at UiPath to be able to build that trust with the street. >> Yeah, absolutely. Our 9,000 plus customers, it's our wealth. This is our IP in a way. It's even better than our product. It's our customers. We have one of the best net retention rates in the industry, at 144%. So that speaks volumes. >> Lisa: It does. >> Automation for good. I know you've read some of the stuff I've written. I've covered you guys pretty extensively over the years. And that theme sounds like a lot of motherhood and apple pie, but one of the things that I wrote is that you look at the productivity decline, particularly in Western countries, over the last two decades. Now I know with the pandemic, and especially in 2021, productivity is going up for reasons that I think are understood, but the trend is clear. So when you think about big problems, climate change, diversity, income inequality, health of populations, overpopulation, on and on and on and on. You're not going to solve those problems by throwing labor at them. It has to be automation. So that to me is the tie to automation for good. And a lot of people might roll their eyes at it. But does that resonate with you? >> It totally resonates with me. Look at the US. 
The US population is not growing at the rates that we were used to. It's going to plateau at some point. It's just obvious. Like it plateaued in Japan; in Japan it's decreasing. The US will see a decrease at some point. How do you increase GDP? If your population is declining and productivity is declining, how do you increase GDP? Because the moment we stop increasing GDP, everything will collapse. The modern world is built on the idea of continuous economic growth. The moment growth stops, the world stops. We'll go back to our case and restart the engine. So, automation is hugely important in continuous GDP growth, which is the engine of our life. >> Which by the way is important, because of the chasm between the haves and the have-nots; growth is how the people at the bottom rise up to the middle and the middle to the top. So that's how you deal with that problem. You asked Tom Montag about crypto. So I have to ask you about crypto. What are your thoughts? Are you a fan? Are you not a fan? Do you have any wisdom? >> I have to admit, I never really understood the use cases of crypto. The technology behind crypto, blockchain, is fascinating technology, but crypto in itself, I was never a fan. Tom Montag today gave me one of the best explanations of the very same. Look, Daniel, from an American's perspective we have the dollar, and this is the global currency. Crypto doesn't make so much sense. But think about a country like Colombia or Venezuela, countries where people don't have so much trust in their currency, and where a different political system can seize your assets from you. You need to be capable of putting them into something else that is outside government control. I believe this is a good use case, but I still don't believe that crypto is the type of asset that, you know, will survive the test of time. I think it's really too much... To me the difference between gold and Bitcoin is that it's too... You cannot replicate gold whatever you... 
It's impossible; unless you are God, you cannot create Gold 2, right? It's impossible, but you can create Bitcoin 2. And at some point the fashion will move from Bitcoin 2 to Bitcoin 3. So I don't think the value that you can build in one particular cryptocurrency right now will stay over time. But it's just me. I was wrong so many times in my life. >> You've been busy. You haven't had time to study crypto. >> I agree, totally agree. (Lisa and Dave laughing) >> What's been some of the feedback from the customers that are here? We saw yesterday a standing-room-only keynote. I'm sure it was great for you to be on stage again, actually interacting with your customers and your partners. What's been some of the feedback as we've seen really this shift from an RPA point solution to an enterprise automation platform? >> Well, first of all, it was really great to be on stage. I don't know, I'm not a good presenter, really. But going there in front of people filled me with energy. Suddenly I felt a lot of comfort. So, I was capable of being myself with the people, which is really awesome. And the transition to a platform, from a product to a platform, was really very well received by our customers, because even in competitive situations, when we are capable of explaining to them what is the value of having an independent automation platform that is not tied to any of the big silos that application providers create, we win, and we win by default somehow. You've seen them now. So I think even the next evolution, of semantic automation, is landing very well with our customers. >> Well, Daniel, it's been fantastic having you on. We have a good cadence here, and I hope we can continue it. On theCUBE, we love to identify early-stage companies. Although, as I wrote, you had a long, strange path to IPO, because you took a long, long time, and I think did it the right way, to get product market fit. >> Absolutely. 
>> And that's not necessarily the way Silicon Valley works, double, double, triple, triple, and that you got product market fit, you got loyal customer base, and I think that's a key part of your success and you can see it and so congratulations, but many more years to come and we're really watching. >> Thank you so much. I'm looking forward to meeting you guys again. Thank you, that was awesome really. Great discussion. >> Exactly, good. Great to have you here in person and thanks for having us here in person as well. We look forward to FORWARD V. >> You will be invited forever. Thank you, guys, really. >> Forever, did you hear that? All right, for Daniel Dines and Dave Vellante, I'm Lisa Martin. This is theCUBE's coverage of UiPath FORWARD IV day two. Thanks for watching. (upbeat music)
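Daniel's ROI point earlier — the license is maybe 5% of a typical project's cost, so halving implementation time roughly halves the dominant cost — can be made concrete with a back-of-the-envelope sketch. The figures below are purely hypothetical, not UiPath's:

```python
# Back-of-the-envelope automation-project ROI (hypothetical figures).
# Per the interview, license is ~5% of project cost; the remaining ~95%
# is development and implementation labor.
def roi(license_cost, dev_cost, annual_benefit):
    """Simple first-year return-on-investment multiple."""
    return annual_benefit / (license_cost + dev_cost)

baseline = roi(5_000, 95_000, 150_000)   # full implementation effort
halved   = roi(5_000, 47_500, 150_000)   # same license, half the dev time

print(f"{baseline:.2f}x -> {halved:.2f}x")  # 1.50x -> 2.86x
```

Because the labor term dominates, compressing development time nearly doubles the return; multiplied across a thousand processes, as Daniel says, that compounding is what made time-to-value the competitive lever.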
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Marc Benioff | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Daniel | PERSON | 0.99+ |
Tom Montag | PERSON | 0.99+ |
Michael Dell | PERSON | 0.99+ |
2010 | DATE | 0.99+ |
Merck | ORGANIZATION | 0.99+ |
Larry Ellison | PERSON | 0.99+ |
Eric Yuan | PERSON | 0.99+ |
70% | QUANTITY | 0.99+ |
Daniel Dines | PERSON | 0.99+ |
Coca-Cola | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Applied Materials | ORGANIZATION | 0.99+ |
UiPath | ORGANIZATION | 0.99+ |
5% | QUANTITY | 0.99+ |
Japan | LOCATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
fifth | QUANTITY | 0.99+ |
144% | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
2000 customers | QUANTITY | 0.99+ |
Fifth time | QUANTITY | 0.99+ |
Daniel Dines | PERSON | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
60 billion | QUANTITY | 0.99+ |
FED | ORGANIZATION | 0.99+ |
three companies | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Sapiens | TITLE | 0.99+ |
first time | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
pandemic | EVENT | 0.99+ |
Splunk | ORGANIZATION | 0.98+ |
726 million | QUANTITY | 0.98+ |
one point | QUANTITY | 0.98+ |
Las Vegas | LOCATION | 0.98+ |
over 9,000 customers | QUANTITY | 0.98+ |
IDC | ORGANIZATION | 0.98+ |
2021 | DATE | 0.98+ |
TAM | ORGANIZATION | 0.97+ |
US | LOCATION | 0.97+ |
Venezuela | LOCATION | 0.97+ |
dozens | QUANTITY | 0.97+ |
9,000 plus customers | QUANTITY | 0.97+ |
earlier this week | DATE | 0.97+ |
ServiceNow | ORGANIZATION | 0.97+ |
day two | QUANTITY | 0.96+ |
Zoom | ORGANIZATION | 0.96+ |
Bitcoin | OTHER | 0.96+ |
UiPath FORWARD IV | TITLE | 0.96+ |
UiPath FORWARD IV. | TITLE | 0.96+ |
first year | QUANTITY | 0.95+ |
Columbia | LOCATION | 0.92+ |
about six months ago | DATE | 0.92+ |
Salesforce | ORGANIZATION | 0.91+ |
90-day shot | QUANTITY | 0.91+ |
theCUBE | ORGANIZATION | 0.9+ |
last 18 months | DATE | 0.9+ |
last two decades | DATE | 0.9+ |
Junaid Ahmed, AMET | UiPath FORWARD IV
(Upbeat Music) >> From the Bellagio Hotel in Las Vegas, it's theCUBE. Covering UiPath FORWARD IV. Brought to you by UiPath. >> Live from Las Vegas, it's theCUBE at UiPath FORWARD IV. Lisa Martin here with Dave Vellante. Day 2 of our coverage. We've been getting a lot of really great perspectives on automation and how it is impacting, significantly, every industry. We're pleased to have, from the keynote stage, Junaid Ahmed, the Corporate Vice President of Finance at Applied Materials. He's going to talk us through why he has a why-can't-we-automate-it-all attitude. Junaid, welcome to the program. >> Thank you so much. Pleasure to be here. >> So you have a really aggressive strategy for automation-led digital transformation. Your keynote this morning was great. It was, I just thought, strategically, so well thought out. And then, when you got up here before we went live, you started talking about how fast the time frame was. >> Yes. >> Give the audience an overview of the strategy, what you're aiming to do and how quickly you're expecting to see change. >> Yeah, absolutely. So when we set out, when we launched about two and a half years ago, the company had doubled in size in the prior five years. We were looking for it to double again. We were honest with ourselves, with the CFO and the finance leadership team: could we support the new wave of growth? And the answer was no. Okay, what do we do? We knew we had to do something, not just more things, but take a completely new view on things. That's how this whole initiative got incubated. And we took a bold approach. We said, we don't want just to cover the next five years, let's cover the next 20 years. Set ourselves up to make sure we do this right for the company and for our people. So, we basically set some very ambitious goals. The key KPI that we set as our true north is, we're going to get 50% of finance work effort all oriented around decision support. 
That's what helps move the needle for the company. Sure, we have our responsibilities to close the books, to do all the transactional stuff, to do all the reporting stuff. We will do that. But that can't be the mainstay anymore. That's just table stakes. And the business is screaming for this. It's just that we didn't have the levers and the tools to be able to do it. To pivot. But given the technological advancements, we said, "This is possible now." And that's- >> I think we have to set the table here with your industry. Because you started your journey to RPA automation in 2019. >> Yes. >> You participate in one of the most challenging, if not the most challenging, industries on the planet. >> Junaid: Hundred percent. >> Everybody, I don't know, maybe not the insiders, but everybody else missed, absolutely no, the insiders missed it too. What was the impact of the pandemic, right? And now, chips are in every part of our lives. We've got this massive chip shortage. And you know, Wall Street missed it. They said, "Oh, sell Applied Materials. Sell every semiconductor company." And then they realized, "Oh wow," kind of late into the cycle, that this is like a multi-year, perhaps a decade-long transition to, maybe, this never-ending demand, who knows? So that's the backdrop of your business. That was driving it. What was it like inside your company? >> So Dave, you know, what we could see, obviously we couldn't predict the pandemic. We could see long-term growth, right? Really tangible market inflection on the back of AI and big data. If you want to say where we made a big bet as a company, we went all in on AI. Right? We believed in that growth, at a time when I think not everyone was so convinced. Okay, is this going to be- how hard is this going to hit us? So, we had the benefit of going all in on AI and saying this is another big computing wave. The next big wave of computing. Coming off of mobile and social media. 
And Gary Dickerson, our CEO, bet the company that we're going to enable this growth. This is real. This is going to touch the whole global economy. So yes, that's a bet, a successful bet, the company made. No one could foresee what the pandemic would do, but we had the good fortune of saying we were reacting to the growth, that we were committed to service. And we knew we had to get ahead of it. So we quickly organized and got finance, our organization, well positioned to successfully support the company. Now, we got hit with the pandemic. Luckily for us, we were proactive, and then, you know what we did? We accelerated. >> So your move to automation was an offensive move- >> Junaid: Hundred percent. >> Not a panic move to respond to a pandemic. >> Hundred percent. What do investors want? Operating leverage. Operating leverage. >> Yeah. >> Okay. And then, right now all the models have a certain baseline. Size of company, complexity. Okay, you need a certain amount of leverage coming out of this model. The models are going to change. Those that don't change ahead of the models, they're going to play catch-up. It's not a fun ride. We wanted to be ahead. >> Well, I mean, talk about operating leverage. You're a company with, what, a 120+ billion dollar market cap? You've got 20+ billion dollars in revenue and you sell extremely expensive equipment. >> Extremely. >> And then a 5X revenue multiple. That's a trailing revenue multiple. I mean that's, that's impressive. That's operating leverage. >> Yes, but the bar keeps moving. You've got to stay ahead, right? You've got to be a leader. We're a leader. We've been a leader for five decades. It's the leadership mindset, I would say, in the company and our leadership team, that really propelled us towards this. The leadership of our CFO, Dan Durn, who invested. He made a bet. No one, you know... now we're sitting here with almost 300,000 hours automated. We didn't have the playbook when we did it. >> You created the playbook. 
>> We created the playbook. >> Talk to me about the appetite, because obviously aggressive leadership, bold leadership, talk to me about the appetite to be able to transform so quickly. Such that when, as Dave said, you're on the offensive, such that when the pandemic came, you leveraged that as an accelerator of what you've already been doing. Because culturally, that's challenging for folks to get on board to. How did you do that? >> I have to say, it is challenging. And at times it feels counter-intuitive. We were going through the pandemic. We were having a large M&A integration happening, okay, and we're transforming finance. And we're a resource-constrained organization. Then you tell your people, "We've got more work to do. Transformation." And you're like, "Is that the right thing to do? Isn't everyone going to leave?" But when you dig deep, you say, "How do you get mind share?" How do you, first of all, you have to get people to see the value and then you have to make sure you do it fast enough, where they want to stick around. It's counter-intuitive. "Hey, we're going to launch this new platform. It's going to take three and a half years. All right everyone, we're going to do this." What happens? People are like, in-out. Okay yeah, it'll come, we'll deal with it. Then instead, you say, "Hey, we're going to transform the way we plan. Completely. Top to bottom. 10 months. We're going to do it. Here's what you're going to be at your hands- Here's what you're going to have at your disposal in 10 months, all right? Oh, by the way, we're just showing you the high level. You get to really design. What do you want?" Now, when you have credibility, street cred with your organization, and you come out and say, "I'm going to give you top to bottom agility around forecasting and you get to have input on what you really want." Now people get excited. 
Like, "Oh, I'm going to work 25% more but wait a second, I'm really excited about what I get at the end of 10 months." >> So, the world was betting several years ago on the consolidation of fabs. "Oh, that's bad for Applied Materials." The exact opposite happened. You know, ARM changed the model, wafer volumes are going through the roof. Now Intel is basically following that playbook, which is wonderful, they're breaking ground in Arizona. Which is, you have these massive tailwinds behind you. So I'm interested in how you forecast that and what role automation plays in that forecasting. >> Well, if you think about it, the fundamental demand isn't changing. Capacity has to go in. People think, wait a second, so and so is going to build less or whatever. The capacity, maybe geographically, is going to get dispersed out but it still has to go in. So I think it doesn't change the fundamental demand statement. Then, how does automation play into- I just think that the fundamental nature and pace of business is changing. For us. And our customers are going through the same. So we have to be more reactive, we have to be able to respond to their needs. That whole thing cascades down into the organization. All the way deep into finance analyst forecasting, right? So, if everyone has to work off a weekly, monthly, quarterly cadence, you're too slow. Too late. Doesn't matter how good your plan is. It's old. It's stale. We're moving into a time and era where everything happens realtime. It always happened realtime but we just never had the tools to react realtime. Now, we have realtime business performance, enterprise-grade dashboards. Any minute of the day you can see what the revenue forecast is, what the margin associated with that is. Yes, when we get into the official commit cycle everything firms up but it's not the big crank, right? You're fine tuning the knobs now. Which is great. What do you want in a plan? You want greater optionality. Is there a perfect plan? 
Of course there isn't. What is the North Star of forecasting? Give me as many options as- viable options, and then let me decide. Because there's trade-offs. There's no one perfect plan. But you were limited. It just took too long to put a plan together. So you had very small degrees of freedom around it. Viable plans. We're changing all of that. >> This might be out of your swim lane, but you had a slide up today and it had the IT in the middle- >> Yes. >> So technology's fundamental. And then, you had the elephant. The Hadoop elephant in the room. So I have to ask you, you guys announced this thing earlier this year called AI to the power of X, actionable insights. I remember reading about it, it's like you're collecting data across all the estate. So I'm like, wow, this is a data company. Becoming a data company. So we've been talking a lot, and of course the CFO purview is the reporting, and I get that. The close, daily close, virtual close, all that. But then there's this whole line-of-business data play. >> Yes. >> And I'm wondering how automation fits there. I mean, that's got to be part of the vision. >> Yeah. Now, I can't speak to the capabilities you're talking about, but we are leveraging some of that infrastructure, right? We have an amazing IT organization. I have to say, we within Applied, we're a latecomer. From a product, customer product standpoint, already there is so much AI work being done. So we had the benefit of leveraging some of their capabilities for finance, when we launched Agile Finance. There is a lot going on over there. I think we actually enhanced that by introducing these RPA capabilities. And we did so from partnering with, I wouldn't say partnering, IT co-piloted this with us. Fundamentally co-piloted this, okay. And now, IT is taking it to other organizations. They're taking it to product, they're taking it to operations, they're taking it to sales. So it will have a role. Hundred percent. 
But they're obviously starting; the past three to six months is when they got started. So the answer is yes, for sure, but I can't speak to exactly how it plays into that specific technology. >> But you addressed the dynamic. Which is, it started in a quick-win part of the company, finance. >> Yes. >> Which is logical. That's where I was first introduced to RPA a decade ago, at a CFO conference, right? Then that now applies to the rest of the business. They're talking about operating leverage- >> Fundamental. Yeah. Hundred percent. >> How do you get that buy-in? How do you get finance and how do you get IT to work with finance, such that IT becomes a catalyst in all these downstream reactions to get this going across the company? >> Important question. >> Well they work for you. >> They don't. >> Oh they don't. >> They don't work for us. They work with me. I'm a customer of theirs. >> Okay. >> The first person that I needed to convince that we were serious and we're going to do it is the CIO. Okay, so you ask how do you get IT bought in? Well first thing, you have to get them in the tent. This is not about, "Oh, can you go do this for me? I need this from you. Can you do that?" Too slow, okay? This RPA, especially RPA, fundamentally, is such a, it's a technology that really needs to get embedded throughout the IT operating model. So you really need IT co-piloting this with you. This is how we did it. We said we're going to learn together. This is a must have for finance. We believe strongly this is going to become a must have for the enterprise, but we're going to make the investment. In that must have for the enterprise, IT has to play the role, right? So we started this together and we learned together and they've been fundamental in our being able to get to scale in 12 months. >> How do you federate governance? Who in the organization, what part of the organization owns governance, if you will? >> Yeah. So we created, established an RPA COE. 
They own the governance, the policies, the processes. Then, obviously there's a role to play for the business side. So we in finance are a business organization to them, and there's roles to play. We actually, like I showed today in the presentation, there's multiple other players across the enterprise that have to vet these automations, right? Especially in finance. We have to be SOX compliant, we have to be data privacy compliant. We set all of those processes up. So, multiple parties have to engage, but engage in an efficient way. >> We're seeing the CFO role emerge. I think of you as a CFO. I mean, I just use that umbrella, emerge as an innovator. I see this all over the place now, especially in Silicon Valley. You look at a company like Snowflake, I don't know if you know Mike Scarpelli, but he kind of changed the world of software in some ways. So you're seeing very innovative CFOs emerge, that are technology savvy, they understand the operating leverage, we've used that term several times today, that you can get out of technology. It just reminds me, I don't know how long ago it was when Nick Carr wrote the book Does IT Matter? It seems like technology has never been more important. Along with people and process, of course, but in terms of creating that operating leverage, it's really a key part of the equation, the playbook going forward. >> I think it is a mindset change. We're trying to drive mindset change, right? But it's also, I think, come about because I think technology has become more friendly to non-IT people. I think that's a fundamental driver. All these SaaS platforms in the marketplace, right? What were they designed for? Business users. Of course IT has a very prominent role in that whole process and supporting it and implementing it. But the target audience is business users. What was the target audience for ERP? IT. Okay. Fundamentally, the technology is changing by design and you're seeing now the impact of that. Where, "Hey wait, I can do this. 
I can do this by myself." Okay. IT always has been and will be a very important partner. They will service your data needs. This is how we're setting up the collaboration, right? But we really want the finance users to be able to iterate, model, analyze on the fly, in the moment. And they need to do it alone. >> Self serve, yeah. >> That's it. >> Self serve in realtime. I think one of the things, you mentioned it this morning, you mentioned it on our program, and one of the things we've learned in the pandemic is that realtime and access to realtime data is no longer a nice-to-have. >> Yes. >> It's really a business-critical element of any industry. >> Hundred percent. >> When do you think you'll put crypto on your balance sheet? I ask all the CFOs. >> He's been asking everyone that. >> There's an easy answer. I'm not authorized to answer. Above my pay grade. >> That's a good answer. >> That's good. >> Junaid, thank you so much for joining us, talking to us about the transformation at Applied Materials, how you're partnering with UiPath to achieve that, and the aggressive strategy that you've set out, and congratulations on the success of it. We'll look forward to seeing what's going on in the next couple years. >> Great story. >> Of course. Thank you very much. Thank you for having me. >> Our pleasure. For Dave Vellante in Las Vegas, I'm Lisa Martin. You're watching theCUBE at UiPath Forward IV. Day two of our coverage. Stick around, we'll be right back with our next guest. (upbeat music)
Shruthi Murthy, St. Louis University & Venkat Krishnamachari, MontyCloud | AWS Startup Showcase
(gentle music) >> Hello and welcome to today's session, theCUBE's presentation of the AWS Startup Showcase, powered by theCUBE. I'm John Furrier, your host of theCUBE. This is a session on breaking through with DevOps, data analytics tools, and cloud management tools with MontyCloud, and cloud management migration. Thanks for joining me, I've got two great guests: Venkat Krishnamachari, who's the co-founder and CEO of MontyCloud, and Shruthi Sreenivasa Murthy, solution architect, Research Computing Group, St. Louis University. Thanks for coming on to talk about transforming IT, day one, day two operations. Venkat, great to see you. >> Great to see you again, John. >> So in this session, I really want to get into this cloud powerhouse theme you guys were talking about before on our previous Cube Conversations and what it means for customers, because there is a real market shift happening here. And I want to get your thoughts on what the solution is to the problem, basically, that you guys are targeting. >> Yeah, John, cloud migration is happening rapidly. Not an option. It is the current and the immediate future of many IT departments and any type of computing workloads. And applications and services these days are better served by cloud adoption. This rapid acceleration is where we are seeing a lot of challenges, and we've been helping customers with our platform so they can go focus on their business. So happy to talk more about this. >> Yeah, and Shruthi, if you can just explain your relationship with these guys, because you're a cloud architect, you can try to put this together. MontyCloud is your customer, talk about your solution. >> Yeah, I work at St. Louis University as the solutions architect for the office of the Vice President of Research. We can address St. Louis University as SLU, just to keep it easy. SLU is a 200-year-old university with more focus on research. 
And our goal at the Research Computing Group is to help researchers by providing the right infrastructure and computing capabilities that help them to advance their research. So here at SLU, the research portfolio is quite diverse, right? We do research on vaccines, economics, geospatial intelligence, and many other really interesting areas, and you know, it involves really large data sets. So one of the Research Computing Group's ambitious plans is to move as many high-end computation applications from on-prem to AWS. And I lead all the cloud initiatives for St. Louis University. >> Yeah, Venkat and I, we've been talking, many times on theCUBE, previous interviews, about, you know, the rapid agility that's happening with serverless and functions, and, you know, with microservices you start to see massive acceleration of how fast cloud apps are being built. It's put a lot of pressure on companies to hang on and manage all this. And whether it's your security group trying to lock down something, or it's just, it's so fast, the cloud development scene is really fun and you're implementing it at a large scale. What's it like these days from a development standpoint? You've got all this greatness in the cloud. What's the DevOps mindset right now? >> SLU is slowly evolving itself as the AWS Center of Excellence here in St. Louis. And most of the workflows that we are trying to implement on AWS use DevOps and, you know, CI/CD pipelines. And basically we want it ready and updated for the researchers where they can use it and not have to wait on any of the resources. So it has a lot of importance. >> Research as code, it's like the internet, infrastructure as code is DevOps' ethos. 
Venkat, let's get into where this all leads to, because you're seeing a culture shift in companies as they start to realize that if they don't move fast and remove the blockers that get in the way of innovation, they really can't get their arms around this growth as an opportunity to operationalize all the new technology. Could you talk about the transformation goals that are going on with your customer base? What's going on in the market? Can you explain and unpack the high-level market around what you guys are doing? >> Sure thing, John. Let's bring up slide one. John, every little application, commercial application, even internal IT departments, they're all transforming fast. Speed has never been more important than in the era we are in today. For example, COVID research, you know, analyzing massive data sets to come up with some recommendations. They do demand a lot from the IT departments so that researchers and developers can move fast. And IT departments are not only moving current workloads to the cloud, they're also ensuring the cloud is being consumed the right way, so researchers can focus on what they do best. What we're learning, working closely with customers, and gathering is that there are three steps, or three major, you know, milestones, that we like to achieve. I would start with the outcome, right? The important milestone IT departments are trying to get to is transforming such that they're directly tied to the key business objectives. Everything they do has to be connected to the business objective, which means the time and, you know, budget and everything's aligned towards what they want to deliver. IT departments we talk with have one common goal. They want to be experts in cloud operations. They want to deliver cloud operations excellence so that researchers and developers can move fast. But they're almost always, you know, time poor, right? 
And there are budget gaps, and there is a talent and tooling gap. A lot of that is what's causing the, you know, challenges on their journey. And we have taken a methodical and deliberate position in helping them get there. >> Shruthi, what's your reaction to that? Because, I mean, you want it faster, cheaper, better than before. You don't want to have all the operational management hassles. You mentioned that you guys want to do this turnkey. Is that the use case that you're going after? Just research being researchers, having access at their fingertips to all these resources? What's the mindset there, what's your expectation? >> Well, one of the main expectations is to be able to deliver it to the researchers on demand and as needed, and, you know, moving from a traditional on-prem HPC to cloud would definitely help, because, you know, we are able to give the right resources to the researchers and able to deliver projects in a timely manner, and, you know, with some additional help from MontyCloud's data platform, we are able to do it even better. >> Yeah, I like the onboarding thing, and it's easy and you get value quickly; that's the cloud business model. Let's unpack the platform, let's go under the hood. Venkat, if you can, take us through some of the moving parts under the platform, then as you guys have it up at the high level. The market's obvious for everyone out there watching: cloud ops, speed, stability. But let's go look at the platform. Let's unpack that; do you mind picking up on slide two, and let's go look at what's going on in the platform. >> Sure. Let's talk about what comes out of the platform, right? They are directly tied to what the customers would like to have, right? Customers would like to fast track their day one activities. Solution architects, such as Shruthi, their role is to try and help get out of the way of the researchers, and be ubiquitous around delegating cloud solutions, right? 
Our platform acts like a seasoned cloud architect. It's as if you've instantly turned on a cloud solution architect that they can bring online and say, hey, I want help here to go faster. Our platform then has capabilities that help customers provision a set of governance contracts and drive consumption in the right way. One of the key things about driving consumption the right way is to ensure that we prevent security, cost, or compliance issues from happening in the first place, which means you're shifting a lot of the operational burden to the left and making sure that when provisioning happens, you have guardrails in place. We help with that; the platform solves the problem without writing code. And an important takeaway here, John, is that it was built for architects and administrators who want to move fast without having to write a ton of code. And it is also a platform where they can bring online autonomous bots that can solve problems. For example, when it comes to post-provisioning, everybody is in the business of ensuring security because it's a shared model. Everybody has to keep an eye on compliance, that is also a shared responsibility, so is cost optimization. So we thought, wouldn't it be awesome to have architects such as Shruthi turn on a compliance bot on the platform that gives them the peace of mind that somebody else, an autonomous bot, is watching out 24 by 7 and making sure that these day two operations don't throw curve balls at them, right? That's important for agility. So the platform solves that problem with an automation approach. Going forward, on an ongoing basis, right, the operational burden is what gets IT departments. We've seen that happen repeatedly. Like, IT departments, you know, you know this, John, maybe you have some thoughts on this. You know, if you have some comments on how IT can face this, then maybe that's better to hear from you. 
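The "shift-left" guardrail idea Venkat describes, evaluating a requested configuration against policy before anything is provisioned, can be sketched in a few lines. Everything below (rule names, request fields, the policy set) is an illustrative assumption, not MontyCloud's actual rule syntax or API:

```python
# Hypothetical sketch of a shift-left provisioning guardrail: requested
# resource configurations are checked against policy rules BEFORE anything
# is created, so security/cost/compliance issues are prevented rather than
# remediated after the fact.

GUARDRAILS = [
    # (rule name, predicate that must hold for the request to pass)
    ("storage-must-be-encrypted",
     lambda r: r["type"] != "storage" or r.get("encrypted", False)),
    ("no-public-ingress",
     lambda r: "0.0.0.0/0" not in r.get("ingress", [])),
    ("instance-size-within-budget",
     lambda r: r.get("size", "small") in ("small", "medium", "large")),
]

def check_provision_request(request):
    """Return (allowed, violations) for a requested resource config."""
    violations = [name for name, passes in GUARDRAILS if not passes(request)]
    return (not violations, violations)

# A compliant request sails through; a risky one is blocked up front.
ok, _ = check_provision_request(
    {"type": "storage", "encrypted": True, "ingress": []})
blocked, problems = check_provision_request(
    {"type": "vm", "size": "16xlarge", "ingress": ["0.0.0.0/0"]})
```

In a real setup a check like this would sit in front of the actual provisioning call, so a request that violates policy never reaches the cloud API at all.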
>> No, well first I want to unpack that platform, because I think one of the advantages I see here, and that people are talking about in the industry, is the collision between security postures and rapid cloud development, because DevOps and cloud folks are moving super fast. They want things done at the point of coding and in the CI/CD pipeline, as well as any kind of changes; they want it fast, not weeks. They don't want to have someone blocking it like a security team, so automation with the compliance is beautiful, because now the security teams can provide policies. Those policies can then go right into your platform. And then everyone's got the rules of the road, and then anything that comes up gets managed through the policy. So I think this is a big trend that nobody's talking about, because this allows the cloud to go faster. What's your reaction to that? Do you agree? >> No, precisely right. I'll let Shruthi jump on that, yeah. >> Yeah, you know, I just wanted to bring up one of the case studies where we used their compliance bot. So the Research Electronic Data Capture, also known as REDCap, is a web application. It's a HIPAA-regulated web application, and one of the flagship projects for the research group at SLU. REDCap was running on traditional on-prem infrastructure, so maintaining the servers and updating the application to its latest version was definitely a challenge. And also granting access to the researchers had long lead times because of the rules and security protocols in place. So we wanted to be able to build a secure and reliable environment on the cloud where we could just provision on demand, and in turn ease the job of updating the application to its latest version without disturbing the production environment. Because this is a really important application, most of the doctors and researchers at St. Louis University, the School of Medicine, and St. Louis University Hospital are users. 
So given this challenge, we wanted to bring in MontyCloud's cloud ops and, you know, security expertise to simplify the provisioning. And that's when we implemented this compliance bot. Once it is implemented, it's pretty easy to understand, you know, what is compliant, what is noncompliant with the HIPAA standards, and where it needs remediation efforts and what we need to do. And again, that can also be automated. It's nice and simple, and you don't need a lot of cloud expertise to go through the compliance bot and come up with your remediation plan. 
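A post-provisioning compliance scan of the kind described here, run every check, report compliant versus noncompliant, and emit a remediation plan, might look roughly like this. The bot reportedly runs more than 160 checks; the three checks, resource fields, and fix strings below are invented purely for illustration:

```python
# Illustrative sketch of a post-provisioning compliance bot: run a battery
# of checks over deployed resources and emit a remediation plan for the
# failures. Check names and resource fields are hypothetical.

def run_compliance_scan(resources, checks):
    """Scan resources and return compliant/noncompliant results plus a plan."""
    report = {"compliant": [], "noncompliant": [], "plan": []}
    for res in resources:
        for name, passes, fix in checks:
            if passes(res):
                report["compliant"].append((res["id"], name))
            else:
                report["noncompliant"].append((res["id"], name))
                report["plan"].append(f"{res['id']}: {fix}")
    return report

CHECKS = [
    ("encryption-at-rest", lambda r: r.get("encrypted", False),
     "enable storage encryption"),
    ("audit-logging", lambda r: r.get("logging", False),
     "turn on access logging"),
    ("backups-enabled", lambda r: r.get("backups", False),
     "schedule automatic backups"),
]

report = run_compliance_scan(
    [{"id": "redcap-db", "encrypted": True, "logging": False, "backups": True}],
    CHECKS)
```

The point of the design is that the scan output doubles as the remediation plan, so a non-expert can read the failures and act on them, or feed them to further automation.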
Cause that's really the harmony here then, okay. This is like a, I'm joking aside. This is a real cultural issue between speed of innovation and the, what could be viewed as a block, or just the time that say security teams or other teams might want to get back to you, make sure things are compliant. So that could slow things down, that tension is real and there's some disconnects within companies. >> Yeah John, that's spot on, and that means we have to do a better job, not only solving the traditional problems and make them simple, but for the modern work culture of integrations. You know, it's not uncommon like you cut out for researchers and architects to talk in a Slack channel often. You say, Hey, I need this resource, or I want to reconfigure this. How do we make that collaboration better? How do you make the platform intelligent so that the platform can take off some of the burden off of people so that the platform can monitor, react, notify in a Slack channel, or if you should, the administrator say, Hey, next time, this happens automatically go create a ticket for me. If it happens next time in this environment automatically go run a playbook, that remediates it. That gives a lot of time back that puts a peace of mind and the process that an operating model that you have inherited and you're trying to deliver excellence and has more help, particularly because it is very dynamic footprint. >> Yeah, I think this whole guard rail thing is a really big deal, I think it's like a feature, but it's a super important outcome because if you can have policies that map into these bots that can check rules really fast, then developers will have the freedom to drive as fast as they want, and literally go hard and then shift left and do the coding and do all their stuff on the hygiene side from the day, one on security is really a big deal. Can we go back to this slide again for the other project? There's another project on that slide. 
You talked about RED, was it REDCap, was that one? >> Yeah. Yeah, so REDCap, what's the other project. >> So SCAER, the Sinfield Center for Applied Economic Research at SLU is also known as SCAER. They're pretty data intensive, and they're into some really sophisticated research. The Center gets daily dumps of sensitive geo data sensitive de-identified geo data from various sources, and it's a terabyte so every day, becomes petabytes. So you know, we don't get the data in workable formats for the researchers to analyze. So the first process is to convert this data into a workable format and keep an analysis ready and doing this at a large scale has many challenges. So we had to make this data available to a group of users too, and some external collaborators with ads, you know, more challenges again, because we also have to do this without compromising on the security. So to handle these large size data, we had to deploy compute heavy instances, such as, you know, R5, 12xLarge, multiple 12xLarge instances, and optimizing the cost and the resources deployed on the cloud again was a huge challenge. So that's when we had to take MontyCloud help in automating the whole process of ingesting the data into the infrastructure and then converting them into a workable format. And this was all automated. And after automating most of the efforts, we were able to bring down the data processing time from two weeks or more to three days, which really helped the researchers. So MontyCloud's data platform also helped us with automating the risk, you know, the resource optimization process and that in turn helped bring the costs down, so it's been pretty helpful then. >> That's impressive weeks to days, I mean, this is the theme Venkat speed, speed, speed, hybrid, hybrid. A lot of stuff happening. I mean, this is the new normal, this is going to make companies more productive if they can get the apps built faster. 
What do you see as the CEO and founder of the company you're out there, you know, you're forging new ground with this great product. What do you see as the blockers from customers? Is it cultural, is it lack of awareness? Why aren't people jumping all over this? >> Only people aren't, right. They go at it in so many different ways that, you know, ultimately be the one person IT team or massively well-funded IT team. Everybody wants to Excel at what they're delivering in cloud operations, the path to that as what, the challenging part, right? What are you seeing as customers are trying to build their own operating model and they're writing custom code, then that's a lot of need for provisioning, governance, security, compliance, and monitoring. So they start integrating point tools, then suddenly IT department is now having a, what they call a tax, right? They have to maintain the technical debt while cloud service moving fast. It's not uncommon for one of the developers or one of the projects to suddenly consume a brand new resource. And as you know, AWS throws up a lot more services every month, right? So suddenly you're not keeping up with that service. So what we've been able to look at this from a point of view of how do we get customers to focus on what they want to do and automate things that we can help them with? >> Let me, let me rephrase the question if you don't mind. Cause I I didn't want to give the impression that you guys aren't, you guys have a great solution, but I think when I see enterprises, you know, they're transforming, right? So it's not so much the cloud innovators, like you guys, it's really that it's like the mainstream enterprise, so I have to ask you from a customer standpoint, what's some of the cultural things are technical reasons why they're not going faster? 
'Cause everyone's, maybe it's the pandemic, forcing projects to be doubled down on, or some are going to be cut. This common theme of making things available faster, cheaper, stronger, more secure is what cloud does. What are some of the enterprise challenges that they have? >> Yeah, you know, it might vary, right? There are some cultural challenges. Like Andy Jassy says, sometimes it's leadership, right? You want top-down leadership that takes a deterministic step towards transformation, then adequately funds the team with the right skills and the tools; a lot of that plays into it. And there's inertia, typically, in an existing process. When you go to the cloud you can do 10X better, and people see that, but it doesn't always percolate down to how you get there. So those challenges are compounded, and digital transformation leaders have to, you know, make that deliberate bet and be more KPI-driven. One of the things we are seeing in companies that do well is that the leadership decides: here are our top business objectives and KPIs, and we want the software and the services and the cloud division to support those objectives. When they take that approach, transformation happens. But that is a lot easier said than done. >> Well, you're making it really easy with your solution, and we've done multiple interviews. I've got to say, you're really onto something with this provisioning and the compliance bots. That's really strong, and it only gets stronger from there, with the trend of security being built in. Shruthi, I've got to ask you, since you're the customer: what's it like working with MontyCloud? It sounds so awesome; you're the customer, you're using it. What's your review, what's your take on them? >> Yeah, they are doing a pretty good job in helping us automate most of our workflows.
And when it comes to keeping a tab on the resources and their utilization, so we can keep a tab on the cost in turn, you know, their compliance bots and their cost optimization tab are pretty helpful. >> Yeah, well, you're knocking projects down from three weeks to days; looking good, I mean, looking real strong. Venkat, this is the track record you want to see with successful projects. Take a minute to explain what else is going on with MontyCloud, other use cases that you see that are really primed for MontyCloud's platform. >> Yeah, John, a quick minute there. Autonomous cloud operations is the goal. It's never done, right? There's always some work that you do hands-on. But if you set a goal such that customers have a solution that automates most of the routine operations, then they can focus on the business. So we are going to relentlessly focus on the fact that autonomous operations will make digital transformation happen faster, and we can create a lot more value for customers if they deliver to their KPIs and objectives. So our investments in the platform are going more towards that. Today we already have a fully automated compliance bot, a security bot, a cost optimization recommendation engine, and a provisioning and governance engine. Where we're going is enhancing all of this and providing customers a lot more fluidity in how they can use our platform: click to perform your routine operations, click to set up rules-based automatic escalation or remediation. Cut down the number of hops a particular process takes, and foster collaboration. All of this is where our platform is going, enhancing more and more. We intend to learn more from our customers and deliver better for them as we move forward. >> That's a good business model: make things easier, reduce the steps it takes to do something, and save money. And you're doing all those things with the cloud. Awesome stuff.
It's really great to hear your success stories and the work you're doing over there. Great to see researchers getting to do their jobs faster. And tons of data; you've got petabytes coming in. It's pretty impressive. Thanks for sharing your story. >> Sounds good. And, you know, one quick call-out is that customers can go to MontyCloud.com today. Within 10 minutes they can get an account, and they get very actionable and valuable recommendations on where they can save costs and what security and compliance issues they can fix. There's a ton of out-of-the-box reports: one click to find out whether you have some data that is not encrypted, or whether any of your servers are open to the world. A lot of value that customers can get in under 10 minutes. And we believe in that model: give the value to customers. They know what to do with it, right? So customers can go sign up for a free trial at MontyCloud.com today and get the value. >> Congratulations on your success and great innovation. A startup showcase here with theCUBE coverage of the AWS Startup Showcase: breakthroughs in DevOps, Data Analytics and Cloud Management with MontyCloud. I'm John Furrier, thanks for watching. (gentle music)
>> Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments or spins, with the total energy given by the expression shown at the bottom left of this slide. Here, the sigma variables are meant to take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i, j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy. An instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins n for worst-case instances at each n. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances. And it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions.
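The Ising energy and the exponential cost of exact search described above can be made concrete in a few lines. This is a minimal sketch (not the speaker's code), using a hypothetical 3-spin instance and the standard sign convention E = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i; the talk's slide formula is assumed to match this convention.

```python
import itertools

def ising_energy(spins, J, h):
    """Total energy E = -sum_{i<j} J[i][j] s_i s_j - sum_i h[i] s_i."""
    n = len(spins)
    e = -sum(J[i][j] * spins[i] * spins[j]
             for i in range(n) for j in range(i + 1, n))
    e -= sum(h[i] * spins[i] for i in range(n))
    return e

def brute_force_ground_state(J, h):
    """Exhaustively search all 2^n spin assignments (exponential cost)."""
    n = len(h)
    best = min(itertools.product([-1, 1], repeat=n),
               key=lambda s: ising_energy(s, J, h))
    return best, ising_energy(best, J, h)

# Hypothetical 3-spin instance: ferromagnetic bonds favor aligned spins,
# and a small field on spin 0 breaks the up/down degeneracy.
J = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
h = [0.1, 0, 0]
state, energy = brute_force_ground_state(J, h)
```

The 2^n enumeration is exactly the scaling the talk refers to; it is fine at n = 3 and hopeless at n = 100, which is what motivates heuristics and hardware such as the CIM.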
Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root-exponential for n up to approximately 4,500. This gives some indication of the runtime scaling for generic as opposed to worst-case problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with n ranging from 131 to 744,710. Instances from this library with n between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with n greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with n equal to 19,289, requiring approximately two days of run time on a single 2.4-GHz core.
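As a toy illustration of the heuristic-versus-exact trade-off described here, the classic nearest-neighbor construction for the TSP runs in polynomial time and often gets close to the optimum without any optimality guarantee. This sketch is purely illustrative and is unrelated to the benchmark solvers cited in the talk.

```python
import math

def tour_length(tour, pts):
    """Length of a closed tour over 2D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor_tour(pts, start=0):
    """Greedy heuristic: repeatedly visit the closest unvisited city.
    Polynomial time, no optimality guarantee."""
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: math.dist(pts[tour[-1]], pts[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Toy instance: four corners of a unit square; the optimal tour has length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbor_tour(pts)
```

On this tiny instance the greedy tour happens to be optimal; on larger instances it typically lands within tens of percent of optimal, which is why the much more sophisticated heuristics in the cited benchmarks matter.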
Now, if we simple-mindedly extrapolate the root-exponential scaling from the study up to n = 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the n = 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has n equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single 2.4-GHz core. But the much larger so-called World TSP benchmark instance, with n equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance.
Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on the analysis of coherent Ising machine architectures and associated optimization algorithms. These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics, and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right.
The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or perhaps mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the sync-pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string, giving a proposed solution of the Ising ground-state problem.
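The pump-ramp dynamics just described are often studied through a simplified classical mean-field model of the pulse amplitudes. The sketch below is one such caricature, Euler-integrated with hypothetical parameter values, and is emphatically not the actual FPGA-based machine: gain saturation is modeled by a cubic term and the measurement feedback by a linear coupling.

```python
import random

def cim_anneal(J, steps=4000, dt=0.01, eps=0.25, p_max=2.0, seed=1):
    """Euler-integrate a simplified mean-field CIM model:
        dx_i/dt = (p - 1) x_i - x_i^3 + eps * sum_j J[i][j] x_j,
    while the normalized pump p ramps linearly from 0 to p_max.
    Final signs of the amplitudes are read out as Ising spins."""
    rng = random.Random(seed)
    n = len(J)
    x = [1e-3 * rng.uniform(-1, 1) for _ in range(n)]  # near-vacuum start
    for t in range(steps):
        p = p_max * t / steps
        feedback = [eps * sum(J[i][j] * x[j] for j in range(n))
                    for i in range(n)]
        x = [xi + dt * ((p - 1) * xi - xi ** 3 + fi)
             for xi, fi in zip(x, feedback)]
    return [1 if xi > 0 else -1 for xi in x]

# Hypothetical 4-spin ferromagnetic ring: ground states are all-up / all-down,
# so the lowest-threshold collective mode has all amplitudes aligned.
J = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
spins = cim_anneal(J)
```

Because the aligned mode has the lowest oscillation threshold, it grows first as the pump crosses threshold and saturates, so the readout lands in a ground state; on frustrated instances this simple model exhibits exactly the trapping phenomena discussed later in the talk.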
This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory, namely, a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, with gain equal to dissipation, the OPO undergoes a sort of lasing transition, and the steady states of the OPO above this threshold are essentially coherent states.
There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing, it will inject a perturbation into the other that may interfere either constructively or destructively with the field that it is trying to generate by its own lasing process. As a result, it can easily be shown that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or antiferromagnetic n = 2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase.
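The threshold-lowering argument for two coupled OPOs boils down to a one-line calculation per collective mode. The expressions below are a schematic sketch in normalized units (loss and alpha are hypothetical numbers, not derived from the experiment): the symmetric mode sees the coupling as extra gain when alpha is positive, the antisymmetric mode when alpha is negative.

```python
def mode_thresholds(alpha, loss=1.0):
    """Linearized two-OPO model in normalized units: a collective mode
    oscillates when the pump gain g exceeds its effective loss.
    Symmetric mode (equal phases):        threshold g = loss - alpha
    Antisymmetric mode (opposite phases): threshold g = loss + alpha
    """
    return {"symmetric": loss - alpha, "antisymmetric": loss + alpha}

# Ferromagnetic coupling (alpha > 0): the equal-phase mode lases first.
ferro = mode_thresholds(alpha=0.3)
# Antiferromagnetic coupling (alpha < 0): the opposite-phase mode lases first.
antiferro = mode_thresholds(alpha=-0.3)
```

Ramping the pump from zero thus reads out the n = 2 ground state for free: whichever mode has the lower threshold is the one that starts to lase.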
Clearly, we can imagine generalizing this story to larger n. However, the story doesn't stay this clean and simple for all larger problem instances, and to find a more complicated example we only need to go to n = 4. For some choices of J at n = 4, the story remains simple, like the n = 2 case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated n = 4 instance, in which the first-bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first-bifurcated critical point flows to a very good but sub-optimal minimum at large pump power. The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behavior seems to become more common at larger n. For the n = 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-n examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp.
Of course, n = 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-n limit, we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of n = 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger n. Our initial approach to characterizing CIM behavior in the large-n regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, etc. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So, in closing, I should acknowledge the people who did the hard work on the things that I've shown.
My group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI Research Labs. I should also acknowledge funding support from the NSF through the Coherent Ising Machines Expedition in Computing, and also from NTT PHI Research Labs, the Army Research Office, and ExxonMobil. That's it, thanks very much. >> I'd like to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. Let me acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics, and some of the biggest examples are metamaterials, which are arrays of small resonators. More recently, the field of topological photonics is trying to implement a lot of the topological behaviors of condensed-matter models in photonics, and if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators.
So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: the simple summation over the spins, where spins can be either up or down and the couplings are given by the J_ij. The Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? And this problem is shown to be an NP-hard problem. So it's computationally important because it's representative of the NP problems, and NP problems are important because, first, they're hard on standard computers if you use a brute-force algorithm, and second, they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is is a resonator with nonlinearity in it: we pump these resonators and we generate the signal at half the frequency of the pump. One photon of the pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. And if you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendula. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, which is the length of the string, and that modulation acts as a pump: it will produce an oscillation, a signal, at half the frequency of the pump.
And I have two of them, to show you that they can acquire these phase states. They're still phase- and frequency-locked to the pump, but they can oscillate in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, or up or down. To implement a network of these resonators, we use a time-multiplexing scheme, and the idea is that we put pulses in the cavity; these pulses are separated by the repetition period that you put in, or T_R. You can think about these pulses in one resonator as temporally separated synthetic resonators, and if you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator 1 to 2, 2 to 3, and so on. If you look at the second delay, which is two times the repetition period, it couples 1 to 3, and so on. And if you have n minus 1 delay lines, then you can have any potential couplings among these synthetic resonators. If I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right time, then I can have a programmable, all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is then having these OPOs, each of which can be either zero or pi, and I can arbitrarily connect them to each other. Then I start by programming this machine to a given Ising problem, just by setting the couplings and setting the controllers in each of those delay lines. So now I have a network which represents an Ising problem, and the Ising problem then maps to finding the phase state that satisfies the maximum number of coupling constraints.
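The delay-line construction can be sketched as a mapping from modulator settings to a coupling matrix. The data layout here (a dict of per-pulse modulator amplitudes for each delay multiple) is a hypothetical illustration of the idea, not the experimental control code: the delay line of length k * T_R routes a sample of pulse i onto pulse i + k, and the modulator value at that moment sets the coupling strength.

```python
def coupling_matrix(n, delay_weights):
    """Build the n x n coupling matrix realized by a set of delay lines.
    delay_weights[k][i] is the (hypothetical) modulator amplitude on the
    k*T_R delay line at the time pulse i passes through, which couples
    pulse i onto pulse i + k (indices taken mod n for a ring cavity)."""
    J = [[0.0] * n for _ in range(n)]
    for k, weights in delay_weights.items():
        for i in range(n):
            J[(i + k) % n][i] = weights[i]
    return J

# 4 pulses, two delay lines: k=1 couples i -> i+1, k=2 couples i -> i+2.
J = coupling_matrix(4, {1: [1.0, 1.0, 1.0, 1.0],
                        2: [0.5, 0.0, 0.5, 0.0]})
```

With all n - 1 delay multiples present, every entry of J becomes independently programmable, which is the all-to-all connectivity claim; the physical size still scales only with the number of pulses, since each delay line is reused for every pulse.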
And the way it happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this over the past six or seven years, and I'm just going to quickly show you the transition, especially what happened with the first implementation, which was using a free-space optical system, then the guided-wave implementation in 2016, and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. So I just want to make the distinction here that the first implementation was an all-optical interaction; we also had an n = 16 implementation, and then we transitioned to this measurement-feedback idea, which I'll tell you quickly what it is. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to mostly focus on the all-optical networks, how we're using all-optical networks to go beyond simulation of the Ising Hamiltonian on both the linear and nonlinear sides, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation; this is the actual picture of the machine, and we implemented a small n = 4 Max-Cut problem on the machine. So one problem for one experiment: we ran the machine 1,000 times, we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controller with a simulator. So we basically simulated all those coherent interactions on an FPGA, and we replicated the coherent pulse with respect to all those measurements. Then we injected it back into the cavity, and the nonlinearity still remains.
So it still is a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system is preserving important information or not, or whether it's going to behave better computation-wise, and that's still a lot of ongoing study. But nevertheless, the reason this implementation was very interesting is that you don't need the N minus one delay lines, so you can just use one. Then you can implement a large machine, and then you can run several thousands of problems in the machine, and then you can compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part, which is, if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling. So the optical loss of this network corresponds to the Ising Hamiltonian. And if I just want to show you the example of the N equals 4 experiment, with all those phase states and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides gain, and then you start bringing up the gain so that it hits the loss. Then you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation, so you go either to the zero or the pi phase state. And the expectation is that the network oscillates in the lowest possible loss state.
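The linear mapping described here — Ising energy as optical loss, ground states as the lowest-loss phase configurations — can be sketched by brute force for a toy N = 4 instance. The coupling matrix J below is my own illustrative choice, not the instance used in the experiment, which the transcript does not specify:

```python
import itertools
import numpy as np

# Hypothetical ferromagnetic ring of 4 spins (illustrative stand-in for
# the experiment's instance, which the transcript does not specify).
J = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

def ising_energy(spins, J):
    # E = -1/2 s^T J s : in the mapping, lower energy <=> lower optical loss.
    s = np.asarray(spins)
    return -0.5 * s @ J @ s

# Enumerate all 2^4 phase states (0 phase -> +1, pi phase -> -1), mimicking
# the histogram of losses over every interference pattern of the network.
states = list(itertools.product([-1, 1], repeat=4))
energies = {s: ising_energy(s, J) for s in states}
ground = min(energies, key=energies.get)
print(ground, energies[ground])
```

Ranking all sixteen states by this energy reproduces, in miniature, the loss histogram the speaker shows: the degenerate lowest-loss states are exactly the Ising ground states.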
There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about. I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. And the difference between looking at the topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian, and one of the biggest differences is that most of these topological Hamiltonians require breaking the time-reversal symmetry, meaning that if you go from one spin on one side to the other side you get one phase, and if you go back you get a different phase. The other thing is that we're not just interested in finding the ground state; we're actually now interested in looking at all sorts of states and looking at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-dimensional chain of these resonators, which corresponds to the so-called SSH model in the topological world. We get a similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction and the theory. One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine, and that's something unique about this time-multiplexed implementation, so that we can actually look at the dynamics. And one example that we have looked at is that we can actually go through the transition from the topological to the trivial behavior of the network.
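The SSH band structure mentioned here is a standard textbook calculation, so the measurement can be compared against a few lines of code; the coupling values v and w below are arbitrary choices of mine, with w > v picking the topologically nontrivial phase:

```python
import numpy as np

# Bloch Hamiltonian of the SSH chain: h(k) = v + w * exp(i k), giving the
# two bands E(k) = +/-|h(k)| and a band gap of 2|v - w| at k = pi.
v, w = 0.5, 1.0                      # w > v: topologically nontrivial choice
ks = np.linspace(-np.pi, np.pi, 201)
upper = np.abs(v + w * np.exp(1j * ks))
lower = -upper
gap = 2.0 * upper.min()
print(gap)
```

In the OPO network the two couplings are realized by the alternating delay-line interferences, and the measured loss spectrum plays the role of E(k).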
You can then look at the edge states, and you can also see the trivial end states and the topological edge states actually showing up in this network. We have just recently implemented a 2D network with the Harper-Hofstadter model, and we don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and dynamics. And we can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic nonclassical and quantum nonlinear behaviors in these networks. So I told you about the linear side mostly; let me just switch gears and talk about the nonlinear side of the network. And the biggest thing that I talked about so far in the Ising machine is this phase transition at threshold. So below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. And this is basically the mechanism of the computation in these OPOs, which is through this phase transition from below to above threshold. So one of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, and that's basically corresponding to the intensity of the driving pump. So it's really hard to imagine that you can go above threshold, or have this phase transition happen, all in the quantum regime. And there are also some challenges associated with the intensity homogeneity of the network, which is, for example, if one OPO starts oscillating and then its intensity goes really high, then it's going to ruin this collective decision making of the network, because of the intensity-driven nature of the phase transition.
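A minimal classical sketch of this gain-saturation bifurcation can be written with the standard mean-field OPO amplitude equation; this is my own toy model, not the talk's experimental system, and the pump and coupling values are illustrative:

```python
import numpy as np

def run_opo(p, J, steps=20000, dt=1e-3, seed=0):
    # Euler integration of dx_i/dt = (p - 1) x_i - x_i^3 + sum_j J_ij x_j:
    # linear gain minus saturation, plus dissipative coupling.
    rng = np.random.default_rng(seed)
    x = 1e-3 * rng.standard_normal(len(J))   # noise-seeded start below threshold
    for _ in range(steps):
        x += dt * ((p - 1.0) * x - x**3 + J @ x)
    return x

J = np.array([[0.0, -0.2], [-0.2, 0.0]])   # antiferromagnetic-style coupling
x = run_opo(p=2.0, J=J)
print(np.sign(x))
```

Below threshold (p < 1) both amplitudes decay to zero; above threshold, the anti-aligned mode has the highest net gain for this coupling, so the two amplitudes saturate with opposite signs — the classical analog of the 0/pi phase-state bifurcation the speaker describes.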
So the question is, can we look at other phase transitions? Can we utilize them for computing, and also, can we bring them to the quantum regime? I'm going to specifically talk about the phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. And what is interesting is that this phase transition corresponds to a distinct phase-noise behavior. So in the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by the quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences. And this transition corresponds to a symmetry breaking: in the non-degenerate case, the signal can acquire any of those phases on the circle, so it has a U(1) symmetry, and if you go to the degenerate case, then that symmetry is broken and you only have the zero and pi phase states. So now the question is, can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions that we are also thinking about. And this phase transition is not just important for computing; it's also interesting for its sensing potential, and you can easily bring this phase transition below threshold and just operate in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, now we can see all sorts of more complicated and more interesting phase transitions in the spectral domain.
One of them is a first-order phase transition, which you get by just coupling two OPOs, and that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are actually very interesting to explore, both in the classical and quantum regimes. And I should also mention that you can think about the couplings being nonlinear couplings as well, and that's another behavior that you can see, especially in the non-degenerate regime. So with that, I basically told you about these OPO networks, how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. And of course, the motivation is, if you look at electronics and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements where we are now: with optics, we are probably very similar to 70 years ago, with tabletop implementations. And the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. So the work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard and Marty Fejer at Stanford, and we could show that you can do the periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in this nanophotonic, periodically poled lithium niobate. And now we're working on building OPOs based on that kind of thin-film lithium niobate photonics.
And these are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks. I also want to point out that the reason these nanophotonic platforms are actually exciting is not just because you can make large networks and make them compact, in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO, which is: can we have the quantum superposition of the zero and pi states that I talked about? And the nanophotonic lithium niobate provides some opportunities to actually get closer to that regime, because of the spatiotemporal confinement that you can get in these waveguides. So we're doing some theory on that, and we're confident that the ratio of nonlinearity to losses that you can get with these platforms is actually much higher than what you can get with other existing platforms. And to go even smaller, we have been asking the question of what is the smallest possible OPO that you can make. You can think about really wavelength-scale resonators, adding the chi(2) nonlinearity, and see how and when you can get the OPO to operate. And recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if you can build the OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. So we have looked at these calculations, and we tried to estimate the threshold of an OPO, let's say for such a small resonator, and it turns out that it can actually be even lower than the type of bulk PPLN OPOs that we have been building in the past 50 years or so.
So we're working on the experiments, and we're hoping that we can actually make larger and larger scale OPO networks. So let me summarize the talk. I told you about the OPO networks and our work that has been going on on Ising machines and the measurement feedback. And I told you about the ongoing work on the all-optical implementations, both on the linear side and also on the nonlinear behaviors. And I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to thank you.
>> I am from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI lab. I'm happy to share with you today some of the recent works that have been done either by me or by collaborators. The title of my talk is "A neuromorphic in silico simulator for the coherent Ising machine," and here is the outline. I would like to make the case that the simulation in digital electronics of the CIM can be useful for better understanding or improving its function principles by introducing some ideas from neural networks. This is what I will discuss in the first part, and then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and the projection of the performance that can be achieved using a very-large-scale simulator in the third part, and finally talk about future plans. So first, let me start by comparing recently proposed Ising machines, using this table that is adapted from a recent Nature Electronics paper, and this comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation.
So in red here are the limitations of each of these hardware platforms. Interestingly, the FPGA-based systems, such as the Digital Annealer, the Toshiba bifurcation machine, or a recently proposed restricted Boltzmann machine FPGA by a group in Berkeley, offer a good compromise between speed and scalability. And this is why, despite the unique advantages that some of the other hardware have, such as the quantum coherence of optical systems or the energy efficiency of memristor systems, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor that they are particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, the large fan-in and fan-out, and the long propagation delay of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, rather than the physics of electrons or photons. So, to put the performance of these various hardware platforms in perspective, we can look at the computation in the brain: the brain computes using billions of neurons, using only 20 watts of power, and operates at a very low frequency. And so these impressive characteristics motivate us to try to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here,
by designing a large-scale simulator in silico, in the bottom here, that can be used for testing better organization principles for the CIM. In this talk, I will talk about three neuro-inspired principles, which are the asymmetry of connections; neural dynamics, which are often chaotic because of this asymmetry; and the hierarchical organization of connectivity. There is also the microstructure: neural networks are not composed of the repetition of always the same types of neurons, but there is a local structure that is repeated. So here is the schematic of the microcolumn in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in silico simulation? So, first, about the two principles of asymmetry and microstructure. We know that the classical approximation of the coherent Ising machine, which is analogous to rate-based neural networks — in the case of the Ising machines, this classical approximation can be obtained using the truncated Wigner approximation, for example — so the dynamics of the system can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum of the omega_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the computed coupling terms. And these dynamics, in both cases of the CIM and neural networks, can be written as gradient descent of a potential function V, which is written here, and this potential function includes the Ising Hamiltonian.
So this is why it's natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and the h_i is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and so one strategy is to gradually deform this landscape, using an annealing process. But there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a microstructure in the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correcting variable. And the addition of this microstructure introduces an asymmetry in the system, which in turn induces chaotic dynamics: a chaotic search, rather than an annealing process, for the ground state of the Ising Hamiltonian. Within this microstructure, the role of the error variable is to control the amplitude of the analog spin, to force the amplitude of the x_i to become equal to a certain target amplitude a. And this is done by modulating the strength of the Ising couplings, so the error variable e_i multiplies the Ising coupling here in the dynamics of each DOPO. And then the whole dynamics is described by these coupled equations. Because the e_i do not necessarily take the same value for the different i, this introduces an asymmetry in the system, which in turn creates chaotic dynamics, which I show here for solving a certain size of SK problem, in which the x_i are shown here, the e_i are shown here, and the value of the Ising energy is shown in the bottom plot.
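The coupled amplitude-and-error-variable dynamics described here can be sketched with a toy Euler integration; all parameter values (pump, target amplitude, coupling scale, step size) are my own illustrative choices, not the paper's:

```python
import itertools
import numpy as np

def ising_energy(s, J):
    # E(s) = -1/2 s^T J s  (symmetric J, zero diagonal)
    return -0.5 * s @ J @ s

def solve_ising(J, p=1.1, a=1.0, beta=0.2, xi=0.25, dt=0.01, steps=40000, seed=1):
    # dx_i/dt = (p - 1 - x_i^2) x_i + xi * e_i * sum_j J_ij x_j
    # de_i/dt = -beta * (x_i^2 - a) * e_i
    # The e_i push every |x_i| toward sqrt(a), destabilizing local minima.
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(len(J))
    e = np.ones(len(J))
    for _ in range(steps):
        x += dt * ((p - 1.0 - x**2) * x + xi * e * (J @ x))
        e += dt * (-beta * (x**2 - a) * e)
    return np.sign(x)

# Random 8-spin +/-1 spin glass, small enough to brute-force for comparison.
rng = np.random.default_rng(7)
n = 8
J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
J = J + J.T

s = solve_ising(J)
best = min(ising_energy(np.array(c), J)
           for c in itertools.product([-1.0, 1.0], repeat=n))
print(ising_energy(s, J), best)
```

Comparing the energy of the returned spin configuration against the brute-force optimum gives a quick sanity check of the chaotic search on a small instance.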
You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics do not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. And so we have proposed in the past two different modulations of the target amplitude. The first one is a modulation that ensures that the entropy production rate of the system becomes positive, and this forbids the creation of any nontrivial attractors. But in this work, I will talk about another modulation, a simpler modulation, which is given here, that works as well as the first modulation but is easier to implement on FPGA. So these coupled equations, which represent the classical simulation of the coherent Ising machine with some error correction, can be implemented especially efficiently on an FPGA. And here I show the time that it takes to simulate the system, and in red you see the time that it takes to simulate the x_i terms, the e_i terms, the dot product, and the Ising Hamiltonian, for a system with 500 spins and 500 error variables, equivalent to 500 DOPOs. So in FPGA, the nonlinear dynamics, which correspond to the degenerate optical parametric amplification, the OPA, of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.1 microseconds. And this is to be compared to what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a 1 GHz repetition rate through the optical nonlinearity, then we would require 0.5 microseconds to do this. So the simulation in FPGA can be at least as fast as a 1 GHz repetition-rate
pulsed laser CIM. Then, the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about 0.15 microseconds at 300 MHz. So at least for problem sizes that are larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. And if we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on an FPGA by an adder tree, whose height scales logarithmically with the size of the system. But this is in the case where we have an infinite amount of resources on the FPGA; for dealing with larger problems of more than 100 spins, usually we need to decompose the matrix into smaller blocks, with a block size that is noted u here, and then the scaling becomes, for the nonlinear parts, linear in N over u, and for the dot products, (N over u) squared. So typically, for low-end FPGA chips, the block size of this matrix is about 100. So clearly we want to make u as large as possible, in order to maintain this O(log N) scaling for the number of clock cycles needed to compute the dot product, rather than the (N over u) squared that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces a large fan-in and fan-out and long-distance data paths within the FPGA.
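The clock-cycle argument can be made concrete with a toy cost model; this is my own simplification of the talk's accounting, meant only to show the crossover between the two scalings:

```python
import math

def cycles_full_tree(n):
    # Pipeline depth of a full binary adder tree summing n terms.
    return math.ceil(math.log2(n))

def cycles_blocked(n, u):
    # With u-wide blocks, one length-n matrix-vector row costs ~(n/u)^2 blocks.
    return math.ceil(n / u) ** 2

for n in (100, 500, 1000, 2000):
    print(n, cycles_full_tree(n), cycles_blocked(n, 100))
```

For N = 2000 and u = 100, the blocked scheme needs hundreds of block operations where an ideal full tree would need about eleven stages, which is why maximizing the adder-tree size pays off so quickly.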
So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree. And this can be done by organizing hierarchically the electrical components within the FPGA, which is shown here in this right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. So I'm not going into the details of how this is implemented on the FPGA, but I just wanted to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance when simulating Ising machines. So instead of getting into the details of the FPGA implementation, I would like to give some benchmark results for this simulator, which was used as a proof of concept for this idea, and which can be found in this arXiv paper here. And here I show results for solving SK problems: fully connected, random, plus-or-minus-one spin-glass problems. We use as a metric the number of matrix-vector products, since it's the bottleneck of the computation, needed to get the optimal solution of this SK problem with 99% success probability, against the problem size. And in red here is this proposed FPGA implementation, and in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems, and in green here is noisy mean-field annealing, which has a behavior similar to the coherent Ising machine. And so clearly you see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches.
So that's the interesting feature of the system, and next we can see what the real time to solution is to solve these SK instances. So here is the time to solution in seconds to find the ground state of SK instances with 99% success probability for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. And so you see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time to solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented in memristors, which is very fast for small problem sizes, in blue here, but whose scaling is not good, and the same thing for the restricted Boltzmann machine implemented in FPGA, proposed by a group in Berkeley recently, which again is very fast for small problem sizes but whose scaling is bad, so that it is worse than the proposed approach. So we can expect that for problem sizes larger than 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide, and another confirmation that the scheme scales well: we can find maximum cut values on the G-set benchmark that are better than the candidates that have been previously found by any other algorithms, so they are the best known cut values to the best of our knowledge.
And so, as shown in this table here: in particular, for instances 14 and 15 of this G-set, we can find better cut values than previously known, and we can find these cut values about 100 times faster than the state-of-the-art algorithm used to obtain them. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters. The tuning used here is very simple: it just depends on the degree of connectivity within each graph. And so these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems, but all types of graph Ising problems, or MAX-CUT problems, in general. So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA, by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future, based on the implementation that we are currently working on. So here you see projections for the time to solution, with 99% success probability, for solving these SK problems with respect to the problem size, compared to different state-of-the-art Ising machines, particularly the Digital Annealer, shown in the green line here. And we show two different hypotheses for these projections: either that the time to solution scales as exponential of N, or that the time to solution scales as exponential of the square root of N.
So it seems, according to the data, that the time to solution scales more as exponential of the square root of N, and these projections show that we can probably solve SK problems of size 2000 spins, finding the real ground state of the problem with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, some of the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the system of the measurement-feedback CIM. And to do this, what is simulatable on the FPGA is this quantum Gaussian model that is described in this paper, proposed by people in the NTT group. And the idea of this model is that, instead of having the very simple ODEs I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase component, but also its variance, so that we can take into account more quantum effects of the DOPO, such as the squeezing. And then we plan to make the simulator open access, for the members to run their instances on the system. There will be a first version in September that will be based just on simple command-line access to the simulator, and which will have just the classical approximation of the system, with no noise term, binary weights, and no Zeeman term. But then we will propose a second version that will extend the current Ising machine to a rack of FPGAs, in which we will add the more refined models, such as the quantum Gaussian model I just talked about, and in which real-valued weights for the Ising problems and the Zeeman term will be supported.
So we will announce later when this is available, and work is ongoing.
>> I come from the University of Notre Dame, in the physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I would also like to say that I look forward to collaborations with the PHI lab and Yoshi and collaborators on the topics of this workshop. So today I'll briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving, using ordinary differential equations. But I think the issues that we raise on this occasion actually apply to other analog approaches as well, and to other problems as well. I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables, you have M clauses, each a disjunction of literals; a literal is a variable or its negation. And the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem from the NP class, which means you can check in polynomial time the satisfiability of any assignment. And 3-SAT is NP-complete, with k equal to 3 or larger, which means an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in the NP class can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing or to maximum independent set, which is set packing in graph-theoretic terms, or to the decision version of the Ising problem. This is useful when you're comparing different approaches working on different kinds of problems. When not all the clauses can be satisfied, you're looking at the optimization version of SAT, called MAX-SAT,
And the goal here is to find the assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete-problem solver, it would literally, positively influence thousands of problems and applications in industry and in science. I'm not going to read this, but this, of course, gives a strong motivation to work on these kinds of problems. Now, our approach to SAT solving involves embedding the problem in a continuous space, and you use ODEs to do that. So instead of working with zeros and ones, we work with minus ones and plus ones, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it is plus one; if it contains the variable in negated form, it is minus one. And then we use this to formulate these products, called clause violation functions, one for every clause, which vary continuously between zero and one, and they are zero if and only if the clause itself is true. Then, in order to define a dynamics in this N-dimensional hypercube where the search happens (if solutions exist, they are sitting in some of the corners of this hypercube), we define this energy potential, or landscape function, shown here, in such a way that it is zero if and only if all the clause violation functions, all the K_m's, are zero, that is, all the clauses are satisfied, keeping these auxiliary variables a_m always positive. And therefore, what you do here is a dynamics that is essentially a gradient descent on this potential-energy landscape. If you were to keep all the a_m's constant, it would get stuck in some local minimum.
However, what we do here is couple it with a dynamics for the a_m's; we couple them to the clause violation functions, as shown here. If you didn't have this a_m here, just the K's, you would essentially have positive feedback, an increasing variable, but in that case you would still get stuck. It behaves better than the constant version, but it would still get stuck. Only when you put in this a_m, which makes the dynamics of this variable exponential-like, only then does it keep searching until it finds a solution. And there is a reason for that, which I'm not going to talk about here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape. And this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself; in dynamical systems it's called the escape rate. The inverse of that is the time scale on which you find solutions with this dynamical system. And you can see here some sample trajectories that are chaotic, because the system is nonlinear, but it's transiently chaotic: eventually they converge to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables, for random 3-SAT problems, is the monitored wall-clock time as a function of N. And it behaves quite well: it behaves polynomially, until you actually reach the SAT/UNSAT transition, where the hardest problems are found.
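The scheme described above (gradient descent on V = sum_m a_m K_m^2 with exponentially growing auxiliary variables) can be sketched in a few lines. This is my own crude Euler-integration sketch of the published CTDS equations, not the authors' code; the toy instance, initial state, step size, and clipping are illustrative assumptions.

```python
import numpy as np

def ctds_sat(C, s0, t_max=50.0, dt=0.01):
    """Sketch of the continuous-time dynamical-system (CTDS) SAT solver.
    C is an (M, N) clause matrix with entries c_mi in {-1, 0, +1};
    K_m(s) = 2^-k_m * prod_i (1 - c_mi s_i) is the clause violation function."""
    M, N = C.shape
    k = (C != 0).sum(axis=1).astype(float)   # number of literals per clause
    s = np.asarray(s0, dtype=float).copy()   # analog spins inside (-1, 1)
    a = np.ones(M)                           # auxiliary variables, always positive
    for _ in range(int(t_max / dt)):
        sigma = np.where(s >= 0, 1.0, -1.0)
        if np.all((C * sigma).max(axis=1) > 0):
            return sigma                     # discretized state satisfies every clause
        prod = 1.0 - C * s                   # factors 1 - c_mi s_i (equal to 1 where c_mi = 0)
        K = 2.0 ** (-k) * prod.prod(axis=1)  # clause violation functions, in [0, 1]
        Kmi = K[:, None] / np.where(C != 0, prod, 1.0)   # same product, excluding variable i
        ds = 2.0 * (a * K) @ (C * Kmi)       # ds_i/dt = -dV/ds_i = sum_m 2 a_m c_mi K_mi K_m
        s = np.clip(s + dt * ds, -0.999, 0.999)  # Euler step, kept strictly inside the cube
        a *= np.exp(dt * K)                  # da_m/dt = a_m K_m: growth on violated clauses
    return None

# Tiny satisfiable instance: (x1 v x2 v x3) & (~x1 v x2) & (~x2 v x3).
C = np.array([[1.0, 1.0, 1.0], [-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]])
sol = ctds_sat(C, s0=[0.1, -0.1, -0.1])
```

The early-return check implements the "corner" condition: a clause is satisfied by the discretized assignment exactly when one of its literals matches the sign of the corresponding spin.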
But what's more interesting is if you monitor the continuous time t, the performance in terms of the analog continuous time t, because that seems to be polynomial. And the way we show that is, we consider random k-SAT, random 3-SAT, for a fixed constraint density, to the right of the threshold, where it's really hard, and we monitor the fraction of problems that we have not been able to solve: we select thousands of problems at that constraint ratio, we solve them with our algorithm, and we monitor the fraction of problems that have not yet been solved by continuous time t. And this, as you see, decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law. So if you combine these two, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover, because you can always transform them into 3-SAT, as we discussed before; Ramsey numbers; coloring; and on these problems even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because of what you have to pay. First of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then, of course, t, the continuous-time variable, becomes physical wall-clock time, and that would be polynomially scaling; but you have other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, it would be an exponential cost. But this is some kind of trade-off between time and energy.
I don't know how to generate time, but I do know how to generate energy, so we could use that. But there are other issues as well, especially if you're trying to do this on a digital machine, and other problems appear in physical devices too, as we'll discuss later. So if you implement this on a GPU, you can get a couple of orders of magnitude of speed-up. And you can also modify this to solve MAX-SAT problems quite efficiently; we are competitive with the best heuristic solvers on problems from the 2016 MAX-SAT competition. So this definitely seems like a good approach, but there are, of course, interesting limitations. I would say interesting, because they kind of make you think about what it all means, and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator when you solve this on a digital machine (you're using some kind of integrator), and you use the same approach, but now you measure the number of problems you haven't solved within a given number of discrete steps taken by the integrator, you find out that you have exponential discrete-time complexity, and of course this is a problem. And if you look closely at what happens: even though the analog mathematical trajectory, the red curve here, is smooth, if you monitor what happens in discrete time, the integrator's step size fluctuates like crazy, down to something like the third or fourth digit. So it really is as if the integration freezes out. And this is because of the phenomenon of stiffness, which I'll talk a little bit more about a bit later. Now, it might look like an integration issue on digital machines that you could improve, and you definitely could improve it, but actually the issue is bigger than that.
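The stiffness just mentioned comes down to a step-size stability limit, which even the simplest stiff ODE shows. A toy illustration with arbitrary numbers, not the SAT solver's equations: for dy/dt = -lambda*y, explicit Euler has per-step amplification factor 1 - lambda*dt, so it is stable only for dt <= 2/lambda, while implicit (backward) Euler divides by 1 + lambda*dt and is stable for any step size.

```python
lam, T, y0 = 1000.0, 0.1, 1.0      # stiff decay rate; exact solution e^(-lam*T) is ~0

def explicit_euler(dt):
    y = y0
    for _ in range(int(T / dt)):
        y = y + dt * (-lam * y)    # per-step amplification factor: 1 - lam*dt
    return y

def implicit_euler(dt):
    y = y0
    for _ in range(int(T / dt)):
        y = y / (1.0 + lam * dt)   # unconditionally stable for this problem
    return y

stable   = explicit_euler(0.0019)  # just under the 2/lam = 0.002 limit: decays
unstable = explicit_euler(0.0021)  # just over the limit: oscillates and blows up
robust   = implicit_euler(0.01)    # 5x over the explicit limit, still decays fine
```

This is why the fastest-growing eigenvalue of the Jacobian, not the accuracy you actually need, ends up dictating the step size of an explicit method.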
It's deeper than an integration artifact, because on a digital machine there is no time-energy conversion: the auxiliary variables are efficiently represented on a digital machine. There's no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere, and this is how. But, you know, one would be tempted to think maybe this wouldn't be an issue in an analog device, and to some extent that's true: analog devices can be orders of magnitude faster, but they also suffer from their own problems, because they are not going to be perfect; that affects those solvers as well. So, indeed, if you look at other systems, like the measurement-feedback coherent Ising machine, or oscillator networks, they all hinge on some kind of ability to control your variables with arbitrarily high precision. In oscillator networks you want to read out phases or frequencies; in the case of CIMs, you require identical pulses, which are hard to keep identical, and they kind of fluctuate away from one another, shift away from one another. And if you can control that, of course, then you can control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978. It is a purely computer-science proof which says that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you could solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have only finite precision. So the next question is: how does that affect the computation of these problems? This is what we're after. Loss of precision means information loss, or entropy production.
So what you're really looking at is the relationship between the hardness of a problem and the cost of computing it. And according to Schönhage, there's this left branch, which in principle could be polynomial time. But the question is whether or not this is achievable, or whether it is not achievable and something more truthful is on the right-hand side: there is always going to be some information loss, some entropy generation, that could keep you away from polynomial time. So this is what we would like to understand, and the source of this information loss, I will argue, is not just noise in a physical system; it is also of algorithmic nature, so the question applies to any approach. But Schönhage's result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: could there in principle be such solvers, given that he is not proposing a solver with such properties? In other words, if you look mathematically, precisely, at what a solver does, would it have the right properties? And I argue yes. I don't have a mathematical proof, but I have some arguments that that would be the case, and this is the case for our CTDS solver: if you could calculate its trajectory in a lossless way, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a bit more subtle, because time in ODEs can be rescaled however you want. So what that means is that you actually have to measure the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system, not of its parameterization. And we did that. So my student did that: first improving on the stiffness of the integration, using implicit solvers and some smart tricks, such that you are actually closer to the true trajectory; and then, using the same approach, monitoring what fraction of problems you can solve.
Monitoring by a given length of the trajectory, you find that it scales polynomially with the problem size: we have polynomial-length complexity. That means that our solver is both poly-length and, as it is defined, also poly-time as an analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is stiffness: every integrator has to truncate, digitizing truncates the equations, and what it has to do is keep the integration within the so-called stability region for that scheme; you have to keep the product of the eigenvalues of the Jacobian and the step size within this region. If you use explicit methods, you want to stay within this region, but what happens is that some of the eigenvalues grow fast for stiff problems, and then you're forced to reduce delta t so that the product stays in this bounded domain, which means you're forced to take smaller and smaller time steps, so you're freezing out the integration. And what I showed you is that this is the case. Now, you can move to implicit solvers, which is a trick: in this case the stable domain is actually on the outside. But what happens here is that some of the eigenvalues of the Jacobian, for stiff systems, start to move toward zero, and as they move toward zero they enter this instability region, so your solver tries to keep them out, so it increases the delta t. But if you increase delta t, you increase the truncation errors, so you get randomized in the large search space, so it's really not going to work out. Now, one can sort of introduce a theory, a language, to discuss analog computational complexity using the language of dynamical systems theory. I don't have time to go into this, but basically, for hard problems there is a geometric object, a chaotic saddle.
It sits in the middle of the search space somewhere, and that saddle dictates how the dynamics happens; invariant properties of the dynamics of that saddle are what dictate the performance, among other things. So a new, important measure that we find is also helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Basically, what this does, intuitively, is describe the rate at which the uncertainty contained in the insignificant digits of a trajectory flows toward the significant ones, as you lose information because errors grow, or develop into larger errors, at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property: it's a property of the saddle, it's not about how you compute it, and it's really the intrinsic rate of entropy loss of the dynamical system. As I said, in such a high-dimensional system you have positive and negative Lyapunov exponents, as many in total as the dimension of the space; the number of positive ones is the number of unstable-manifold dimensions, and the number of negative ones is the number of stable-manifold directions. And there's an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, which is the escape rate that I already talked about. Now, one can actually prove simple theorems, like back-of-the-envelope calculations. The idea here is that you know the largest rate at which closely started trajectories separate from one another. So you can say: that is fine, as long as my trajectory finds the solution before the trajectories separate too much.
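That separation rate is the largest Lyapunov exponent, and for a closed system (escape rate kappa = 0) the Pesin-type relation ties it to the metric entropy just discussed. As a standalone numerical illustration on a textbook map, not the SAT dynamics: the logistic map at r = 4 has Lyapunov exponent exactly ln 2, which a trajectory average recovers.

```python
import math

# Logistic map x -> 4x(1-x): averaging log|f'(x)| along a trajectory
# estimates the Lyapunov exponent; with no escape (kappa = 0), this is
# also the metric (Kolmogorov-Sinai) entropy of the map.
x, total, n = 0.3, 0.0, 200_000
for _ in range(n):
    total += math.log(abs(4.0 - 8.0 * x))   # |f'(x)| = |4 - 8x|, evaluated before the update
    x = 4.0 * x * (1.0 - x)
lyap = total / n                            # converges to ln 2, about 0.693
```

One bit of precision (ln 2 nats) is lost per iteration: uncertainty in the last digits of x doubles each step until it reaches the leading digits.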
In that case, I can have the hope that if I start several closely started trajectories from some region of the phase space, they often go into the same solution, and that's this upper bound, this limit, and it really shows that it has to be an exponentially small number. It depends on the N-dependence of the exponent right here, which combines the information-loss rate and the solution-time performance. So if this exponent has a large N-dependence, or even a linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. So this is sort of the direction we're going in, and this formulation is applicable to all deterministic dynamical systems. And I think we can expand this further, because there is a way of getting the expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about. It's kind of like a program that one can try to pursue, and this is it. So, the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. It can be more efficient, by orders of magnitude, than digital computing in solving NP-hard problems, because, first of all, many of these systems avoid the von Neumann bottleneck, there is parallelism involved, and you can also have a larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of what the possibilities and the limits are. And one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? And I think that's the exciting part, to derive these limits.
John Matchette, Accenture | Accenture Executive Summit at AWS re:Invent 2019
>> Live from Las Vegas, it's theCUBE, covering the AWS Executive Summit, brought to you by Accenture.
>> Welcome, everyone, to the Accenture Executive Summit here at AWS re:Invent. I'm your host, Rebecca Knight. I'm joined by John Matchette. He is the Managing Director of Applied Intelligence, North America, at Accenture. Thank you so much for coming on theCUBE. So we're going to have a fun conversation about AI today. We tend to think of AI as this futuristic, Star Trek, Jetsons kind of thing, but in fact AI is happening here and now.
So I just as an example of like how things are playing out a lover classroom, the farmer space to make better drugs, and every every form of company I know of is using some sort of machine learning a I to create better pharmaceuticals, the big ones, but also the new entrance. One of the companies that we followed numerator really issued company. What they've been able to do is like in just just a massive amount of data like all day, like good data, bad bias on buying >>its ingesting, this kind of data the data is about. >>It's about like drug efficacy, human health, the human genome like like like doctors visits like all this diverse information. And historically, if you put all that data together just to have a way to actually examine it, there's no way that was too much. Humans can't deal with it, but but But machine learning can. And so what? We just all this date up and we let the robots decided sort of less meaningful. And what's happened is you can now deal with instead, just a very fraction that data, but all of it. And the result, like in pharmaceuticals. Is it wearable? Come with new HIV drugs in six months? It used to be years and millions of dollars, tens of millions of dollars. But now it's, you know, it's months, and so it's really changing the way humans live. And certainly the associated industries. They're producing the drugs. >>So it's as you said, I was already being used to reimagine medicine. So many of the high tech jobs openings today are not necessarily in technology there in pharmaceuticals and automotive's. And these and these involved artificial intelligence, their skills in artificial intelligence. What can you tell us about how a eyes having an impact? And that's what I think. >>This is a really good question. What is interesting is that industry she wouldn't think, or digital companies are now actually digital competitors. I'll give you two examples. One is a lot of clients make liquefied natural gas. Now that that is a mucky business. 
It's full of science, like geology and chemistry and chemical engineering, and they work with these small refineries. But the question is, how are you going to get better if you make, you know, LNG? And so what they do is they use AI, and the way they do that is, they have these small refineries, each piece of equipment has a sensor on it, so there may be 5,000 sensors, and each sensor has three or four bots looking at it: one might be looking at vibration, another at heat. And what they're doing is making predictions, millions of predictions every day, about whether quality is good, whether a machine is about to have a problem, whether safety is jeopardized, something like that. And so you've gone from a place where the best competitors were chemists to one where the best competitors are actually using machine learning to make the plants work better. You know, another industry where we see this is brewing. No one would think brewing is a digital business: it's beer; the Egyptians made it, right? Everyone knows how to do it. So think about it: if you make beer, how are you going to get better? And again, what you do is you begin to touch customers more effectively with better digital marketing. You use AI to target, to understand who your best customers are, how to make offers to them, how to price, how to do new product introductions, and even how to formulate new brands of beer that might appeal to different segments of society. So brewers are all about ML and AI these days, and they really are digital competitors, which I think is interesting; no one would have thought about that, you know, while consuming beer on a Friday with their friends.
And so the question is, if you are a craft brewer, how do you go find the people that you want? So what we're doing is, we have new digital ways to go touch them, with very personalized offers. Like, if you like running, we can give you an offer like a fun run followed by a brew. We know who you are and what you and your friends like to do, so as we examine the segments of society, we can do very personal marketing. It's actually fun; it gives you things to go do. We did one event where we had a beer tasting with barbecue instruction. So if you want to learn how to cook barbecue and also do a beer tasting, you can get 20 people together, you have a social experience, and you buy more of the product. But what's interesting is, well, how do you find those people? How do you reach them? How do you identify the right folks who will actually participate? And that's where AI comes into play.
>> So this is fascinating, and you just described a number of different industries and companies, beer brewers, liquefied natural gas, pharmaceuticals, that are using AI to transform themselves. What do you recommend for the people out there watching who say, I want to do that, how can I get on board?
>> What we advise our clients is to really get good at three things, and the first is just to do things differently. So you've got to go into your core operations and figure out how you can extract more cash and more profit from your existing operations. That's like we talked about with natural gas, right? You can produce it more profitably and effectively. But that's not enough. The next thing you do, step two, would be to actually grow your core business. Everyone wants to leap to the new right away, but you're getting all your cash from your legacy businesses, and so, like we saw in the brewing industry,
if you can find new customers, more profitable customers, interact with them, and create a better digital experience for them, then you'll grow both your top line and your bottom line. But from our perspective, the reason you do both of those things is cash: you then make investments into net-new businesses. And so the last thing you do is to do different things: find an adjacency and grow. And it's important to talk about the role of AI in that, because that's the way you develop outcomes with speed, right? You're not going to build a factory; you're going to build a service, or some sort of information-centric offering. And so what we like to talk about is the wise pivot from your old legacy businesses: you generate cash and you make selective investments in the new, and how you regulate that is a really important question, because if you go too fast, you starve the legacy businesses, and if you go too slow, you're going to be sort of left out of the new economy. So doing those three things correctly, with the right sort of management processes, is what we advise our clients to focus on.
>> So I see all of this from the business side. But, because you're also a consumer, do you ever see any sort of concerns about privacy and security, in the sense of, why does anyone need to know if I like to run, or if I like barbecue with my beer? How do you sort of think about those things and talk to clients about those issues, too?
>> Well, I think, actually, for Accenture, a large part of our focus is what we call ethical AI. And so it's important to us to have offerings that we're comfortable with, that are legally comfortable, but also that are societally acceptable. And there's actually a lot of focus in this area, right, on how you do it, and there's actually a lot to learn. What we see, for example, is that there can be bias in the data, which affects the actual algorithm.
So a lot of times, when there are faults in the algorithm, you need to go back to the data and look at that. It's something we spend a lot of time on; it's important to us because we, too, are consumers, and we care about our privacy.
>> So when you talk about the wise pivot and regulation: this is a big question. There are a lot of bills on the table in Washington; it's certainly dominating our national conversation, how we think about regulating these new emerging technologies that present a lot of opportunities but also a lot of risks. So how are you, at Accenture, thinking about regulation and working with regulators on these issues?
>> We do get involved with talking to the government. They seek independent counsel, so we participate when they're seeking guidance, and we'll give our opinion. So we're a voice at the table. But what I would say is, there's a lot of discussion about privacy and such, but if you look at it at a national level, particularly in government, I think there tends to be more focus on the parts that are incontrovertibly not problematic for privacy. I gave you the example of working with liquefied natural gas: okay, we need better AI to run our factories better. There's a lot of AI that goes into those kinds of problems, or supply-chain planning: how do I predict demand more effectively, or where should I put my plants? And AI is the new way supply chain is done, right? And so there are very few of the consumer-centric problems there. I think, actually, as a society, like 90% of the use cases are going to be in areas where they don't actually infringe on privacy, and a lot of our time is actually spent working on those kinds of use cases, just to make the operations of organizations more effective and more efficient.
>> So, we talked at the very beginning of this conversation about the companies that are disrupting old industries.
Using a lot of these technologies, I mean, is AI a case where you need to be using it? >>You need to be using it. My view, my personal view, is that there is going to be no basis of competition in the future except for digital. It just is going to be the case. And so all of our clients, you know, they're at some state of maturity and they're all asking the question, like, how do I grow? How do I get more profitable? Certainly the street wants more results. And so, if you want to move quickly in the new space, you only have one choice, really. And that is to get really, really, really good at managing and harnessing digital technologies, inclusive of AI, to compete in a different way. And so, I mean, we're seeing really interesting examples where, like, you know, retailers are getting into health care, right? Like, you see this: you go into Walmart, or Walgreens, and they have, like, a doc in a box, right? So we're seeing lots of companies that are making physical things that then turn around and develop a service from their know-how. They take everything they know about, like, something you know about, like healthcare, or how to, you know, offer services to customers in a retail setting, but then they need to do something different. And now, how do I get the data and the know-how to then offer, like, a new differentiated health service? And so to do that, you know, you have a lot of understanding about your customers, but you need to get all the data sources in place. You may need certain data sets. You know you need ways to aggregate it, and so you probably need new partnerships that you don't have. You probably need to manage skill sets that you don't have. You may need to get involved with open source communities.
You may need to be involved with universities where they do research, so you'll need a different kind of partnership to move at speed than companies have probably used in the past. But when they put all those ecosystems together, and a new emphasis on the required skill sets, they can take their legacy knowledge, which is probably physically oriented, and then create a service; they can monetize their experience with the new service. What we find usually doesn't work is to just monetize data. If you have a lot of data, it's not usually worth that much. But if you take the data and you create a new service that people care about, then you can monetize your legacy information. That's what a lot of our clients are trying to do. They're very mature, and now it's like, where do you go? And where they go is something maybe nearby to their existing business, but it's not the same legacy business of the past years. >>I want to dig a little deeper on something you brought up about the skills. There's a real skills gap in Silicon Valley and in companies in this area. How are you working with companies to make sure that they are attracting the right talent pool and retaining those workers once they have them? >>Well, so this is, I think, one of the most important questions, because, like, what happened with technology in the past? We would put in these, like, ERP systems, and that was a big part of our business, like 15 years ago. And once you learned one of those things, SAP or Oracle or, you know, whatever, your skill set was good for 10 years. You were good. You could just, like, go to work. But today, just go down to, like, the convention center. Look at this vast array of, like... >>Humanity. >>Humanity, and new technologies. I mean, half these companies didn't even exist, like, five years ago, right? And so your skill set today is probably only good for a year.
So I think the first thing you've got to realize is that there's got to be a new focus on actually cultivating talent as a strategy. It's the way to compete; like, people is your product, if you wanna look at it that way. But we're actually starting very early in the process, where we can, like much before a corporation. So we work with charter schools with younger kids, we get them into college, we work with universities, we do a lot of internships. So we're trying to start, like, really early on. When you ask a question like, what would our recommendation to the government be, we're actually advising, like, get kids involved in IT earlier, so we can get that problem resolved. But otherwise, once you're at a company, I think, you know, you need your own talent strategy. But part of that might be, again, like, an ecosystem play; like, maybe you don't want all of those people and you'd rather sort of borrow them. And so I think figuring out what your ecosystem is, because I think in the future, like, competition will be like my ecosystem versus your ecosystem. And that is the way I think it's gonna work. And so thinking in an ecosystem way is what most of our clients need to do. >>Well, it's like what you said about the old ways, when it was a good product versus good ideas. I could just keep talking, but thank you so much, John, for coming on theCUBE. Really fascinating conversation. >>It was my pleasure. Thank you so much. >>I'm Rebecca Knight. Stay tuned for more of theCUBE's live coverage of the Accenture Executive Summit, coming up in just a little bit.
Albert Ng, Misapplied Sciences | Sports Tech Tokyo World Demo Day 2019
(upbeat music) >> Hey welcome back everybody. Jeff Frick here with theCUBE. I wish I could give you my best John Miller impersonation but I'm just not that good. But we are at Oracle Park, home of the San Francisco Giants. We haven't really done a show here since 2014, so we're excited to be back. Pretty unique event, it's called Sports Tech Tokyo World Demo Day. About 25 companies representing about 100 different companies really demonstrating a bunch of cool technology that's used for sports as well as beyond sports, so we're excited to have one of the companies here who's demoing their software today, or their solution I should say. It's Albert Ng, he's the founder and CEO of Misapplied Sciences. Albert, great to see you. >> Great to see you, thank you for having me. >> So Misapplied Sciences. Now I want to hear about the debates on that name. So how did that come about? >> Yeah, so I used to work part time for Microsoft, at Microsoft Research, and one of the groups I worked for was called the Applied Sciences group. And so it was a little bit of a spin on that and it conveys the way that we come up with innovations at our company. We're a little bit more whimsical as a company that we take technologies that weren't intended for the ways that we apply them and so we misapply those technologies to create new innovations. >> Okay, so you're here today, you're showing a demo. So what is it? What is your technology all about? And what is the application in sports, and then we'll talk about beyond sports. >> Yeah, so Misapplied Sciences, we came up with a new display technology. Think like LED video wall, digital signage, that sort of display. But what's unique about our displays, is you can have a crowd of people, all looking at the same display at the same time, yet every single person sees something completely different. You don't need to have any special glasses or anything like that. 
You look at your displays with your naked eyes, except everyone gets their own personalized experience. >> Interesting. So how is that achieved? Obviously, we've all been on airplanes and we know the privacy filters that people put on laptops, so we know there's definitely some change based on angle. Is it based on the angle that you're watching it from? How do you accomplish that, and is it completely different, or do I just see a little bit of difference here, there, and in other places? >> Sure, so at the risk of sounding a little too technical, it's in the pixel technology that we developed itself. So each of our pixels can control the color of light that it sends in many different directions. So at one time, a single pixel can emit green light towards you but red light towards the person sitting right next to you, so you perceive green, whereas the person right next to you perceives red at the same time. We can do that at a massive scale. So our pixels can control the color of light that they send to between tens of thousands and up to a million different angles. So using our software, our processors on our back end, we can control what each of our pixels looks like from up to a million different angles. >> So how does it differentiate between a million points of a compass? That's got to be, obviously, your secret sauce, but what's going on in layman's terms? >> Yeah, so it's a very sophisticated technology. It's a full stack technology, as we call it. So it's everything from new optics to new high performance computing. We had to develop our own custom processor to drive this. Computer vision, software user interfaces, everything. And so this is an innovation we came up with after four and a half years in stealth mode. So we started the company in late 2014, and we were completely in stealth mode until the middle of last year. So about four years of just hardcore development work, because the technology's very sophisticated.
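As a rough, hypothetical sketch of the multi-view pixel idea described above (this is not Misapplied Sciences' actual design — the sector binning, sector count, and color values are invented purely for illustration), a pixel that shows a different color per viewing direction can be modeled as a lookup from horizontal viewing angle to color:

```python
class MultiViewPixel:
    """Toy model of a pixel that shows a different color per viewing angle.

    A real multi-view display controls light per direction in hardware, at
    up to a million angles; here we just bin the horizontal viewing angle
    into a handful of sectors.
    """

    def __init__(self, num_sectors=8):
        self.num_sectors = num_sectors
        # One color per angular sector (default: off/black).
        self.colors = [(0, 0, 0)] * num_sectors

    def set_color(self, angle_deg, color):
        """Assign a color to the sector containing angle_deg (0-360)."""
        self.colors[self._sector(angle_deg)] = color

    def color_seen_from(self, angle_deg):
        """Color perceived by a viewer standing at angle_deg."""
        return self.colors[self._sector(angle_deg)]

    def _sector(self, angle_deg):
        # Map an angle to one of num_sectors equal bins.
        return int((angle_deg % 360) / 360 * self.num_sectors)


# Two viewers in front of the same pixel perceive different colors,
# matching the green-versus-red example from the interview.
pixel = MultiViewPixel()
pixel.set_color(80, (0, 255, 0))   # green toward one viewer
pixel.set_color(100, (255, 0, 0))  # red toward the neighbor
```

Scaling this toy binning up to hundreds of views per display, with per-viewer content routing, is the part the company's custom processors and software stack handle.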
And I know when I say this, it does sound a little impossible, a little bit like science fiction, so we knew that. So now we have our first product coming on the market, our first public installation later next year, and it's going to be really exciting. >> Great. So obviously you're not going to have a million different feeds, 'cuz you have to have a different feed, I would imagine, for each different view. 'Cuz you designate, this is the view from point A, this is the view from point B; use feed A, use feed B. I assume you use something like that, 'cuz obviously the controller's a big piece of the display. >> Exactly, so a lot of the technology underneath the hood is to reduce the calculations, or the rendering, required from a normal computer, so you can actually drive our big displays that can control hundreds of different views using a normal PC, just using our platform. >> So what's the application? You know, obviously it's cute and it's fun, and I told you it's a dog, no it's a cat, as you said, but what are some of the applications that you see in sports? What are you going to do in your first demo that you're putting out? >> Yeah, so what the technology enables is finally having personalized experiences in a public environment, like a stadium, like an airport, like a shopping mall. So let me give an airport example. So imagine you go up to the giant flight board and instead of a list of a hundred flights, you see only your own flight information in big letters, so you can see it from 50 feet away. You can have arrows that light your path towards your particular gate. The displays could let you know exactly how many minutes you have to board, and suggest places for you to eat and shop that are convenient for you. So the environment can be tailored just for you, and you're not looking down at a smartphone, you're not wearing any special glasses, to see everything that you want to see.
So that ability to personalize a venue stretches to every single public venue. Even in the stadium here, imagine the stadium knowing whether you're a home team fan or an away team fan, or your fantasy players. You can see it all on the jumbotron or any of the displays that are in the interstitial areas. We can have the entire stadium come alive just for you and personalize it. >> Except you're not going to have 10,000 different feeds, so is there going to be some subset of infinite that people are driving in terms of the content side? >> Mhmm. >> So on your first one, your first installation, what's that installation going to be all about? >> The first installation is going to be at an airport. I can't say right now publicly where it's going to be, or when it's going to be, or with what partner. But the idea is to be able to have a giant flight board where you only see your own flight information, navigating you to your particular gate. You know, when you're at an airport, or any other public venue like a stadium, a lot of times you feel like a cow in a herd, right? And it's not tailored for you in any way. You don't have as good of an experience. So we can personalize that for you. >> All right, Misapplied Sciences. I'll come down and take a look at the booth a little bit later. And thanks for taking a few minutes. Good luck on the adventure. I look forward to watching it unfold. >> Appreciate it, thank you so much. >> All right, he's Albert, I'm Jeff. You're watching theCUBE. We're at Oracle Park, on the shores of McCovey Cove. Thanks for watching, we'll see ya next time. >> Thank you. (upbeat music)
Akhtar Saeed, SGWC & Michael Noel, Accenture | AWS Executive Summit 2018
>> Live from Las Vegas It's theCUBE! Covering the AWS Accenture Executive Summit. Brought to you by Accenture. >> Welcome back everyone to theCUBE's live coverage of the AWS Executive Summit here at the Venetian. I'm your host, Rebecca Knight. We have two guests for this segment. We have Akhtar Saeed, VP Solution Delivery, Southern Glazers Wine and Spirits, and Michael Noel, Managing Director Applied Intelligence at Accenture. Thank you so much for coming on the show. >> Thank you. >> Thank you for having us. >> I think this is going to be a fun one. We're talking about wine and spirits. >> Absolutely. (laughs) >> Akhtar, tell our viewers a little bit about Southern Glazer. >> Yeah, so Southern Glazer Wine and Spirits is a privately held company. We are in about 44 states, and we are the largest distributor of wine and spirits. >> Okay, in 44 states. What was the business problem you were trying to solve in terms of the partnership that you formed with Accenture? >> Yeah, so we started this initiative before Southern and Glazer merged. >> And that was in? >> It was 2016. So southern was already looking at how to enhance our technology, how to provide better data analytics, and how to create one source of truth. So that's what drove this and we were looking to partner with appropriate system integrator and right technology to be able to help deliver well if the company to be able to do analytics and data analysis. >> So you had two separate companies merging together and I like this idea, one source of truth. What does that mean, what did that mean for you? >> Well what it means to us is that since you have quite a few data marts out there and everybody is looking at the numbers a little differently, we spend a lot of time trying to say, hey is this right or is this right? So we want to bring all the data together saying this is what the data is and this is how we're going to standardize it, that's what we're trying to do. 
Okay, so this one source, now, Michael, in terms of that, is that a common issue, particularly among companies that are merging, would you say? >> No, absolutely. You have businesses that might be in the same industry, but they might have different processes to try to get to the same answer, right, and the answer's never really the same. So having this concept of a clean room that allows you to take the various aspects of a business and combine that from a data point of view, a business metrics point of view, and a business process point of view, this one source, helps you consolidate and streamline that, so you can see that integrated view across your new business model, really. >> So where do you begin? So you bring in Accenture and AWS, and where do you start? >> So like you mentioned, in 2016 Glazer and Southern Wine and Spirits came together and merged. It actually accelerated the process, because we needed what Mike mentioned as a clean room where we could put this data and wouldn't have to merge the data centers on day one, and have a common reporting platform available for the new SGWS. And that's where we started. So we said, okay, what are the key performance indicators, the key metrics that we need going into day one? And that's what we wanted to populate the data with to begin with, to make sure that information was available when day one of the merger came through. >> Okay, and so what were those indicators? >> There were several indicators; there were several business reports people needed. The supply chain needed to understand the data, what the inventory looked like; they needed to know how we were doing across the markets. So all those indicators, that's what we put together. >> Okay, okay, and so how do you work with the client in this respect? How do you and AWS sort of help the client look at what the core business challenges are and then say, okay, this is how we're going to attack this problem? >> Right, no, that's a good question.
I think the main thing is understanding, what does the business need, and how is the technology going to support what the business needs, right? That's first and foremost, right, and then getting alignment and understanding around that is really what drives a roadmap to say, here's what we're going to do, here's the order we're going to do it in, and here's the value that we expect to get out of following these steps one by one. And I think one thing we learned is you have to be directionally correct. You may not be exact, but as long as we're making progress in the right direction, you course correct as you need to, right, based upon the business learning new things and the market changing and whatnot, and that's really how we accomplished this. >> And is it a co-creative process, or how closely are you working with Accenture and AWS? >> Oh, very closely with Accenture and AWS. It's very co-creative; I mean, we are really working hand-in-hand. I mean, as Mike alluded, you start a journey a certain way and you realize, gee, this may work but I have to change a little bit here, and there were several times we had to change the team's direction on how to get there, how to approach it, and how to deliver value. >> Well, let's get into the nitty gritty with the architecture and components. So what did this entail, coming to this clean room, this one source of truth? >> Yeah, our architecture is based on AWS' platform, or Accenture's AIP, the Accenture Insights Platform, which runs on AWS. What we did right from the beginning, we said we're going to have a data lake: we're going to have a Hadoop environment where we're going to put all our data. And then for analytics we're going to use Redshift; on top of that, for reporting, we use Tableau, and we have a homegrown tool called Compass that we also use for reporting. So that's how we initially started. Initially we were feeding data directly into it, because we needed to stand the system up relatively quickly.
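A minimal sketch of the load step in that kind of data-lake-to-Redshift pipeline (illustrative only — the table name, S3 path, and IAM role below are invented for the example, not SGWS's actual setup) is building the COPY statement that bulk-loads landed files into an analytics table:

```python
def redshift_copy_sql(table, s3_path, iam_role):
    """Build a Redshift COPY statement that bulk-loads files
    from the data lake (S3) into an analytics table."""
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS PARQUET;"
    )


# Hypothetical names: a daily sales table loaded from a lake prefix.
sql = redshift_copy_sql(
    table="sales.daily_depletions",
    s3_path="s3://example-datalake/depletions/2018/11/",
    iam_role="arn:aws:iam::123456789012:role/example-redshift-load",
)
```

The statement itself would be executed against the cluster by whatever orchestration the platform uses; the point is that COPY parallelizes the load across the cluster, which is what makes "feed the lake, then bulk-load" viable at this data volume.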
The advantage to us: we didn't have to deal with infrastructure. That was all set up at AWS; we just needed to make sure we loaded our data and made the reports available. >> Were you going to add something to that? >> Yeah, the concept, because the merger was expediting things, of this clean room, which allows you to stand up an analytics-as-a-service model, to start bringing in your data, to start building out your reporting and analytics quickly, right, with speed to market, to understanding their position as an integrated company, was so important. So building the Accenture Insights Platform on the AWS platform was a huge success in allowing them to start going down that path. >> Yeah, I want to hear about some of the innovative stuff you're doing around data analytics, and really, let's bring it back down to earth too and say, actually, this is what we could learn and see, in terms of what was selling, what was not selling. What were you finding out? >> So at this point we have about 6000 users on the platform, approximately. Initially we had some challenges, I'll be very frank upfront; everything does not go smoothly. That's where we then said, okay, what do I do differently? We started with dense storage nodes and we soon found they were not meeting our needs. Then we moved to a dense compute cluster, and that helped us by about 70%; it drove up the speed, but the queue length was still long, and with Redshift we were still not getting the performance we needed. Then we went to second-generation dense compute clusters and we got some more leverage, but really the breakthrough came when we said, we need to really reevaluate how we've been doing our workload management. Some of our queries were very short report queries that ran real quick; others were loading data, which took a while.
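That split between quick report queries and long-running loads is exactly what Redshift's workload management (WLM) queues are meant to handle. The sketch below is an illustrative assumption, not SGWS's actual configuration — the queue names, user groups, concurrency slots, and memory percentages are all invented, and the real key names should be checked against the Redshift WLM documentation:

```python
import json

# Hypothetical WLM configuration: route long-running loads and short
# report queries to separate queues so one cannot starve the other.
wlm_config = [
    {
        "name": "etl_loads",            # long-running COPY/load jobs
        "user_group": ["etl_users"],
        "query_concurrency": 2,
        "memory_percent_to_use": 40,
    },
    {
        "name": "interactive_reports",  # quick dashboard queries
        "user_group": ["report_users"],
        "query_concurrency": 10,
        "memory_percent_to_use": 50,
        # Evict runaway queries from the fast queue after 30 seconds.
        "max_execution_time": 30000,
    },
    {
        "name": "default",              # everything else
        "query_concurrency": 3,
        "memory_percent_to_use": 10,
    },
]

# The JSON string is what would be attached to the cluster's
# parameter group as the WLM configuration.
wlm_json = json.dumps(wlm_config, indent=2)
```

With queues separated like this, a multi-hour load can only consume its own queue's slots, so short report queries keep flowing; the actual slot counts and memory split would have to be tuned against the real workload.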
And that's the challenge we had to overcome. With the workload management we were able to create, we could bump queries and send them in different directions and create that capacity. And that's where we really had a breakthrough in terms of technology. Until that time we were struggling, I'll be honest, but once we got that breakthrough, we were able to comfortably deliver what the business needed, from a data perspective and from a business perspective. Mike, would you like to add... >> Yeah, in addition to AWS, using Redshift has really been an important decision and solution here, because not only are we using it for loading massive amounts of data, but it's also being used by power users to generate very ad hoc and large queries, to be able to support other analytic-type needs, right? And I think Redshift has allowed us to scale quickly as we needed to, based upon certain times of year, certain market conditions or whatever; Redshift has really allowed us to do that, in order to support the business demands, which have really grown exponentially since we've been putting this in place. And it all starts with architecting and delivering all around the data. And then, how do you enable the capabilities: not just data as a foundation but, you know, real-time analytics, and looking at what could be, you know, forecasting and predicting what's happening in the future, using artificial intelligence and machine learning. That's really where the platform is taking us next. >> I want to talk about that, but I want to ask you quickly about the skills challenge, because introducing a new technology, there's going to be maybe some resistance, and maybe simply your workers aren't quite up to speed. So can you talk a little bit about what you experienced, and then also how you overcame it? >> Yeah, I mean, we had several challenges; I'll put them in two big buckets. One is just change management.
Anytime you're changing technology on this many users, they're comfortable with something they know, a known commodity; here's something new, and that's a challenge. And one should not ignore it; we need to pay a lot of attention to how to manage change. That's one. The second challenge was within the technical group itself, because we were changing technology on them also, right, and we had to overcome the skill sets; we were not a company that used open source a lot. So we had to overcome that and say, how do we train our folks, how do we get knowledge? And in that case Accenture was a great partner with us; they helped us tremendously. And AWS professional services were able to help us; we had a couple of folks from professional services who really helped us with our technology to help drive that change. So you have to tackle it from both sides, but we're doing pretty well at this point; we have found our own pace, where we can drive through this together. >> In terms of what you were talking about earlier, in terms of what is next with predictive analytics and machine learning, can you talk a little bit about the most exciting things that are coming down the pipeline in terms of Southern Glazer? >> I think that's a great question, and I think there are multiple ways to look at it. From a business point of view, right, it's, how do they gain further insights by looking at as many different data sets as possible, right, whether it be internal data or external data? How do we combine that to really understand the customers better? And looking at how they approach things from a future point of view, being able to predict what's going to happen in the marketplace. So I think it's about looking at all the different possible datasets out there and combining them to really understand what they can do, from an art-of-the-possible point of view.
Can you give us some examples in terms of combining data sets? So you're looking at, I mean, drinking patterns, or what do we have here? >> I mean, you have third-party data, right, TDLinx and those kinds of things; you pull that data in, and then you have our own data, and then we have data from suppliers, right. So that's where we combine it and say, okay, what is this telling me, what story is this putting together? I don't think we are there all the way; we have started on the journey. Right now we are at what I call this one source of truth, and we still have some more data sources loading into it, but that's the vision: how do we pull in all that information and create predictive analysis down the road, and be able to see what that means and how we'll be driving? >> And so you're really in the infancy of this? >> Yes, I mean, it's a journey, right? Some may say that you're not in infancy, you're in the middle somewhere, somebody said, if they were ahead of us; it's all depending on where you want to put this on that chart. But we at least have taken the first steps and we have one place where the data's available to us now. We're just going to keep adding to it, and now it's a matter of, how should we start to use it? >> In terms of lessons that you've learned along the way, and you've been very candid in talking about some of the challenges that you've had to overcome, what would you say are some of the biggest takeaways that you have from this process?
Yeah, the biggest takeaway for me would be, as I've already mentioned, change management: don't ignore that, pay attention to it, because that's what really drives it. The second one I'll say is probably, have a broader vision, but when you execute, make sure you look at the smaller things that you can measure and deliver against, because you will have to take some steps to adjust. So those are two things. The third: have the right partners with you, because you can't go it alone on this. You need to make sure you understand who you're going to work with and create a relationship with them, and say, hey, it's okay to have tough conversations. We had plenty of challenging conversations when we were having issues, but it's how you, as a team, overcome those and deliver value; that's what matters. >> High praise for you, Michael, (laughs) at Accenture here. But what would you say, in terms of being a partner with Southern Glazer and having helped and observed this company, are some of the biggest learnings from your perspective?
>> And finally so we're here at AWS re:Invent, 60,000 people descending here on Sin City, what most excites you about, why do you come first of all and what most excites you about the many announcements and innovations that we're seeing here this week? >> Yeah, so I'll be honest, this is the first time I've come to this conference but it's been really exciting, what excites me about these things is the new innovation, you learn new things, you say "hey, how can I go back and apply this and do something different and add more value back?" That's what excites me. >> Now, no I think you're absolutely right, I think, AWS is obviously a massive disruptor across any industry and their commitment to new technology, new innovation and the practicality of how we can start using some of that quickly I think is really exciting, right, because we've been working on this journey for a while and now there's some things that they've announced today, I think that we can go back and apply it pretty quickly, right, to really even further accelerate Southern Glazer's, you know, pivot to being a fully digital company. >> So a fully digital company, this is my last question (laughs) sorry, your advice for a company that is like yours, about to embark on this huge transformation, as you said, don't ignore the change management, the technology can sometimes be the easy part but do you have any other words of wisdom for a company that's in your shoes? >> All the words of wisdom I have I think I've already mentioned, the three things they'll probably need to focus on, just take the first step, right, that's the hardest part, I think Anne even said this morning that some companies just never take the first step, take that first step and you have to, this is where the industry is going and data is going to be very important so you have to take the first step saying how do I get a better handle on the data. >> Excellent, great. 
Well Michael, Akhtar, thank you so much for coming on theCUBE, this has been a real pleasure, thinking about Southern Glazer, next time bring some alcohol. >> Absolutely. (laughs) It's Vegas! >> Thank you, appreciate it. >> Great. I'm Rebecca Knight, we'll have more of theCUBE's live coverage of the AWS Executive Summit coming up in just a few moments, stay with us. (light music)
Gene Reznik, Accenture | AWS Executive Summit 2018
>> Live from Las Vegas. It's theCUBE covering the AWS Accenture Executive Summit. Brought to you by Accenture. >> Welcome back to theCUBE's live coverage of the AWS Executive Summit here at the Venetian in Las Vegas. I'm your host, Rebecca Knight. We are joined by Gene Reznik, the Chief Strategy Officer at Accenture. Thanks so much for coming on theCUBE, Gene. >> My pleasure, Rebecca. >> So, Accenture is calling this period of time that we are all living through a period of epic disruption. Define what that means for us. >> Sure, sure. So, well, I think we're living in a very disruptive age right now. But again, I think we believe over the next 10 years it's going to become even more epic. And I think what's driving that, some things are geopolitical in nature, sort of everything between U.S. and China relations, what's happening in Europe, all of that. Of course, there's the technological: dynamics around artificial intelligence. Of course, there's data, there's privacy, there's security. And all that really compounding on each other. We believe it's creating an environment where it's just going to be very challenging for people, but also for companies to navigate. And I think leadership in big organizations and their teams have to be very thoughtful of how they navigate this time. And I think there's going to be some big winners and I think there's going to be some big losers. And I think we see companies today that have been around for hundreds of years challenged to really adapt, adjust, and transform to really be prepared for this next wave of change. >> So, as you said, it's a very restive time politically, technologically, business-wise. How are companies approaching this? I mean, as you said, you have to be very thoughtful. You have to have a real strategy in terms of how you're going to approach this and approach innovation. How would you say companies are doing? Give them a report card right now in terms of how industry is responding. 
>> Yeah, well I think the first thing we would sort of say, and we've done quite a bit of analysis, and study through Accenture research, and as you'd expect, different industries are under different amounts of pressure and disruption. Some, like music industry and book publishing and currently retail, are under tremendous pressure. And, many have not responded well. They were too slow. They saw the digital natives just really take away their businesses. Others are better protected. So we have really gone through and analyzed industry by industry how they are prepared for today and really what they need to do going forward. And I guess our assessment is it's very, very difficult, as you would expect, to take a big organization and transform it. And the issue is again, while a lot of it is technology, the people side, the culture side, the organizational side, the incumbency dimension, the shareholders, all those things that make change very difficult are at the core of the transformation agenda. >> And innovation is really sort of the answer to it all because once you're move innovative, then you are going to ride this wave of epic disruption. So, first of all, how would you, so many companies are saying we want to be more innovative. What's the answer to that? I mean, what does that mean to be more innovative? >> Yeah, it's a good point. So we have, Accenture has looked at this. We sort of codified something that we call the wise pivot. Which is how should an organization really pivot to transform their business. And it's got elements, we believe strongly that you have to transform the core. Innovation can't be on the edge. At some point, you have to transform the core which usually gets at cost reduction, using automation to transform the business, transform the core economics. 
Then you have to grow the core and that's really I think the hard part, which is gaining market share in the core business we believe, whether it's in automotive business, whether it's in healthcare, whether it's in even retail, you have to grow the core, cause ultimately that gives you the investment capacity to scale the new. So, how to orchestrate that journey in a methodical way, again, keeping in mind the organization and what it delivers today and not leaving different parts of the organization behind is what we work with our clients on. >> And, what separates the winners from the losers? So the companies that are doing this well, how are they focusing on their core? And the core competencies? >> Right. We believe investing is a very big thing. Right, so the hardest part of all of this, in terms of economically, I mean there's a lot of difficult dimensions, but economically, as the pressure mounts, the ability to invest diminishes for most companies. And they don't have the room to invest in the business that their future depends on. And really freeing up that room and making the difficult decisions, you may have seen there were some announcements of mass lay offs, even today right? It's some of the biggest companies in the world. They're trying to create the room to invest in their next generation business that will take them into the future. And I think that's really the hardest part. How do you ultimately create the capacity to invest? And how do you make those investments? Again, cause there's also a lot of other examples of companies that have invested in the wrong way or in the wrong thing, that ultimately didn't lead them to the future So those two elements, creating the room to invest, then investing it in the right things and the right ways is what we find is key. >> So you're talking of course about GM which announced today that it was laying off about 15 thousand white collar and blue collar jobs. 
And the reason they're doing this is because they're saying there's no longer any room for six-passenger cars in this market. We want to focus on self-driving vehicles. Is that a good move? I mean, I know you're not a GM analyst here but at the same time, it sounds as though that is smart, as you said. It's making room for investing in the future. >> Yeah, and I think that GM is clearly seeing autonomous cars coming, sort of form factors, everything that they're doing. And again, I don't think it's particularly that in GM's case but again you read it that way. You'll look at General Electric. They're restructuring their entire company to better compete in the new. You'll look at IBM. IBM is acquiring Red Hat to have the kind of assets to compete in the new. So I think the biggest companies in the world are really trying to sort of say, the next ten years, what is their business going to be? And then how do they take what got them here, which for many of them have been 50 to 100 year journeys, and really figure out how to restructure that, to give them the room to invest into building a new business. And really, that takes tremendous leadership by the entire, you know, by the CEO, the board, the entire executive team and the people. The people have to commit to go along for that ride, and endure some of the pain for the greater good. >> So it's really a change management issue here, but in terms of, you talked about leadership, it also takes the foresight to actually know what your business is in the future. So GM is saying autonomous vehicles, which an average layman can say, yeah, that looks as though that's where the car industry is going. But how does a company even begin to imagine its future at this time where there are new technologies being invented every day, which are disruptive as we started talking about in the earlier conversation. >> Yeah, I think that's a very good question. Cause also if you look at where's the money going. 
The money is going to the disrupters. Right, if you look at the top five, the Google, Amazon, Facebook, Apple, let's put Microsoft in there. Combined last year they invested over 70 billion dollars, and that's about 15 percent of all of the Fortune Global one thousand. So the capital, as measured by what companies are expending, what the start-ups, the VCs are at an all-time high. 155 billion dollars invested last year, double what it was in 2001. The IPO market is at an all-time high. Right, then you have these things like the Vision Fund, which is a whole other investment vehicle to fuel technology. So the reality is, there's never been more money going in to create the next wave of disruption, which is why we believe many of the existing companies really need to create those partnerships where they benefit from that. They can't compete with it. They can't outguess it, right? They need to be making ecosystems that ultimately enable them to leverage those investments, to really help power their next generation business. >> So in an ecosystem-driven world, where is Accenture doing this kind of work? >> Yeah, so the good news for Accenture is we built our business, built services in an ecosystem kind of model. Initially with SAP, with Oracle, with Salesforce and now it's with companies like Amazon and AWS. And I think our view, and what we try to work with our clients on, is really to create the construct. And by the way, a lot of these constructs are just now being formed. What does partnering with an AWS to create your next generation digital business, what does that look like? And there's some models emerging in terms of co-innovation. And I would tell you, what Amazon has done with Berkshire Hathaway and J.P. Morgan Chase is an example of partnering to transform healthcare. Interesting way to do that. You look at something, another Seattle company, Starbucks, partnering with Alibaba to basically power their entire business in China. 
So you're starting to see different constructs where big companies are really coming together in different ways. And then again, those partnership constructs, incentives, business models around that, I think that's really where the innovation is going to take place. How do you do that? How do you align your incentives? And how do you jointly benefit from that partnership? >> So you announced something today with your Applied Intelligence Center of Excellence in Seattle, Washington. Tell our viewers a little bit more about that. >> Well, first of all we look at AWS and we say, clearly this is a company that is really important. >> It's doing something right. >> It's doing a lot of things right, it's doing a lot of things right. And I think a lot of our clients are looking at them, are leveraging them. So, it's our responsibility then as a services organization to build up capabilities and skills, and enable our clients to really tap in to this tremendous innovation. So, yeah, we did announce an Applied Intelligence Center of Excellence in Seattle. It'll be one of many centers across the United States and globally with a simple premise of building skills, building proof of concepts, building use cases, building MVPs really around different industries and different solution sets so again, reimagine business processes, catalyze transformation, and really make it something that our clients can tap into. >> You are the Chief Strategy Officer, what is your piece of advice for companies out there, at AWS, at reInvent here, what's sort of your one piece of strategy advice in this period of epic disruption and this cloud world. >> Yeah, I would say that unburdening ourselves from the day to day, and really immersing ourselves in this amazing environment. Learning, really understanding what makes one of these, one of the greatest companies in the world tick. Understanding how they do things. 
Not only, and as you know, there's more to Amazon than just technology. Right? There's a very strong culture. There's a very strong customer centricity. And really sort of understanding that, and really trying to apply it to our respective businesses. And seeing how it could really be, make the pivot to the digital more effective. And that's what I would sort of say. Come with an open mind. Learn a lot, and take it forward. >> Great, well Gene Reznik thank you so much for coming on theCUBE. >> This is a lot of fun. >> My pleasure. Thank you, thank you. >> I'm Rebecca Knight. We will have more of theCUBE's live coverage of the AWS Executive Summit in just a little bit. (techno music)
Remi Duquette, MAYA | PI World 2018
>> Announcer: From San Francisco, it's theCUBE, covering OSIsoft PI World, 2018. Brought to you by OSIsoft. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in downtown San Francisco at the OSIsoft show, it's called PI World. It's been going on for over 15 years. We've never been here before, we're excited to be here. It really is coming at it from the operations point of view, and they've been worrying about operations and operations efficiency for years. There's people walking around with 15-year pins, which is pretty amazing. I got my first one-year pin, so that's good. So we're excited to be here and dive into the details, because we've talked about IoT and industrial IoT, and kind of coming at it from the IT side, but these guys have been working at it from the OT side for years and years and years, almost 40 years. So our first guest is joining us. He's Remi Duquette, the Global Head - Applied AI and Datacenter at Clarity Lifecycle, it's a mouthful, for Maya Heat Transfer Technologies. Remi, nice to meet you. >> Very nice meeting you, thank you for having me. >> So, give us a little bit more detail on what Maya Heat Transfer is all about, and then we'll dive into some of the specific stuff you're working on. >> So Maya Heat Transfer started about 28 years ago in the simulation of heat and getting rid of all that heat that's being emitted by a lot of data centers, all the servers and the density that's occurring these days. And we've evolved into developing a software solution, leveraging the PI infrastructure for real-time monitoring, and extended it beyond, for forecasting and doing all sorts of advanced analytics from that data. >> Right, so heat is the historical enemy of electronics, and has been forever. >> Yes, continuing to be so, for sure. 
>> And continuing to be so, and the data centers, you know, it's an interesting evolution in the data center space, because on one hand, they're consolidating data centers, or shutting down data centers, you've got this public cloud phenomenon. On the other hand, it's density, density, density, density, density, which probably is a good opportunity for you guys. >> A great opportunity. Unfortunately, you know, the problems kind of are accentuated by exactly those phenomena of consolidation, and the cloud, and the virtualization projects that are going on. So all of that combined makes for a really big cocktail of heat and that heat needs to be dissipated somehow. And of course, the energy efficiency of all the machines is getting better and better, but at some point, it needs to be optimized, and that's where the software component comes in, to remove the human in the loop and really optimize that heat distribution and removal. >> So one of the big themes here at this show is finding inefficiency. This kind of continual quest for better efficiency and using data, and big data specifically, and sensor data, to be able to get that, find the inefficiency and act on the inefficiency. So what are some of the things that you guys look at? You've been at it for a long time, but there's still a lot more opportunities to find inefficiencies. Where are you still finding inefficiencies? >> Well, I mean, the main aspect is we have a lot of building automation systems and cooling loop systems that have been programmed to try and get to the best situation in any circumstances. And, really, when you look at what we're doing now, is applying artificial intelligence to augment the abilities of those systems, to better control and get to even a better place from an energy efficiency perspective. So that's really the latest evolution, to use that big data, to learn from that data, and then further optimize your cooling environment and your heat distribution. 
>> Right, now I'm curious what kind of new learnings came out of kind of the hyperscale players. Obviously, big public cloud players, Amazon and Azure, Google Cloud, have giant data centers, not only for their own core businesses, but now they're building them out as public clouds. Much bigger scale than the traditional corporate data centers. They're just operating at a whole different level. >> A whole new, yeah (laughs). >> So what are some of the things that have come out of those experiences that are different than the world pre-public cloud? >> Well, if you look at the pre-public, private cloud and public cloud, you had maybe, on average, five to six kilowatt per rack in a data center, was the average power consumed by those racks. Now we're looking, you know, some of our clients have up to 50 kilowatt per rack and now you need water-cooled elements into that rack, or other cooling elements that are being, helping the situation, 'cause those kinds of densities are producing a huge amount of heat, and that's really a big concern and a big shift from the enterprise level data center that was a little bit less of a consumer of that power. >> Right, now do you guys do anything outside of the data center? I know that's your area of specialties, but we've been doing a lot of autonomous vehicle shows, and one of the things that comes over and over and over is kind of the harsh environment for compute in a car or a truck or a bus or whatever. It's not a beautifully controlled with a lot of great backup power and diesel and air conditioning. Very rough environment. So what are some of the applications that you guys can use to help get that compute power in these vehicles? >> Well, actually the evolution for us more on the software side, was to apply our deep learning, artificial intelligence components and agents to other industries. So we're leveraging the forecasting capabilities of these deep learning agents to apply to other areas. 
So discrete manufacturing was one example, fleet optimization, so to go back to those edge devices, so we do a lot of fleet optimization, fuel optimization on these components. And that's completely outside the data center, but it's based on the same type of deep learning technologies that we've developed for the data center. >> And all the forecasts are, as more and more the compute and the store moves out to the edge, and you've got all the industrial devices running around in the centers, it's not new news for the group at this organization, >> No, clearly (chuckles). >> But you know, you're kind of shifting that load of the heat management from the data center out to the edge. >> To the edge, correct. So it does relieve a little bit of the, let's call it the pressure, inside the data center, but at the end of the day, the density of those cloud providers is just being accentuated by the sheer number of devices. So we thought there might be a shift towards the edge from a power, let's say a removal from the core data center, but in the end, it's actually the opposite that's happening. The power is really getting denser and denser inside the data center itself. >> So, last question before I let you go. What's your take on the vibe of the show, what's happening here at PI World? It's amazing, the international flavor as I'm walking around the halls. I'm seeing badges and hearing all kinds of languages. I mean, this is pretty hard-core, industrial internet happening right here. >> Oh yeah, I mean the operational technologies and the various applications and industries in which PI is used and leveraged worldwide is phenomenal. And it's a very vibrant show. It's actually quite good, when it comes down to it, a lot of people, the exchange between the end users together from different industries share their tips and tricks on how they've deployed, their various stories are just amazing. So a great, great, great PI World conference for sure. >> All right, good. 
Well thank you for taking a few minutes and sitting down and sharing the Maya story with us. >> Thank you for having me. >> Absolutely. All right, he's Remi, I'm Jeff. We are at OSIsoft PI World 2018 in downtown San Francisco, we'll be right back, thanks for watching. (electronic music)
Ron Bodkin, Google | Big Data SV 2018
>> Announcer: Live from San Jose, it's theCUBE. Presenting Big Data, Silicon Valley, brought to you by Silicon Angle Media and its ecosystem partners. >> Welcome back to theCUBE's continuing coverage of our event Big Data SV. I'm Lisa Martin, joined by Dave Vellante and we've been here all day having some great conversations really looking at big data, cloud, AI machine-learning from many different levels. We're happy to welcome back to theCUBE one of our distinguished alumni, Ron Bodkin, who's now the Technical Director of Applied AI at Google. Hey Ron, welcome back. >> It's nice to be back Lisa, thank you. >> Yeah, thanks for coming by. >> Thanks Dave. >> So you have been a friend of theCUBE for a long time, you've been in this industry and this space for a long time. Let's take a little bit of a walk down memory lane, your perspectives on Big Data Hadoop and the evolution that you've seen. >> Sure, you know so I first got involved in big data back in 2007. I was VP in generating a startup called QuantCast in the online advertising space. You know, we were using early versions of Hadoop to crunch through petabytes of data and build data science models and I saw a huge opportunity to bring those kind of capabilities to the enterprise. You know, we were working with early Hadoop vendors. Actually, at the time, there was really only one commercial vendor of Hadoop, it was Cloudera and we were working with them and then you know, others as they came online, right? So back then we had to spend a lot of time explaining to enterprises what was this concept of big data, why it was Hadoop as an open source could get interesting, what did it mean to build a data lake? And you know, we always said look, there's going to be a ton of value around data science, right? Putting your big data together and collecting complete information and then being able to build data science models to act in your business. 
So you know, the exciting thing for me is, you know, now we're at a stage where many companies have put those assets together. You've got access to amazing cloud-scale resources like we have at Google to not only work with great information, but to start to really act on it. Because, you know, kind of in parallel with that evolution of big data was the evolution of the algorithms, as well as the access to large amounts of digital data, that's propelled, you know, a lot of innovation in AI through this new trend of deep learning that we're invested heavily in. >> I mean the epiphany of Hadoop when I first heard about it was bringing, you know, five megabytes of code to a petabyte of data, as sort of the bromide. But you know, the narrative in the press has really been, well, they haven't really lived up to expectations, the ROI has been largely a reduction on investment, and so is that fair? I mean you've worked with practitioners, you know, all your big data career and you've seen a lot of companies transform. Obviously Google as a big data company is probably the best example of one. Do you think that's a fair narrative, or did the big data hype fail to live up to expectations? >> I think there's a couple of things going on here. One is, you know, that the capabilities in big data have varied widely, right? So if you look at, for example, the way we operate at Google with the big data tools that we have, they're extremely productive, they work at massive scale, you know, with large numbers of users being able to slice and dice and get deep analysis of data. It's a great setup for doing machine learning, right? That's why we have things like BigQuery available in the cloud. You know, I'd say that what happened in the open source Hadoop world was it ended up settling in on more of a subset of use cases: how do we make it easy to store large amounts of data inexpensively, how do we offload ETL, how do we make it possible for data scientists to get access to raw data?
I don't think that's as functional as what people really had imagined coming out of big data. But it still served a useful function, complementing what companies were already doing with their warehouse, right? So I'd say those efforts to collect big data and to make it available have really set the stage for analytic value, both through better building of analytic databases but especially through machine learning. >> And there's been some clear successes. I mean, one of them obviously is advertising; Google's had a huge success there. But much more, I mean fraud detection, you're starting to see health care really glom on. Financial services have been big on this, you know, maybe largely for marketing reasons but also risk. You know, for sure, there's been some clear successes. I've likened it to, you know, before you get to paint, you've got to scrape and you've got to put in caulking and so forth. And now we're in a position where you've got a corpus of data in your organization and you can really start to apply things like machine learning and artificial intelligence. Your thoughts on that premise? >> Yeah, I definitely think there's a lot of truth to that. I think some of it was, there was a hope, a lot of people thought that big data would be magic, that you could just dump a bunch of raw data without any effort and out would come all the answers. And that was never a realistic hope. You have to at least have some level of structure in the data, you have to put some effort into curating the data so you have valid results, right? So it's created a set of tools to allow scaling. You know, we now take for granted the ability to have elastic data, to have it scale and have it in the cloud, in a way that just wasn't the norm even 10 years ago. People thinking about very brittle, limited amounts of data in silos was the norm, so the conversation's changed so much, we almost forget how much things have evolved.
>> Speaking of evolution, tell us a little bit more about your role with applied AI at Google. What was the genesis of it, and how are you working with customers for them to kind of leverage this next phase of big data and applying machine learning, so that they really can identify, well, monetize content and data and actually identify new revenue streams? >> Absolutely, so you know at Google, we really started the journey to become an AI-first company early this decade, a little over five years ago. We invested in the Google X team, you know, Jeff Dean was one of the leaders there, sort of to invest in, hey, these deep learning algorithms are having a big impact, right? Fei-Fei Li, who's now the Chief Scientist at Google Cloud, was at Stanford doing research around how can we teach a computer to see and catalog a lot of digital data for visual purposes. So combining that with advances in computing, with first GPUs and then ultimately we invested in specialized hardware that made it work well for us, the massive-scale TPUs, right? That combination really started to unlock all kinds of problems that we could solve with machine learning in a way that we couldn't before. So it's now become central to all kinds of products at Google, whether it be the biggest improvements we've had in search and advertising coming from these deep learning models, but also breakthrough products like Google Photos, where you can now search and find photos based on keywords from intelligence in a machine that looks at what's in the photo, right? So we've invested and made that a central part of the business, and so what we're seeing is, as we build up the cloud business, there's a tremendous interest in how can we take Google's capabilities, right, our investments in open source deep learning frameworks, TensorFlow, our investments in hardware, TPU, our scalable infrastructure for doing machine learning, right? We're able to serve a billion inferences a second, right?
So we've got this massive capability we've built for our own products that we're now making available for customers, and the customers are saying, "How do I tap into that? How can I work with Google, how can I work with the products, how can I work with the capabilities?" So the applied AI team is really about how do we help customers drive these 10x opportunities with machine learning, partnering with Google? And the reason it's a 10x opportunity is you've had a big set of improvements where models that weren't useful commercially until recently are now useful and can be applied. So you can do things like translating languages automatically, like recognizing speech, like having automated dialog for chat bots, or, you know, all kinds of visual APIs like our AutoML API, where engineers can feed up images and it will train a model specialized to their need to recognize what you're looking for, right? So those types of advances mean that all kinds of business processes can be reconceived of, and dramatically improved with automation, taking a lot of human drudgery out. So customers are like, "That's really exciting, and at Google you're doing that. How do we get that, right? We don't know how to go there." >> Well natural language processing has been amazing in the last couple of years. Not surprising that Google is so successful there. I was kind of blown away that Amazon with Alexa sort of blew past Siri, right? And so thinking about new ways in which we're going to interact with our devices, it's clearly coming, so it leads me into my question on innovation. What's driven, in your view, the innovation in the last decade, and what's going to drive innovation in the next 10 years? >> I think innovation is very much a function of having the right kind of culture and mindset, right? So I mean for us at Google, a big part of it is what we call 10x thinking, which is really focusing on how do you think about the big problem and work on something that could have a big impact?
I also think that you can't really predict what's going to work, but there are a lot of interesting ideas and many of them won't pan out, right? But the more you have a culture of failing fast and trying things, and at least being open to the data and giving it a shot, right, and saying "Is this crazy thing going to work?" That's why we have things like Google X where we invest in moonshots, but it's also, you know, throughout the business, we say hey, you can have a 20% project, you can go work on something, and many of them don't work or have a small impact, but then you get things like Gmail getting created out of a 20% project. It's a cultural thing, that you foster and encourage people to try things and be open to the possibility that something big is on your hands, right? >> On the cultural front, it sounds like in some cases, depending on the enterprise, it's a shift, in some cases it's a cultural journey. The Google on Google story sounds like it could be a blueprint, of course: how do we do this? You've done this, but how much is it a blueprint on the technology, capitalizing on deep learning capabilities, as well as a blueprint for helping organizations on this cultural journey so that they're actually able to benefit and profit from this? >> Yeah, I mean that's absolutely right, Lisa, that these are both really important aspects, that there's a big part of the cultural journey. In order to be an AI-first company, to really reconceive your business around what can happen with machine learning, it's important to be a digital company, right? To have a mindset of making quick decisions and thinking about how data impacts your business and activating in real time. So there's a cultural journey that companies are going through. How do we enable our knowledge workers to do this kind of work, how do we think about our products in a new way, how do we reconceive, think about automation?
>> There's a lot of these aspects that are cultural as well, but I think a big part of it is, you know, it's easy for companies to get overwhelmed, but you have to pick somewhere, right? What's something you can do, what's a true north, what's an area where you can start to invest and get impact and start the journey, right? Start to do pilots, start to get something going. Something I've found in my career has been that when companies get started with the right first project and get some success, they can build on that success and invest more, right? Whereas, you know, if you're not experimenting and trying things and moving, you're never going to get there. >> Momentum is key. Well Ron, thank you so much for taking some time to stop by theCUBE. I wish we had more time to chat, but we appreciate your time. >> No, it's great to be here again. >> See ya. >> We want to thank you for watching theCUBE live from our event, Big Data SV in San Jose. I'm Lisa Martin with Dave Vellante, stick around, we'll be back with our wrap shortly. (relaxed electronic jingle)