Search Results for "Exxon Mobil":

Jamie Thomas, IBM | IBM Think 2021


 

>> Narrator: From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome back to IBM Think 2021, the virtual edition. This is theCUBE's continuous, deep-dive coverage of the people, processes, and technologies that are really changing our world. Right now, we're going to talk about modernization and what's beyond, with Jamie Thomas, general manager, strategy and development, IBM Enterprise Security. Jamie, always a pleasure. Great to see you again. Thanks for coming on. >> It's great to see you, Dave. And thanks for having me on theCUBE; it's always a pleasure. >> Yeah, it is our pleasure. And listen, we've been hearing a lot about how IBM is focused on hybrid cloud. Arvind Krishna says we must win the architectural battle for hybrid cloud. I love that. We've been hearing a lot about AI. And I wonder if you could talk about IBM Systems and how it plays into that strategy? >> Sure, well, it's a great time to have this discussion, Dave. As you know, IBM Systems technology is used widely around the world by many, many thousands of clients, in the context of our IBM System Z, our Power systems, and storage. And what we have seen is really an uptake of modernization around those workloads, if you will, driven by the hybrid cloud agenda, as well as an uptake of Red Hat OpenShift as a vehicle for this modernization. So it's pretty exciting stuff. What we see is many clients taking advantage of OpenShift on Linux to really modernize these environments, and then stay close, if you will, to that system-of-record database and the transactions associated with it. So they're seeing a definite performance advantage to taking advantage of OpenShift. And it's really fascinating to see the things that they're doing. If you look at financial services, for instance, there's a lot of focus on risk analytics: things like fraud, anti-money laundering, and mortgage risk are the types of applications being done in this context. When you look at our retail industry clients, you also see a lot of customer-centricity solutions, if you will, being deployed on OpenShift. And once again, having Linux close to those traditional LPARs of AIX, IBM i, or z/OS. So those are some of the things we see happening. And it's quite real. >> Now, you didn't mention Power, but I want to come back and ask you about Power. Because a few weeks ago, we were prompted to dig in a little bit when Arvind was on with Pat Gelsinger at Intel, talking about the relationship you guys have. And so we dug in a little bit. Originally we said, oh, it's about quantum. But we dug in, and we realized that the POWER10 is actually the best out there, the highest performance, in terms of disaggregating memory. And we see that as a future architecture for systems, and we're actually really quite excited about the potential that brings, not only to build beyond system-on-a-chip and system-on-a-package, but to start doing interesting things at the edge. What's going on with Power? >> Well, of course, when I talked about OpenShift, we're doing OpenShift on Power Linux as well as Z Linux, but you're exactly right in the context of the POWER10 processor. We couldn't be more excited about this processor. First of all, it's our first delivery with our partner Samsung, with a seven-nanometer form factor. The processor itself has only 18 billion transistors. So it's got a few transistors there.
But one of the cool inventions, if you will, that we have created is this expansive memory region as part of this design point, which we call memory inception. It gives us the ability to reach memory across servers, up to two petabytes of memory. Aside from that, this processor has generational improvements in core and thread performance, and improved energy efficiency. And all of this, Dave, is going to give us a lot of opportunity with new workloads, particularly around artificial intelligence, and inferencing around artificial intelligence. That's another critical innovation that we see here in this POWER10 processor. >> Yeah, processor performance is just exploding. We're blowing away the historical norms. I think many people don't realize that. Let's talk about some of the key announcements that you've made in quantum. Last time we spoke on theCUBE, last year, I think we did a deeper dive on quantum. You've made some announcements around hardware and software roadmaps. Give us the update on quantum, please. >> Well, there is so much that has happened since we last spoke on the quantum landscape. And the key thing that we focused on in the last six months is really an articulation of our roadmaps: the roadmap around hardware, the roadmap around software. And we've also done quite a bit of ecosystem development. So in terms of the roadmap around hardware, we put ourselves out there: we've said we're going to get to an over-1,000-qubit machine in 2023. That's our milestone. And we've got a number of steps we've outlined along the way. Of course, we have to make progress, frankly, every six months in terms of innovating around the processor, the electronics, and the fridge associated with these machines. So lots of exciting innovation across the board. We've also published a software roadmap, where we're articulating how we improve circuit execution speeds. We plan to show, shortly, a 100-times improvement in circuit execution speeds. And as we go forward in the future, we're modifying our Qiskit programming model to not only allow easy use by all types of developers, but to improve the fidelity of the entire machine, if you will. So all of our innovations go hand in hand: our hardware roadmap and our software roadmap are both very critical in driving the technical outcomes that we think are so important for quantum to become a reality. We've deployed, I would say, over 20 machines in our quantum cloud over time. We never quite identify the precise number because, frankly, as we put up a new-generation machine, we often retire an older one. So we're constantly updating them out there, and every machine that comes online in that cloud, in fact, represents a sea change in hardware and a sea change in software. So they're all the latest and greatest that our clients can have access to. >> That's key, the developer angle. You got OpenShift running on quantum yet? >> Okay, that's a really good question. As part of that software roadmap, in terms of the evolution and the speed of that circuit execution, there is really this interesting marriage between classical processing and quantum processing, bringing those closer together. And in the context of our classical operations that are interfacing with that quantum processor, we're taking advantage of OpenShift, running on that classical machine, to achieve that.
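For readers who haven't seen the Qiskit programming model Thomas refers to, a minimal circuit sketch looks something like the following. This is illustrative only: it assumes a local install of the qiskit and qiskit-aer packages and runs on the bundled simulator rather than IBM's cloud hardware.

```python
# Minimal Qiskit sketch: build and sample a two-qubit Bell-state circuit.
# Illustrative only; assumes qiskit and qiskit-aer are installed locally.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

circuit = QuantumCircuit(2, 2)
circuit.h(0)                    # put qubit 0 into superposition
circuit.cx(0, 1)                # entangle qubit 1 with qubit 0
circuit.measure([0, 1], [0, 1])

simulator = AerSimulator()
compiled = transpile(circuit, simulator)       # map the circuit to the backend
result = simulator.run(compiled, shots=1024).result()
print(result.get_counts())                     # expect roughly half '00', half '11'
```

Swapping the local simulator for one of the cloud-hosted machines described above is a matter of selecting a different backend; the circuit-building code stays the same.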
And as you can imagine, that'll give us a lot of flexibility in terms of where that classical machine resides, and how we continue the evolution of the great marriage that does exist, and will exist, between classical computing and quantum computing. >> I'm glad I asked; it was kind of tongue in cheek. But that's a key thread to the ecosystem, which is obviously critical to such a new technology. How are you thinking about the ecosystem evolution? >> Well, the ecosystem here for quantum is infinitely important. We started day one on this journey with free access to our systems, for that reason: we wanted to create easy entry for anyone that really wanted to participate in this quantum journey. And I can tell you, it really fascinates everyone, from high school students, to college students, to those that are PhDs. During this journey, we have reached over 300,000 unique users, and we now have over 500,000 unique downloads of our Qiskit programming model. But really, achieving that is underpinned by this ongoing educational thrust that we have. So we've created an open-source textbook around Qiskit that allows organizations around the world to take advantage of it from a curriculum perspective. We have over 200 organizations that are using our open-source textbook. Last year, when we realized we couldn't do our in-person programming camps, which were so exciting around the world (you can imagine doing an in-person programming camp in South Africa, in Asia, and all the things we did in 2019), well, just like you all, we had to go completely virtual. And we thought that we would have a few hundred people sign up for our summer school; we had over 4,000 people sign up for our summer school. So one of the things we had to do was really pedal fast to be able to support that many students in a summer school that kind of grew beyond all proportion. The neat thing was, once again, seeing all the kids and students around the world taking advantage of this and learning about quantum computing. And then, I guess at the end of last year, Dave, to really top this off, we did something really fundamentally important: we set up a quantum center for historically black colleges and universities, with Howard University being the anchor of this quantum center. And we're serving 23 HBCUs now, to be able to reach a new set of students, if you will, with STEM technologies and, most importantly, with quantum. And I find, you know, the neat thing about quantum is it's very interdisciplinary. So we have quantum physicists, we have electrical engineers, we have engineers on the team, we have computer scientists, and we have people with biology, chemistry, and financial services backgrounds. So I'm pretty excited about the reach that we have with quantum into HBCUs, and even beyond. I think we can have some phenomenal results and help a lot of people on this journey to quantum, and, you know, obviously help ourselves, but help these students as well. >> What do you see people doing with quantum, and maybe some of the use cases? I mean, you mentioned there's sort of a connection to traditional workloads, but obviously some new territory. What's exciting out there? >> Well, there have been a number of use cases that I think are top of mind right now.
So one of the most interesting to me has been one that we talked about in the press a few months ago, which is with ExxonMobil. They really started looking at logistics in the context of maritime shipping, using quantum. And if you think of logistics, logistics are really, really complicated. Logistics in the face of a pandemic are even more complicated, and logistics when things like the Suez Canal shut down are even more complicated. So think about it: when the Suez Canal shut down, it was kind of like the equivalent of several major airports around the world shutting down, and then you have to reroute all the traffic. And that traffic, in maritime shipping, has to be very precise, has to be planned: the stops are planned, the routes are planned. And the interest that ExxonMobil has had in this journey is not just more effective logistics, but how they get natural gas shipped around the world more effectively, because their goal is to bring energy to organizations and to countries while reducing CO2 emissions. So they have a very grand vision that they're trying to accomplish, and this logistics operation is just one of many. We can think of logistics, though, as being applicable to anyone that has a supply chain, so to other shipping organizations, not just maritime shipping. And a lot of the optimization logic that we're learning from that set of work also applies to financial services. If we look at optimization around portfolio pricing, a lot of the similar characteristics will also be applicable to the financial services industry. So that's one big example. And I guess our latest partnership, which we announced with some fanfare about two weeks ago, was with the Cleveland Clinic. We're doing a special discovery acceleration activity with the Cleveland Clinic, which starts prominently with artificial intelligence: looking at chemistry and genomics, and improving speed around machine learning for all of the critical healthcare operations that the Cleveland Clinic has embarked on. But as part of that journey, they, like many clients, are evolving from artificial intelligence and then learning how they can apply quantum as an accelerator in the future. And so they also indicated that they will buy the first commercial on-premises quantum computer for their operations, and place that in Ohio in the years to come. So it's a pretty exciting relationship. These relationships show the power of the combination, once again, of classical computing, using that intelligently to solve very difficult problems, and then taking advantage of quantum for what it can uniquely do in a lot of these use cases. >> That's a great description, because it is a strong connection to things that we do today. It's just going to do them better, but then it's going to open up a whole new set of opportunities. Everybody wants to know when, you know; opinions are all over the place. Some people say, oh, not for decades; other people say, I think it's going to be sooner than you think. What are you guys saying about timeframe? >> We're certainly determined to make it sooner rather than later. Our roadmaps, if you note, go through 2023, and we think 2023 will be a pivotal year for us in terms of delivery around those roadmaps.
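As context for the optimization use cases above: logistics and portfolio problems of this kind are typically recast as quadratic binary (QUBO) objectives before a quantum or quantum-inspired solver is applied. Here is a minimal sketch of that formulation for a toy portfolio-selection problem; the returns, covariance, and penalty weight are all made up for illustration and are not drawn from any IBM or ExxonMobil workload, and the brute-force search simply stands in for whatever solver samples the objective.

```python
# Toy QUBO formulation: pick exactly k of n assets to minimize risk minus
# expected return. Brute force here; a quantum solver would sample the same
# objective. All numbers are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3
returns = rng.uniform(0.01, 0.10, size=n)   # hypothetical expected returns
c = rng.normal(size=(n, n))
cov = (c @ c.T) * 1e-3                      # positive semidefinite risk matrix

def objective(x):
    x = np.asarray(x)
    budget_penalty = 10.0 * (x.sum() - k) ** 2   # softly enforce "exactly k assets"
    return float(x @ cov @ x - returns @ x + budget_penalty)

best = min(itertools.product((0, 1), repeat=n), key=objective)
print(best, objective(best))
```

The penalty term is the standard trick for folding a hard constraint into a quadratic objective, which is what makes problems like ship routing and portfolio pricing look structurally alike to a solver.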
But it's these kinds of use cases, and this intense working with these clients, because when they work with us, they're giving us feedback on everything that we've done: how does this programming model really help me solve these problems? What do we need to do differently? In the case of ExxonMobil, they've given us a lot of really great feedback on how we can better fine-tune all elements of the system to improve it. It's really allowed us to chart a course for how we think about the programming model, in particular in the context of users. Just last week, in fact, we announced some new machine learning applications. These applications are really to allow artificial intelligence users and programmers to take advantage of quantum without being a quantum physicist or expert, right? So it's really an encapsulation of composable elements, so that they can start to use an interface that allows them to access the quantum computer through PyTorch, and take advantage of some of the things we're doing around neural networks and things like that, once again, without having to be experts in quantum. So I think those are the kinds of things we're learning how to do better, fundamentally, through this co-creation and development with our quantum network. And our quantum network now is over 140 unique organizations: commercial, academic, national laboratories, and startups that we're working with. >> The picture is starting to become more clear. We're seeing emerging AI applications; a lot of work today in AI is in modeling. Over time, it's going to shift toward inference, and real-time, and practical applications. Everybody talks about Moore's Law being dead. Well, yes, I guess, technically speaking, but the premise, or the outcome, of Moore's Law is actually accelerating. We're seeing processor performance quadrupling every two years now, when you include the GPU along with the CPU, the DSPs, and the accelerators. And so that's going to take us through this decade, and then quantum is going to power us, you know, well beyond what anyone can even predict. It's a very, very exciting time. Jamie, I always love talking to you. Thank you so much for coming back on theCUBE. >> Well, I appreciate the time. And I think you're exactly right, Dave. You know, we talked about POWER10 just for a few minutes there, but one of the things we've done in POWER10 as well is we've embedded AI into every core of that processor, so you reduce that latency. We've got a 10 to 20 times improvement over the last generation in terms of artificial intelligence. You think about the evolution of a classical machine like that, state of the art, and then combine that with quantum and what we can do in the future; I think it's a really exciting time to be in computing. And I really appreciate your time today, to have this dialogue with you. >> Yeah, it's always fun, and it's of national importance as well. Jamie Thomas, thanks so much. This is Dave Vellante with theCUBE. Keep it right there; our continuous coverage of IBM Think 2021 will be right back. (gentle music) (bright music)
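The PyTorch path Thomas describes matches the integration pattern provided by the separate qiskit-machine-learning package, where a parameterized circuit is wrapped so it behaves like an ordinary PyTorch module. A minimal sketch is below; the one-qubit circuit, the parameter names x and w, and the sample input are illustrative, not IBM's announced applications, and the class names reflect recent versions of that package.

```python
# Sketch: expose a parameterized quantum circuit to PyTorch via Qiskit's
# TorchConnector, so a quantum layer can sit inside a torch model.
# Illustrative only; assumes qiskit and qiskit-machine-learning are installed.
import torch
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit_machine_learning.neural_networks import EstimatorQNN
from qiskit_machine_learning.connectors import TorchConnector

x = Parameter("x")        # classical input feature (hypothetical)
w = Parameter("w")        # trainable weight (hypothetical)
qc = QuantumCircuit(1)
qc.ry(x, 0)               # encode the input
qc.rz(w, 0)               # trainable rotation

qnn = EstimatorQNN(circuit=qc, input_params=[x], weight_params=[w])
layer = TorchConnector(qnn)            # now behaves like a torch.nn.Module

out = layer(torch.tensor([[0.5]]))     # forward pass; gradients flow back through it
print(out)
```

The point of the wrapper is exactly what the interview emphasizes: the data scientist stays in familiar PyTorch training loops while the quantum execution is handled underneath.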

Published Date : May 12 2021


Tom Deane, Cloudera and Abhinav Joshi, Red Hat | KubeCon + CloudNativeCon NA 2020


 

>> Narrator: From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon North America 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Hello, and welcome back to theCUBE's coverage of KubeCon plus CloudNativeCon 2020, the virtual edition. Abhinav Joshi is here; he's the senior product marketing manager for OpenShift at Red Hat. And Tom Deane is the senior director of product management at Cloudera. Gentlemen, thanks for coming on theCUBE. Good to see you. >> Thank you very much for having us here. >> Great to have you. And guys, I know you're excited about the partnership, and I definitely want to get in and talk about that, but before we do, I wonder if we could just set the tone: what are you seeing in the market? Tom, let's start with you. I had a great deep dive a couple of weeks back with Anupam Singh, and he brought me up to speed on what's new with Cloudera, but one of the things we discussed was the accelerated importance of data, putting data at the core of your digital business. Tom, what are you seeing in the marketplace right now? >> Yeah, absolutely. So overall, we're still seeing a growing demand for storing and processing massive amounts of data, even in the past few months. Where perhaps we see a little bit more variety, by industry sector, is in the propensity to adopt some of the latest and greatest technologies that are out there, or that we deliver to the market. So perhaps in the retail and hospitality sector you may see a little more risk aversion around some of the latest tools; then you go to the healthcare industry, as an example, and we see strong demand for our latest technologies, with everything that is going on. So overall, still lots of demand around this space. >> So Abhinav, we just saw in IBM's earnings the momentum of Red Hat, growing in the mid-teens, and the explosion that we're seeing around containers, and obviously OpenShift is at the heart of that. How have the last nine months affected your customers' priorities, and what are you seeing? >> Yeah, we've been a lot busier in the last few months, because there are a lot of use cases. If you look at a lot of the research, and we are seeing this from our customers as well, customers are actually speeding up their digital transformation. People say that COVID-19 has actually sped up the digital transformation for a lot of our customers, for the right reasons: to be able to help their own customers, and so on. So we are seeing a lot of traction in a number of verticals and a number of use cases, beyond traditional app dev: data analytics, AI/ML, messaging, streaming, edge, and so on. Lots of use cases in a lot of different industry verticals. So there's a lot of momentum going on with OpenShift and the broader portfolio as well. >> Yeah, it's ironic, the timing of the pandemic, but it sure underscores that this next 10 years is going to be a lot different than the last 10 years. Okay, let's talk about some of the things that are new around data. Tom, Cloudera, you guys have made a number of moves since acquiring Hortonworks a little over two years ago. What's new with the Cloudera Data Platform, CDP? >> Sure. So yes, our latest platform is called CDP, the Cloudera Data Platform. Last year we announced the public cloud version of CDP, running on AWS and then Azure. And what's new is, just two months ago, we announced the release of the version of this platform targeted at the data center, and that's called CDP Private Cloud. Really, the focus of this new version has been around solving some of the pain points that we see around agility, or time to value, and the ease of use of the platform. To give you some specific examples: with our previous technology, it could take a customer three months to provision a data warehouse, if you include everything from obtaining the infrastructure, to provisioning the warehouse, loading the data, setting security policies, and fine-tuning the software. Now, with CDP Private Cloud, we've been able to take those three months and turn them into three minutes. So, a significant speed-up in that onboarding time, and in time to value. And the key piece that enabled this speed-up was a revamping of the entire stack, specifically the infrastructure and services management layer. This is where the containerization of the platform comes in, specifically Kubernetes and Red Hat OpenShift: that is a key piece of the puzzle that enables this order-of-magnitude improvement in time. >> Now Abhinav, you think about Red Hat, you think about Cloudera, and of course Hortonworks, the stalwarts of open source; you've got kind of birds of a feather. How are Red Hat and Cloudera partnering with each other? What are the critical aspects of that relationship that people should be aware of? >> Yeah, absolutely, that's a very good question. So on the OpenShift side, we've had a lot of momentum in the market, and we have well over 2,000 customers, across a lot of different verticals and the use cases that I talked about at the beginning of our conversation: traditional and cloud-native app dev, databases, data analytics, AI, messaging, and so on. And the value that you have with OpenShift, with containers, Kubernetes, and DevOps as part of the solution, being able to provide the agility, flexibility, scalability, and cross-cloud consistency, all the things that you see in a typical app dev world, is directly applicable to fast-tracking data analytics and AI projects as well. We've seen a lot of customers, some of the ones that we can talk about in a public way, like IIX, RBC Bank, HCA Healthcare, Boston Children's, BMW, ExxonMobil: all these organizations are able to leverage OpenShift to speed up their AI projects and help with the needs of the data engineers, data scientists, and app dev folks. Now, from our perspective, providing the best-in-class experience for the customers at the platform level is key, and we have to make sure that the tooling that the customers run on top of it gets the best-in-class experience in terms of day-zero to day-two management. And it's an ecosystem play for us, and Cloudera is the top ISV in the space when it comes to data analytics and AI. That was our key motivation to partner with Cloudera, in terms of bringing this joint solution to market and making sure that our customers are successful. So the partnership is at all the different levels in the organization, both up and down: at the engineering level, the product management level, the marketing level, the sales level, and at the support and services level as well. So if you look at the customer journey, in terms of selecting a solution, putting it in place, and then getting the value out of it, the partnership actually spans the entire spectrum. >> Yeah, and Tom, I wonder if you could add anything there. It's not just about the public cloud with containers; you're seeing, obviously, the acceleration of cloud-native principles on-prem, in hybrid, across clouds. Containers really, and Kubernetes specifically, are the linchpin to enable that. What would you add to that discussion? >> Yeah, as part of the partnership, when we were looking for a vendor who could provide us that Kubernetes layer, we looked at our customer base. If you think about who Cloudera is focused on, we really go after the Global 2000 firms out there. These customers have very strict security requirements, and they're often in highly regulated industries. So when we looked at our customer base, we saw a lot of overlap, and there was a natural good fit for us there. But beyond that, just from our own technical evaluation of the solutions, and also talking to our own customers about who they see as a trusted platform that can provide enterprise-grade features on a Kubernetes layer, Red Hat had clear leadership on that front. And that, combined with our own long-standing relationship with our parent company IBM, made this partnership a natural fit for us. >> Right, and Cloudera's always had a good relationship with IBM. Tom, I want to stay with you, if I can, for a minute, and talk about the specific joint solutions that you're providing with Red Hat. What are you guys bringing to customers in terms of those solutions? What's the business impact? Where's the value? >> Absolutely. So the solution is called CDP, Cloudera Data Platform Private Cloud, on Red Hat OpenShift, and I'll describe the three pillars that make up CDP. First, what we have is five data analytic experiences, and they are meant to cover the end-to-end data lifecycle. In the first release that just came out two months ago, we announced the availability of two of those five experiences. We have data warehousing, for BI analytics, as well as machine learning and AI, where we offer collaborative data science tools for data scientists to come together, do exploratory data analytics, but also develop predictive models and push them to production. Going forward, we'll be adding the remaining three experiences: they include data engineering, or transformations on your data; data flow, for streaming analytics and ingest; as well as operational database, for real-time serving of both structured and unstructured data. These five experiences have been revamped, compared to our prior platform, to target these specific use cases and simplify these data disciplines. The second pillar I'll talk about is SDX, or what we call the Shared Data Experience. What this is, is the ability for these five experiences to have one global data set that they can all access, with shared metadata, security, including fine-grained permissions, and a suite of governance tools that provide lineage, auditing, and business metadata. By having these shared data experiences, our users can build multi-disciplinary workflows in a very straightforward way, without having to create custom code to stitch them together. And the last pillar I'll mention is the containerization of the platform. Because of containers, because of Kubernetes, we're now able to offer that next level of agility, isolation, and infrastructure efficiency on the platform. To give you some more specific examples: on agility, I mentioned going from three months to three minutes in terms of the speed-up. With containers, we can now also give our users the ability to bring their own versions of their libraries and engines, without colliding with another user who's sharing the platform; that has been a big ask from our customers. And last, I'll mention infrastructure efficiency. By re-architecting our services to run in a microservices architecture, we can now pack those servers in a much more efficient way. We can also auto-scale and auto-suspend, bringing all these cloud-native concepts, as you mentioned, on premises. The end result of that is better infrastructure efficiency: our customers can do more with the same amount of hardware, which overall reduces their total spend on the solution. So that's what we call CDP Private Cloud. >> Great, thanks for that. I mean, wow, we've seen really the evolution from the wild west days, the early days of so-called big data: ungoverned, a lot of shadow data science, maybe not as efficient as we'd like, but certainly today taking advantage of some of those capabilities, dealing with the noisy neighbor problem. Abhinav, I wonder if you could comment on another question that I have: one of the things that Jim Whitehurst talked about when IBM acquired Red Hat was the scale that IBM could bring, and what I always looked at in that context was IBM's deep expertise in vertical industries. So I wonder, what are some of the key industry verticals that you guys are targeting and succeeding in? I mean, yes, the pandemic has some effects; we talked about hospitality, and obviously airlines have to be careful and conserve cash. But what are some of the interesting tailwinds that you're seeing by industry, and some of the more interesting and popular use cases? >> Yeah, that's a very good question. In terms of the industry verticals, we are seeing traction in a number of verticals, the top ones being financial services, healthcare, telco, the automotive industry, as well as the federal government. At the end of the day, what all the customers are looking to do is improve the experience of their customers with the digital services that they roll out, as part of the pandemic and so on as well. Then there's being able to gain a competitive edge: if you can have the services in your platform, make them fresh and relevant, and be able to update them on a regular basis, that's your differentiator these days. And the next one is, if you do all this, you should be able to increase your revenue and save cost as well; that's a key one, as you mentioned, that a lot of the industries, like hospitality, the airlines, and so on, are working on: conserving cash. So if you can help them save cost, that's key. And the last one is being able to automate business processes, because there are still a lot of manual processes; if you can add in a lot of automation, that's all good for your business. Now, if you look at the individual use cases in these different industry verticals, what we're seeing is that the use cases vary from industry to industry. If you look at financial services, use cases like fraud detection, being able to do risk analysis and compliance, and being able to improve customer support are some of the key ones. Cybersecurity is coming up a lot as well, because nobody wants to be hacked, especially in times like these. Then, moving on to healthcare and the life sciences, we're seeing use cases around data-driven diagnostics and care, drug discovery, and being able to track COVID-19: being able to tell which of my hospitals is going to be full, and when, and what kind of PPE I'm going to need at my sites, so that I can mobilize as needed. Those are some of the key ones we're seeing on the healthcare side. In terms of the automotive industry, it's being able to speed up autonomous driving initiatives, do auto warranty pricing based on the history of the drivers, and save on insurance cost, which is a big one that we are seeing for the insurance industries as well. And then in manufacturing: being able to do quality assurance on the shop floor, predictive maintenance on machinery, and also robotic process automation. So, lots of use cases that customers are prioritizing. It's very verticalized; it varies from vertical to vertical, but at the end of the day, it's all about improving the customer experience and the revenue, saving cost, and being able to automate the business processes. >> Yeah, that's great, thank you for that. I mean, we heard a lot about automation when we were covering AnsibleFest. Just think about how much fraud detection has changed in the last 10 years: it used to be so slow, you'd have to go through your financial statements to find fraud, and now it's instantaneous. Cybersecurity is critical because the adversaries are very capable. Healthcare is a space that's ripe for change, and now of course, with the pandemic, things are changing very rapidly. Automotive, another industry that really hadn't seen much disruption, and now you're seeing, with a number of things, autonomous vehicles and basically software on wheels. Insurance, a great example; even manufacturing, you're seeing a real sea change there. So thank you for that description. You know, very often on theCUBE we like to look at joint engineering solutions; that's a gauge of the substance of a partnership. Sometimes you see these Barney deals, you know, a press release: I love you, you love me, okay, see you. So I wonder if you guys could talk about the specific engineering that you're doing. Tom, maybe you could start. >> Sure. Yeah, so on the engineering and product side, for CDP Private Cloud, we've changed our internal development and testing to run all on OpenShift internally, and as part of that, we have a direct line to Red Hat engineering to help us solve any issues that we run into. In the initial release, we started with support of OpenShift 4.3; we're just wrapping up testing of OpenShift 4.6, and we'll begin with that very soon. Another aspect of the partnership is being able to update our images to account for any security vulnerabilities that come up. With guidance and help from Red Hat, we've standardized our Docker images on UBI, or the Universal Base Image, and that allows us to automatically get many of these security fixes into our software. The last point I'll mention here is that it's not just about providing Kubernetes; Red Hat helps us with the end-to-end solution. So there is also, for example, bringing a Docker registry into the picture, or providing a secure vault for storing all the secrets. All these pieces combined make up a strong, complete solution. Actually, the last thing I'll mention is the support aspect, which is critical to our customers. In this model, our customers can bring support tickets to Cloudera, but as soon as we determine that it may be an issue related to Red Hat or OpenShift where we can use their help, we have that direct line of communication, and automated systems on the back end, to resolve those support tickets quickly for our customers. So those are some examples of what we're doing on the technical side. >> Great, thank you. Abhinav, we're out of time, but I wonder if we could just close here. When we look at our survey data with our data partner ETR, we see containers, container orchestration, and container management generally, and Kubernetes specifically, as the number one area of investment for companies, the one that has the most momentum in terms of where they're putting their efforts. It's right up there, even ahead of AI and machine learning, and even ahead of cloud, which is obviously larger and maybe more mature. I wonder if you can add anything and bring us home with this segment. >> Yeah, absolutely. One thing I want to add is that beyond the engineering level, we also have, between Cloudera and Red Hat, partnership at the sales and go-to-market levels, because once you build the integration, it has to be rolled out in the customer environments. That's where we have alignment at the marketing level as well as the sales level, so that we can jointly go in and do the customer workshops and make sure the solutions are getting deployed the right way. We also have a partnership at the professional services level, where the experts from both orgs work hand in hand to help the customers. And at the end of the day, if you need help with support, that's what Tom talked about: we have the experts on the support side as well. So, to wrap things up: all the industry research, and the customer conversations that we are having, indicate that organizations are increasing their focus on digital transformation, with data and AI being a key part of it. And that's where this strategic partnership between Cloudera and Red Hat is going to play a big role, to help our mutual customers through that transition and be able to achieve the key goals they set for their business. >> Great. Well guys, thanks so much for taking us through the partnership and the integration work that you're doing with customers. A great discussion; really appreciate your time. >> Yeah, thanks a lot, Dave, really appreciate it. Really enjoyed the conversation. >> All right, keep it right there, everybody. You're watching theCUBE's coverage of KubeCon plus CloudNativeCon North America, the virtual edition. Keep it right there; we'll be right back.
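On the UBI point above, rebasing a container image onto Red Hat's Universal Base Image is typically just a change to the Dockerfile's FROM line, after which OS-level security fixes arrive through routine base-image rebuilds. A minimal sketch follows; the application file, package choice, and tag are hypothetical, while the base-image reference is the published UBI 8 minimal image.

```dockerfile
# Hypothetical service image rebased on Red Hat's Universal Base Image (UBI 8),
# so OS-level CVE fixes flow in through ordinary base-image updates.
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest

# Install runtime dependencies from the UBI repositories (ubi-minimal ships microdnf).
RUN microdnf install -y python3 && microdnf clean all

# Copy in the (hypothetical) application.
COPY app.py /opt/app/app.py

USER 1001
CMD ["python3", "/opt/app/app.py"]
```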

Published Date : Nov 19 2020


4-video test


 

>> Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with total energy given by the expression shown at the bottom left of this slide. Here, the sigma variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N for worst-case instances. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root-exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means.
Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with N equal to 19,289, requiring approximately two days of run time on a single core at 2.4 GHz. Now, if we simple-mindedly extrapolate the root-exponential scaling observed in that study, which went up to N of approximately 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the N equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances at much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 GHz. But the roughly 22-times-larger so-called World TSP benchmark instance, with N equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evident but cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm, and this has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance at lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on the analysis of coherent Ising machine architectures and associated optimization algorithms.
These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics, and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft, or perhaps mean-field, spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground-state problem. This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory: namely, a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and we hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described.
We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome the linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase, and when the OPO crosses threshold it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or antiferromagnetic N equals 2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase.
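This two-OPO threshold picture can be illustrated numerically. Below is a minimal mean-field sketch in Python, assuming the commonly used normalized amplitude equation dx_i/dt = (p - 1) x_i - x_i^3 + sum_j J_ij x_j; the ramp schedule, coupling strength, and step size are illustrative choices, not values from the talk.

```python
import numpy as np

def cim_ramp(J, steps=4000, dt=0.01, p_final=2.0, seed=1):
    """Ramp the pump p from 0 to p_final and return the binary sign readout."""
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(J.shape[0])   # near-vacuum initial amplitudes
    for t in range(steps):
        p = p_final * t / steps                  # gradually increased pump
        x += dt * ((p - 1.0) * x - x**3 + J @ x)
    return np.sign(x)                            # binary phase-state readout

# Ferromagnetic pair (alpha > 0): both OPOs should lase in the same phase.
J = np.array([[0.0, 0.5], [0.5, 0.0]])
print(cim_ramp(J))   # expect [1, 1] or [-1, -1]
```

Flipping the sign of the off-diagonal entries (the antiferromagnetic case) makes the opposite-phase configurations the first to cross threshold instead.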
Clearly, we can imagine generalizing this story to larger N; however, the story doesn't stay as clean and simple for all larger problem instances. To find a more complicated example, we only need to go to N equals 4. For some choices of J_ij at N equals 4, the story remains simple, like the N equals 2 case: the figure on the upper left of this slide shows the energy of various critical points for a non-frustrated N equals 4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but sub-optimal minimum at large pump power. The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin; the basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors seem to become more common at larger N. For the N equals 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter of around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, the critical point corresponding to the global minimum nevertheless appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. Of course, N equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we can reliably determine their global minima and see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N equals 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, and so on. At present we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So, in closing, I should acknowledge the people who did the hard work on the things I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI research labs. I should also acknowledge funding support from the NSF via the Coherent Ising Machines Expedition in Computing, and also from NTT PHI research labs, the Army Research Office, and Exxon Mobil. That's it; thanks very much.
>> Thank you. I'd like to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi, I'm from Caltech, and today I'm going to tell you about the work we have been doing on networks of optical parametric oscillators: how we have been using them as Ising machines, and how we're pushing them toward quantum photonics. I want to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and more recently the field of topological photonics, which is trying to implement in photonics a lot of the topological behaviors of models from condensed-matter physics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: the simple summation over the spins, where each spin can be either up or down, and the couplings are given by the J_ij. The Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem has been shown to be NP-hard, so it's computationally important, because it's representative of the NP problems. NP problems are important because, first, they're hard on standard computers if you use brute-force algorithms, and second, they're everywhere on the application side. That's why there is demand for making a machine that can target these problems, and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator: a resonator with nonlinearity in it. We pump these resonators and generate a signal at half the frequency of the pump: one photon of pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. If you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. I want to emphasize that a little more with this mechanical analogy, which is basically two simple pendulums, but they are parametric oscillators, because I'm going to modulate a parameter of them in this video, namely the length of the string. That modulation acts as the pump, and it generates an oscillation, a signal, at half the frequency of the pump. I have two of them, to show you that they can acquire these phase states: they're still phase- and frequency-locked to the pump, but they can each end up in either the zero or the pi phase state. The idea is to use this binary phase to represent a binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, up or down.
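For reference, since the talks refer to the Ising energy without writing it out, the standard form being assumed here is presumably:

```latex
H(\sigma) = -\sum_{i<j} J_{ij}\,\sigma_i\sigma_j - \sum_i h_i\,\sigma_i,
\qquad \sigma_i \in \{-1,+1\},
```

with the ground-state problem being the minimization of H over all 2^N spin assignments; sign conventions for J and h vary across the literature.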
To implement a network of these resonators, we use a time-multiplexing scheme. The idea is that we put N pulses in the cavity, separated by the repetition period T_R, and you can think about these pulses in one resonator as N temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator 1 to 2, 2 to 3, and so on; if you look at the second delay, which is two times the repetition period, it couples 1 to 3, and so on. And if you have N minus 1 delay lines, you can have all potential couplings among these synthetic resonators. If I can introduce modulators into those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I can have a programmable, all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is having these OPOs, each of which can be either zero or pi, and connecting them to each other arbitrarily. I start by programming this machine to a given Ising problem, by just setting the couplings, setting the controllers in each of those delay lines, so that I have a network which represents the Ising problem. The Ising problem then maps to finding the phase state that satisfies the maximum number of coupling constraints, and the way it happens is that the Ising Hamiltonian maps to the linear loss of the network: if I start adding gain, by just putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transitions: what happened in the first implementation, which was a free-space optical system, then the guided-wave implementation in 2016, and then the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make the distinction here that the first implementation was an all-optical interaction, and we also had an N equals 16 all-optical implementation, before we transitioned to the measurement-feedback idea, which I'll describe quickly. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to mostly focus on the all-optical networks: how we're using them to go beyond simulation of the Ising Hamiltonian, both on the linear and the nonlinear side, and also how we're working on miniaturization of these OPO networks. So the first experiment, the four-OPO machine, was a free-space implementation; this is an actual picture of the machine, and we implemented a small N equals 4 Max-Cut problem on it. So, one problem for one experiment: we ran the machine 1,000 times, we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian.
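As a toy sketch of the bookkeeping implied by the time-multiplexed coupling scheme described above (the function and its dictionary interface are illustrative, not from the talk): a delay line of length k times T_R injects pulse i-k into pulse i, so the modulator setting of delay line k during time slot i programs the coupling J[i][i-k].

```python
import numpy as np

def couplings_from_delays(n_pulses, delay_weights):
    """delay_weights maps a delay k (in units of T_R) to a length-n array of
    per-time-slot modulator settings; returns the implied coupling matrix."""
    J = np.zeros((n_pulses, n_pulses))
    for k, w in delay_weights.items():
        for i in range(n_pulses):
            J[i, (i - k) % n_pulses] = w[i]   # ring topology wraps around
    return J

# The shortest delay (k = 1) couples neighbors: a 1-D ring of 4 pulses.
print(couplings_from_delays(4, {1: np.ones(4)}))
```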
Then the measurement-feedback idea was to replace those couplings and controllers with a simulator: we basically simulated all those coherent interactions on an FPGA, regenerated the coherent feedback pulse based on all those measurements, and injected it back into the cavity. The nonlinearity still remains, so it is still a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system preserves the important information or not, or whether it behaves better computation-wise, and that's still a lot of ongoing study. Nevertheless, the reason this implementation was very interesting is that you don't need the N minus 1 delay lines; you can just use one. Then you can implement a large machine, run several thousands of problems on it, and compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix-multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling: the optical loss of this network corresponds to the Ising Hamiltonian. To show you the example of the N equals 4 experiment, with all those phase states and the histogram that we saw: you can actually calculate the loss of each of those states, because all the interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what it does is that it provides the gain: you start bringing up the gain so that it hits the loss, you go through gain saturation, or threshold, which gives you this phase bifurcation, so you go to either the zero or the pi phase state, and the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So, if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. The difference between looking at topological behaviors and at the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian: one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one spin on one site to another site you get one phase, and if you go back you get a different phase. The other thing is that we're not just interested in finding the ground state; we're now interested in looking at all sorts of states, and at the dynamics and behaviors of all these states in the network. So we started with the simplest implementation, which is a 1-D chain of these resonators, corresponding to the so-called SSH model in the topological world. We get the similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction and the theory.
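For reference (the talk doesn't write it out), the SSH model being mapped here is presumably the standard tight-binding chain with alternating hopping amplitudes t1 and t2,

```latex
H_{\mathrm{SSH}} = \sum_{n} \left( t_1\, a_n^\dagger b_n
                 + t_2\, b_n^\dagger a_{n+1} + \mathrm{h.c.} \right),
```

whose two regimes, t1 greater than t2 and t1 less than t2, are topologically distinct; in the photonic network the two alternating hoppings would correspond to alternating coupling strengths between adjacent pulse slots.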
One of the interesting things about the time-multiplexing implementation is that you have the flexibility of changing the network as you are running the machine, and that's something unique about this implementation, so we can actually look at the dynamics. One example we have looked at is going through the transition from the topological to the trivial behavior of the network: you can look at the edge states, and you can see the trivial end states and the topological edge states actually showing up in this network. We have just recently implemented a 2-D network with the Harper-Hofstadter model, and while I don't have the results here, one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics. We can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic nonclassical and quantum nonlinear behaviors in these networks. So, I told you mostly about the linear side; let me switch gears and talk about the nonlinear side of the network. The biggest thing I've talked about so far in the Ising machine is the phase transition at threshold: below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and we get the phase states above threshold. This is basically the mechanism of the computation in these OPOs: the phase transition from below to above threshold. One characteristic of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, and that basically corresponds to the intensity of the driving pump, so it's really hard to imagine that you can go above threshold, or have this phase transition happen, entirely in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes really high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is: can we look at other phase transitions, can we utilize them for computing, and can we bring them to the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain: the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I talked about; in the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit. You can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case, the signal can acquire any of the phases on the circle, so it has a U(1) symmetry.
And if you go to the degenerate case, that symmetry is broken, and you only have the zero and pi phase states. So now the question is: can we utilize this phase transition, which is a phase-driven phase transition, and use it for a similar computational scheme? That's one of the questions we're thinking about, and this phase transition is not just important for computing; it's also interesting from the sensing perspective, and you can easily bring it below threshold and operate in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, you can now see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs; it's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore, both in the classical and quantum regimes. I should also mention that the couplings can be nonlinear couplings as well, and that's another behavior you can see, especially in the non-degenerate regime. So with that, I've basically told you about these OPO networks: how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. Now I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. Of course, the motivation is that if you look at electronics, and what we had 60 or 70 years ago with vacuum tubes, we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements we have now; where optics is today is probably very similar to 70 years ago, a tabletop implementation. So the question is: how can we utilize nanophotonics? I'm going to briefly show you the two directions we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard, and also Marty Fejer at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in nanophotonic periodically poled lithium niobate. Now we're working on building OPOs based on that kind of thin-film lithium niobate photonics, and these are some examples of the devices we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks: I also want to point out that the reason these nanophotonic platforms are exciting is not just that you can make large networks that are compact, in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO: can we have the quantum superposition of the zero and pi states that I talked about? Nanophotonic lithium niobate
provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. We're doing some theory on that, and we're confident that the nonlinearity-to-loss ratio you can get with these platforms is actually much higher than with other existing platforms. And to go even smaller, we have been asking what the smallest possible OPO is that you can make: you can think about really wavelength-scale resonators, add the chi-2 nonlinearity, and see how and when you can get the OPO to operate. Recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if we can build the OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. We have looked at these calculations and tried to estimate the threshold of OPOs, let's say for a wavelength-scale resonator, and it turns out that it can actually be even lower than that of the type of bulk PPLN OPOs we have been building in the past 50 years or so. So we're working on the experiments, and we're hoping that we can make larger and larger-scale OPO networks. Let me summarize the talk: I told you about OPO networks and our work on Ising machines and measurement feedback; I told you about the ongoing work on the all-optical implementations, both on the linear side and on the nonlinear behaviors; and I also told you a little bit about the efforts on miniaturization and going to the nanoscale. With that, I would like to thank you. >> Hello, I'm from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI Lab. I'm happy to share with you today some of the recent work that has been done either by me or by collaborators. The title of my talk is "A neuromorphic in-silico simulator for the coherent Ising machine," and here is the outline. I would like to make the case that simulating the CIM in digital electronics can be useful for better understanding or improving its function principles, by introducing some ideas from neural networks; this is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation, in the second part, and projections of the performance that can be achieved using a very large-scale simulator, in the third part, and finally I will talk about future plans. So first, let me start by comparing recently proposed Ising machines, using this table, which is adapted from a recent Nature Electronics paper. This comparison shows that there is always a trade-off between energy efficiency, speed, and scalability, depending on the physical implementation. In red here are the limitations of each of these hardware approaches, and, interestingly, the FPGA-based systems, such as Fujitsu's Digital Annealer, Toshiba's simulated bifurcation machine, or the recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability.
This is why, despite the unique advantages that some of the other hardware platforms have, such as quantum superposition in the photonic systems or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, or that they are particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-ins and fan-outs, and the long propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, more than from the physics of electrons and photons. To put the performance of these various hardware platforms in perspective, we can look at the computation done by the brain: the brain computes using billions of neurons, using only 20 watts of power, and operates at a, let's say, theoretically slow frequency. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silicon, shown at the bottom here, that can be used for testing better organization principles for the CIM. In this talk I will discuss three neuro-inspired principles. The first is the asymmetry of connections, and the neural dynamics that are often chaotic because of that asymmetry. The second is the microstructure of connectivity: neural networks are not composed of repetitions of always the same type of element, the neurons; rather, there is a local structure that is repeated, as in this schematic of a microcolumn in the cortex. And the last is the hierarchical organization of connectivity, which is organized in a tree structure in the brain; here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in-silico simulation? First, about the two principles of asymmetry and microstructure. We know the classical approximation of the coherent Ising machine, which is analogous to rate-based neural networks; in the case of the CIM, this classical approximation can be obtained using, for example, the truncated Wigner representation. The dynamics of both of these systems can be described by ordinary differential equations of the following form, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum over the omega_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the computed coupling signal. In both cases, the CIM and neural networks, the dynamics can be written as gradient descent of a potential function V, written here, and this potential function includes the Ising Hamiltonian.
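The equations themselves are on the slide rather than in the transcript; a minimal version consistent with what is described, taking a cubic saturation for the DOPO nonlinearity and symmetric couplings, would be:

```latex
\frac{dx_i}{dt} = (p-1)\,x_i - x_i^3 + \sum_j \omega_{ij}\,x_j
                = -\frac{\partial V}{\partial x_i},
\qquad
V = \sum_i \left( \frac{x_i^4}{4} + (1-p)\,\frac{x_i^2}{2} \right)
  - \frac{1}{2}\sum_{i,j} \omega_{ij}\,x_i x_j,
```

where the last term is the Ising Hamiltonian evaluated on the analog spins; as the speaker notes next, V is only well defined as a potential when omega_ij = omega_ji.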
This is why it's natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and h is the external-field term of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem with this approach is that the potential function V that we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process; but unfortunately there is no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. This is why we propose to introduce a microstructure into the system, where one analog spin, one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this microstructure introduces an asymmetry into the system, which in turn induces chaotic dynamics: a chaotic search, rather than an annealing process, for the ground state of the Ising Hamiltonian. Within this microstructure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the x_i to become equal to a certain target amplitude a, and this is done by modulating the strength of the Ising couplings: you see the error variable e_i multiplying the Ising coupling here in the dynamics of each DOPO, and the whole dynamics is then described by these coupled equations. Because the e_i do not necessarily take the same value for the different i, this introduces an asymmetry into the system, which in turn creates chaotic dynamics, which I show here for solving an SK problem of a certain size: the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plots. You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that we do not get stuck in any of them, and moreover, the other types of attractors that can eventually appear, such as limit-cycle or chaotic attractors, can also be destabilized by the modulation of the target amplitude. We have proposed two different modulations of the target amplitude in the past: the first is a modulation that ensures that the entropy production rate of the system remains positive, which forbids the creation of any nontrivial attractors; but in this work I will talk about another, restricted modulation, which is given here, that works as well as the first one but is easier to implement on FPGA. These coupled equations, which represent the simulation of the coherent Ising machine with some error correction, can be implemented especially efficiently on an FPGA.
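A minimal sketch of these coupled equations in Python, assuming the commonly cited form dx_i/dt = (p-1) x_i - x_i^3 + e_i sum_j omega_ij x_j with de_i/dt = -beta e_i (x_i^2 - a); the parameter values and the small test instance are illustrative, not the talk's:

```python
import numpy as np

def cim_with_error_correction(J, steps=20000, dt=0.01, p=1.2, a=1.0,
                              beta=0.3, seed=2):
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 0.01 * rng.standard_normal(n)   # analog spin amplitudes
    e = np.ones(n)                      # error variables modulating the couplings
    best_s, best_E = None, np.inf
    for _ in range(steps):
        x += dt * ((p - 1.0) * x - x**3 + e * (J @ x))
        e += dt * (-beta * e * (x**2 - a))   # drives each x_i^2 toward target a
        s = np.sign(x)
        E = -0.5 * s @ J @ s                 # Ising energy of current sign state
        if E < best_E:
            best_s, best_E = s.copy(), E
    return best_s, best_E

# Frustrated antiferromagnetic triangle: the chaotic search still finds a ground state.
J = -(np.ones((3, 3)) - np.eye(3))
print(cim_with_error_correction(J))
```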
Here I show the time it takes to simulate the system: in red you see the time it takes to compute the x_i term, the e_i term, the dot product, and the Ising Hamiltonian for a system with 500 spins and error variables, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics, which corresponds to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 MHz, and this is to be compared with what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a 1-GHz repetition rate, we would require 0.5 microseconds to do this. So the simulation on FPGA can be at least as fast as a 1-GHz-repetition-rate pulsed-laser CIM. Then the dot product that appears in this differential equation can be computed in 43 clock cycles, and for problem sizes larger than 500 spins the dot product clearly becomes the bottleneck; this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, the nonlinear part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on the FPGA by an adder tree whose height scales logarithmically with the size of the system. But that is only if we had an infinite amount of resources on the FPGA; when dealing with larger problems of more than about 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size noted U here, and then the scaling becomes linear in N over U for the nonlinear parts, and (N over U) squared for the products. Typically, for a low-end FPGA, the block size of this matrix is about 100. So clearly we want to make U as large as possible, in order to maintain the logarithmic scaling of the number of clock cycles needed to compute the product, rather than the (N over U) squared scaling that occurs when we decompose the matrix into smaller blocks. But the difficulty with having larger blocks is that a very large adder tree introduces large fan-ins and fan-outs and long-distance data paths within the FPGA. So the solution, to get higher performance for a simulator of the coherent Ising machine, is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing the electrical components within the FPGA hierarchically, as shown here in the right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but this gives you an idea of why the hierarchical organization of the system becomes extremely important for getting good performance when simulating the Ising machine. So instead of getting into the details of the FPGA implementation, I would like to give some benchmark results for this simulator, which was used as a proof of concept for this idea, and which can be found in this arXiv paper.
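As an aside, the clock-cycle scaling argument above can be put in a toy model (the resource assumptions and constants here are made up for illustration, not the talk's):

```python
import math

def mvm_clock_cycles(N, U):
    """Rough cycle count for an N x N matrix-vector product on an FPGA that
    streams U x U blocks through an adder tree of depth log2(U): with
    unlimited resources (U >= N) this is ~log2(N); with fixed U it grows
    as (N/U)^2, which is the bottleneck described above."""
    blocks = math.ceil(N / U) ** 2                 # sequential block passes
    return blocks * max(1, math.ceil(math.log2(U)))

for N in (100, 500, 2000):
    print(N, mvm_clock_cycles(N, U=100))
```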
Here I show results for solving SK problems: fully connected, random plus-or-minus-one spin-glass problems. We use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to get the optimal solution of the SK problem with 99% success probability, plotted against the problem size. In red here is the proposed FPGA implementation; in blue is the number of matrix-vector products necessary for the CIM without error correction to solve these SK problems; and in green, for noisy mean-field annealing, which behaves similarly to the coherent Ising machine. You clearly see that the scaling of the number of matrix-vector products necessary to solve this problem has a better exponent than these other approaches. So that's an interesting feature of the system. Next we can look at the real time to solution for solving these SK instances: on this slide I show the time to solution in seconds to find the ground state of SK instances with 99% success probability for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent, for example, breakout local search, in orange, and simulated annealing, in purple. You see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time to solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristors, shown in blue here, which is very fast for small problem sizes but whose scaling is not good, and the same goes for the restricted Boltzmann machine implemented on FPGA recently proposed by a group in Berkeley, which again is very fast for small problem sizes but whose scaling is bad, so that it becomes worse than the proposed approach. We can therefore expect that for problem sizes larger than about 1,000 spins, the proposed approach would be the faster one. Let me jump to this other slide: another confirmation that the scheme scales well is that we can find maximum-cut values for the G-set benchmarks that are better than the values previously found by any other algorithm, so they are the best-known cut values to the best of our knowledge, as shown in the table in this paper. In particular, for instances 14 and 15 of the G-set, we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to obtain them. Note that getting these good results on the G-sets does not require particularly hard tuning of the parameters: the tuning used here is very simple, it just depends on the degree of connectivity within each graph. So these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems, but at all types of graph Ising problems, such as the Max-Cut problems that are common in many domains.
Given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of the adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future, based on the implementation we are currently working on. Here you see the projection of the time to solution, with 99% success probability, for solving SK problems, with respect to the problem size, compared to different published Ising machines, in particular the Digital Annealer, shown by the green line without dots here. We show two different hypotheses for these projections: either that the time to solution scales as an exponential of N, or that it scales as an exponential of the square root of N. It seems, according to the data, that the time to solution scales more like an exponential of the square root of N, and these projections show that we could probably solve SK problems of size 2,000 spins, that is, find the true ground state of such problems with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, some of the future plans for this coherent Ising machine simulator. The first is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. What is simulatable on the FPGA for this purpose is the quantum Gaussian model described in this paper, proposed by people in the NTT group. The idea of this model is that, instead of the very simple ODEs I have shown previously, it includes paired ODEs that take into account not only the mean of the amplitude of the in-phase component, but also its variance, so that we can take into account more of the quantum effects of the DOPO, such as squeezing. Then we plan to make the simulator open access, for members to run their instances on the system. There will be a first version in September, based on simple command-line access to the simulator, which will have just the classical approximation of the system, with a noise term, binary weights, and a measurement term; then we will propose a second version that will extend the current Ising machine to a hierarchy of FPGAs, in which we will add the more refined models, such as the Gaussian model I just talked about, and support real-valued weights for the Ising problems. We will announce later when this is available; this is work in progress.
I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables, you have M clauses, each a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem from the class NP, which means you can check in polynomial time the satisfiability of any assignment. And 3-SAT is NP-complete (k-SAT with k of three or larger), which means an efficient 3-SAT solver implies an efficient solver for all the problems in the class NP, because all the problems in NP can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic notions or terms, to the decision version of the Ising problem on graphs. This is useful when you are comparing different approaches that work on different kinds of problems. When not all the clauses can be satisfied, you are looking at the optimization version of SAT, called Max-SAT, and the goal here is to find an assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete-problem solver, it would literally, positively influence thousands of problems and applications in industry and in science. I'm not going to read this, but it of course gives a strong motivation to work on these kinds of problems. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus ones and plus ones, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it's plus one; if it contains the variable in negated form, it's minus one. We then use this to formulate product-based so-called clause violation functions, one for every clause, which vary continuously between zero and one, and are zero if and only if the clause itself is true. Then, in order to define the dynamics, we work in this N-dimensional hypercube where the search happens; if solutions exist, they are sitting in some of the corners of this hypercube. So we define this energy potential, or landscape function, shown here, in a way that it is zero if and only if all the clause violation functions K_m are zero, that is, all the clauses are satisfied, keeping these auxiliary variables a_m always positive. Therefore, what you have here is a dynamics that is essentially a gradient descent on this potential-energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum. However, what we do here is couple them to the dynamics: we couple them to the clause violation functions, as shown here. If you didn't have this a_m here, just the K's for example, you would essentially have positive feedback, an increasing variable; but in that case, the dynamics would still get stuck.
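A minimal numerical sketch of this construction, including the clause matrix, the clause violation functions, the gradient dynamics, and the exponentially growing auxiliary weights whose role is completed in the next paragraph, might look as follows. The Euler step, constants, and stopping rule are illustrative choices of mine, not the exact scheme of the talk.

```python
import numpy as np

def ctds_sat_solve(C, t_max=50.0, dt=0.01, seed=0):
    """Continuous-time dynamical-system SAT solver (forward-Euler sketch).

    C is an (M clauses x N variables) matrix with entries +1 / -1 / 0 for a
    positive / negated / absent literal. Spins s live in [-1, 1]; auxiliary
    clause weights a_m grow exponentially while their clause stays violated.
    """
    rng = np.random.default_rng(seed)
    M, N = C.shape
    k = np.count_nonzero(C, axis=1)               # number of literals per clause
    s = rng.uniform(-0.9, 0.9, N)                 # continuous spin variables
    a = np.ones(M)                                # auxiliary variables a_m > 0
    for _ in range(int(t_max / dt)):
        F = np.where(C != 0, 1.0 - C * s, 1.0)    # factors (1 - c_mi * s_i)
        K = np.prod(F, axis=1) / 2.0**k           # clause violation functions in [0, 1]
        if np.all(K < 1e-9):                      # all clauses essentially satisfied
            break
        K_mi = K[:, None] / np.where(F == 0.0, 1e-12, F)     # product excluding i
        ds = 2.0 * np.einsum('m,mi,mi,m->i', a, C, K_mi, K)  # -grad of V = sum a_m K_m^2
        s = np.clip(s + dt * ds, -1.0, 1.0)       # Euler step; clip is a numerical guard
        a = a * np.exp(dt * K)                    # exponential growth of the weights
    return np.sign(s), float(K.max())

# Example: (x1 or x2 or not x3) and (not x1 or not x2 or x3)
C = np.array([[ 1,  1, -1],
              [-1, -1,  1]])
assignment, worst_violation = ctds_sat_solve(C)
print(assignment, worst_violation)
```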
So this is better than the constant version, but it would still get stuck. Only when you put in this a_m, which makes the dynamics in this variable exponential-like, only then does it keep searching until it finds a solution. And there is a reason for that which I'm not going to talk about here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape. And this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself; in dynamical systems it's called the escape rate. The inverse of that is the time scale on which you find solutions with this dynamical system, and you can see here some sample trajectories that are chaotic, because it's nonlinear, but it's transiently chaotic: given there are solutions, of course, eventually it converges to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables, for random 3-SAT problems, as a function of N, is the wall-clock time we monitor, and it behaves quite well, it behaves polynomially, until you actually reach the SAT/UNSAT transition, where the hardest problems are found. But what's more interesting is if you monitor the continuous time t, the performance in terms of the analog continuous time t, because that seems to be polynomial. The way we show that is: we consider random 3-SAT for a fixed constraint density, to the right of the threshold, where it's really hard, and we monitor the fraction of problems that we have not been able to solve. We select thousands of problems at that constraint ratio, we solve them with our algorithm, and we monitor the fraction of problems that have not yet been solved by continuous time t. As you see, this decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law, in the system size. So if you combine these two, you find that the time needed to solve all problems, except maybe a small fraction of them, scales polynomially with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover, because you can always transform them into 3-SAT as we discussed before, and Ramsey colorings; and on these problems even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because of the following: first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes a physical wall-clock time, and that will have polynomial scaling; but you have these other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, it would be an exponential cost.
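The exponential-decay argument above can be written compactly. With $p_N(t)$ the fraction of instances of size $N$ still unsolved at analog time $t$, $\kappa(N)$ the escape rate, and $\beta$ the fitted power-law exponent (notation mine, for illustration):

$$
p_N(t) \sim e^{-\kappa(N)\,t}, \qquad \kappa(N) \propto N^{-\beta}
\;\;\Longrightarrow\;\;
t_{\epsilon}(N) \;\propto\; N^{\beta}\,\ln\frac{1}{\epsilon},
$$

so solving all but a fraction $\epsilon$ of the instances takes only polynomially growing continuous time.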
But this exponential cost is some kind of trade-off between time and energy: I don't know how to generate time, but I do know how to generate energy, so one could use energy for it. There are other issues as well, though, especially if you're trying to do this on a digital machine; and other problems appear in physical devices too, as we'll discuss later. If you implement this on GPUs, you can get an order of two magnitudes speed-up, and you can also modify this to solve Max-SAT problems quite efficiently; we are competitive with the best heuristic solvers, on the problems from the 2016 Max-SAT competition. So this definitely seems like a good approach, but there are of course interesting limitations. I would say interesting, because they kind of make you think about what it all means, and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator (when you solve this on a digital machine, you're using some kind of integrator), and you use the same approach, but now measure the number of problems you haven't solved by a given number of discrete steps taken by the integrator, you find that you have exponential discrete-time complexity. And of course this is a problem. And if you look closely at what happens: even though the analog mathematical trajectory, that's the red curve here, is tracked to the third or fourth decimal position, the integrator's step size fluctuates like crazy. So it really is like the integration freezes out, and this is because of the phenomenon of stiffness, which I'll talk a little bit more about a little later. It might look like an integration issue on digital machines that you could improve, and you could definitely improve it, but actually the issue is bigger than that; it's deeper than that, because on a digital machine there is no time-energy conversion. The auxiliary variables are efficiently represented on a digital machine: there is no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential-time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think maybe this wouldn't be an issue in an analog device, and to some extent that's true, analog devices can be orders of magnitude faster, but they also suffer from their own problems, because they are not going to be perfect; that affects these classes of solvers as well. Indeed, if you look at other systems, like measurement-feedback Ising machines or oscillator networks, they all hinge on some kind of ability to control your variables with arbitrarily high precision. In oscillator networks you want to read out phases or frequencies; in the case of CIMs you require identical pulses, which is hard to maintain: they kind of fluctuate away from one another, shift away from one another, and only if you can control that can you control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978.
He shows, in a purely computer-science proof, that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you can solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have finite precision. So the next question is: how does that affect the computation of these problems? That's what we're after. Loss of precision means information loss, or entropy production. So what you're really looking at is the relationship between the hardness and the cost of computing a problem. And according to Schönhage, there's this left branch, which in principle could be polynomial time, but the question is whether or not that is achievable. It is not achievable; what is more achievable is on the right-hand side: there's always going to be some information loss, some entropy generation, that could keep you away, possibly, from polynomial time. So this is what we'd like to understand; and this information loss, the source of it, I will argue, is not just noise, as in any physical system, but is also of algorithmic nature. Schönhage's result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: would such a solver in principle exist, since he is not proposing a solver with those properties? If you look mathematically, precisely, at what our solver does, would it have the right properties? I argue yes. I don't have a mathematical proof, but I have some arguments that that would be the case. And this is the case for our CTDS solver: if you could calculate its trajectory in a lossless way, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a bit more difficult a question, because time in ODEs can be rescaled however you want. So what Bournez says is that you actually have to measure the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system, not of its parameterization. And we did that; my student did that: first improving on the stiffness of the integration problem, using implicit solvers and some smart tricks, such that you are actually closer to the actual trajectory; then, using the same approach, looking at what fraction of problems you can solve within a given length of trajectory, you find that it is polynomially scaling with the problem size. So we have polynomial-length complexity, and that means our solver is both a poly-length and, as it is defined, also a poly-time analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is this stiffness: every integrator has to truncate (digitizing truncates the equations), and what it has to do is keep the integration within the so-called stability region for that scheme; you have to keep the product of the eigenvalues of the Jacobian and the step size within this region. If you use explicit methods, you want to stay within this region.
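For concreteness, the stability constraint being described, for the simplest explicit scheme (forward Euler) applied to a decaying mode with Jacobian eigenvalue $\lambda < 0$, is the standard textbook condition:

$$
|1 + \lambda\,\Delta t| \le 1
\;\;\Longrightarrow\;\;
\Delta t \;\le\; \frac{2}{|\lambda_{\max}|},
$$

so a single fast (stiff) mode with large $|\lambda_{\max}|$ forces tiny steps on the whole coupled system, which is exactly the freezing-out described above.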
But what happens is that some of the eigenvalues grow fast for stiff problems, and then you're forced to reduce the delta-t so the product stays in this bounded domain, which means you're forced to take smaller and smaller time steps; so you're freezing out the integration, and what I showed you is that that's the case. Now, you can move to implicit solvers, which is a trick; in this case the domain you have to stay in is actually on the outside. But what happens in this case is that some of the eigenvalues of the Jacobian, for stiff systems, start to move to zero; as they're moving to zero, they're going to enter this instability region, so your solver is going to try to keep them out, so it's going to increase the delta-t. But if you increase the delta-t, you increase the truncation errors, so you get randomized in this large search space, so it's really not going to work out. Now, one can sort of introduce a theory, or a language, to discuss computational complexity using the language of dynamical-systems theory. I don't have time to go into this, but basically, for hard problems you have this object, the chaotic saddle, in the middle of the search space somewhere, and that dictates how the dynamics happens; the invariant properties of the dynamics, of that saddle, are what dictate the performance, and many other things. So a new, important measure that we find is also helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Basically, what this does, in an intuitive way, is describe the rate at which the uncertainty contained in the insignificant digits of a trajectory flows towards the significant ones, as you lose information because errors are grown, developed into larger errors, at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property: it's a property of the set itself, not of how you compute it, and it's really the intrinsic rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional dynamical system you have both positive and negative Lyapunov exponents, as many in total as the dimension of the space, and the number of positive ones is the number of unstable-manifold dimensions, the number of negative ones the stable-manifold directions. And there's an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, which is the escape rate I already talked about. Now, one can actually prove simple theorems, like back-of-the-envelope calculations. The idea here is that you know the largest rate at which closely started trajectories separate from one another, so you can say: that is fine, as long as my trajectory finds the solution before the trajectories separate too quickly. In that case, I can have the hope that if I start from some region of the phase space with several closely started trajectories, they kind of go into the same solution often. And that gives this upper bound, this limit, and it really shows that it has to be an exponentially small number; it depends on the N-dependence of the exponent right here, which combines the information-loss rate and the time-to-solution performance.
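In symbols, the two statements just made are, with notation chosen here for illustration: first, the Pesin-type identity for an open, hyperbolic (escaping) system, connecting the metric entropy $h_{KS}$, the positive Lyapunov exponents $\lambda_i$, and the escape rate $\kappa$:

$$
h_{KS} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa;
$$

and second, the back-of-the-envelope separation bound: two trajectories a distance $\delta_0$ apart separate as $\delta(t) \approx \delta_0\, e^{\lambda_{\max} t}$, so for them to still end in the same solution after the time to solution $t_{\text{sol}}(N)$ one needs roughly

$$
\delta_0 \;\lesssim\; \delta^{*}\, e^{-\lambda_{\max}(N)\,t_{\text{sol}}(N)},
$$

where $\delta^{*}$ is the tolerance at which trajectories count as having diverged; this is exponentially small whenever $\lambda_{\max}(N)\,t_{\text{sol}}(N)$ grows with $N$.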
So if this exponent here has a large N-dependence, or even linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. So this is sort of the direction we are going in, and this formulation is applicable to all deterministic dynamical systems. And I think we can expand this further, because there is a way of getting the expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about; it's kind of a program that one can try to pursue. And this is it. So the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. It can be more efficient by orders of magnitude than digital computing in solving NP-hard problems because, first of all, many of these systems lack the von Neumann bottleneck, there's parallelism involved, and you can also have a much larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of what the possibilities and what the limits are, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? I think that's the exciting part: to derive these limits.

Published Date : Sep 27 2020

Renaud Gaubert, NVIDIA & Diane Mueller, Red Hat | KubeCon + CloudNativeCon NA 2019


 

>> Narrator: Live from San Diego, California, it's theCUBE, covering KubeCon and CloudNativeCon, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back to theCUBE here at KubeCon + CloudNativeCon 2019 in San Diego, California. I'm Stu Miniman, my co-host is John Troyer, and first of all, happy to welcome back to the program Diane Mueller, who is the tech lead of cloud native technology; I'm sorry, I'm getting that wrong: that's director of community development at Red Hat. And Renaud Gaubert is the technical lead of cloud native technologies at NVIDIA. We've gotten to the end of day one; I've got three days, I've got to make sure... >> You've got to get a little more Red Bull into the conversation. >> All right, well, there's definitely a lot of energy. Most people... we don't even need Red Bull here, because we're at day one. But Diane, we're going to start at day zero. You know you've got a good community of geeks when they're like: oh yeah, let me fly in a day early and do a half-day or full day of deep dives. So the Red Hat team decided to bring everybody on a boat, I guess. >> Yeah. So, the OpenShift Commons gathering for this KubeCon we hosted on the Inspiration Hornblower. We had about 560 people on a boat. I promised them that it wouldn't leave the dock, but we did still have a little bit of that sway going on every time one of the big military boats came by, so people were a little, you know, by the end of the day. But from 8 a.m. in the morning till 8 p.m. in the evening, we just gathered and had some amazing deep dives. There were unbelievable conversations onstage and offstage, and we had a wonderful conversation with some of the new DevOps folks who have just come on board (there's a metaphor for navigation, and KubeCon, and for events). You know, Andrew Clay Shafer, John Willis, the inimitable Kevin Behr, who runs Open Innovation Labs, and Jabe Bloom have all just formed the Global Transformation Office; I love that title. They're going to be helping to preach the gospel of cultural DevOps and agile transformation from a Red Hat office from now on. There was a wonderful conversation; I felt privileged to actually get to moderate it, and then just amazing people coming forward and sharing their stories. It was a great session. Steve Dake, who's with IBM doing all the Istio stuff: I've never seen Istio explained so well, deployment explained so well. All of the content is going to be recorded and up online; we streamed it live on Facebook. But I'm still, like, reeling from the amount of information overload. And I think that's the nice thing about doing a day zero event: it's a smaller group of people. We had 600 people register, but I think it was 560-something people who showed up, and we've got that facial recognition, so that now, when they're traveling through the hallways here with 12,000 other people, they go: oh, you were in the room, I met you there. And that's really the whole purpose of the Commons events. >> Yeah, I tell you, this is definitely one of those shows where it doesn't take long before I say: hey, my brain is full, can I go home now? Renaud, I'd love your first impressions of KubeCon. Did you get to go to the day zero event, and what sort of things have you been seeing? >> So, I've mostly been... I went to the lightning talks, which were amazing. Definitely a number of shout-outs to the GPU one, of course, from a friend at NVIDIA. But I definitely enjoyed, for example, the amazing
DNS one, and the one about operators. And generally, all of them were very high quality. >> Is this your first KubeCon? >> I've been here a year; this is my third KubeCon. I've been to KubeCons in Europe in the past. >> So you're an old hand at this. Well, before we get into the operator framework, and I would love to dig into this, I just wanted to ask one more thought about OpenShift Commons: the Commons in general, the relationship between OpenShift the offering, and then OKD and the Commons, and then maybe the announcement about OKD4. >> Oh, a couple of things happened yesterday. Yesterday we dropped OKD4 in alpha release, so anyone who wants to test that out and try it out can. It's an all-operators-based deployment of OpenShift, which is what OpenShift 4 is: a slightly new architectural deployment methodology based on the Operator Framework. And we've been working very diligently to populate OperatorHub.io, which is where all of the upstream projects that have operators, like the one that Renaud has created for NVIDIA's GPUs, are being hosted, so that anyone can deploy them, whether on OpenShift or any Kubernetes. So that dropped. And yesterday we also announced open-sourcing Quay, as projectquay.io. So there's a lot of .io going on here, but projectquay.io is really a fulfillment of a commitment by Red Hat that whenever we do an acquisition, the code becomes open source: the Quay folks were acquired by CoreOS, and CoreOS was acquired by Red Hat, and then IBM's in there. So in the interim they've been diligently working away to make the code available as open source, and that hit last week. There are some really interesting end users coming up, and we're now looking forward to having them contribute to that project as well. But I think the Operator Framework has really been the big thing we've been getting a lot of uptake on. It's the new pattern for deploying applications or services, getting things beyond just a basic install of a service on OpenShift or any Kubernetes. One of the exciting things yesterday (Renaud and I were talking about this earlier) was that Exxon Mobil sent a data scientist to the OpenShift Commons, Audrey Resnick, who gave this amazing presentation about JupyterHub and Jupyter notebooks, deploying them, and how OpenShift and the advent of operators for things like GPUs are really helping them enable data scientists to do their work. Because a lot of the stuff data scientists do is almost disposable: they'll run an experiment, maybe they don't get the result they want, and then it just goes away, which is perfect for a Kubernetes workload. But there are other things you need, like GPUs, and the work NVIDIA has been doing to enable that on OpenShift has been really very helpful. It was a great talk. But as we were saying from the first day: data scientists don't want to know anything about what's under the hood; they just want to run their experiments. >> You know, I'd like to understand how you got involved in the creation of the operator. >> So generally, if we take a step back and look a bit at what we're trying to do with AI and ML, and generally edge infrastructure and 5G, we're seeing a lot of people trying to build and run applications,
whether it's in the data center or at the edge. And what we're trying to do here with this operator is to bring GPUs to enterprise Kubernetes. This is what we're working on with Red Hat, and this is where, for example, things like the Operator SDK help us a lot. So what we've built is this NVIDIA GPU operator, based on the Operator SDK, which walks us through multiple phases. The first phase, for example, installs all the components that a data scientist, or generally a GPU cluster, might need: whether it's the NVIDIA driver, the container runtime, or the Kubernetes device plugin. Phase two is, as you go on and build out an infrastructure, you want to be able to have that automation in place and, more importantly, the update part: being able to update your different components. Phase three is generally being able to have a lifecycle. As you manage multiple machines, these are going to get into different states; some of them are going to fail. Being able to get from these bad states to good states, how you recover from them: that's super helpful. And the last one is monitoring, which is being able to actually give insights to your users. So the Operator SDK has helped us a lot here, just laying out these different state loops. And in a way, it's done the same thing for us as what we're trying to do for our customers, the data scientists, which is basically get out of our way and let us focus on core business value. The operator basically takes care of things that are pretty cool as an engineer, like leader election, but that don't really help me focus on my core business value: how do I deal with the updates, and so on. >> Can I step back one second, maybe go up a level? The problem here is that each physical machine has only a limited number of NVIDIA GPUs, and you've got a bunch of containers that may be spawning on different machines. So they have to figure out: do I have a GPU? Can I grab one? And if I'm using it, I assume I have to reserve it so other people can't use it, and then I have to give it up. Is that the problem we're solving here? >> So this is a problem that we've worked on with the Kubernetes community, so that the whole resource-management piece is integrated almost as a first-class citizen in Kubernetes: being able to advertise the number of GPUs in your cluster, and then being able to actually run and schedule these containers. The interesting components that were also recently added are, for example, monitoring: being able to see that a specific Jupyter notebook is using this much GPU utilization. These are super cool features that have been coming in the past two years in Kubernetes, and Red Hat has been super helpful, at least in these discussions, pushing these different features forward so that we see better enterprise support. >> Yeah, I think the thing with operators, and the Operator Lifecycle Management part of it, is really trying to get to day two. There are lots of different methodologies, whether it's Ansible or Python or Go, or Helm, or anything else, that can get you an install of a service or an application and instantiate it, and we support all of that with SDKs to help people.
But what we're trying to do is bridge to this day-two stuff, to get people to autopilot, and there's a whole capability maturity model: if you go to OperatorHub.io, you can see that different operators are at different stages of the game. It's been interesting to work with people and see the aha moment when they realize: oh, I can do this, and then I can walk away; and then if that pod, that cluster, dies, it'll just... I love the word 'automatically,' but really the goal is to help alleviate the hands-on part of day two and get more automation into the services and applications we deploy. >> Right, and when this is created, of course it works well with OpenShift, but it also works for any Kubernetes? >> Correct. OperatorHub.io: everything in there runs on any Kubernetes, and that's really the goal, to be able to take stuff into a hybrid cloud model. You want to be able to run it anywhere you want, so we want people to be able to do it anywhere. >> So this really should be an enabler for everything that NVIDIA has been doing to be fully cloud native? >> Yes, I think so, completely. This is a new stack, of course; there's a lot of complexity, and what we're working towards is reducing the complexity and making sure that people, data scientists and machine-learning engineers, are able to focus on their core business. >> You watch all of the different services and the different things that the data scientists are using: they don't really want to know what's under the hood. They would like to just open up a JupyterHub notebook, have everything they need there, train their models, have them run; and after they're done, they're done, and it goes away. And hopefully they remember to turn off the GPUs in the cloud, or wherever it is, so they don't keep getting billed for them. But that's the real beauty of it: they don't have to worry so much anymore about that. And we've got a whole nice lifecycle with Source-to-Image, S2I, and they can just quickly build and deploy. It's near and dear to my heart, the machine learning and AI side of stuff. It's one of the more interesting areas; it's the catchy thing, but people are really doing it today. Two or three weeks ago in San Francisco we had a whole OpenShift Commons gathering just on AI and ML, and it was amazing to hear. I think that's the most redeeming thing, or most rewarding thing rather, for people who are working on Kubernetes: to have the folks who are doing workloads come and say, wow, this is what we're doing, because we don't get to see that all the time. It was pretty amazing, and it makes it all worthwhile. >> Diane, Renaud, thank you so much for the update. Congratulations on the launch of the operators, and we look forward to hearing more in the future. >> Great to be here. >> For John Troyer, I'm Stu Miniman. More coverage here from KubeCon + CloudNativeCon 2019. Thanks for watching. Thank you.
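To make concrete Renaud's point above about GPUs being advertised and scheduled as first-class resources, here is a minimal sketch using the Kubernetes Python client: the NVIDIA device plugin exposes `nvidia.com/gpu` as an extended resource, and a container reserves one whole GPU by requesting it in its resource limits. The pod name and image tag are illustrative assumptions, not details from the interview.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),  # hypothetical name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda-check",
                image="nvidia/cuda:10.2-base",  # any CUDA-enabled image works
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    # GPUs are requested whole, via the extended resource
                    # advertised by the NVIDIA device plugin.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The scheduler will only place this pod on a node with a free GPU, and the reservation is released when the pod terminates: the grab, reserve, and give-up cycle described in the conversation.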

Published Date : Nov 20 2019


Randy Arseneau & Steve Kenniston, IBM | CUBEConversation, August 2019


 

>> Narrator: From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> All right, everybody, welcome to this CUBE Conversation. My name is Dave Vellante, I'm the host of theCUBE, and we're going to have a conversation to really try to explore: does infrastructure matter? You hear a lot today, and ever since I've been in this business I've heard: oh, infrastructure is dead, hardware is dead. We're going to explore that premise, and with me are Randy Arseneau and Steve Kenniston; they're both global market development execs at IBM. Guys, thanks for coming in, and let's riff. >> Thanks for having us, Dave. >> So here's one: I want to start with the data. We were just recently at the MIT Chief Data Officer event. Ten years ago that role didn't even exist; now data is everything. So I want to start off with this bromide: data is the new oil. And we've said, you know what, data actually is more valuable than oil. Oil I can put in my car or I can put in my house, but I can't put it in both. Data doesn't follow the laws of scarcity: I can use the same data multiple times, I can copy it, and I can find new value; I can cut cost, I can raise revenue. So data in some respects is more valuable. What do you think? >> Right, yeah, I would agree, and I think it's also, to your point, kind of a renewable resource. Data has the ability to be reused regularly, to be repurposed. I would take it even further: we've been talking a lot lately about this whole concept that data is really evolving into its own tier. If you think about a traditional infrastructure model, where you've got compute and network and applications and workloads, and on the edge you've got various consumers and producers of that data, then as those pieces have evolved, the data has been evolving as well. It's becoming more complicated; it's becoming certainly larger and more voluminous; it's better instrumented; it carries much more metadata; it's typically more proximal with code and compute. So the data itself is evolving into its own tier, in a sense, and we believe we want to treat data as a tier: to manage it, and to wrap the services around it that enable it to reach its maximum potential. >> So guys, we want to make this interactive, and I'd love to give you my opinions as well, if you guys are okay with that. I want to make an observation, Steve. If you take a look at the top five companies in terms of market cap in the US (Apple, Google, Facebook, Amazon, and of course Microsoft, which is now over a trillion dollars), they're all data companies. They've surpassed the banks, the insurance companies, the Exxon Mobils of the world as the most valuable companies in the world. What are your thoughts on that? Why is that? >> I think it's interesting, and I think it goes back to your original statement about data being the new oil. Unlike oil, as Randy said, you can put it in your house or you can put it in your car, and when it's burnt, it's gone. But with data, you have it around, you generate more of it, you keep using it; and the more you use it and the more value you get out of it, the more value the company gets out of it. The reason they continue to grow in value is that they continue to collect data, and they continue to leverage that data for intelligent purposes: to make user experiences better, to make their business better, to go faster, to do new things faster. It's all part of this growth. >> So data is one of the superpowers;
the other superpower, of course, is machine intelligence, or what everybody talks about as AI. It used to be that processing power doubling every 18 months was what drove innovation in the industry; today it's a combination of data, which we have a lot of, AI, and cloud for scaling. We're going to talk about cloud, but I want to spend a minute talking about AI. When I first came into this business, AI was all the rage, but we didn't have the amount of data that we have today, we didn't have the processing power, and it was too expensive to store all this data. That's all changed. So now we have this emerging machine intelligence layer being used for a lot of different things, sitting on top of all these workloads. It's being injected into databases and applications; it's being used to detect fraud, to sell us more stuff in real time, to save lives (and I'm going to talk about that). But it's one of these superpowers that really needs new hardware architectures. So I want to explore machine intelligence a little bit. It really is a game changer. >> It really is. And tying back to the first point about the evolution of data and the importance of data: things like machine learning and adaptive, cognitive infrastructure have driven, to your point, a hard requirement to adapt and improve the infrastructure upon which all of that lives and runs and operates and moves and breathes. We always had hardware evolution, development, improvements in networks and the basic components of the infrastructure, driven by advances in materials science, silicon, and so on. Now what's happening is that the growth, importance, and dynamism of data are far outpacing the ability of the physical sciences to keep pace; that's the reality we live in. So things like cognitive computing, machine learning, and AI are bridging the gap, almost, between the limitations we're bumping up against in physical infrastructure and the immense unlocked potential of data. That intermediary is really where this phenomenon of AI and machine learning and deep learning is happening, and you're also correct in pointing out that it's everywhere. It's imbuing every single workload; it's transforming every industry, at a fairly blistering pace. IBM has been front and center around artificial intelligence and cognitive computing since the beginning; we have a really interesting perspective on it, and I think we bring that to a lot of the solutions we offer as well. >> Ginni Rometty, a couple of years ago, actually used the term 'incumbent disruptors.' When I think of that, I think about artificial intelligence, and I think about companies like the ones I mentioned before that are very valuable and have data at their core. Most incumbents don't have data at their core; they have data all over the place. They might have a bottling plant at the core, or a manufacturing plant, or some human process at the core. So to close that gap, the incumbents are going to buy artificial intelligence from companies like IBM; they're going to procure Watson or other AI tools, or maybe use open-source AI tools, and then figure out how to apply those to their business: fraud detection, recommendation engines, maybe even improved security (and we're going to talk about this in detail). But Steve, there's got to be new infrastructure behind that; we can't run these new workloads on infrastructure that
was designed 30 or 40 years ago. >> Exactly. I am truly fascinated by this growth of data; it's now getting more exponential. And when we think about why it's getting more exponential, it's because of the ease with which you can actually take advantage of that data. It's going beyond the big financial-services companies and the big healthcare companies. We're moving further and further towards the edge, where people like you and me (Randy and I have talked about the maker economy) want to be able to go in and build something on our own, and then deliver it, either as a new application or as a service, to the infrastructure team to turn on and make something out of. That infrastructure has got to come down in cost. But all the things you said before (performance, reliability, speed to get there, intelligence about data movement, how we get smarter about those things), all of the underlying ways we used to think about how we manage, protect, and secure data, have evolved and are continuing to evolve. Everybody talks about the journey, the journey to cloud. Why does that matter? It's not just the cloud; it's also the componentry underneath, and it's going to go much broader, much bigger, much faster. >> And I would just amplify what Steve said about this whole maker movement. One of the other pressures it's putting on corporate IT is that it's essentially driving product development and innovation out to the very edge, to the end-user level. You have all these very smart people developing these amazing new services and applications and workloads, and when it gets to the point where they believe it can add value to the business, they hand it off to IT, which is tasked with figuring out how to implement it, scale it, protect it, secure it, and so on. That's really where I believe IBM plays a key role, or where we can play a key role and add a lot of value: we understand that process of taking something from inception to scale and implementation in a secure, enterprise way. >> And I want to come back to that. So we talked about data as one of the superpowers, and AI; the third one is cloud. Again: it used to be processor speed, now it's data plus AI and cloud. Why is cloud important? Because cloud enables scale. There's so much innovation going on in cloud, but I want to talk about cloud 1.0 versus cloud 2.0; IBM talks about the new era of cloud. So what was cloud 1.0? It was largely lift and shift: taking a lot of applications and putting them in the public cloud. It was a lot of test and dev, a lot of startups who said, hey, I don't need to have IT (kind of like theCUBE, we have no IT). It's great for small companies, a great way to experiment and fail fast and pay by the drink. That was cloud 1.0. Cloud 2.0 is emerging as different: it's hybrid, it's multi-cloud, it's massively distributed systems, distributed data, on-prem and in many, many clouds, and it's a whole new way of looking at infrastructure and systems design. So, Steve, as you and I have talked about, it's programmable (the API economy), very low latency. We're going to talk more about what that means, but there's that concept of shipping code to data, wherever it lives, and making that cloud experience consistent across the entire infrastructure, no matter whether it's on-prem or in cloud A, B, or C. It's a complicated problem. >> It really is. And when you think about the
big challenge we started to run into when we were talking about cloud 1.0, it was shadow IT. Folks really wanted to be able to move faster, and they were taking data and actually copying it to these different locations to be able to use it simply and easily for themselves. Well, once you broke that mold, you started getting away from the security and the corporate governance that were required to make sure the business was safe; but following the rules slowed the business down, and that's why they continued to do it. In cloud 2.0 (and I like the way you positioned this), the fact is that I no longer want to move data around. Moving data within the infrastructure is the most expensive thing to do in the data center. If I can move code to where I need it, to work on it, to get my answers, to do my AI, to do my intelligent learning, that all of a sudden brings a lot more value, and a lot more speed; and speed is money. If I can get it done faster, it's more valuable. >> And people often talk about moving data, but you're right on: the last thing you want to do is move data. Just think about how long it took the first time you ever backed up your iPhone, and that's relatively small compared to all the data in a data center. There's another subtext here, from the standpoint of cloud 2.0, and it involves the edge. The edge is a new thing, and we have a belief inside of Wikibon and theCUBE, which we talk about all the time, that a lot of the inference is going to be done at the edge. What does that mean? It means you're going to have factory devices, autonomous vehicles, and medical-device equipment with intelligence in them, with new types of processors (and we'll talk about that). A lot of the inferences, the conclusions, will be made in real time; and by the way, these machines will be able to talk to each other, machine-to-machine communication, so no humans need to be involved to actually make a decision about where to turn, or what the next move should be on the factory floor. So again, a lot of the data is going to stay in place. Now, what does that mean for IBM? You still have an opportunity to have data hubs that collect that data, analyze it, maybe push it up to the cloud, develop models, iterate, and push them back down. But the edge is a fundamentally new type of approach that we've really not seen before, and it brings in a whole ton of new data. >> Yeah, that's a great point, and it's a market phenomenon that has moved, and is very rapidly moving, from smartphones to the enterprise. So your point is well taken. We talked earlier about compute now being proximal to the data, as opposed to the other way around, and the emergence of things like mesh networking and high-bandwidth local communications, peer-to-peer communications: it's not only changing the physical infrastructure model and the best practices around how to implement that infrastructure, it's also fundamentally changing the way you buy them, the way you consume them, the way you charge for them. That shift is having a ripple effect across our industry in every sense, whether from the financial perspective, the operational perspective, or the time-to-market perspective. We also talk a lot about industry transformation and the disruptors that show up in an industry (Uber being the most obvious example) and just gut an industry
from the bare metal and recreate it. They are able to do that because they've mastered this new environment where data is king. How you exploit that data, cost-effectively, repeatably, efficiently, is what differentiates you from the pack and allows you to create a brand-new business model that didn't exist prior. That's really where every other industry is going. >> You talked about those big five companies in North America that are the top companies now because of data. I often think about this: rewind 25 years. Do you think Amazon, when they built Amazon, really thought they were going to be in the food-service business, the video-surveillance business, the drone business, all these other businesses? The book business, right? Maybe the book business. But their architecture had to scale and change and evolve with where that's going, all around the data, because then they can use those data components in all these other places to get smarter and bigger and grow faster. That's why they're one of the top five. >> This is a really important point, especially for the young people in the audience. It used to be that if you were in an industry (healthcare, financial services, manufacturing), you were in that business for life. Every industry had its own stack: the sales, the marketing, the R&D; everything was wired to that industry, and that industry domain expertise was really not portable across businesses. Because of data and because of digital transformations, companies like Amazon can get into content, they can get into music, they can get into financial services, they can get into healthcare, they can get into grocery. It's all about that data model being portable across those industries; it's a very powerful concept. >> And IBM owns The Weather Company, right? So there are a million examples of traditional businesses that have developed ways to either enter new markets or expand their footprint in existing markets by leveraging new sources of data. Think about a retailer or a wholesale distributor: they have to forecast demand for goods as accurately as possible and make sure that, logistically, the goods are in the right place at the right time. There are a million factors that go into that: there's weather, there's population density, there are local cultural phenomena, all sorts of things that have to be taken into consideration. Previously that would be near impossible to do. Now I can sit down at my desk, again as an individual maker, and craft a model that consumes data from five readily available public APIs or data sets to enhance my forecast, and I can then create that model, execute it, and give it to my IT guy to go scale out. >> Okay, so I want to start getting into the infrastructure conversation. Again, remember the premise of this conversation: does infrastructure matter? We want to explore that. I'll start at the high level, with cloud, multi-cloud specifically. We said cloud 2.0 is about hybrid multi-cloud. I'm going to make a statement and you guys chime in. My assertion is that multi-cloud has largely been a symptom of multi-vendor shadow IT: different developers, different workloads, different lines of business saying, hey, we want to do stuff in the cloud. This has happened so many times in the IT business, and then: who's going to govern it, how is this going to be secure, who's got access control, on and on and on; what about compliance, what about
security? Then they throw it over to IT and say, hey, help us fix this. And so IT has said: look, we need a strategy around multi-cloud. It's horses for courses: maybe we go to cloud A for our collaboration software, cloud B for the cognitive stuff, cloud C for the cheap-and-deep storage; different workloads for different clouds. But there's got to be a strategy around that, so I think that's point number one, and IT is being asked to clean this stuff up. Today the clouds are loosely coupled: there may be a network that connects them, but there's not a really good way to take data, or rather to take code, ship it to data wherever it lives, and have it be consistent. You were talking about an enterprise data plane that's emerging; that's really where the opportunity is, and then you maybe move into the control plane and the management piece of it, and then bring in the edge. So envision this mesh of clouds, if you will, whether on-prem or in the public cloud or some kind of hybrid, where you can take metadata and code and ship it to wherever the data is, and leave the data there: ship something much smaller, say five megabytes of code, to a petabyte of data, as opposed to waiting three months to try to ship petabytes over the network; that's not going to work. So that's the spectrum of multi-cloud: loosely coupled today, going to this tightly coupled mesh. Your guys' thoughts on that? >> Yeah, that's a great point, and I would add to it, or expand it even further, to say that it's also driving fundamental behavioral and organizational challenges within a lot of organizations and large enterprises. This multi-cloud proliferation you spoke about has also done something we talk about, but probably not enough: it's almost created an inversion situation. In the past you'd have the business saying to IT: I need this supply-chain application, I need this vendor-relationship database, I need this order-processing system. Now, with the emergence of cloud, and how easy and cost-effective it is to consume, you've got the IT guys and the engineers and the designers and the architects and the data scientists pushing ideas to the business: hey, we can expand our footprint and our reach dramatically if we do this. So you get this much more bidirectional conversation happening now, which, frankly, a lot of traditional companies are still working their way through; that's why you don't see 100% cloud adoption. But it drives very productive, full-duplex conversations at a level we've never seen before. We encounter clients every day who are having these discussions, sitting down across the table, and IT doesn't just have a seat at the table; they are often driving the go-to-market strategy. So that's a really interesting transformation we see as well, in addition to the technology. >> So there are some amazing things happening, Steve, underneath the covers, in the plumbing and the infrastructure; and look, we think infrastructure matters, that's kind of why we're here, we're infrastructure guys. But I want to make a point: for decades this industry has marched to the cadence of Moore's law, the idea that you can double processing speeds every 18 months. Processors and disk drives followed that curve; you could plot it out. In the last ten years, that started to attenuate, so what happened is chip companies started putting more cores
>> There are some amazing things happening, Steve, underneath the covers, in the plumbing and the infrastructure. And look, we think infrastructure matters; that's kind of why we're here, we're infrastructure guys. But I want to make a point. For decades this industry marched to the cadence of Moore's law, the idea that you can double processing speeds every 18 months. Processors and disk drives followed that curve; you could plot it out. In the last ten years that started to attenuate, so chip companies started putting more cores onto the real estate. Well, they're running out of real estate now. So now we've seen this emergence of alternative processors, which largely came from mobile. You have ARM doing a lot of offload processing; a lot of the storage processing being offloaded runs on ARM processors. NVIDIA GPUs are powering a lot of AI. You're even seeing FPGAs, which are simple and easy to spin up, and ASICs are making a big comeback. So you've got these alternative processors powering things underneath, where the x86 sits, and of course applications still run on x86. That's one big change in infrastructure to support these distributed systems.

The other is flash. We saw flash basically take out spinning disk for all high-speed applications, and now we're seeing the elimination of SCSI, the protocol that sits between the disk and the rest of the network. That's going away. You're hearing about NVMe and RoCE and PCIe, basically allowing storage to talk directly to the processor. So now envision this multi-cloud system where you want to ship metadata and code anywhere: these high-speed capabilities, interconnects, and low-latency protocols are what set that up. So there's technology underneath this, and obviously IBM is an inventor of a lot of this stuff that is really going to power this next generation of workloads. Your comments?

>> I think that's all 100% true, and I think the one component we tend to gloss over, even within infrastructure, is the infrastructure software. There's the hardware; we've talked about a lot of hardware components that are definitely evolving to get us better, stronger, faster, more secure, more reliable. And then there's infrastructure software: not just the application databases, but software to manage all of this. In a hybrid multi-cloud world you've got multiple clouds, and for all practical purposes there's no way around it. Marketing gets more value out of the Google analytics tools in Google's cloud; developers get more value out of using the tools in AWS; they're going to continue to use them. At the end of the day, though, I as a business need to be able to extract the value from all of those things, to make better business decisions, move faster, and serve my clients better. There's hardware that's going to help me accomplish that, and then there's software for managing that whole constellation of components, so that I can maximize the value across the entire stack. And that stack is multiple clouds: internal clouds, external clouds, everything.

>> Great point, and you're seeing clear examples of companies investing in custom hardware: Google has its own chip, Amazon its own chip, IBM's got POWER9, on and on. But none of this stuff works if you can't manage it. We talked before about programmable infrastructure, about the data plane and the control plane: the software that's going to allow us to actually manage these multiple clouds as at least a quasi-single entity, something like a logical entity, certainly within workload classes and, in Nirvana, across the entire network.
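For a sense of why removing the SCSI stack matters, the rough, order-of-magnitude access latencies below are typical published ballparks rather than benchmarks; treat the figures as illustrative assumptions:

    # Order-of-magnitude access latencies; typical published ballparks,
    # not benchmarks. Treat the figures as illustrative assumptions.
    access_latency_us = {
        "spinning disk (seek + rotation)":     5000,  # several milliseconds
        "SATA/SAS SSD through the SCSI stack":  100,
        "local NVMe flash over PCIe":            20,
        "remote flash over NVMe-oF / RoCE":      30,  # ~local NVMe plus ~10 us of network
    }
    for tier, microseconds in access_latency_us.items():
        print(f"{tier:40s} ~{microseconds:>5} us")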
>> Well, and the principal driver of that evolution, of course, is containerization. With our acquisition of Red Hat, we're now very keenly aware of and acutely plugged into the whole containerization phenomenon, which is great. You're seeing containerization become the vernacular spoken across multiple types of reference architectures and use-case environments with vastly different characteristics, whether high-throughput, low-latency, large-volume, edge-specific, or more consolidated hub-and-spoke models. Containerization is becoming the standard by which those architectures are developed and deployed. So we think we're very well positioned, working with that emerging and rapidly developing trend, to instrument it in a way that makes it easier to deploy and easier to develop.

>> That's key, and I want to focus on the relevance of IBM in a minute. But one thing we understand, thinking back to the original point about moving data being very expensive: you want to move the code out to the data, and with containers and microservices all of that gets a lot easier. Development becomes a lot faster, and you're actually pushing the speed of business. And the other key point: as you move the code to the data and run applications wherever the data lives, using containers, Kubernetes, and so on, you don't have to retest for every environment; it's going to run, assuming you have the standard infrastructure in place and the software to manage it. That's huge, because it means business agility, better quality, and speed.
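As a sketch of that code-to-data pattern, the snippet below uses the Kubernetes Python client to submit a small containerized job to a cluster co-located with the data, rather than moving the data; the context, image, namespace, and volume-claim names are all hypothetical:

    # Minimal sketch of "ship the code to the data": submit a small
    # containerized job to a Kubernetes cluster that sits next to the data
    # set. Context, image, PVC, and namespace names are hypothetical.
    from kubernetes import client, config

    config.load_kube_config(context="cluster-near-the-data")

    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="fraud-scoring"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(
                        name="scorer",
                        image="registry.example.com/fraud-scorer:1.0",  # a few MB of code
                        args=["--input", "/data/transactions"],
                        volume_mounts=[client.V1VolumeMount(
                            name="data", mount_path="/data", read_only=True)],
                    )],
                    volumes=[client.V1Volume(
                        name="data",
                        persistent_volume_claim=(
                            client.V1PersistentVolumeClaimVolumeSource(
                                claim_name="petabyte-store")),
                    )],
                )
            )
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="analytics", body=job)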
>> All right, let's talk about IBM. The world is complex; this stuff is not trivial. The more clouds we have, the more edge we have, the more data we have, the more complexity, and IBM happens to be very good at complex. Three components of the innovation cocktail: data, AI, and cloud. Data: your customers have a lot of data, and you guys are good with data; it's a very strong analytics business. Artificial intelligence: you've invested a lot in Watson, and that's a key component of the business. And cloud: you have a cloud, and it's not designed to knock heads in the race to zero with the cheap-and-deep storage clouds; it's designed to really run workloads and applications. So you've got all three ingredients, you're going hard after the multi-cloud world, and you've got infrastructure underneath: hardware and software to manage that infrastructure, all the modern stuff we've talked about. That's what's going to power customers' digital transformations, and we'll talk about that in a moment. But maybe you could expand on IBM's relevance.

>> Sure. So again, using the maker-economy metaphor: bridging from that individual level of innovation, creativity, and development to a broadly distributed, globally available workload or information source is about scale and reach. How do you scale it so that it runs effectively and optimally, is easily managed, looks and feels the same, and falls under a common umbrella of services? And how do you get it to as many endpoints as possible, whether individuals or entities or agencies? Scale and reach: IBM is all about scale and reach; that's kind of our stock in trade. We are able to take solutions from a small departmental or skunkworks level and make them large, secure, repeatable, easily managed services, and make them as turnkey as possible. Our services organization has been doing it for decades, exceptionally well, and our product portfolio supports it. You talk about Watson and the cognitive computing story: we've been a thought leader in this space for decades. We didn't just arrive on the scene two years ago, when machine learning and deep learning and IoT started to become prominent, and say, this sounds interesting, we're going to plant our flag here. We've been there for a long time. From an infrastructure perspective, I like to use the analogy that our technology ethos is built on AI, on cognitive computing and adaptive computing. Every one of our portfolio products is imbued with that same capability, and we use it internally; we're built from AI, for AI.

>> So maybe that's the answer to this question: what do you say when somebody says, well, I want to buy my flash storage from Pure, my database from Oracle, my Intel servers from Dell, whatever; I want control, and I'm going to build it myself? Do you get that a lot, and how do you respond?

>> I think this whole new data economy has opened up a lot of places for data to be stored, but at the end of the day it really comes down to management. One of the things I was thinking about as you two were conversing is the enterprise-class need for things like security and protection, the things that round out the software stack in our portfolio. Sure, you can go buy piece parts and components from different people, and in that whole notion of fail fast you can get some new things that might be a little bit faster, that might be there first. But one of the things IBM takes a lot of pride in is the quality of its delivery, of both hardware and software. So to me, even though the infrastructure matters quite a bit, the question is how much, and to what degree. When you look at our core clients, the Global 2,000, they want to fail fast, but they want to fail fast securely; they want to fail fast and make sure they're protected; they want to fail fast and make sure they're not accidentally giving away the keys to the kingdom. At the end of the day, a lot of the large clients we have need to protect their IP, their brain trust, but they also need the flexibility to be creative and to build new applications that win new customer bases. So the way I look at it when I talk to clients is: we want to give them that, while also making sure they're protected.
>> That said, I would just add, and that's a 100% accurate depiction: the data economy is really changing not only the way infrastructure is deployed and designed but the way it can be. It's opening up possibilities that didn't exist, and new ones crop up every day. To your point, if you want to go best-of-breed, or you want a solution that includes multiple vendors, that's OK. The whole idea of using containerization, thinking about Kubernetes and Docker for instance, as a protocol or platform standard across heterogeneous hardware: that's fine; we will still support that environment. We believe there are significant additive advantages to looking at IBM as a full-stack solution provider, and our largest, most mission-critical application clients are doing that, so we think we can tell a pretty compelling story. And I would finally add that in the journey from the maker to the broadly deployed, enterprise-class workload, there are a lot of pitfalls along the way, and there are companies that will occasionally bump into one of them and come back six months later saying, OK, we encountered some scalability issues, some security issues; let's talk about how we can develop a new architecture that solves those problems without sacrificing any of our advanced capabilities.

>> All right, let's talk about what this means for customers. Everybody talks about digital transformation and digital business; what's the difference between a business and a digital business? It's how they use data, how they leverage data to become one of those incumbent disruptors, to use Ginni's term. You've got to have modern infrastructure if you want to build this multi-cloud connection point, this enterprise data pipeline, to use your term, Randy. You've got to have modern infrastructure that's low-latency, that lets me ship code to data, that lets me run applications anywhere and leave the data in place, including at the edge, and really close that gap between the top-five data-value companies and everyone else. The other piece is that you don't want to waste a lot of time and money managing infrastructure. You've got to have intelligent, modern infrastructure, and you've got to redeploy those labor assets toward higher-value, more productive activities for the company. We all know IT labor is a choke point; we spend more on IT labor managing LUNs, provisioning servers, tuning databases, all that stuff. That's got to change in order to fund digital transformations. So to me that's the big takeaway for customers.

>> We talk about that all the time, specifically in the context of the enterprise data pipeline and, within that pipeline, the newer-generation machine learning, deep learning, and cognitive workload phases. The data scientists involved at various stages along the process are scarce resources, and they're very expensive, so you can't afford for them to be burning cycles managing environments, spinning up VMs, moving data around, creating working sets, and enriching metadata. That's not the best use of their time. So we've developed a portfolio of solutions specifically designed to optimize them as a very valuable resource. I would vehemently agree with your premise.
>> We talk about the rise of the infrastructure developer, and I'm glad you brought this topic up, because it's not just customers, it's personas. IBM talks to different personas within our client base and our prospect base about why this infrastructure is important to them, and one of the core components is skills. When we talk about the rise of the infrastructure developer, what we mean is: I need to be able to build composable, intelligent, programmatic infrastructure that IT can set up without taking on a lot of breakage risk or a lot of troubleshooting, and then turn the keys over to the users. Let them use the infrastructure in a way that helps them get their jobs done better, faster, stronger, while still keeping the business protected. So don't make copies into production and break things there; but if you want to make a copy of the data, feel free, go ahead, and put it in a place that's safe and secure, where it won't get stolen and it won't bring down the enterprise while it's trying to do its business. Those are very key components. We talk about AI-infused data protection and AI-infused storage; at the end of the day, what is an AI-infused data center? It needs to be an intelligent data center that I don't have to spend a lot of time running. The new IT person doesn't want to be troubleshooting all day long; they want to be looking at things like ARM and NVMe and asking what those are going to do for the business, to make it more competitive. That's where IT wants to be focused.

>> Yeah, and just to build on this whole idea: we haven't talked a lot about it, but there's obviously a cost element to all this. Enterprises are still very cost-conscious; they're still trying to manage budgets, and they don't have an unlimited amount of capital. So things like fractional consumption matter: pay by the drink, buy small bits of infrastructure and deploy them as you need them. And, to Steve's point, and this is really Steve's area of expertise, where he's a leader, so does data efficiency. You can't afford copy sprawl, excessive data movement, poor protection schemes, slow recovery and recall times. Especially as data volumes ramp geometrically, the efficiency piece and the cost piece are absolutely relevant, and that's another one of the things that often gets lost in translation between the maker level and the deployment level.

>> So I want to do a little thought exercise, for those of you who think this is all bromide. Cloud 2.0 is about moving from a world of cloud services to one where you have this ubiquitous mesh of digital services; you talked about intelligence, Steve, the intelligent data center. What digital services am I talking about? AI, blockchain, 3D printing, autonomous vehicles, edge computing, quantum, RPA, and all the other things on the Gartner hype cycle. You'll be able to procure these as services; they're part of this mesh. So here's the thought exercise: when do you think owning and driving your own vehicle will no longer be the norm?

>> Interesting question. Why do you ask?

>> Because these are some of the disruptions, and the questions are designed to get you thinking about the potential disruptions. Is it possible that our children's children aren't going to be driving their own cars? It's a cultural change; when I was 16 I couldn't wait. But you've started to see a shift: quasi-autonomous vehicles are all the rage. Personally, I don't think they're quite ready yet, but it's on the horizon. OK, I'll give you another one: when will machines be able to make better diagnoses than doctors?
>> Actually, both of those are interesting, so let's hit autonomous and self-driving vehicles first. I agree they're not there yet, but I will say that we have a pretty thriving business practice and competency around working with ADAS providers. There's an interesting perception that there are, OK, ten autonomous-driving projects around the world; maybe there are ten headline ADAS projects around the world. What people often don't see is that there is a gigantic ecosystem building around ADAS: all the data sourcing, all the telemetry, all the hardware, all the network support, all the services. What's building around this is phenomenal, and it's growing at a ridiculous rate. We're very hooked into that, and we see tremendous growth opportunities there. If I had to guess, I would say that within 10 to 12 years there will be functionally capable, viable autonomous vehicles. Not everywhere, but you will be able, as a consumer, to purchase one.

>> That's good, and I agree the timeline is not within the next three to five years. All right, how about retail stores: will retail stores largely disappear?

>> Randy and I were just someplace the other day, and I said there used to be a brick-and-mortar store there. We were walking through the CambridgeSide Galleria, and now the third floor has no more stores; it's going to be all offices, and they've shrunk down to just two floors of stores. And I strongly believe it's because the online retailers are doing so well. Think about it: it used to be tricky; how do I get in and out of Walmart in a minute? Now I go to Amazon. And look at places like Bombas, or Casper, or all the luggage players: all these little individual boutiques selling online, selling quickly, never having to open a store. Speed of deployment, speed of product; it's phenomenal.

>> And frankly, Amazon is investing billions of dollars trying to solve the last-mile problem. If Amazon could figure out a way to deliver ninety-five percent of its product catalog Prime within four to six hours, brick-and-mortar stores would literally disappear within a month, and I think that's a factual statement.

>> OK, give me another one: will traditional banks lose control of the payment systems? You see that banks are smart; they're buying up fintech companies. But these are entrenched players.

>> That's another one with an interesting philosophical element to it, and some of it is generational. Our parents' generation would be horrified by the thought of taking a picture of a check, or using blockchain, or some kind of fintech coin.

>> So Bitcoin: my dad asks me about it; I don't own any. We're waiting it out; it's fine.

>> By the way, to the previous comment, I just wanted to mention that we don't hang out in the mall; it's actually right across from our office. But there is a philosophical piece to it. Our generation is fairly comfortable now, because we've grown up, in a sense, with these technologies being adopted. For our children, the concept of going to a bank will be foreign; they'll have no context for the process of going to speak face to face with another human. It just won't exist.
>> Well, will automation, whether it's robotic process automation, other automation, or 3D printing, begin to swing the pendulum back to onshore manufacturing? Maybe tariffs will help too. But again, the idea is that machine intelligence will increasingly disrupt businesses; there's no industry that's safe from disruption, because of the data context we talked about before.

>> IBM loves to use big words like transformation and agile, and as a sales rep in the field you're trying to think about, OK, what does that mean for me to explain to my customer? So Randy and I put together this whole thing about what transformation means; one example was the taxi service, and another was retail. You're hitting on all the core things, but this transformation goes so deep and so wide. Think about exactly what Randy said before about Uber transforming the taxi business, and now retailers and hotel chains. And that's the know-your-customer piece: they're getting all of that from data, data that I'm putting in, not data they're doing work to extract out of me. So that autonomous vehicle comes to pick up Steve Kenniston; it knows Steve likes iced coffee on his way to work, it gives me a coupon on a screen, I hit the button, and it automatically stops at Starbucks and pre-orders it for me. You're talking about that whole ecosystem wrapped around autonomous vehicles and data. It's unbelievable.

>> We're not far off from the Minority Report era of advertising targeted at an individual in real time; that's going to happen, and it's almost there now. To your point, if I walk into Starbucks, my phone says, hey, why don't you use some points while you're here, Randy. And facial recognition: it's all coming together. And again, underneath all this is infrastructure, so infrastructure clearly matters. If you don't have the infrastructure to power these new workloads, you're done.

>> I would just add, and I think we're all in agreement on that: from an IBM perspective, through my eyes, I would say we're increasingly being viewed as an arms dealer, and that's probably a horrible analogy, but we're being viewed as a supplier to the providers of those services. We provide the raw materials and the machinery and the tooling that enable those innovators to create those new services, and to do it quickly, securely, reliably, repeatably, at a reasonable cost. It's a step back from direct engagement with consumers and clients and architects, but that's where our whole industry is going. We are increasingly more abstracted from the end consumer; we're dealing with the assemblers: they take the pieces, assemble them, and deliver the services. So we're not as often doing the assembly as we are providing the raw materials.

>> Guys, great conversation; I think we set a record. It tends to be like that. Thank you very much.

>> No problem. This was great.

>> And thank you so much for watching, everybody. We'll see you next time. You're watching theCUBE.

Published Date : Aug 8 2019

Joe Donahue, Hal Stern & Derek Seymour | AWS Executive Summit 2018



>> Live from Las Vegas, it's theCUBE! Covering the AWS Accenture Executive Summit. Brought to you by Accenture. >> Welcome back everyone to theCUBE's live coverage of the AWS Executive Summit here in Las Vegas. I'm your host, Rebecca Knight. We have three guests for this segment. We have Joe Donahue, managing director at Accenture. Hal Stern, AVP, IT Engineering Merck Research Labs. And Derek Seymour, Global Partner Leader Industry Verticals at AWS. Thank you so much for coming on theCUBE. >> Thank you! >> So, we're talking today about a new informatics research platform in the pharmaceutical/medical research industry. Will you paint a picture for us right now, Joe, of what it's like today: the time frame of medical research we're thinking about, the clunkiness of it all. >> Yeah, so it's a great question, Rebecca. Drug discovery today generally takes more than a decade, it costs billions of dollars, and it has failure rates in excess of 90%. So it's not an exact science; we're generating more and more data, and at the same time our understanding of human disease biology continues to increase. These metrics haven't really changed: if you look back at the last couple of decades, it's a 10-year-plus process and that much money. So we're looking for ways that we can apply technology to really improve the odds of discovering a new drug that could help patients sooner and faster. >> And that will ultimately save lives. So it's a real social problem, a real problem. Why a platform for this? >> I think if you look at basic research, and you talk about basic life sciences research, the lingua franca there is chemistry and biology. And we still don't really understand all the mechanisms of action that lead to chronic disease, or to the specific diseases we're interested in. So very much, research is driven by the scientific method: you formulate a hypothesis based on some data, you run an experiment, you collect the data, you analyze it, and you start over again. So your ability to cycle your data through that discovery process is absolutely critical. The problem is that we buy a lot of applications, and the applications were not designed to be able to interchange data freely. There is no platform in the sense that you have one on your phone, or on your server operating system, where things were designed with a fairly small set of standards that say: this is how you share data, this is how you represent it, this is how you access it. Instead we have these very top-to-bottom integrated applications that, quite honestly, work together through a variety of copy-and-paste mechanisms, sometimes quite literally copy and paste. And our goal in producing a platform is, first, to separate data from the applications to allow it to flow more freely around that cycle, the basic scientific method. Number two, to start to allow component substitution, so we'll actually start to encourage more innovation in the space, bring in some of the new players, and make it easier to bring in new ideas, whether those are better ways of analyzing the data or better ways of helping shape, formulate, and curate those hypotheses. And finally, there are a lot of parts of this that are fairly common; they're what we call pre-competitive. Everybody has to do them. Everybody has to store data, everybody has to get lab instrument information, everybody has to be able to capture assay information.
It's very hard to do it better than one of your competitors, so we should all just do it the same way. You see this happen in the cable industry, and in a variety of other industries, where there are industry standards for how you accomplish basic commoditized things, and we haven't really had that. So one of the goals is: let's just sit down and find the first things to commoditize, and go get the economic advantage of being able to buy them, as opposed to having to build them bespoke each time. >> So this pre-competitive element is really important. Derek, can you talk a little bit about how this platform in particular operates? >> Certainly. Our goal collectively as partners is to help pharma companies and researchers improve their efficiency and effectiveness in the drug discovery process. So the platform that we built brings together content, services, and data from the pharma companies in a way that allows the researchers greater access to share that information, to do analysis, and to spend their time on researching the data and using their science, and less on the work of managing an IT environment. So in that way we can both elevate their work and also take away what we at AWS call the undifferentiated heavy lifting of managing an IT environment. >> So you're doing the heavy lifting behind the scenes, so that the researchers themselves can do what they do, which is focus on the science. So what have we seen so far? What kind of outcomes are we seeing, particularly because it is in this pre-competitive time? >> Well, we've just really started, but we're getting a lot of excitement. Merck obviously is our first client, but our intent is that we'll have other pharmaceutical and biotech companies coming on board. And right now we've effectively started to create this two-sided marketplace, with pharma and biotech companies on one side and the key technology providers and content providers on the other side. We've effectively created that environment where the technology companies can plug in their secret sauce via standardized APIs and microservices, and then the pharmaceutical and biotech companies can leverage those capabilities as part of this industry-standard open platform that we're co-creating. And so far we've started that process; the results are really encouraging. And the key thing is really twofold. Get the word out there: we're doing that today, talking to other pharmaceutical and biotech companies, as well as not only the established technology providers in this space but also the newcomers. Because this type of infrastructure, this type of platform, will enable the new, innovative companies, the startup companies, to enter a market that traditionally has been very challenging to get into, because there's so much data and so much legacy infrastructure. We're creating a mechanism by which pharmaceutical researchers can take advantage of new technologies faster: for example, the latest algorithms in artificial intelligence and machine learning to analyze all of this diverse data that's being generated.
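The interview doesn't describe the platform's actual interfaces, so the following is a purely hypothetical sketch of the "plug in via standardized APIs and microservices" idea: a vendor microservice registering a chemical entity through a shared, pre-competitive endpoint. Every URL, field, and token here is invented for illustration.

    # Hypothetical sketch only: the platform's real API is not described in
    # this interview, so the endpoint, fields, and token below are invented.
    import requests

    BASE = "https://platform.example.com/api/v1"   # hypothetical endpoint
    headers = {"Authorization": "Bearer <tenant-scoped-token>"}

    # A vendor microservice registering a chemical entity through a shared,
    # pre-competitive service instead of a bespoke per-company system.
    resp = requests.post(
        f"{BASE}/entities",
        headers=headers,
        json={
            "type": "small_molecule",
            "smiles": "CC(=O)OC1=CC=CC=C1C(=O)O",  # aspirin, as an example structure
            "source": "vendor-secret-sauce-service",
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["entity_id"])                # platform-assigned identifier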
>> So that's for the startups, and that's sort of the promise of this kind of platform approach. But what about for a Merck, an established player in this? What kinds of things are you feeling and seeing inside the company? >> You think about this efficient frontier of what it costs us to run the underlying technology systems that are foundational to our science. And you think about it: there are some things we do which are highly commoditized, and we want them to be very efficient. And some things we do which are very highly specialized and highly competitive, and it's OK if they're less efficient; you want to invest your money there. You really want to invest more in things that are going to drive you a unique competitive advantage, and less in the things that are highly commoditized. The example I use frequently is: you could go out and buy a barrel of oil, bring it home, refine it in your backyard, and make your own gasoline. It's not recommended. It's messy, it really annoys the neighbors, especially when it goes wrong, and it's not nearly as cost-effective or as convenient as driving over to Exxon Mobil and filling up at the pump; if you're in New Jersey, having someone else even pump it for you. That's kind of the environment we're in right now, where we're refining that barrel of oil for every single application we have. So in doing this, we start to establish the baseline of really thinking about refactoring our core applications into those things which can be driven by the economics of the commodity platform and those things which are going to give us unique advantage. We will see things, I think, like improved adoption of data standards. We're going to see a lower barrier to entry for new applications, for new ideas. We're also going to see a lower barrier to exit: it'll be easier for us to adopt new ideas, or to change or to substitute components, because they really are built as part of a platform. And you see this if you look at the things that have, over time, sedimented into AWS. It's been a remarkable story: things that basically rested on a POSIX file system all of a sudden turned into a seamless database. By sedimenting well-defined open-source projects, we would like to see some of the same thing happen, where some of the core things we have to do, entity registration, assay data capture, data management, become part of the platform. It's really hard to register an entity better than your competitor. What you do with it, how you describe what you're registering, how you capture intellectual property, how it drives your next invention: completely bespoke, completely highly competitive. I'm going to keep that. But the underlying mechanics of it, to me, are file system stuff, database stuff. We should leverage the economics of our industry, and leverage technology as an ingredient. Chemistry and biology are the top-level brand; technology is an ingredient brand, and we should really use the best ingredients we can. >> When you're hearing this conversation, so related to life sciences, medical, and bio/pharma research, what are the best practices that have emerged in terms of the way life sciences approaches its platform, and how can they be applied to other industries? >> What we've seen through the early collaboration with Merck and with Accenture is that by bringing together these items in a secure, multi-tenant environment, managed by Accenture, run by AWS, we can put those tools in the hands of the researchers. We can provide them with workflow, data analytics, and reporting capabilities, to cover the areas that Hal is talking about, so that they can elevate the work that they are doing. Over time, we expect to bring in more components.
The application, the platform, will become more feature-rich as we add additional third parties. And that's a key element in life science: the science itself, while it may take place in (mumbles), is a considerable collaboration across a number of research institutes, both within the pharma and biotech community. Having this infrastructure in place, where those companies and the researchers can come together in a secure manner, is something we're very proud to be supportive of. >> So Joe, we started this conversation with you describing the state of medical research today. Can you describe what you think it will be 10 years from now, as more pharmaceutical companies adopt this platform approach? And we're talking about the Mercks of the world, but then also those hungry startups. >> Sure, I think we're starting to see that transition actually happen now. And I think it's the recognition, and you start to hear it as some of the pharmaceutical CEOs talk about their business and the transformation. They've always talked about the science; they've always talked about the research. Now they're talking about data and informatics, and they're realizing that being a pharmaceutical company is not just about the science; it's about the data, and you have to be as good and as efficient on the informatics and IT side as you are on the science side. And that's the transition that we're going through right now. In 10 years, where we all hope we should be is leveraging modern computing architectures and existing platform technology to let the organizations focus on what's really important: the science, and the data that they generate, for the benefit of potentially saving patients' lives in the future. >> So not only focusing on their core competencies, but that also means drug discovery will be quicker, and failure rates will go down. >> Even a 10 or 20% improvement in failure rates would be incredibly dramatic for the industry. >> And could save millions of lives. And improve lives and outcomes. Great, well thank you all so much for coming on theCUBE. It's been a really fun and interesting conversation. >> Same here, thank you Rebecca. >> Thank you, thank you. >> Thank you. >> I'm Rebecca Knight. We will have more of the AWS Executive Summit and theCUBE's live coverage coming up in just a little bit. (upbeat music)

Published Date : Nov 29 2018
