
Search Results for Neurons:

Srinivas Mukkamala & David Shepherd | Ivanti


 

(gentle music) >> Announcer: "theCube's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) (logo whooshing) >> Hey, everyone, welcome back to "theCube's" coverage of day one, MWC23 live from Barcelona, Lisa Martin here with Dave Vellante. Dave, we've got some great conversations so far. This is the biggest, most packed show I've been to in years. About 80,000 people here so far. >> Yeah, down from its peak of 108, but still pretty good. You know, a lot of folks from China come to this show, but with the COVID situation in China, that's impacted the attendance, but still quite amazing. >> Amazing for sure. We're going to be talking about trends and mobility, and all sorts of great things. We have a couple of guests joining us for the first time on "theCUBE." Please welcome Dr. Srinivas Mukkamala, or Sri, chief product officer at Ivanti. And Dave Shepherd, VP at Ivanti. Guys, welcome to "theCUBE." Great to have you here. >> Thank you. >> So, day one of the conference, Sri, we'll go to you first. Talk about some of the trends that you're seeing in mobility. Obviously, the conference renamed from Mobile World Congress to MWC, mobility being part of it, but what are some of the big trends? >> It's interesting, right? I mean, I was catching up with Dave. The first thing is, from the keynotes, it took 45 minutes to talk about security. I mean, it's quite interesting when you look at the show floor. We're talking about Edge, we're talking about 5G, the whole evolution. And there's also the concept of, are we going into the Cloud? Are we coming back from the Cloud, back to the Edge? They're really two different things. Edge is all about decentralized compute. And one thing I observed here is they're talking about near real-time reality. When you look at automobiles, when you look at medical, when you look at robotics, you can't have things processed in the Cloud. It'll be too late. Because you've got to make millisecond-based decisions. That's a big trend for me. When I look at stuff... Okay, the compute it takes to process in the Cloud versus what needs to happen on-prem, on device, is going to revolutionize the way we think about mobility. >> Revolutionize. David, what are some of the things that you're seeing? Do you concur? >> Yeah, 100%. I mean, look, just reading some of the press recently, they're predicting 22 billion IoT devices by 2024. Everything Sri just talked about there. It's growing exponentially. You know, problems we have today are a snapshot. We're probably in the slowest place we are today. Everything's just going to get faster and faster and faster. So it's a, yeah, 100% concur with that. >> You know, Sri, on your point, so Jose Maria Alvarez, the CEO of Telefonica, said there are three pillars of the future of telco: low latency, programmable networks, and Cloud and Edge. So, as to your point, Cloud and low latency haven't gone hand in hand. But the Cloud guys are saying, "All right, we're going to bring the Cloud to the Edge." That's sort of an interesting dynamic. We're going to bypass them. We heard somebody, another speaker say, "You know, Cloud can't do it alone." You know? (chuckles) And so, it's like these worlds need each other in a way, don't they? >> Definitely right. So that's a fantastic way to look at it. The Cloud guys can say, "We're going to come closer to where the compute is." And if you really take a look at it with data localization, where are we going to put the Cloud in, right?
I mean, so the data sovereignty becomes a very interesting thing. The localization becomes a very interesting thing. And when it comes to security, it gets completely different. I mean, we talked about moving everything to a centralized compute, really have massive processing, and give you the answer back wherever you are. Whereas when you're localized, I have to process everything within the local environment. So there's already a conflict right there. How are we going to address that? >> Yeah. So another statement, I think it was the CEO of Ericsson, he was kind of talking about the OTT guys, saying, "We can't let that happen again. And we're going to find new ways to charge for the network." Basically, he's talking about monetizing the API access. But I'm interested in what you're hearing from customers, right? 'Cause our mindset is, what value are you going to give to customers that they're going to pay for, versus, "I got this data I'm going to charge developers for." But what are you hearing from customers? >> It's amazing, Dave, the way you're looking at it, right? So if we take a look at it, we were used to perpetual, and we said we're going to move to a subscription, right? I mean, everybody talks about the subscription economy. Telcos, on the other hand, had a subscription economy for a long time, right? They were always based on usage, right? It's a usage economy. But today, we are basically realizing, on compute, we haven't even started charging for compute. If you go to AWS, go to Azure, go to GCP, they still don't quite charge you for actual compute, right? It's kind of, they're still leaning on it. So think about API-based charging; we're going to break the bank. What people don't realize is, we do millions of API calls for any high-transaction environment. A consumer can't afford that. What people don't realize is... I don't know how you're going to monetize. Even if you charge a cent a call, that is still going to be hundreds and thousands of dollars a day. And that's where, if you look at what you call the low-code no-code motion, you see a plethora of companies being built on that. They're saying, "Hey, you don't have to write code. I'll give you authentication as a service. What that means is, every single time you call my API to authenticate a user, I'm going to charge you." So just imagine how many times we authenticate on a single day. You're talking a few dozen times. And if I have to pay every single time I authenticate... >> Real friction in the marketplace, David. >> Yeah, and I tell you what. It's a big topic, right? And it's a topic that we haven't had to deal with at the Edge before, and we hear it probably daily really, complexity. The complexity's growing all the time. That means that we need to start to get insight, visibility. You know? I think a part of... Something that came out of the EU actually this week stated, you know, there's a cyber attack every 11 seconds. That's fast, right? 2016, that was 40 seconds. So actually that speed I talked about earlier, everything Sri says that's coming down to the Edge, we want to embrace the Edge, and that is the way we're going to move. But customers are mindful of the complexity that's involved in that. And that, you know, lends thought to how we are going to deal with those complexities. >> I was just going to ask you, how are you planning to deal with those complexities? You mentioned one ransomware attack every 11 seconds. That's down considerably from just a few years ago. Ransomware is a household word.
It's no longer, "Are we going to get attacked?" It's when, it's to what extent, it's how much. So how is Ivanti helping customers deal with some of the complexities, and the changes in the security landscape? >> Yeah. Shall I start on that one first? Yeah, look, we want to give all our customers and prospective customers full visibility of their environment. You know, devices that are attached to the environment. Where are they? What are they doing? How often are we going to look for those devices? Not only when we find those devices. What applications are they running? Are those applications secure? How are we going to manage those applications moving forward? And overall, wrapping it round, what kind of service are we going to do? What processes are we going to put in place? To Sri's point, the low-code no-code angle. How do we build processes that protect our organization? But probably a point where I'll pass to Sri in a moment is, how do we add a level of automation to that? How do we add a level of intelligence that doesn't always require a human to be fixing or remediating a problem? >> So, Sri, you mentioned... You're right, the keynote, it took 45 minutes before it even mentioned security. And I suppose it's because they've, historically, had this hardened stack. Everything's controlled and it's a safe environment. And now that's changing. So what would you add? >> You know, great point, right? If you look at telcos, they're used to a perimeter-based network. >> Yep. >> I mean, that's what we were: boxed in, we knew our perimeter. Today, our perimeter is extended to our home, to everywhere we work, right? >> Yeah- >> We don't have a definition of a perimeter. Your browser is the new perimeter. And a good example, segueing to that, what we have seen is horizontal-based security. What we haven't seen is verticalization, especially in mobile. We haven't seen vertical mobile security solutions, right? Yes, you hear a little bit about automobile, you hear a little bit about healthcare, but what we haven't seen is, what about the food sector? What about the frontline in food? What about supply chain? What security are we really doing? And I'll give you a simple example. You brought up ransomware. Last night, Dole was attacked with ransomware. We have seen the beef producer, the Colonial Pipeline. Now, if we have seen agritech being hit, what does it mean? We are starting to hit humanity. If you can't really put food on the table, you're starting to really disrupt the supply chain, right? In a massive way. So you've got to start thinking about that. Why is Dole related to mobility? Think about that. They don't carry servers and computers. What they carry is mobile devices. That's where the supply chain works. And then that's where you have to start thinking about it. And the evolution of ransomware, rather than a single-trick pony, you see them using multiple vulnerabilities. And Pegasus was the best example. Spyware across all politicians, right? And CEOs. It is six or seven vulnerabilities put together that were actually constructed to do an attack. >> Yeah. How does AI kind of change this? Where does it fit in? The attackers are going to have AI, but we could use AI to defend. But attackers are always ahead, right? (chuckles) So what's your... Do you have a point of view on that? 'Cause everybody's crazy about ChatGPT, right? The banks have all banned it. Certain universities in the United States have banned it. Another one's forcing its students to learn how to use ChatGPT to prompt it.
It's all over the place. You have a point of view on this? >> So definitely, Dave, it's a great point. First, we all have to have our own generative AI. I mean, I look at it as your digital assistant, right? So when you had calculators, you can't function without a calculator today. It's not harmful. It's not going to take you away from doing multiplication, right? So we'll still teach arithmetic in school. You'll still use your calculator. So to me, AI will become an integral part. That's one beautiful thing I've seen on the show floor. For every little thing, there is an AI-based solution I've seen, right? So ChatGPT is well played from multiple perspectives. I would rather up-level it and say generative AI is the way to go. So there are three things. There is human-intensive triaging, where humans keep doing easy work, minimal work. You can use ML and AI to do that. There is human design work that you need to do. That's when you need to use AI. >> But, I would say this, in the enterprise, that the quality of the AI has to be better than what we've seen so far out of ChatGPT, even though I love ChatGPT, it's amazing. But what we've seen from being... It's got to be... Is it true that... Don't you think it has to be cleaner, more accurate? It can't make up stuff. If I'm going to be automating my network with AI. >> I'll answer that question. It comes down to three fundamentals. The reason ChatGPT is giving those answers is it's not trained on the latest data. So for any AI and ML method, you've got to look at three things. It's your data, it's your domain expertise, who is training it, and your data model. In ChatGPT, it's older data, it's biased to the people that trained it, right? >> Mm-hmm. >> And then, the data model is, it's going to spit out what it's trained on. That's a precursor of any GPT, right? It's a pre-trained transformer. >> So if we narrow that, right? Train it better for the specific use case, that AI has huge potential. >> You flip that to what the enterprise customers talk to us about, which is, insight is invaluable. >> Right. >> But then too much insight too quickly all the time means we go remediation crazy. So we haven't got enough humans to be fixing all the problems. Sri's point with the ChatGPT data, some of that data we are looking at there could be old. So we're trying to triage something that may still be an issue, but it might have been superseded by something else as well. So that's my overriding point when I'm talking to customers and we talk ChatGPT; it's in the news all the time. It's very topical. >> It's fun. >> It is. I even said to my 13-year-old son yesterday, your homework's out of date. 'Cause I knew he was doing some summary stuff on ChatGPT. So, a little wind-up that it's out of date, just to make that emphasis around the model. And that's where we, with our Neurons platform at Ivanti, that's what we want to give the customers all the time, which is the real-time snapshot. So they can make a priority or a decision based on what that information is telling them. >> And we've kind of learned, I think, over the last couple of years, that access to real-time data, real-time AI, is no longer a nice-to-have. It's a massive competitive advantage for organizations, and it's going to enable the on-demand everything that we expect in our consumer lives, in our business lives. This is going to be table stakes for organizations, I think, in every industry going forward. >> Yeah. >> But that assumes 5G, right? Is going to actually happen and somebody's going to- >> Going to absolutely.
>> Somebody's going to make some money off it at some point. When are they going to make money off of 5G, do you think? (all laughing) >> No. And then you asked a very good question, Dave. I want to answer that question. Will bad guys use AI? >> Yeah. Yeah. >> Offensive AI is a very big thing. We have to pay attention to it. It's going to create an asymmetric war. If you look at the president of the United States, he said, "If somebody's going to attack us on cyber, we are going to retaliate." For the first time, the US is willing to launch a cyber war. What that really means is, we're going to use AI for offensive reasons as well. And we as citizens have to pay attention to that. And that's what I'm worried about, right? AI bias, whether it's data, or domain expertise, or algorithmic bias, is going to be a big thing. And offensive AI is something everybody has to pay attention to. >> To your point, Sri, earlier about critical infrastructure getting hacked, I had this conversation with Dr. Robert Gates several years ago, and I said, "Yeah, but don't we have the best offensive, you know, technology in cyber?" And he said, "Yeah, but we got the most to lose too." >> Yeah, 100%. >> We're the wealthiest nation, the United States, the wealthiest there is. So you've got to be careful. But to your point, the president of the United States saying, "We'll retaliate," right? Not necessarily start the war, but who started it? >> But that's the thing, right? Attribution is the hardest part. And then you talked about a very interesting thing, rich nations, right? There's emerging nations. There are nations left behind. One thing I've seen on the show floor today is digital inequality. Digital poverty is a big thing. While we have this amazing technology, 90% of the world doesn't have access to this. >> Right. >> What we have done is we have created an inequality across, and especially in mobility and cyber, if this technology doesn't reach to the last mile, which is emerging nations, I think we are creating a crater back again and putting societies a few miles back. >> And at much greater risk. >> 100%, right? >> Yeah. >> Because those are the guys. In cyber, all you need is a laptop and a brain to attack. >> Yeah. Yeah. >> If I don't have it, that's where the civil war is going to start again. >> Yeah. What are some of the things, in our last minute or so, guys, David, we'll start with you and then Sri go to you, that you're looking forward to at this MWC? The theme is velocity. We're talking about so much transformation and evolution in the telecom industry. What are you excited to hear and learn in the next couple of days? >> Just getting a complete picture. One is actually being out after the last couple of years, so you learn a lot. But just walking around and seeing, from my perspective, some vendor names that I haven't seen before, but seeing what they're doing and bringing to the market. But I think it goes back to the point made earlier around APIs and integration. Everybody's talking about how can we kind of do this together in a way. So integrations, those sorts of things, are what I'm kind of looking for as well, and how we plug into that as well. >> Excellent, and Sri? >> So for us, there is a lot to offer, right? So while I'm enjoying what I'm seeing here, I'm seeing an opportunity. We have an amazing portfolio of what we can do. We are into mobile device management. We are the last (indistinct) company. When people find problems, somebody has to go remediate them.
We are the world's largest patch management company. And what I'm finding is, yes, all these people are embedding software, pumping it like nobody's business. As you find a vulnerability, somebody has to go fix it, and we want to be the (indistinct) company. We have the last mile. And I find an amazing opportunity: not only can we do device management, but we can do mobile threat defense and give them a risk prioritization on what needs to be remediated, and manage all that in our ITSM. So I look at this as an amazing, amazing opportunity. >> Right. >> Which is exponentially bigger than what I've seen before. >> So last question then. Speaking of opportunities, Sri, for you, what are some of the things that customers can go to? Obviously, you guys talk to customers all the time. In terms of learning what Ivanti is going to enable them to do, to take advantage of these opportunities. Any webinars, any events coming up that we want people to know about? >> Absolutely, ivanti.com is the best place to go because we keep everything there. Of course, "theCUBE" interview. >> Of course. >> You should definitely watch that. (all laughing) No. So we have quite a few industry events we do. And especially there's a lot of learning. And we just released the ransomware report that actually talks about ransomware from a global index perspective. So one thing we have done is, rather than just looking at vulnerabilities, we showed them the weaknesses that led to the vulnerabilities, and how attackers are using them. And we even talked about DHS, how behind they are in disseminating the information and how it's actually being used by nation states. >> Wow. >> And we did cover mobility as a part of that as well. So there's quite a bit we did in our report and it actually came out very well. >> I have to check that out. Ransomware is such a fascinating topic. Guys, thank you so much for joining Dave and me on the program today, sharing what's going on at Ivanti, the changes that you're seeing in mobile, and the opportunities that are there for your customers. We appreciate your time. >> Thank you. >> Thank you. >> Yes. Thanks, guys. >> Thanks, guys. >> For our guests and for Dave Vellante, I'm Lisa Martin. You're watching "theCUBE" live from MWC23 in Barcelona. As you know, "theCUBE" is the leader in live tech coverage. Dave and I will be right back with our next guest. (gentle upbeat music)

Published Date : Feb 27 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
David | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Dave | PERSON | 0.99+
Dave Shepherd | PERSON | 0.99+
Jose Maria Alvarez | PERSON | 0.99+
Ericsson | ORGANIZATION | 0.99+
David Shepherd | PERSON | 0.99+
six | QUANTITY | 0.99+
Telefonica | ORGANIZATION | 0.99+
Srinivas Mukkamala | PERSON | 0.99+
40 seconds | QUANTITY | 0.99+
China | LOCATION | 0.99+
45 minutes | QUANTITY | 0.99+
100% | QUANTITY | 0.99+
2024 | DATE | 0.99+
United States | LOCATION | 0.99+
2016 | DATE | 0.99+
90% | QUANTITY | 0.99+
ChatGPT | TITLE | 0.99+
Robert Gates | PERSON | 0.99+
First | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Sri | ORGANIZATION | 0.99+
Barcelona | LOCATION | 0.99+
today | DATE | 0.99+
yesterday | DATE | 0.99+
millions | QUANTITY | 0.99+
this week | DATE | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
Telcos | ORGANIZATION | 0.99+
US | ORGANIZATION | 0.99+
Last night | DATE | 0.98+
Today | DATE | 0.98+
Sri | PERSON | 0.98+
Mobile World Congress | EVENT | 0.98+
one | QUANTITY | 0.98+
Edge | ORGANIZATION | 0.98+
three things | QUANTITY | 0.98+
first time | QUANTITY | 0.98+
Dr. | PERSON | 0.98+
108 | QUANTITY | 0.98+
telco | ORGANIZATION | 0.98+
several years ago | DATE | 0.97+
first | QUANTITY | 0.97+
MWC | EVENT | 0.96+
hundreds and thousands of dollars a day | QUANTITY | 0.96+
MWC23 | EVENT | 0.96+
About 80,000 people | QUANTITY | 0.95+
one thing | QUANTITY | 0.95+
13-year-old | QUANTITY | 0.95+
theCUBE | TITLE | 0.95+
theCUBE | ORGANIZATION | 0.95+
two different things | QUANTITY | 0.94+
day one | QUANTITY | 0.93+
Ivanti | PERSON | 0.92+
seven vulnerabilities | QUANTITY | 0.91+
VP | PERSON | 0.91+
president | PERSON | 0.9+
three pillars | QUANTITY | 0.89+
first thing | QUANTITY | 0.89+

Opher Kahane, Sonoma Ventures | CloudNativeSecurityCon 23


 

(uplifting music) >> Hello, welcome back to theCUBE's coverage of CloudNativeSecurityCon, the inaugural event, in Seattle. I'm John Furrier, host of theCUBE, here in the Palo Alto Studios. We're calling it theCUBE Center. It's kind of like our Sports Center for tech. It's kind of remote coverage. We've been doing this now for a few years. We're going to amp it up this year as more events are remote, and happening all around the world. So, we're going to continue the coverage with this segment focusing on the data stack, entrepreneurial opportunities around all things security, and as, obviously, data's involved. And our next guest is a friend of theCUBE, and CUBE alumni from 2013, entrepreneur himself, turned, now, venture capitalist angel investor, with his own firm, Opher Kahane, Managing Director, Sonoma Ventures. Formerly the founder of Origami, sold to Intuit a few years back. Focusing now on having a lot of fun, angel investing on boards, focusing on data-driven applications, and stacks around that, and all the stuff going on in, really, in the wheelhouse for what's going on around security data. Opher, great to see you. Thanks for coming on. >> My pleasure. Great to be back. It's been a while. >> So you're kind of on Easy Street now. You did the entrepreneurial venture, you've worked hard. We were on together in 2013 when theCUBE just started. XCEL Partners had an event in Stanford, XCEL, and they had all the features there. We interviewed Satya Nadella, who was just a manager at Microsoft at that time, he was there. He's now the CEO of Microsoft. >> Yeah, he was. >> A lot's changed in nine years. But congratulations on your venture you sold, and you got an exit there, and now you're doing a lot of investments. I'd love to get your take, because this is really the biggest change I've seen in the past 12 years, around an inflection point around a lot of converging forces. Data, which, big data, 10 years ago, was a big part of your career, but now it's accelerated, with cloud scale. You're seeing people building scale on top of other clouds, and becoming their own cloud. You're seeing data being a big part of it. Cybersecurity kind of has not really changed much, but it's the most important thing everyone's talking about. So, developers are involved, data's involved, a lot of entrepreneurial opportunities. So I'd love to get your take on how you see the current situation, as it relates to what's gone on in the past five years or so. What's the big story? >> So, a lot of big stories, but I think a lot of it has to do with a promise of making value from data, whether it's for cybersecurity, for Fintech, for DevOps, for RevTech startups and companies. There's a lot of challenges in actually driving and monetizing the value from data with velocity. Historically, the challenge has been more around, "How do I store data at massive scale?" And then you had the big data infrastructure company, like Cloudera, and MapR, and others, deal with it from a scale perspective, from a storage perspective. Then you had a whole layer of companies that evolved to deal with, "How do I index massive scales of data, for quick querying, and federated access, et cetera?" But now that a lot of those underlying problems, if you will, have been solved, to a certain extent, although they're always being stretched, given the scale of data, and its utility is becoming more and more massive, in particular with AI use cases being very prominent right now, the next level is how to actually make value from the data. 
How do I manage the full lifecycle of data in complex environments, with complex organizations, complex use cases? And having seen this from the inside, with Origami Logic, as we dealt with a lot of large corporations, and post-acquisition by Intuit, and a lot of the startups I'm involved with, it's clear that we're now onto that next step. And you have fundamental new paradigms, such as data mesh, that attempt to address that complexity, and responsibly scaling access, and democratizing access in the value monetization from data, across large organizations. You have a slew of startups that are evolving to help the entire lifecycle of data, from the data engineering side of it, to the data analytics side of it, to the AI use cases side of it. And it feels like the early days, to a certain extent, of the revolution that we've seen in transition from traditional databases, to data warehouses, to cloud-based data processing, and big data. It feels like we're at the genesis of that next wave. And it's super, super exciting, for me at least, as someone who's sitting more in the coach seat, rather than being on the pitch, and building startups, helping folks as they go through those motions. >> So that's awesome. I want to get into some of these data infrastructure dynamics you mentioned, but before that, talk to the audience around what you're working on now. You've been a successful entrepreneur, you're focused on angel investing, so, super-early seed stage. What kind of deals are you looking at? What's interesting to you? What is Sonoma Ventures looking for, and what are some of the entrepreneurial dynamics that you're seeing right now, from a startup standpoint? >> Cool, so, at a macro level, this is a little bit of background of my history, because it shapes very heavily what it is that I'm looking at. So, I've been very fortunate with entrepreneurial career. I founded three startups. All three of them are successful. Final two were sold, the first one merged and went public. And my third career has been about data, moving data, passing data, processing data, generating insights from it. And, at this phase, I wanted to really evolve from just going and building startup number four, from going through the same motions again. A 10 year adventure, I'm a little bit too old for that, I guess. But the next best thing is to sit from a point whereby I can be more elevated in where I'm dealing with, and broaden the variety of startups I'm focused on, rather than just do your own thing, and just go very, very deep into it. Now, what specifically am I focused on at Sonoma Ventures? So, basically, looking at what I refer to as a data-driven application stack. Anything from the low-level data infrastructure and cloud infrastructure, that helps any persona in the data universe maximize value for data, from their particular point of view, for their particular role, whether it's data analysts, data scientists, data engineers, cloud engineers, DevOps folks, et cetera. All the way up to the application layer, in applications that are very data-heavy. And what are very typical data-heavy applications? FinTech, cyber, Web3, revenue technologies, and product and DevOps. So these are the areas we're focused on. I have almost 23 or 24 startups in the portfolio that span all these different areas. And this is in terms of the aperture. Now, typically, focus on pre-seed, seed. Sometimes a little bit later stage, but this is the primary focus. 
And it's really about partnering with entrepreneurs, and helping them make, if you will, original mistakes, and avoid the mistakes I made. >> Yeah. >> And take it to the next level, whatever the milestone they're driving toward. So I'm very, very hands-on with many of those startups. Now, what is it that's happening right now, initially, and why is it so exciting? So, on one hand, you have this scaling of data and its complexity, yet lagging value creation from it, across those different personas we've touched on. So that's one fundamental opportunity, which is secular. The other one, which is more a cyclic situation, is the fact that we're going through a down cycle in tech, as is very evident in the public markets, and everything we're hearing about funding going slower and lower, terms shifting more into the hands of typical VCs versus an entrepreneur-friendly market, and so on and so forth. And a very significant amount of layoffs. Now, when you combine these two trends together, you're observing a very interesting thing: that a lot of folks, really bright folks, who have sold a startup to a company, or have been in the guts of a large startup, or a large corporation, have, hands-on, experienced all those challenges we've spoken about earlier, in terms of maximizing value from data, irrespective of their role, and the specific angle, or vantage point, they have on those challenges. So, for many of them, it's an opportunity to say, "Now, let me now start a startup. I've been laid off, maybe, or my company's stock isn't doing as well as it used to, as a large corporation. Now I have an opportunity to actually go and take my entrepreneurial passion, and apply it to a product and experience as part of this larger company." >> Yeah. >> And you see a slew of folks who are emerging with these great ideas. So it's a very, very exciting period of time to innovate.
So I think we're going through another big bang, to a certain extent, whereby we end up with more specialized data stacks for specific use cases, as you need the performance, the data models, the tooling to best adapt to the particular task at hand, and the particular personas at hand. As the needs of the data analyst are quite different from the needs of an ML engineer, it's quite different from the needs of the data engineer. And what happens is, when you end up with these siloed stacks, you end up with new fragmentation, and new gaps that need to be filled with a new layer of innovation. And I suspect that, in part, that's what we're seeing right now, in terms of the next wave of data innovation. Whether it's in service of FinTech use cases, or cyber use cases, or other, it's a set of tools that end up having to try and stitch together those elements and bridge between them. So I see that as a fantastic gap to innovate around. I see, also, a fundamental need in creating a common data language, and common data management processes and governance across those different personas, because ultimately, the same underlying data these folks need, albeit in different mediums, different access models, different velocities, et cetera, the subject matter, if you will, the underlying raw data, and some of the taxonomies right on top of it, do need to be consistent. So, once again, a great opportunity to innovate, whether it's about semantic layers, whether it's about data mesh, whether it's about CI/CD tools for data engineers, and so on and so forth. >> I got to ask you, first of all, I see you have a friend you brought into the interview. You have a dog in the background who made a little cameo appearance. And that's awesome. Sitting right next to you, making sure everything's going well. On the AI thing, 'cause I think that's the hot trend here. >> Yeah. >> You're starting to see, that ChatGPT's got everyone excited, because it's kind of that first time you see kind of next-gen functionality, large-language models, where you can bring data in, and it integrates well. So, to me, I think, connecting the dots, this kind of speaks to the beginning of what will be a trend of really blending of data stacks together, or blending of models. And so, as more data modeling emerges, you start to have this AI stack kind of situation, where you have things out there that you can compose. It's almost very developer-friendly, conceptually. This is kind of new, but kind of the same concept's been worked on at Google and others. How do you see this emerging, as an investor? What are some of the things that you're excited about, around the ChatGPT kind of things that are happening? 'Cause it brings it mainstream. Again, a million downloads, the fastest application to get a million downloads, even among all the successes. So it's obviously hit a nerve. People are talking about it. What's your take on that? >> Yeah, so, I think that's a great point, and clearly, it feels like an iPhone moment, right, to the industry, in this case, AI, and lots of applications. And I think there's, at a high level, probably three different layers of innovation. One is on top of those platforms. What use cases can one bring to the table that would drive on top of a ChatGPT-like service? Whereby the startup, the company, can bring some unique datasets to infuse and add value on top of it, by custom-focusing it and purpose-building it for a particular use case or particular vertical.
Whether it's applying it to customer service, in a particular vertical, applying it to, I don't know, marketing content creation, and so on and so forth. That's one category. And I do know that, as one of my startups is in Y Combinator, this season, winter '23, they're saying that a very large chunk of the YC companies in this cycle are about GPT use cases. So we'll see a flurry of that. The next layer, the one below that, is those who actually provide those platforms, whether it's ChatGPT, whatever will emerge from the partnership with Microsoft, and any competitive players that emerge from other startups, or from the big cloud providers, whether it's Facebook, if they ever get into this, and Google, which clearly will, as they need to, to survive around search. The third layer is the enabling layer. As you're going to have more and more of those different large-language models and use case running on top of it, the underlying layers, all the way down to cloud infrastructure, the data infrastructure, and the entire set of tools and systems, that take raw data, and massage it into useful, labeled, contextualized features and data to feed the models, the AI models, whether it's during training, or during inference stages, in production. Personally, my focus is more on the infrastructure than on the application use cases. And I believe that there's going to be a massive amount of innovation opportunity around that, to reach cost-effective, quality, fair models that are deployed easily and maintained easily, or at least with as little pain as possible, at scale. So there are startups that are dealing with it, in various areas. Some are about focusing on labeling automation, some about fairness, about, speaking about cyber, protecting models from threats through data and other issues with it, and so on and so forth. And I believe that this will be, too, a big driver for massive innovation, the infrastructure layer. >> Awesome, and I love how you mentioned the iPhone moment. I call it the browser moment, 'cause it felt that way for me, personally. >> Yep. >> But I think, from a business model standpoint, there is that iPhone shift. It's not the BlackBerry. It's a whole 'nother thing. And I like that. But I do have to ask you, because this is interesting. You mentioned iPhone. iPhone's mostly proprietary. So, in these machine learning foundational models, >> Yeah. >> you're starting to see proprietary hardware, bolt-on, acceleration, bundled together, for faster uptake. And now you got open source emerging, as two things. It's almost iPhone-Android situation happening. >> Yeah. >> So what's your view on that? Because there's pros and cons for either one. You're seeing a lot of these machine learning laws are very proprietary, but they work, and do you care, right? >> Yeah. >> And then you got open source, which is like, "Okay, let's get some upsource code, and let people verify it, and then build with that." Is it a balance? >> Yes, I think- >> Is it mutually exclusive? What's your view? >> I think it's going to be, markets will drive the proportion of both, and I think, for a certain use case, you'll end up with more proprietary offerings. With certain use cases, I guess the fundamental infrastructure for ChatGPT-like, let's say, large-language models and all the use cases running on top of it, that's likely going to be more platform-oriented and open source, and will allow innovation. Think of it as the equivalent of iPhone apps or Android apps running on top of those platforms, as in AI apps. 
So we'll have a lot of that. Now, when you start going a little bit more into the guts, the lower layers, then it's clear that, for performance reasons, in particular, for certain use cases, we'll end up with more proprietary offerings, whether it's advanced silicon, such as some of the silicon that emerged from entrepreneurs who have left Google, around TensorFlow, and all the silicon that powers that. You'll see a lot of innovation in that area as well. It hopefully intends to improve the cost efficiency of running large AI-oriented workloads, both in inference and in learning stages. >> I got to ask you, because this has come up a lot around Azure and Microsoft. Microsoft, pretty good move getting into the ChatGPT >> Yep. >> and the open AI, because I was talking to someone who's a hardcore Amazon developer, and they said, they swore they would never use Azure, right? One of those types. And they're spinning up Azure servers to get access to the API. So, the developers are flocking, as you mentioned. The YC class is all doing large data things, because you can now program with data, which is amazing, which is amazing. So, what's your take on, I know you got to be kind of neutral 'cause you're an investor, but you got, Amazon has to respond, Google, essentially, did all the work, so they have to have a solution. So, I'm expecting Google to have something very compelling, but Microsoft, right now, is going to just, might run the table on developers, this new wave of data developers. What's your take on the cloud responses to this? What's Amazon, what do you think AWS is going to do? What should Google be doing? What's your take? >> So, each of them is coming from a slightly different angle, of course. I'll say, Google, I think, has massive assets in the AI space, and their underlying cloud platform, I think, has been designed to support such complicated workloads, but they have yet to go as far as opening it up the same way ChatGPT is now in that Microsoft partnership, and Azure. Good question regarding Amazon. AWS has had a significant investment in AI-related infrastructure. Seeing it through my startups, through other lens as well. How will they respond to that higher layer, above and beyond the low level, if you will, AI-enabling apparatuses? How do they elevate to at least one or two layers above, and get to the same ChatGPT layer, good question. Is there an acquisition that will make sense for them to accelerate it, maybe. Is there an in-house development that they can reapply from a different domain towards that, possibly. But I do suspect we'll end up with acquisitions as the arms race around the next level of cloud wars emerges, and it's going to be no longer just about the basic tooling for basic cloud-based applications, and the infrastructure, and the cost management, but rather, faster time to deliver AI in data-heavy applications. Once again, each one of those cloud suppliers, their vendor is coming with different assets, and different pros and cons. All of them will need to just elevate the level of the fight, if you will, in this case, to the AI layer. >> It's going to be very interesting, the different stacks on the data infrastructure, like I mentioned, analytics, data lake, AI, all happening. It's going to be interesting to see how this turns into this AI cloud, like data clouds, data operating systems. So, super fascinating area. Opher, thank you for coming on and sharing your expertise with us. Great to see you, and congratulations on the work. 
I'll give you the final word here. Give a plug for what you're looking for in startup seeds, pre-seeds. What's the kind of profile that gets your attention, from a seed, pre-seed candidate or entrepreneur? >> Cool, first of all, it's my pleasure. Enjoy our chats, as always. Hopefully the next one's not going to be in nine years. As to what I'm looking for: ideally, smart data entrepreneurs, who have come from a particular domain problem, or problem domain, that they understand, they felt it in their own 10 fingers, or millions of neurons in their brains, and they figured out a way to solve it. Whether it's a data infrastructure play, a cloud infrastructure play, or a very, very smart application that takes advantage of data at scale. These are the things I'm looking for. >> One final, final question I have to ask you, because you're a seasoned entrepreneur, and now coach. What's different about the current entrepreneurial environment right now, vis-a-vis the past decade? What's new? Is it different, highly accelerated? What advice do you give entrepreneurs out there who are putting together their plan? Obviously, a global resource pool now of engineering. It might not be yesterday's formula for success for putting a venture together to get to that product-market fit. What's new and different, and what's your advice to the folks out there about what's different about the current environment for being an entrepreneur? >> Fantastic, so I think it's a great question. So I think there's a few axes of difference, compared to, let's say, five years ago, 10 years ago, 15 years ago. First and foremost, given the amount of infrastructure out there, the amount of open-source technologies, the amount of developer toolkits and frameworks, trying to develop an application, at least at the application layer, is much faster than ever. So, it's faster and cheaper, for the most part, unless you're building very fundamental, core, deep tech, where you still have a big technology challenge to deal with. And absent that, the challenge shifts more to how do you manage your resources, to product-market fit, how are you integrating the GTM lens, the go-to-market lens, as early as possible in the product-market fit cycle, such that you reach from pre-seed to seed, from seed to A, from A to B, with an optimal amount of velocity, and a minimal amount of resources. One big difference, specifically as of, let's say, the beginning of this year, late last year, is that money is no longer free for entrepreneurs, which means that you need to operate and build a startup in an environment with a lot more constraints. And in my mind, some of the best startups that have ever been built, and some of the big market-changing, generational-changing, if you will, technology startups, in their respective industry verticals, have actually emerged from these times. And these tend to be the smartest, best startups that emerge, because they operate with a lot less money. Money is not as available for them, which means that they need to make tough decisions every day. What you don't need to do, you can kick the can down the road when you have plenty of money, and it cushions a lot of mistakes; now you don't have that cushion. And hopefully we'll end up with companies that are more agile, more resilient, if you will, with better cultures in making those tough decisions that startups need to make every day.
Which is why I'm super, super excited to see the next batch of amazing unicorns, true unicorns, not just valuation, rising-with-the-water type unicorns, that emerge from this particular era, which we're in the beginning of. And very much enjoy working with entrepreneurs during this difficult time, the times we're in. >> The next 24 months will be the next wave, like you said, best time to do a company. Remember, Airbnb's pitch was, "We'll rent cots in apartments, and sell cereal." Boy, a lot of people passed on that deal, in that last down market, that turned out to be a game-changer. So the crazy ideas might not be that bad. So it's all about the entrepreneurs, and >> 100%. >> this is a big wave, and it's certainly happening. Opher, thank you for sharing. Obviously, data is going to change all the markets. Refactoring, security, FinTech, user experience, applications are going to be changed by data, the data operating system. Thanks for coming on, and thanks for sharing. Appreciate it. >> My pleasure. Have a good one. >> Okay, more coverage for the CloudNativeSecurityCon inaugural event. Data will be the key for cybersecurity. theCUBE's coverage continues after this break. (uplifting music)

Published Date : Feb 2 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Satya Nadella | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
2013 | DATE | 0.99+
Opher | PERSON | 0.99+
CapEx | ORGANIZATION | 0.99+
Seattle | LOCATION | 0.99+
John Furrier | PERSON | 0.99+
Sonoma Ventures | ORGANIZATION | 0.99+
BlackBerry | ORGANIZATION | 0.99+
10 fingers | QUANTITY | 0.99+
Airbnb | ORGANIZATION | 0.99+
CUBE | ORGANIZATION | 0.99+
nine years | QUANTITY | 0.99+
Facebook | ORGANIZATION | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
Origami Logic | ORGANIZATION | 0.99+
Origami | ORGANIZATION | 0.99+
Intuit | ORGANIZATION | 0.99+
RevTech | ORGANIZATION | 0.99+
each | QUANTITY | 0.99+
Opher Kahane | PERSON | 0.99+
CloudNativeSecurityCon | EVENT | 0.99+
Palo Alto Studios | LOCATION | 0.99+
yesterday | DATE | 0.99+
One | QUANTITY | 0.99+
First | QUANTITY | 0.99+
third layer | QUANTITY | 0.98+
theCUBE | ORGANIZATION | 0.98+
two layers | QUANTITY | 0.98+
Android | TITLE | 0.98+
third career | QUANTITY | 0.98+
two things | QUANTITY | 0.98+
both | QUANTITY | 0.98+
MapR | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
one category | QUANTITY | 0.98+
late last year | DATE | 0.98+
millions of neurons | QUANTITY | 0.98+
a million downloads | QUANTITY | 0.98+
three startups | QUANTITY | 0.98+
10 years ago | DATE | 0.97+
Fintech | ORGANIZATION | 0.97+
winter '23 | DATE | 0.97+
first one | QUANTITY | 0.97+
this year | DATE | 0.97+
Stanford | LOCATION | 0.97+
Cloudera | ORGANIZATION | 0.97+
theCUBE Center | ORGANIZATION | 0.96+
five years ago | DATE | 0.96+
10 year | QUANTITY | 0.96+
ChatGPT | TITLE | 0.96+
three | QUANTITY | 0.95+
first time | QUANTITY | 0.95+
XCEL Partners | ORGANIZATION | 0.95+
15 years ago | DATE | 0.94+
24 startups | QUANTITY | 0.93+

4-video test


 

>> Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments or spins, with the total energy given by the expression shown at the bottom left of this slide. Here, the sigma variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy. And an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N for worst-case instances. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances. And it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root-exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic as opposed to worst-case problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core two-gigahertz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means.
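The energy expression itself appears only on the speaker's slide and is not reproduced in this transcript. For reference, the standard form of the Ising energy being described (sign conventions vary from paper to paper) is

\[
E(\sigma) \;=\; -\sum_{i<j} J_{ij}\,\sigma_i\,\sigma_j \;-\; \sum_i h_i\,\sigma_i,
\qquad \sigma_i \in \{-1,+1\},
\]

and the ground state problem is to find the spin assignment \(\sigma\) that minimizes \(E(\sigma)\) for the given \(J\) and \(h\).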
Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with N equal to 19,289, requiring approximately two days of run time on a single core at 2.4 gigahertz. Now, if we simple-mindedly extrapolate the root-exponential scaling from the study up to N equals 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the N equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 gigahertz. But the much larger so-called World TSP benchmark instance, with N equals 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evident but cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results from Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms in the practice of solving hard optimization problems. There thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance, with lower cost, on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized, special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms.
These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or opto-electronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injection. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or perhaps mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground-state problem. This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory, namely a study of bifurcations, the evolution of critical points, and the topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described.
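Before going to that intuition, here is a rough toy version of the pump-ramp-plus-feedback loop just described. It is a minimal mean-field sketch, not the speakers' FPGA implementation; the saturation nonlinearity, noise term, and parameter values are all simplifying assumptions, and the function name run_cim is mine.

import numpy as np

def run_cim(J, h, steps=2000, dt=0.01, p_max=2.0, feedback=0.1, noise=1e-3, seed=0):
    """Toy mean-field coherent-Ising-machine loop: soft spin amplitudes x evolve
    under a gradually ramped pump plus measurement-based feedback that injects
    the Ising coupling and local-field terms."""
    rng = np.random.default_rng(seed)
    n = len(h)
    x = noise * rng.standard_normal(n)        # near-vacuum initial amplitudes
    for t in range(steps):
        p = p_max * t / steps                  # slow pump ramp from 0 to p_max
        gain = (p - 1.0 - x**2) * x            # linear loss, pump gain, saturation
        inject = feedback * (J @ x + h)        # FPGA-style feedback injection
        x += dt * (gain + inject) + np.sqrt(dt) * noise * rng.standard_normal(n)
    return np.sign(x)                          # read out a binary spin configuration

# Example: a 3-spin ferromagnet; one expects all spins aligned at the end of the ramp.
J = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
h = np.zeros(3)
print(run_cim(J, h))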
We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome the linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase, and when the OPO crosses this threshold it essentially chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same; for alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or anti-ferromagnetic two-spin Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. Clearly, we can imagine generalizing this story to larger N; however, the story doesn't stay as clean and simple for all larger problem instances, and to find a more complicated example we only need to go to N equals 4. For some choices of J, for N equals 4, the story remains simple, like the N equals 2 case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated N equals 4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but sub-optimal minimum at large pump power.
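To connect the two-OPO threshold argument above to formulas, one common mean-field normalization, in notation supplied here and not necessarily matching the slides, is

\[
\dot{x}_1 = (-1 + p)\,x_1 - x_1^3 + \alpha\, x_2, \qquad
\dot{x}_2 = (-1 + p)\,x_2 - x_2^3 + \alpha\, x_1 .
\]

Linearizing about the vacuum \(x_1 = x_2 = 0\), the in-phase mode \(x_1 = x_2\) has growth rate \(p - 1 + \alpha\) and the out-of-phase mode \(x_1 = -x_2\) has growth rate \(p - 1 - \alpha\); so for \(\alpha > 0\) the equal-phase (ferromagnetic) configuration reaches threshold first, at \(p = 1 - \alpha\), and for \(\alpha < 0\) the opposite-phase configuration does, which is the statement made in the talk.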
The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors seem to become more common at larger N. For the N equals 20 instance shown in the lower plots, where the lower right plot is just a zoom into a region of the lower left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter of around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. Of course, N equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able to reliably determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N equal to 10^4, 10^5, 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, et cetera. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So, in closing, I should acknowledge the people who did the hard work on the things that I've shown. My group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI research labs. I should also acknowledge funding support from the NSF via the Coherent Ising Machines Expeditions in Computing program, and also from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it, thanks very much.
>>I would like to thank NTT Research and Yoshi for putting together this program and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, how we have been using them as Ising machines, and how we're pushing them toward quantum photonics. I'd like to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and more recently the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model, the simple summation over the spins, where spins can be either up or down and the couplings are given by the J_ij. And the Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be an NP-hard problem, so it's computationally important because it's representative of the NP problems, and NP problems are important because, first, they're hard on standard computers if you use brute-force algorithms, and second, they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems, and hopefully it can provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is, is a resonator with nonlinearity in it: we pump these resonators and we generate a signal at half the frequency of the pump. One photon of the pump splits into two identical photons of signal, and they have some very interesting phase and frequency locking behaviors. And if you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendulums. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, which is the length of the string, and that modulation acts as a pump: it makes them oscillate, producing a signal at half the frequency of the pump. And I have two of them, to show you that they can acquire these phase states: they're still phase and frequency locked to the pump, but they can end up in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, or up or down.
And to implement the network of these resonators, we use the time-multiplexing scheme, and the idea is that we put N pulses in the cavity. These pulses are separated by the repetition period that you put in, or T_R, and you can think about these pulses in one resonator as N temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator 1 to 2, 2 to 3, and so on; if you look at the second delay, which is two times the repetition period, it couples 1 to 3, and so on. And if you have N minus 1 delay lines, then you can have any potential couplings among these N synthetic resonators. And if I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right time, then I can have a programmable all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is this: having these OPOs, each of which can be either zero or pi, I can arbitrarily connect them to each other, and then I start by programming this machine to a given Ising problem, by just setting the couplings and setting the controllers in each of those delay lines. So now I have a network which represents an Ising problem. Then the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints, and the way it happens is that the Ising Hamiltonian maps to the linear loss of the network. And if I start adding gain, by just putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. And we have been doing this over the past six or seven years, and I'm just going to quickly show you the transitions, especially what happened in the first implementation, which was using a free-space optical system, then the guided-wave implementation in 2016, and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. So I just want to make this distinction here: the first implementation was an all-optical interaction, we also had an N equals 16 implementation, and then we transitioned to this measurement-feedback idea, which I'll quickly tell you about. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to mostly focus on the all-optical networks, on how we're using all-optical networks to go beyond simulation of the Ising Hamiltonian, both on the linear and the nonlinear side, and also on how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine, and we implemented a small N-equals-4 Max-Cut problem on the machine. So one problem for one experiment: we ran the machine 1,000 times, we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. So then the measurement-feedback idea was to replace those couplings and the controllers with a simulator: we basically simulated all those coherent interactions on an FPGA, and we replicated the coherent pulse with respect to all those measurements.
And then we injected it back into the cavity, and the nonlinearity still remains, so it is still a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system preserves the important information or not, or whether it's going to behave better computation-wise, and that's still a lot of ongoing study. But nevertheless, the reason this implementation was very interesting is that you don't need the N minus 1 delay lines, so you can just use one; then you can implement a large machine, and you can run several thousands of problems on the machine, and then you can compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part, which is: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix-multiplication scheme, and that's basically what gives you the Ising-Hamiltonian modeling, because the optical loss of this network corresponds to the Ising Hamiltonian. And if I just want to show you the example of the N equals 4 experiment, with all those phase states and the histogram that we saw: you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain, and then you start bringing up the gain so that it hits the loss; then you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation, so you go either to the zero or to the pi phase state. And the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So, if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. And the difference between looking at topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian, and one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one spin on one side to another side you get one phase, and if you go back you get a different phase. And the other thing is that we're not just interested in finding the ground state; we're actually now interested in looking at all sorts of states, and at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-dimensional chain of these resonators, which corresponds to the so-called SSH model. In the topological-photonics work, we get a similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how reasonably well it actually follows the prediction and the theory.
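To make the delay-line coupling and the "Ising Hamiltonian maps to linear loss" statements from the last two paragraphs concrete, here is a small toy calculation. The function names, the modular indexing, and the loss normalization are mine, not the group's code; this is only a sketch of the bookkeeping under those assumptions.

import numpy as np
from itertools import product

def delay_line_injection(x, J):
    """Time-multiplexed coupling, schematically: delay line d (length d*T_R) feeds
    pulse i a copy of pulse i-d, scaled by a modulator set to J[i, i-d]; with N-1
    delay lines this realizes any coupling matrix with zero diagonal."""
    n = len(x)
    inject = np.zeros(n)
    for d in range(1, n):                      # one iteration per delay line
        for i in range(n):
            j = (i - d) % n                    # pulse that entered the delay d slots earlier
            inject[i] += J[i, j] * x[j]        # modulator sets strength and sign J[i, j]
    return inject                              # equals J @ x when diag(J) = 0

def ising_energy(s, J):
    return -0.5 * s @ J @ s                    # zero-field Ising energy

rng = np.random.default_rng(0)
J = rng.choice([-1.0, 1.0], size=(4, 4)); J = np.triu(J, 1); J = J + J.T
x = rng.standard_normal(4)
print(np.allclose(delay_line_injection(x, J), J @ x))            # same coupling: True

# Loss picture: take a spin configuration's round-trip loss to be a constant plus a
# term proportional to its Ising energy (an assumed toy normalization); the ground
# state is then exactly the lowest-loss collective mode.
states = [np.array(s, dtype=float) for s in product([-1, 1], repeat=4)]
loss = lambda s: 1.0 + 0.1 * ising_energy(s, J)
best = min(states, key=loss)
print(ising_energy(best, J) == min(ising_energy(s, J) for s in states))  # True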
One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine, and that's something unique about this time-multiplexed implementation, so we can actually look at the dynamics. One example we have looked at is that we can actually go through the transition from the topological to the trivial behavior of the network. You can then look at the edge states, and you can see both the trivial end states and the topological edge states actually showing up in this network. We have just recently implemented a two-dimensional network with the Harper-Hofstadter model; we don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics, and we can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic non-classical and quantum nonlinear behaviors in these networks. So I told you mostly about the linear side; let me switch gears and talk about the nonlinear side of the network. The biggest thing I talked about so far in the Ising machine is this phase transition at threshold: below threshold we have squeezed states in these OPOs, and if you increase the pump we go through this intensity-driven phase transition and then we get the phase states above threshold. And this is basically the mechanism of the computation in these OPOs, which works through this phase transition from below to above threshold. So one of the characteristics of this phase transition is that below threshold you expect to see quantum states and above threshold you expect to see more classical states, or coherent states, and that basically corresponds to the intensity of the driving pump. So it's really hard to imagine that you can go above threshold, or have this phase transition happen, entirely in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes really high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is, can we look at other phase transitions? Can we utilize them for computing, and can we bring them to the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. And what is interesting is that this phase transition corresponds to a distinct phase-noise behavior. So in the degenerate regime, which we call the ordered state, you're going to have the phase locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case, the signal can acquire any of the phases on the circle, so it has a U(1) symmetry.
Okay, and if you go to the degenerate case, then that symmetry is broken and you only have the zero and pi phase states. So now the question is, can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions that we're also thinking about. And this phase transition is not just important for computing; it's also interesting from the sensing point of view, and you can easily bring it below threshold and operate it in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, we can now see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs, and that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore, both in the classical and quantum regimes. And I should also mention that you can think about the couplings being nonlinear couplings as well, and that's another behavior that you can see, especially in the non-degenerate regime. So with that, I have basically told you about these OPO networks, how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. And of course the motivation is, if you look at electronics and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements where we are now: optics is probably very similar to 70 years ago, which is a tabletop implementation. And the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions we're working on: one is based on lithium niobate, and the other is based on even smaller resonators. So the work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard, and also Marty Fejer at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in these nanophotonic periodically poled lithium niobate waveguides. And now we're working on building OPOs based on that kind of nanophotonic thin-film lithium niobate. These are some examples of the devices we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks; I also want to point out that the reason these nanophotonic platforms are actually exciting is not just that you can make large networks and make them compact in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO, which is: can we have the quantum superposition of the zero and pi states that I talked about? And the nanophotonic lithium
niobate platform provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. So we're doing some theory on that, and we're confident that the ratio of nonlinearity to losses that you can get with these platforms is actually much higher than what you can get with other existing platforms. And to go even smaller, we have been asking the question of what is the smallest possible OPO that you can make. Then you can think about really wavelength-scale resonators, adding the chi-2 nonlinearity, and seeing how and when you can get the OPO to operate. And recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if you can build the OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. We have looked at these calculations and tried to estimate the threshold of OPOs, say for such a resonator, and it turns out that it can actually be even lower than the type of bulk PPLN OPOs that we have been building over the past 50 years or so. So we're working on the experiments, and we're hoping that we can actually make larger and larger scale OPO networks. So let me summarize the talk: I told you about the OPO networks and our work that has been going on on Ising machines and the measurement feedback, I told you about the ongoing work on the all-optical implementations, both on the linear side and on the nonlinear behaviors, and I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to thank you.
And this is why, despite the unique advantage that some of these older hardware have trust as the currency proposition in Fox, CBS or the energy efficiency off memory Sisters uh P. J. O are still an attractive platform for building large organizing machines in the near future. The reason for the good performance of Refugee A is not so much that they operate at the high frequency. No, there are particular in use, efficient, but rather that the physical wiring off its elements can be reconfigured in a way that limits the funding human bottleneck, larger, funny and phenols and the long propagation video information within the system. In this respect, the LPGA is They are interesting from the perspective off the physics off complex systems, but then the physics of the actions on the photos. So to put the performance of these various hardware and perspective, we can look at the competition of bringing the brain the brain complete, using billions of neurons using only 20 watts of power and operates. It's a very theoretically slow, if we can see and so this impressive characteristic, they motivate us to try to investigate. What kind of new inspired principles be useful for designing better izing machines? The idea of this research project in the future collaboration it's to temporary alleviates the limitations that are intrinsic to the realization of an optical cortex in machine shown in the top panel here. By designing a large care simulator in silicone in the bottom here that can be used for digesting the better organization principles of the CIA and this talk, I will talk about three neuro inspired principles that are the symmetry of connections, neural dynamics orphan chaotic because of symmetry, is interconnectivity the infrastructure? No. Next talks are not composed of the reputation of always the same types of non environments of the neurons, but there is a local structure that is repeated. So here's the schematic of the micro column in the cortex. And lastly, the Iraqi co organization of connectivity connectivity is organizing a tree structure in the brain. So here you see a representation of the Iraqi and organization of the monkey cerebral cortex. So how can these principles we used to improve the performance of the icing machines? And it's in sequence stimulation. So, first about the two of principles of the estimate Trian Rico structure. We know that the classical approximation of the car testing machine, which is the ground toe, the rate based on your networks. So in the case of the icing machines, uh, the okay, Scott approximation can be obtained using the trump active in your position, for example, so the times of both of the system they are, they can be described by the following ordinary differential equations on in which, in case of see, I am the X, I represent the in phase component of one GOP Oh, Theo f represents the monitor optical parts, the district optical Parametric amplification and some of the good I JoJo extra represent the coupling, which is done in the case of the measure of feedback coupling cm using oh, more than detection and refugee A and then injection off the cooking time and eso this dynamics in both cases of CNN in your networks, they can be written as the grand set of a potential function V, and this written here, and this potential functionally includes the rising Maccagnan. 
So this is why it's natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and h is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem with this approach is that the potential function V that we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process; but there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a macroscopic structure in the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this structure introduces an asymmetry in the system, which in turn induces chaotic dynamics, a chaotic search rather than an annealing process, for searching for the ground state of the Ising Hamiltonian. Within this macroscopic structure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the spins to become equal to a certain target amplitude a, and this is done by modulating the strength of the Ising couplings: the error variable e_i multiplies the Ising coupling term here in the dynamics of each DOPO. The whole dynamics is then described by these coupled equations, and because the e_i do not necessarily take the same value for the different i, this introduces an asymmetry in the system, which in turn creates chaotic dynamics, which I show here for solving an SK problem of a certain size; the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plot. You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics does not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit cycles or chaotic attractors, can also be destabilized using the modulation of the target amplitude. And so we have proposed in the past two different modulations of the target amplitude: the first one is a modulation that ensures that the entropy production rate of the system stays positive, which forbids the creation of any nontrivial attractors; but in this work I will talk about another, simpler modulation, which is given here, that works as well as the first one but is easier to implement on an FPGA. So these coupled equations, which represent the simulation of the coherent Ising machine with error correction, can be implemented especially efficiently on an FPGA, and here I show the time that it takes to simulate the system; in red you see the time it takes to simulate the x_i term, the e_i term, the dot product, and the Ising Hamiltonian for a system with 500 spins and error variables, equivalent to 500 DOPOs. So on an FPGA, the nonlinear dynamics, which correspond to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 megahertz, which corresponds to about 0.1 microseconds.
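One common way to write the amplitude-plus-error-variable dynamics described above, as a toy integration rather than the speaker's FPGA implementation, is sketched below; the function name, the parameter values p, beta, and a, and the energy readout are my own illustrative choices.

import numpy as np

def cim_with_error_correction(J, steps=20000, dt=0.01, p=1.0, beta=0.2, a=1.0, seed=0):
    """Toy CIM with amplitude-control (error) variables: each analog spin x_i is
    paired with an error variable e_i that rescales its coupling term and pushes
    x_i^2 toward the target amplitude a, destabilizing local minima."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 0.01 * rng.standard_normal(n)
    e = np.ones(n)
    best_s, best_E = None, np.inf
    for _ in range(steps):
        dx = (p - 1.0 - x**2) * x + e * (J @ x)   # spin dynamics with modulated coupling
        de = -beta * e * (x**2 - a)               # error variable tracks amplitude mismatch
        x += dt * dx
        e += dt * de
        s = np.sign(x)
        E = -0.5 * s @ J @ s                      # Ising energy of the current sign readout
        if E < best_E:
            best_E, best_s = E, s.copy()
    return best_s, best_E

# Example: a small random +/-1 SK-like instance.
rng = np.random.default_rng(1)
J = rng.choice([-1.0, 1.0], size=(8, 8)); J = np.triu(J, 1); J = J + J.T
print(cim_with_error_correction(J))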
And this is to be compared to what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a one-gigahertz repetition rate through the optical fiber, we would require 0.5 microseconds to do this, so the simulation on the FPGA can be at least as fast as a one-gigahertz-repetition-rate pulsed-laser CIM. Then the dot product that appears in these differential equations can be computed in 43 clock cycles, that is to say, about one microsecond, so for problem sizes larger than 500 spins the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in order one, and the matrix-vector product could be done in order log of N, because computing the dot product involves summing all the terms in the product, which is done on the FPGA by an adder tree whose height scales logarithmically with the size of the system. But this is only in the case where we had an infinite amount of resources on the FPGA; for dealing with larger problems of more than a few hundred spins, we usually need to decompose the matrix into smaller blocks, with a block size that I will note U here, and then the scaling becomes, for the nonlinear part, linear in N over U, and for the dot products, (N over U) squared. Typically, for a low-end FPGA chip, the block size of this matrix is about 100. So clearly we want to make U as large as possible in order to maintain the log-N scaling of the number of clock cycles needed to compute the product, rather than the quadratic scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing hierarchically the electrical components within the FPGA, in the way shown in the right panel here, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going to go into the details of how this is implemented on the FPGA; this is just to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance for such a simulated Ising machine. So instead of getting into the details of the FPGA implementation, I would like to give some benchmark results for this simulator, for the design that was used as a proof of concept for this idea, which can be found in this arXiv paper, and here I show results for solving SK problems.
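Before the benchmarks, here is a very rough cycle-count model of the adder-tree versus block-decomposition trade-off just described; the constants and the function mvm_cycles are illustrative assumptions of mine, not the paper's cost model.

import math

def mvm_cycles(n, u, pipeline_overhead=10):
    """Rough clock-cycle estimate for the FPGA matrix-vector product: with
    unlimited resources (u >= n) an adder tree sums n terms in ~log2(n) cycles;
    with block size u the work is serialized over (n/u)^2 blocks, each using a
    log2(u)-deep tree."""
    if u >= n:
        return math.ceil(math.log2(n)) + pipeline_overhead
    blocks = math.ceil(n / u) ** 2
    return blocks * (math.ceil(math.log2(u)) + pipeline_overhead)

for n in (500, 2000, 8000):
    print(n, mvm_cycles(n, u=100), mvm_cycles(n, u=n))   # blocked vs. ideal adder tree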
These are fully connected, randomly chosen plus/minus-one spin-glass problems, and we use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to reach the optimal solution of these SK problems with 99% success probability, plotted against the problem size. In red here is the proposed FPGA implementation; in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems; and in green, noisy mean-field annealing, which has behavior similar to the coherent Ising machine. And so you clearly see that the number of matrix-vector products necessary to solve these problems scales with a better exponent than for these other approaches. So that's an interesting feature of the system, and next we can see what the real time to solution is for these SK instances. So here, the time to solution in seconds to find the ground state of SK instances with 99% success probability is shown for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. And so you see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time to solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristor crossbars, which is very fast for small problem sizes, in blue here, but whose scaling is not good, and the same thing for the restricted Boltzmann machine implemented on an FPGA proposed by a group in Berkeley recently, which again is very fast for small problem sizes but whose scaling is bad, so that it does worse than the proposed approach. So we can expect that for problem sizes larger than 1,000 spins, the proposed approach would be the faster one. Let me jump to this other slide: another confirmation that the scheme scales well is that we can find maximum-cut values on the G-set benchmark graphs that are better than those previously found by any other algorithms, so they are the best known cut values to the best of our knowledge, as shown in the table in this paper. In particular, for instances 14 and 15 of this G-set we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to do this, which is a very common benchmark. It's notable that getting these good results on the G-set does not require any particularly hard tuning of the parameters; the tuning here is very simple, it just depends on the degree of connectivity within each graph. And so these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems but at all types of graph Ising problems, such as the Max-Cut problems that are common in applications.
So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. So here you see the projection of the time to solution, with 99% success probability, for solving SK problems with respect to the problem size, compared to different competing Ising machines, in particular the digital annealer, shown by the green line without dots. We show two different hypotheses for these projections: either that the time to solution scales as an exponential of N, or that the time to solution scales as an exponential of the square root of N. It seems, according to the data, that the time to solution scales more like an exponential of the square root of N, and these projections show that we could probably solve SK problems of size 2,000 spins, finding the true ground state of the problem with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, on the future plans for this coherent Ising machine simulator: the first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. And to do this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper and proposed by people in the NTT group. The idea of this model is that, instead of having the very simple ODEs that I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase component of each DOPO but also its variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. And then we plan to make the simulator open access, for the members to run their instances on the system. There will be a first version in September that will be based on simple command-line access to the simulator, and it will have just the classical approximation of the system, with binary weights and Zeeman terms; but then we will propose a second version that will extend the current Ising machine to a rack of FPGAs, in which we will add the more refined models, such as the quantum Gaussian model I just talked about, and in which we will support real-valued weights for the Ising problems as well as the Zeeman terms. So we will announce later when this is available; we are working hard on it. >>I come from the University of Notre Dame, physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I'd also like to say that I look forward to collaborations with the PHI lab and Yoshi and collaborators on the topics of this workshop. So today I'll briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving using ordinary differential equations. But I think the issues that we raise on this occasion actually apply to other analog approaches as well, and to other problems as well.
I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables, you have M clauses, each a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem from the NP class, which means you can check in polynomial time the satisfiability of any assignment. And k-SAT is NP-complete for k equal to three or larger, which means an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in the NP class can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic terms, or to the decision version of the Ising spin-glass problem. This is useful when comparing different approaches that work on different kinds of problems. When not all the clauses can be satisfied, you're looking at the optimization version of SAT, called MaxSAT, and the goal here is to find the assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications, if we had an efficient SAT solver, or NP-complete-problem solver, it would literally, positively influence thousands of problems and applications in industry and science. I'm not going to read this list, but it of course gives a strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus one and plus one, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in non-negated form, it is plus one; and if it contains the variable in negated form, it is minus one. And then we use this to formulate these products, called clause violation functions, one for every clause, which vary continuously between zero and one, and which are zero if and only if the clause itself is true. Then, in order to define the dynamics, a dynamics in this N-dimensional hypercube where the search happens and where, if solutions exist, they are sitting at some of the corners of this hypercube, we define this energy potential, or landscape function, shown here, in such a way that it is zero if and only if all the clause violation functions K_m are zero, that is, all the clauses are satisfied, keeping these auxiliary variables a_m always positive. And therefore what you do here is a dynamics that is essentially a gradient descent on this potential-energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum. However, what we do here is couple it with a dynamics for the auxiliary variables, driven by the clause violation functions, as shown here. If we didn't have this a_m here, if the auxiliary variable were driven just by the K_m, for example, you would essentially have positive feedback and an increasing variable, but in that case the search would still get stuck.
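A reconstruction of the formulation just described, in notation supplied here (the speaker's slides may use different symbols or normalizations), is

\[
K_m(\mathbf{s}) = 2^{-k_m} \prod_{i=1}^{N} \left(1 - c_{mi}\, s_i\right), \qquad K_m \in [0,1],
\]

where \(c_{mi} = +1, -1, 0\) according to whether clause \(m\) contains variable \(i\) non-negated, negated, or not at all, \(k_m\) is the number of literals in clause \(m\), and \(K_m = 0\) exactly when the clause is satisfied. The landscape and the coupled dynamics are then

\[
V(\mathbf{s}, \mathbf{a}) = \sum_{m=1}^{M} a_m\, K_m(\mathbf{s})^2, \qquad
\dot{s}_i = -\frac{\partial V}{\partial s_i}, \qquad
\dot{a}_m = a_m\, K_m(\mathbf{s}),
\]

with the auxiliary variables \(a_m\) kept positive and growing exponentially on unsatisfied clauses.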
That version is better than the constant-a_m version, but it would still get stuck; only when you put in this a_m, which makes the dynamics of this variable exponential-like, does it keep searching until it finds a solution. There is a reason for that which I'm not going to discuss here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape, and this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself, called in dynamical systems the escape rate; the inverse of that is the time scale on which you find solutions with this dynamical system. And you can see here some sample trajectories that are chaotic, because the system is nonlinear, but it's transiently chaotic, as it has to be, of course, because eventually it converges to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables, for random SAT problems, as a function of N, is the monitored wall-clock time, and it behaves polynomially until you actually reach the SAT/UNSAT transition, where the hardest problems are found. But what's more interesting is if you monitor the continuous time t, the performance in terms of the analog continuous time t, because that seems to be polynomial. And the way we show that is: we consider random 3-SAT for a fixed constraint density, just to the right of the threshold, where it's really hard, and we monitor the fraction of problems that we have not been able to solve. We select thousands of problems at that constraint ratio, solve them with our algorithm, and monitor the fraction of problems that have not yet been solved by continuous time t. This, as you see, decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law. So if you combine these two, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially with the problem size; so you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover, because you can always transform them into 3-SAT as we discussed before, or Ramsey coloring, and on these problems even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because, first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes physical wall-clock time, and that will have polynomial scaling; but you have the other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, there would be an exponential cost out there.
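As a toy numerical illustration of the dynamics formulated above, the following explicit-Euler integration solves a tiny satisfiable instance; the step size, the helper name ctds_sat_step, and the convergence check are my own illustrative choices, not the authors' solver.

import numpy as np

def ctds_sat_step(s, a, C, k, dt=0.05):
    """One explicit-Euler step of the continuous-time SAT dynamics sketched above.
    C is the M x N clause matrix with entries +1/-1/0; k[m] is the clause length."""
    M, N = C.shape
    terms = 1.0 - C * s                      # (M, N) factors (1 - c_mi * s_i)
    K = (2.0 ** -k) * terms.prod(axis=1)     # clause violation functions in [0, 1]
    grad = np.zeros(N)
    for m in range(M):
        for i in range(N):
            if C[m, i] != 0.0:
                # K_m with the i-th factor removed (product over j != i)
                Kmi = (2.0 ** -k[m]) * np.prod(np.delete(terms[m], i))
                grad[i] += -2.0 * a[m] * C[m, i] * K[m] * Kmi   # dV/ds_i contribution
    s = np.clip(s - dt * grad, -1.0, 1.0)    # gradient descent, stay inside the hypercube
    a = a * np.exp(dt * K)                   # exponential growth of auxiliary weights
    return s, a, K

# Tiny example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3).
C = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]], dtype=float)
k = np.array([2.0, 2.0, 2.0])
rng = np.random.default_rng(0)
s, a = rng.uniform(-0.1, 0.1, 3), np.ones(3)
for _ in range(2000):
    s, a, K = ctds_sat_step(s, a, C, k)
print(np.sign(s), "all clauses satisfied:", bool((K < 1e-6).all()))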
But this is some kind of trade-off between time and energy: I don't know how to generate time, but I do know how to generate energy, so that could be used for it. But there are other issues as well, especially if you're trying to do this on a digital machine, and other problems appear in physical devices too, as we'll discuss later. So if you implement this on a GPU, you can get a couple of orders of magnitude of speed-up, and you can also modify this to solve MaxSAT problems quite efficiently; we are competitive with the best heuristic solvers, these being the winners of the 2016 MaxSAT competition. So this definitely seems like a good approach, but there are of course interesting limitations, and I would say interesting because they kind of make you think about what it all means and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator when you solve this on a digital machine, using the same approach, but now measuring the number of problems you haven't solved within a given number of discrete steps taken by the integrator, you find out that you have exponential discrete-time complexity, and of course this is a problem. And if you look closely at what happens: even though the analog mathematical trajectory, that's the red curve here, is smooth, if you monitor what happens in discrete time, the integrator advances very little and the step size fluctuates like crazy, so it really is as if the integration freezes out. And this is because of the phenomenon of stiffness, which I'll talk a little bit more about later. You know, it might look like an integration issue on digital machines that you could improve, and you could definitely improve it, but actually the issue is bigger than that, it's deeper than that, because on a digital machine there is no time-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think that maybe this wouldn't be an issue in an analog device, and to some extent that's true, as analog devices can be made to run faster, but they also suffer from their own problems, because they are not going to be exact solvers either. Indeed, if you look at other systems, like measurement-feedback Ising machines or coupled oscillator networks, they all hinge on some kind of ability to control your variables with arbitrarily high precision: in oscillator networks you want to read out phases or frequencies, and in the case of CIMs you require components that are identical and programmable, which is hard to maintain, because they kind of fluctuate and shift away from one another, and if you could control that, of course, you could control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result, from 1978, due to Schonhage,
Schönhage showed, and it is a purely computer-science proof, that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you could solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have finite precision, so the next question is, how does that affect the computation of hard problems? This is what we are after. Loss of precision means information loss, or entropy production, so what you are really looking at is the relationship between the hardness of a problem and the cost of computing it. According to Schönhage, there is this left branch, which in principle could be polynomial time, but the question is whether or not this is achievable. It is not achievable, and you end up on the right-hand side: there is always going to be some information loss, some entropy production, that could keep you away from polynomial time. So this is what we would like to understand, and the source of this information loss, I will argue, is not just noise in any physical system, but is also of algorithmic nature.

So Schönhage's result is purely theoretical; no actual solver is proposed. We can then ask, just theoretically, out of curiosity, would such solvers exist in principle, because he is not proposing a solver with such properties: a solver which, if you could calculate its trajectory in a lossless way, would have the right properties. I argue yes; I don't have a mathematical proof, but I have some arguments that that would be the case, and this is the case for our continuous-time solver: if you could calculate its trajectory losslessly, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a slightly more delicate question, because time in ODEs can be rescaled however you want. So what Burns says is that you actually have to measure the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system itself and not of its parametrization. And we did that. My student did that first, improving on the stiffness of the integration by using implicit solvers and some smart tricks, so that you are actually closer to the true trajectory, and using the same approach: what fraction of problems can you solve within a given length of trajectory? You find that it scales polynomially with the problem size, so we have polynomial-length complexity. That means that our solver is both poly-length and, as it is defined, also poly-time as an analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is all this stiffness: every integrator has to truncate, discretizing means truncating the equations, and what it has to do is keep the integration within the so-called stability region for that scheme, which means keeping the product of the eigenvalues of the Jacobian and the step size within this region. If you use explicit methods, you want to stay within this region.
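Since analog time can be rescaled away, the trajectory-length measurement mentioned above is the honest invariant to track. A small illustrative sketch is below: it accumulates the Euclidean arc length, the integral of ||dy/dt|| dt, alongside a plain Euler integration, so "poly-length" can be checked independently of how time is parametrized. The toy right-hand sides at the end are assumptions chosen only to show that rescaling time changes the solve time but not the length.

```python
import numpy as np

def trajectory_arc_length(rhs, y0, t_max, dt=1e-3):
    """Accumulate the arc length of the solution of dy/dt = rhs(t, y).

    Arc length L = integral of ||dy/dt|| dt is invariant under any
    reparametrization of time, unlike the final time itself.
    Uses plain explicit Euler purely for illustration.
    """
    y = np.asarray(y0, dtype=float)
    t, length = 0.0, 0.0
    while t < t_max:
        v = np.asarray(rhs(t, y))
        length += np.linalg.norm(v) * dt     # ||dy|| accumulated step by step
        y = y + dt * v
        t += dt
    return length, y

# Example: rescaling time by a factor of 10 changes the solve *time* but not the length.
rhs_slow = lambda t, y: -y                   # dy/dt = -y
rhs_fast = lambda t, y: -10.0 * y            # same orbit, traversed 10x faster
L_slow, _ = trajectory_arc_length(rhs_slow, [1.0], t_max=10.0)
L_fast, _ = trajectory_arc_length(rhs_fast, [1.0], t_max=1.0)
# L_slow and L_fast agree up to discretization error.
```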
But what happens is that some of the eigenvalues grow fast for stiff problems, and then you are forced to reduce the step size so that the product stays in this bounded domain, which means that you are forced to take smaller and smaller time steps, so you are freezing out the integration, and I will show you that this is the case. Now, you can move to implicit solvers, which is a trick; in this case the stability domain is actually on the outside. But what happens then is that some of the eigenvalues of the Jacobian, also for stiff systems, start to move towards zero, and as they move towards zero they are going to enter this instability region, so your solver tries to keep them out by increasing the step size. But if you increase the step size, you increase the truncation errors, so you get randomized in this large search space, so it is really not going to work out either.

Now, one can sort of introduce a theory, or a language, to discuss computational complexity using the language of dynamical systems theory. I don't have time to go into this, but basically, for hard problems you have a chaotic saddle sitting in the middle of the search space somewhere, and that dictates how the dynamics happens, and the invariant properties of that saddle are what dictate the performance, and many other things. An important measure that we find helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Intuitively, what it describes is the rate at which the uncertainty contained in the insignificant digits of a trajectory flows towards the significant ones, as you lose information because errors are grown, or developed, into larger errors at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property; it's a property of the set of trajectories, not of how you compute them, and it is really the intrinsic rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional system you have as many positive and negative Lyapunov exponents as the dimension of the space: the number of unstable manifold directions and the number of stable manifold directions. And there is an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, which is the escape rate that I already talked about. Now, one can actually prove simple theorems, like a back-of-the-envelope calculation. The idea here is that you know the rate at which closely started trajectories separate from one another, so you can say that this is fine as long as the trajectory finds the solution before nearby trajectories separate too quickly. In that case, I can have the hope that if I start several closely spaced trajectories from some region of the phase space, they go into the same solution, and that is this upper bound, this limit, and it really shows that it has to be an exponentially small number. What it depends on is the N-dependence of the exponent right here, which combines the information-loss rate and the time-to-solution performance.
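The escape rate kappa referred to above, like the earlier "fraction of unsolved problems decays exponentially in analog time" measurement, comes down to fitting an exponential tail to a survival curve. A hedged sketch of that fit follows; the synthetic solve times, the rate 0.3, and the tail cutoff are invented for illustration and are not the talk's data.

```python
import numpy as np

def escape_rate_from_solve_times(solve_times, t_grid):
    """Estimate the escape rate kappa from a set of analog solve times.

    The surviving (unsolved) fraction p(t) of an ensemble of runs is expected
    to decay as p(t) ~ exp(-kappa * t) at late times; kappa is obtained from a
    least-squares fit of log p(t) over the tail of the survival curve.
    """
    solve_times = np.asarray(solve_times, dtype=float)
    p = np.array([(solve_times > t).mean() for t in t_grid])     # survival curve
    tail = (p > 0) & (p < 0.5)                                   # fit only the exponential tail
    slope, _ = np.polyfit(t_grid[tail], np.log(p[tail]), 1)
    return -slope                                                # kappa >= 0

# Synthetic check: exponentially distributed solve times with rate 0.3.
rng = np.random.default_rng(1)
times = rng.exponential(scale=1.0 / 0.3, size=5000)
kappa_hat = escape_rate_from_solve_times(times, np.linspace(0.0, 20.0, 200))
# kappa_hat should come out close to 0.3
```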
So, if this exponent here has a large N-dependence, or even a linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. This is the sort of direction you want to go in, and this formulation is applicable to all deterministic dynamical systems. And I think we can expand this further, because there is a way of getting the expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about. It's kind of like a program that one can try to pursue, and this is it. So the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. These systems can be more efficient, by orders of magnitude, than digital ones in solving NP-hard problems, because, first of all, many of these systems avoid the von Neumann bottleneck, there is parallelism involved, and you also have a larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of the limits, of what the possibilities and what the limits are. And one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? And I think that's the exciting part, to derive these limits.

Published Date : Sep 27 2020



Neuromorphic in Silico Simulator For the Coherent Ising Machine


 

>> Hi everyone. I am a fellow from the University of Tokyo. Before I start, I would like to thank the organizers and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new lab. I am happy to share with you today some of the recent work that has been done either by me or by colleagues in the group. As indicated in the title, my talk is about a neuromorphic in silico simulator for the coherent Ising machine, and here is the outline. I would like to make the case that the simulation, in digital electronics, of the CIM can be useful for better understanding or improving its function principles by introducing some ideas from neural networks. This is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, the prediction of the performance that can be achieved using a very large-scale simulator in the third part, and finally I will talk about future plans.

So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper. This comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation. In red here are the limitations of each of these hardware platforms, and, interestingly, the FPGA-based systems, such as the Digital Annealer, the Toshiba bifurcation machine, or a recently proposed restricted Boltzmann machine FPGA by a group in Berkeley, offer a good compromise between speed and scalability. This is why, despite the unique advantages that some of the other hardware have, such as the quantum superposition of flux qubits or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at a high frequency, nor that they are particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-out, and the long propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, rather than the physics of the electrons or the photons.

To put the performance of these various hardware platforms in perspective, we can look at the computation performed by the brain: the brain computes using billions of neurons, using only 20 watts of power, and operates at what is, in clock-frequency terms, a very slow rate. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silico, in the bottom panel here, that can be used to suggest better organization principles for the CIM. In this talk, I will discuss three neuro-inspired principles: the asymmetry of connections and the chaotic neural dynamics it induces, the local microstructure of connectivity, and the hierarchical organization of connectivity.
Neural networks are not composed of the repetition of always the same types of neurons; there is a local structure that is repeated. Here is a schematic of the micro-column in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in silico simulation?

First, about the two principles of asymmetry and microstructure. We know that the classical approximation of the coherent Ising machine is analogous to rate-based neural networks. In the case of the Ising machines, this classical approximation can be obtained using the truncated Wigner approximation, for example. So the dynamics of both systems can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum of the ω_ij x_j terms represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the coupling term. In both cases, the CIM and neural networks, this dynamics can be written as the gradient descent of a potential function V, written here, and this potential function includes the Ising Hamiltonian. This is why it is natural to use this type of dynamics to solve the Ising problem, where the ω_ij are the Ising couplings and h is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the ω_ij are symmetric. The well-known problem of this approach is that the potential function V we obtain is very non-convex at low temperature, and so one strategy is to gradually deform this landscape using an annealing process. But there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach.

This is why we propose to introduce a microstructure in the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this local structure introduces an asymmetry in the system, which in turn induces chaotic dynamics, a chaotic search rather than an annealing process, for searching for the ground state of the Ising Hamiltonian. Within this microstructure, the role of the error variables is to control the amplitude of the analog spins, to force the amplitude of the x_i to become equal to a certain target amplitude a. This is done by modulating the strength of the Ising coupling: the error variable e_i multiplies the Ising coupling here in the dynamics of the DOPO. The whole dynamics, described by these coupled equations, introduces an asymmetry in the system, because the e_i do not necessarily take the same value for different i, and this in turn creates chaotic dynamics, which I am showing here for a certain problem size of an SK problem, in which the x_i are shown here, the e_i here, and the value of the Ising energy is shown in the bottom plot.
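A compact way to see the two ingredients just described, an analog spin x_i driven by parametric gain plus Ising feedback and an error variable e_i that modulates that feedback so that |x_i| is pinned to a target amplitude, is a direct numerical integration. The sketch below is a schematic rendering written from the description above; the exact functional forms, parameter values, and the Euler discretization are assumptions, not the published model or the FPGA implementation.

```python
import numpy as np

def cim_with_error_correction(J, p=1.1, beta=0.3, target=0.5,
                              dt=0.01, steps=20000, seed=0):
    """Schematic CIM-like dynamics with amplitude error correction.

    dx_i/dt = (p - 1 - x_i^2) * x_i + e_i * sum_j J_ij x_j   (DOPO gain + modulated coupling)
    de_i/dt = -beta * e_i * (x_i^2 - target)                  (drives x_i^2 towards the target)

    Returns the best Ising configuration found and its energy for coupling J.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 0.01 * rng.standard_normal(n)         # small random initial in-phase amplitudes
    e = np.ones(n)                            # error-correction variables start at 1
    best_sigma, best_energy = None, np.inf
    for _ in range(steps):
        feedback = J @ x
        x += dt * ((p - 1.0 - x * x) * x + e * feedback)
        e += dt * (-beta * e * (x * x - target))
        sigma = np.where(x >= 0, 1, -1)
        energy = -0.5 * sigma @ J @ sigma     # Ising energy H = -1/2 sum_ij J_ij s_i s_j
        if energy < best_energy:
            best_energy, best_sigma = energy, sigma
    return best_sigma, best_energy

# Example on a small random symmetric +/-1 coupling matrix (SK-like instance).
rng = np.random.default_rng(42)
J = rng.choice([-1.0, 1.0], size=(16, 16))
J = np.triu(J, 1); J = J + J.T               # symmetric, zero diagonal
sigma, energy = cim_with_error_correction(J)
```

Depending on the parameters, the amplitudes either settle into a fixed point or keep wandering between configurations; the chaotic-search regime described above corresponds to the latter.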
And you see this chaotic search that visits various local minima of the Hamiltonian and eventually finds the ground state. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian so that we do not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using a modulation of the target amplitude. We have proposed in the past two different modulations of the target amplitude: the first one is a modulation that ensures that the entropy production rate of the system becomes positive, and this forbids the creation of any nontrivial attractors. But in this work I will talk about another modulation, a heuristic modulation, which is given here, that works as well as the first modulation but is easier to implement on an FPGA.

So these coupled equations, which represent the classical simulation of the coherent Ising machine with error correction, can be implemented especially efficiently on an FPGA, and here I show the time that it takes to simulate the system. In red you see the time it takes to simulate the x_i term, the e_i term, the dot product, and the Ising energy, for a system with 500 analog spins, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics, which corresponds to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles, which corresponds to about 0.1 microseconds. This is to be compared with what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a one-gigahertz repetition rate of the optical pulses, we would require 0.5 microseconds to do this. So the simulation on the FPGA can be at least as fast as a one-gigahertz-repetition-rate measurement-feedback CIM. Then, the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about one microsecond. So, for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), scaling as the logarithm of N, because computing the dot product involves summing all the terms of the product, which is done on the FPGA by an adder tree whose height scales logarithmically with the size of the system. But that is only the case if we had an infinite amount of resources on the FPGA; for dealing with larger problems of more than 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that I denote by nu here. And then the scaling becomes, for the nonlinear part, linear in N over nu, and for the dot product, N squared over nu. Typically, for a low-end FPGA, the block size of this matrix is about 100.
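To make the nu-dependence of those scalings concrete, here is a toy cycle-count model of the fixed-resource regime described above: nu parallel multiply-accumulate lanes feed a binary adder tree, each row of the N x N matrix needs about N/nu passes, and rows are processed sequentially, giving roughly N squared over nu cycles for the dot product against N over nu for the element-wise nonlinearity. The constants and the pipelining assumption are invented for illustration and are not measurements from the design in the talk.

```python
import math

def dot_product_cycles(n, nu):
    """Toy cycle count for an N x N matrix-vector product with nu parallel
    multiply-accumulate lanes feeding a binary adder tree of depth log2(nu).
    Each row needs ceil(N/nu) passes; rows are processed sequentially, and
    the tree latency is paid once because the pipeline stays full."""
    passes_per_row = math.ceil(n / nu)
    tree_latency = math.ceil(math.log2(max(nu, 2)))
    return n * passes_per_row + tree_latency        # ~ N^2 / nu for fixed nu

def nonlinear_cycles(n, nu):
    """Toy cycle count for the element-wise DOPO nonlinearity with nu lanes."""
    return math.ceil(n / nu)                        # ~ N / nu

# With a fixed nu of about 100, as on a low-end FPGA, the dot product grows
# like N^2/nu and becomes the bottleneck well before the nonlinear part does.
for n in (100, 500, 2000):
    print(n, dot_product_cycles(n, nu=100), nonlinear_cycles(n, nu=100))
```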
So clearly we want to make nu as large as possible, in order to maintain the logarithmic scaling of the number of clock cycles needed to compute the dot product, rather than the N squared over nu that occurs if we decompose the matrix into smaller blocks. But the difficulty with these larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing the arithmetic components hierarchically within the FPGA, which is shown here in this right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA; this is just to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance for a simulator of an Ising machine.

So, instead of getting into the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper. Here I show results for solving SK problems, fully connected, random plus-or-minus-one spin-glass problems, and we use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to get the optimal solution of the SK problem with 99% success probability, plotted against the problem size. In red here is the proposed FPGA implementation, in blue is the number of matrix-vector products necessary for the CIM without error correction to solve these SK problems, and in green is noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. You see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches. So that's an interesting feature of the system, and next we can look at the real time to solution. To solve these SK instances, the time to solution in seconds needed to find the ground state with 99% success probability is shown for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. You see that the scaling of the proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the other state-of-the-art approaches. Moreover, the relatively good scaling of the time to solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield network implemented on memristors, shown in blue here, which is very fast for small problem sizes but whose scaling is not good, and the same thing for the restricted Boltzmann machine implemented on an FPGA proposed by the group in Berkeley recently, which again is very fast for small problem sizes,
but whose scaling is bad, so that it is worse than the proposed approach, and we can expect that for problem sizes larger than, let's say, 1000 spins, the proposed approach would be the faster one.

Let me jump to this other slide and another confirmation that the scheme scales well: we can find maximum-cut values on the G-set benchmark instances that are better than the cut values previously found by any other algorithm, so they are the best known cut values to the best of our knowledge, which is shown in this table here. In particular, for instances 14 and 15 of the G-set we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to do this. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters; the tuning used here is very simple, it just depends on the degree of connectivity within each graph. These good results on the G-set indicate that the proposed approach would be good not only at solving SK and MaxCut problems, but all types of Ising problems on graphs and combinatorial optimization problems.

So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. Here you see a projection for the time to solution, with 99% success probability, for solving SK problems with respect to the problem size, compared to different state-of-the-art Ising machines, in particular the Digital Annealer, which is shown by the green line here. We show two different hypotheses for these projections: either that the time to solution scales as an exponential of N, or that the time to solution scales as an exponential of the square root of N. It seems, according to the data, that the time to solution scales more like an exponential of the square root of N, and this projection also shows that we could probably solve SK problems of 2000 spins, finding the real ground state of the problem with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches.

So, some of the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. To do this, what can be simulated on the FPGA is this quantum Gaussian model that is described in this paper and proposed by people in the NTT group. The idea of this model is that, instead of having the very simple ODEs I have shown previously, it includes paired ODEs that take into account not only the mean of the amplitude of the in-phase component, but also its variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. And then we plan to make the simulator open access, for the members to run their instances on the system.
There will be a first version in September that will be based on simple command-line access to the simulator, and which will have just the classical approximation of the system, with binary weights. Then we will propose a second version that will extend the current Ising machine to a rack of eight FPGAs, in which we will add the more refined models, the truncated Wigner and the quantum Gaussian model that I just talked about, and which will support real-valued weights for the Ising problems and support measurement feedback. We will announce later when this is available, and Farah is working hard to get the first version available sometime in September. Thank you all, and we'll be happy to answer any questions that you have.

Published Date : Sep 24 2020



Nayaki Nayyar, Ivanti and Stephanie Hallford, Intel | CUBE Conversation, July 2020


 

(calm music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Welcome to this CUBE Conversation. I'm Lisa Martin, and today, I'm talking to Ivanti again and Intel, some breaking news. So please welcome two guests, the EVP and Chief Product Officer of Ivanti, Nayaki Nayyar. She's back, and we've also got the VP and GM of Business Client Salute Platforms for Intel, Stephanie Hallford. Nayaki and Stephanie, it's great to have you on the program. >> It's great to be back here with you Lisa, and Stephanie glad to have you here with us, thank you. >> Thank you, we're excited >> Yeah, you guys are going to break some news for us, so let's go ahead and start. Nayaki, hot off the presses is Ivanti's announcement of its new hyper-automation platform, Ivanti Neurons, helping organizations now in this new next normal of so much remote work. Now, just on the heels of that, you're announcing a new strategic partnership with Intel. Tell me about that. >> So Lisa, like we announced, our Ivanti Neurons platform that is helping our customers and all the IT organizations around the world to deal with this explosive growth of remote workers, the devices that would work is used, the data that it's getting from those devices, and also the security challenges, and Neurons really help address what we call discover all the devices, manage those devices, self-heal those devices, self-secure the devices, and with this partnership with Intel, we are extremely excited about the potential our customers and the benefits that customers can get. Intel is offering what they call Device as a Service, which includes both the hardware and software, and with this partnership, we are announcing the integration between Intel's vPro platform and Ivanti's Neurons platform, which is what we are so excited about. Our joint customers, joint enterprises that are using both the products can now benefit from this out of the box integration to take advantage of this Device as a Service combined offer. >> So Stephanie, talk to us from Intel's perspective. This is an integration of Intel's Endpoint Management Assistant with Ivanti Neurons. How does this drive up the value for the EMA solution for your customers who are already using it? >> Right, well, so vPro is just to step everyone back, vPro is the number one enterprise platform trusted now for over 14 years. We are in a vast majority of enterprises around the world, and that's because vPro is essentially our best performing CPUs, our highest level of security, our highest level manageability, which is our EMA or "Emma" manageability solution, which Ivanti is integrating, and also stability, so that is the promise to IT managers for a stable, the Intel Stable Image platform, and what that allows is IT managers to know that we will keep as much stability and fast forward and push through any fixes as quickly as possible on those vPro devices because we understand that IT networks usually QUAL, you know, not all at one time, but it's sequential. So vPro is our number one enterprise built for business, validated, enabled, and we're super excited today because we're taking that remote manageability solution that comes with vPro, and we are marrying it with Ivanti's top class in point management solution, and Ivanti is a world leader in managing and protecting endpoints, and today more than ever, because IT's remote and Intel. 
For instance, our IT over one weekend had to figure out how to support a hundred thousand remote workers, so the ability for Ivanti to now have our remote manageability in band, out of band, on-prem, in the cloud, it really rounds out. Ivanti's already fantastic world-class solution, so it's a fantastic start to what I foresee is going to be a great partnership. >> And probably a big target install base. Now, can you talk to me a little bit about COVID as a catalyst for this partnership? So many companies, the stuff they talked about a great example of Intel pivoting over a weekend for a hundred thousand people. We're hearing so many different numbers of an explosion of devices, but also experts and even C-suite from tech companies projecting maybe 30 to 40% of the workforce only will go back, so talk to me about COVID as really driving the necessity for organizations to benefit from this type of technology. >> Yeah, so Lisa, like Stephanie said, right, as Intel had to take hundred thousand employees remote over a weekend, that is true for pretty much every company, every organization, every enterprise independent of industry vertical that they had to take all their workforce and move them to be primarily remote workers, and the stats of BFC is what used to be, I would say, three to four percent before COVID of remote working. Post-COVID or during COVID, as we say, it's going to be around 30, 40, 50%, and this is a conversation and a challenge. Every IT organization, every C-level exec, and, in most cases, I'm also seeing this become a board conversation that they're trying to figure out not just how to support remote workers for a short time, but for a longer time as this becomes the new normal or the next normal, whatever you call that, Lisa, and really helping employees through this transition and providing what we call a seamless experience as we employees are working from home or on the move or location agnostic, being able to provide a experience, a service experience that understands what employee's preferences are, what their needs are, and providing that consumer with experiences, what this joint offering between Intel and Ivanti really brings together for our joint customers. >> So you talked about this being elevated to the board level conversation, you know, and this is something that we're hearing a lot of that suddenly there's so much more visibility and focus on certain parts of businesses, and survival is, so many businesses are at risk. Stephanie, I'd like to get your perspective on how this joint solution with Intel and Ivanti, do you see this as an opportunity to give your customers not just a competitive advantage, but for maybe some of those businesses who might be in jeopardy like a survival strategy? 
>> Absolutely, I mean, the, you know, while we both Ivanti and Intel have our own IT challenges and we support our workers directly, we are broadly experienced in supporting many many companies that frankly, perhaps, weren't planning for these types of instances, remote manageability overnight, security and cyber threats getting more and more sophisticated, but, you know, tech companies like Ivanti, like Intel, we have been thinking about this and experiencing and planning for these things and bringing them out in our products for some time, and so I think it is a great opportunity when we come together and we bring that, you know, IP expertise and IT expertise, both IP technical and that IT insight, and we bring it to customers who are of all industries, whether it be healthcare or financial or medium businesses who are increasingly being managed by service providers who can utilize this type of device as a service and endpoint manageability. Most companies and certainly all IT managers will tell you they're overwhelmed. They are traditionally squeezed on budget, and they have the massive requirement to take their companies entirely cloud and cloud oriented or maybe a hybrid of cloud and on-prem, and they really would prefer to leave network security and network management to experts, and that's where we can come in with our platform, with our intelligence, we work hard to continue to build that product roadmap to stay ahead of cyber threats. Our vPro platform, for instance, has what we call Intel Hardware Shield to set up technologies that actually protects against cyber attack, even under the OS, so if the OS is down or there's a cyber attack around the OS, we actually can lock down the BIOS and the Firmware and alert the OS and have that communication, which allows the system to protect those areas that need to be protected or lock down or encrypt those areas, so this is the type of thing we bring to the party, and than Ivanti has that absolute in Point Management credibility that there's just, I think, ease, So if IT managers are worried about moving to the cloud and getting workers remote and, you know, managing cyber threats, they really would prefer to leave this management and security of their network to experts like Ivanti, and so we're thrilled to kind of combine that expertise and give IT managers a little bit of peace of mind. >> I think it's even more than giving IT managers a peace of mind, but so talk to me, Nayaki, about how these technologies work together. So for example, when we talked about the Neurons and the hyper-automation platform that you just announced, you were talking about the discovery, the self-healing, self-securing of all these devices within an organization that they may not even know they have EDGE devices on-prem cloud. Talk to me about how these two technologies work together. Is it discovering all these devices first, self-security, self-healing? How does then EMA come into play? >> So let me give an analogy in our consumer world, Lisa. We all are used to or getting used to cars where they automatically heal themselves. I have a car sitting in my garage that I haven't taken to a workshop for last four years since I bought it, so it's almost a similar experience that combined offering things to our customers where all these endpoints, like Stephanie said, we are, I would say, one of the leading providers in the endpoint management where we support today. 
Ivanti supports over 40 million endpoints for our customers, and combining that with the strong vPro platform from Intel, that combined offering is what we call Device as a Service, so that the IT departments or the enterprises don't have to really worry about how we are discovering all of those devices and managing those devices. Self-healing, like if there are any performance issues, configuration drift issues, or any security vulnerabilities or anomalies on those devices, it automatically heals them. I mean, that is the beauty of it, where IT doesn't have to worry about trying to do it reactively. These Neurons detect and self-heal those devices automatically in the background, almost augmenting IT with what I call these automation bots that are constantly running in the background on these devices and self-healing and self-securing those devices. So that's a benefit every organization, every company, every enterprise, every IT department gets from this joint offering, and if I were on their side, on the other side, I could really sleep at night knowing those devices are now not just being managed, but are secure, because now we are able to auto-heal or auto-secure those devices in the background continuously. >> Let's talk about speed, 'cause that's one of the things, speed and scale, we talk about with every different technology, but right now there's so much uncertainty across the globe. So for joint customers, Stephanie talked about the large install base of customers on the vPro platform, how quickly would they be able to leverage this joint solution to really get those endpoints under management and start dialing down some of the risks like device sprawl and security threats? >> So the joint offering is available today, and the integration between both platforms is being released with this announcement, so companies that have both of our platforms and solutions can start implementing it and really getting the benefit out of it. They don't have to wait for another three months or six months. Right after this release, they should be able to integrate the two platforms, discover everything that they have across their entire network, manage and secure those devices, and use these Neurons to automatically heal and service those endpoints. >> So this is something that could get up and running pretty quickly? >> It's an out-of-the-box connection and integration that we worked on very closely; Stephanie's team and my team have been working on it for months now, and, yeah, this is an exciting announcement, not just from the product perspective, but also for the benefit it gives our customers: the speed, the accuracy, and the service experience that they can provide to their end users, employees, customers, and consumers. I think that's super beneficial for everyone. >> Absolutely, and then that 360 degree view. Stephanie, we'll wrap it up with you. Talk to us about how this new strategic partnership is a facilitator or an accelerant of Intel's Device as a Service vision. >> Well, you know, first off, I wanted to commend Nayaki's team, because our engineers were so impressed.
They, you know, felt like they were working with the PhD advanced version of so many other engineering partners they'd ever come across, so I think we have a very strong engineering culture between our two companies and the speed at which we were able to integrate our solutions, and at the same time start thinking about what we may be able to do in the future, should we put our heads together and start doing a joint product roadmap on opportunities in the future, network connectivity, wifi connectivity, all sorts of ideas, so huge congratulations to the engineering teams because the speed at which we were able to integrate and get a product offering out was impressive, but, you know, secondarily, on to your question on device as a service, this is going to be by far where the future moves. We know that companies will tend to continue to look for ways to have sustainability in their environments, and so when you have Device as a Service, you're able to do things like into end supporting that device from its start into a network to when you end of life a device and how you end of life that device has severe, some sustainability and costs, you know, complexities, and if we're able to manage that device from end to end and provide servicing to alert IT managers and self-heal before problems happen, that helps obviously not only with business models and, you know, protecting data, but it also helps in keeping systems running and being alert to when systems begin to degrade or if there are issues or if it's time to refresh because the hardware is not new enough to take advantage of the new software capabilities, then you're able to end of life that device in a sustainable way, in a safe way, and, even to some degree, provide some opportunity for remediation of data and, you know, remote erase and continue to provide that security all the way into the end, so when we look at device as a service, it's more than just one aspect. It's really taking a device and being responsible for the security, the manageability, the self-healing from beginning to end, and I know that all IT managers need that, appreciate that, and frankly don't have the time or skillsets to be able to provide that in their own house. So I think there's the beginnings today, and I think we have a huge upside to what we can do in the future. I look at Intel's strengths in enterprise and how long we have been, you know, operating in enterprises around the world. Ivanti's, you know, in the vast majority of Fortune 100s, and when you've got kind of engineering powerhouses that are coming together and brainstorming it's, I think, it's a great partnership for relief for customer pain points in the future, which unfortunately there's going to be more probably. >> And this is just the beginning. >> I think that's one thing we can guarantee. It's what, sorry? >> Yeah, and it's just the beginning. This partnership is just the beginning. You will see lot more happening between both the companies as we define the roadmap into the future, so we are super excited about all the work, the joint teams, and, Stephanie, I want to take this opportunity to thank you, your leadership, and your entire organization for helping us with this partnership. >> We're excited by it, we are, we know it's just the beginning of great things to come. >> Well, just the beginning means we have to have more conversations. The cultural fit really sounds like it's really there, and there's tight alignment with Ivanti and Intel. Ladies, thank you so much for joining me. 
Nayaki, great to have you back on the program. >> Thank you, thank you, Lisa. Thank you for hosting us, and, Stephanie, it's always a pleasure talking to you, thank you. >> Likewise, looking forward to the launch and all the customer reactions. >> Absolutely. >> Yes, all right, thanks Nayaki, thanks Stephanie. For my guests, I'm Lisa Martin. You're watching this CUBE Conversation. (calm music)

Published Date : Jul 23 2020



Jeff Abbott & Nayaki Nayyar, Ivanti | CUBE Conversation, July 2020


 

>> Announcer: From theCUBE studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> Welcome to this cube conversation. I'm Lisa Martin, and I'm joined by two guests from Ivanti, today. Please welcome its President, Jeff Abbot and its Chief Product Officer, Nayaki Nayyar. Jeff and Nayaki, it's so great to talk to you today. >> Pleasure to speak to you, Lisa. >> Pleasure to be here, Lisa, look forward to this. >> Me too. So Jeff, let's start with you, transformation, you got some big news that you're going to be sharing and breaking through theCUBE Conversation today which we're going to dig into but there's been a lot of transformation at the top at Ivanti, you're new, tell me about that and what's the shake up that's been going on there to really drive this company forward? >> Yeah. We have got a lot of transformation going on, Lisa. And it's been an exciting ride for the first six months of my tenure at Ivanti. I came in January as president along with our new CEO, who has been Chairman, Jim Schaper. And when Jim and I started talking about Ivanti last fall, the challenges were pretty clear. It's a company that's had outstanding employees, fantastic customers, and a real heritage of innovation. But they had leveled off a little bit. And the idea behind the new executive team was to bring in a team of veterans to take it to the next level, really to grow to a billion dollars and beyond, both organically and through acquisitions. So you're right, we brought in a fantastic team of veterans people that Jim and I have both worked with: Angie Gunter, new Chief Marketing Officer, Mary Trick, new Chief Customer Officer, we recently hired Nayaki Nayyar, who's with us today, our Chief Product Officer, John Flavin, the Head of our Industry Business Unit, and a host of others that have all come in with a single mission to take Ivanti to the next level. >> So Nayaki, let's dig into Ivanti's vision, lot of change, lot of momentum, I imagine with that change, but what's your vision? >> So let's take a step back, Lisa and you look at, what I call Ivanti's position of strength. And when you look at the entire portfolio Ivanti has, one of the key strengths Ivanti has is its ability to discover, secure, manage and service the endpoints. And if you look at the entire marketplace, there is no vendor in the market today, most of them UEM vendors don't have service management, service management don't have UEM, our ability, Ivanti's ability to do this end to end management of endpoints all the way from discovery to security to service management is what our key strength is. That's our competitive advantage, bringing these three pillars together under one umbrella and having a holistic story. Especially in this day and age of COVID and post COVID, where everyone is trying to manage those endpoints, secure those endpoints, and have almost a seamless experience as remote becomes the next normal going forward for every enterprise, Lisa. >> Yeah, the next normal. Well, there's data scatter, there's device scatter and it's now almost like so many people working from home overnight a few months ago that now will have almost a relationship with our devices because they're our lifeline. So for an organization to be able to understand where all those devices are, people are now working from home, but as you shared, Nayaki, with me the other day, there's some gartner data that demonstrates that 3.6% of the workforce before COVID was working from home. 
It might be 10X that post COVID So the amount of device scatter and data scatter and need to secure, that challenge is even going up. So how does Ivanti help? How do you solve that challenge? >> So Lisa, if you put yourself in any large enterprise and organization that is dealing with this post COVID or addressing the needs of a remote worker, the remote workers are going through, I would say, explosive growth where they used to be single digits 3% 4% before COVID, and now, during COVID, and after COVID, it's probably going to be I would say, 30, 40% of remote workers that every enterprise has to now provide that service, that seamless service experience as they're working from home, they could be on the move. So providing that seamless experience is, I would say, number one priority and a key challenge for every enterprise. So what we are going to be releasing and launching and announcing to the market given our position of strength in managing endpoints is how we help that seamless experience and what I call the ambient experience for an end user independent of where they are working from, they could be working from home, they could be on the move, or office. >> Which is critical these days. But before we dig into the announcement, Jeff, I wanted to ask you, some of the stats that I've been seeing in terms of the C suite and the amount of decisions that the C suite has had to make in the last four months has been more than over the last five or so years. Talk to us a little bit about how Ivanti got together this new C suite to make the decision to announce what you're going to talk about today so quickly. >> Now, that's a great point. And it's one that we had to, quite frankly, Lisa. The market is demanding a hyper-automation, it's demanding more agnostic deployment, it needs more flexibility in terms of the ability to be self driven and sense and service without a whole lot of intervention. So we knew that when we came in as a new leadership team, the first thing we had to do was get the go-to-market strategy in order, which we did. We balanced our direct sales strategy with our partner strategy. We made some changes in the marketing organization to a more contemporary content-focused demand generation style, and we reset the company's focus on customer outcomes. And in so doing, we changed the mentality to success as measured by are we meeting our customers intended business goals? And that led us very quickly to say, "Listen, the unified IT message we've been using for the last few years has been great, and our customers have responded well to it, and we've acquired a lot of new customers with that message, but the game has changed." And as Nayaki was leading up to, the expectation has changed. And the entire IT space is relatively mature but the expectations and the pressure on that space has grown tremendously, as you pointed out, in the last few years. Just think of the number of devices we all now have to manage as a company, and it's growing. And as Nayaki pointed out as she discusses our launch, it's growing almost exponentially. So we knew that we had to have a new product strategy, we had to take the unified IT message and start to think differently about how the IT leaders in the field and our various customers around the world, how their game has changed and lean in to what they need in terms of automation, AI, bot technology, and so on. And that's what we're announcing with this latest release. >> All right, Nayaki, take it away. What are you announcing? 
>> Yeah, so what we're super-excited about, Lisa, is, to Jeff's point, how to handle this explosive growth: growth of devices, growth of data that is being generated from those devices, and also this explosive growth of remote workers. The only way to handle this growth is through what we call automation, and we are taking that next, advanced automation, that leapfrog strategy of what we call hyper-automation, and embedding it into our entire stack: into our UEM endpoint management stack, into our security stack, and also into service management, to help customers, what we call, self-heal: discover all the devices continuously, optimize the performance, optimize any configuration drifts, and proactively, predictively remediate any issues that you see on those devices, and get into a world of what we call the self-healing, autonomous edge, where it's continuously detecting every issue and being able to predictively and cognitively self-heal that edge. And this is what we are launching, what we've branded as Ivanti Neurons. That is the brand we are launching for this automation, these hyper-automation bots, that every company can deploy into their network. They will constantly discover every device you have across your entire network, discover any performance issues, configuration drift issues, security issues, vulnerabilities, anomalies, and really get into what we call self-healing, self-securing, and providing a service experience that we are used to in our day-to-day life, in our consumer world. So that's what we are announcing, and we're super-excited about the overall launch. And it's not tied to any single vertical, Lisa; every enterprise, every company, any vertical organization can leverage these Neurons and get closer to self-healing of those devices that every organization now has to manage.
>> I know Ivanti has a lot of strengths in several verticals, one of them being healthcare. And I can imagine right now, in the last five months, the hyper status that every hospital and clinic is in. I'm curious, though, about the name. Jeff, talk to me about, in this new, the next normal that we're living in, Neurons: what does that mean, and what does it mean to your customers?
>> Yeah, great question. And I know this will resonate with you, Lisa, as an accomplished biologist. The idea is, with what we're providing and what we're launching with Neurons, there's a sense of hyper-scale, hyper-automation, like the synapses in your brain handling so much information at once. So we wanted to personalize the launch of these solutions. When you see the announcement next week, you'll see a series of products across the spectrum of Ivanti solutions: the ITSM, endpoint management, security and so on. And in each of those areas we address the self-sensing, self-healing, self-servicing of those business processes. But like the synapses or the neurons in your brain, there'll be a lot of super-fast automation, super-fast sensing of challenges and addressing of those challenges. And that's why we went with Neurons. It was actually a pretty fun contest in the company, and we really believe Neurons will connect with our target market.
>> I love it. And the biologist part of me is going, "That makes sense." So Nayaki, over to you. And in terms of that connectivity perspective, there are so many disparate data sources out there, and it's only growing.
And Jeff, you mentioned this: how can one of your existing 25,000 customers use and deploy this on top of their existing infrastructure, to start connecting data sources that they may not even know they can connect, or may not know whether it even makes sense to connect?
>> Yeah, so the beauty of the entire Neurons network is that it uses the MQTT protocol, Lisa, which is the protocol that immediately detects every device, be it endpoint desktops, laptops, mobile devices, or even, as I was suggesting, IoT devices; it automatically detects them. And it senses if there is anything happening on those devices, predicts if there is any issue that may happen, like I said, performance issues, configuration drift issues, security issues, and pulls that data in real time. The beauty of this is the speed at which it pulls that data. I've seen customers who can deploy this across their entire network around the world, and within seconds it's able to pull the data into a central console and give you a full 360 view of every device you have, every user that's using those devices, all the applications that are running on those devices, and the services that are being delivered to those devices. So just the power of being able to pull that much data in seconds, and provide that 360 view of what we call a Neuron Workspace, lets any IT organization have that full 360 view, detect and predict any issue, and almost get into self-healing, remediating it before it interrupts your productivity or causes any service disruption. I think you were trying to say something, go ahead.
>> I was just going to add to that, Nayaki. And you asked this, or made this point, Lisa: Nayaki and I are speaking to the healthcare industry almost every day. We are very in tune with the challenges they're experiencing, obviously, with what's happening right now around the world. And as Nayaki is describing, we intend the Neurons to be a very seamless improvement to their existing IT processes and so on. In fact, when I've described this to some of the hospitals I've been speaking to, and certainly the IT staff and leaders within, they are fascinated and very excited about what we're describing. Because if you think about it, IT challenges down at the device level in the healthcare industry can be life critical. And they need to solve those IT challenges very fast. They need to know when their new endpoints are online, they need to know when they need servicing, and they need to know when their software needs patching. We're not talking about just being at home and being frustrated if you're having an IT challenge; we're talking about life and death. So Neurons is absolutely what the healthcare industry is asking for in terms of self-healing, self-sensing, self-securing and so on. They need those attributes in their business model, now definitely more than ever.
>> Absolutely, they do. So Nayaki, talking to customers in healthcare and whatnot, I can see this being a great tool for the IT analyst, but also maybe even helping the IT analysts and business users have better relationships that overall help drive a business forward.
>> Yeah, so put yourself in an end user or a line of business: they expect, especially in this day and age of post COVID, Lisa, they expect a consumer-grade experience to be delivered to them.
They expect their service provider to know exactly where they're working from, what devices they have, how all those devices are not just secure, but to understand the preferences I need as an individual and provide that service experience to me. So that close tie-in between what the business wants, what the end users in those lines of business want, and how IT or any service organization can provide that service to employees, customers, and consumers is what Neurons really helps us with: getting closer and closer to the consumer-grade experience that we are all used to in our day-to-day life. And to Jeff's point, in addition to healthcare, which is a strong industry vertical for us, there are other industries: retail is another big industry that we are very strong in, Lisa, and also supply chain, rugged devices in a warehouse. So it really gives us a huge expansion opportunity beyond just managing the IT devices or endpoints, to also managing the IoT devices by industry vertical, in those segments where we already have a very, very strong foothold, because of the technology that we have that powers this whole thing on the backend.
>> And we're seeing numbers of 40+ billion connected devices in the next few years. So Jeff, let's end this with you. I know there's more coming, but you probably have a great partnership suite that you're working with to enable this. Talk to us a little bit about the partners, and then what's next?
>> Yeah, no, great point, Lisa. I come from a heritage of companies that have leveraged their partners, and we continue to grow our partner network. We believe strongly in the strength of the extended ecosystem: solution partners, delivery partners, global systems integrators, they all have a role in Neurons. And we're excited to continue to provide the platform for mutual growth between us and those partners. And what's really important is, these are companies that our customers really love as well. So we're going to continue to, in some cases, tie our solutions together, in some cases, extend our services organization through partners, and in some cases, we'll actually service our customers through our channel partner network. We actually went through a little bit of a rationalization to really zero in on our most strategic partners; we've done that, we finished that in the first six months of coming on board. And now we are hitting the gas pedal and going full speed to market with a great group of partners, and again, you'll see that ecosystem more and more as part of our strategy.
>> Excellent. So, Neurons announced; what's next?
>> Well, there's quite a bit behind Neurons, so it will take us probably into at least 2021 to get all the solutions launched and get them ingrained with our customers out there. But we fully intend to continue to innovate. And if there's one thing I leave you with, Lisa, it's that that's our big announcement more than anything. I mean, Ivanti has had a history of innovation; it's a company that practically invented patching and keeping all of your devices up to speed on the latest virus protection software and so on. There are a lot of legacy companies within our footprint that are now completely tied together, and under the Neurons strategy, under Nayaki's leadership, we intend to put innovation out in the marketplace, quarter after quarter after quarter. But Neurons for now will keep us quite busy. So we're very excited.
>> Well, congratulations on that. Ivanti, innovation, hyper-automation.
Jeff, Nayaki, it's been such a pleasure talking to you. Thank you for joining me on theCUBE today.
>> Thank you, Lisa.
>> Thank you for having us.
>> For my guests, I am Lisa Martin. You're watching theCUBE Conversation. (upbeat music)
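For readers who want to picture the flow Nayaki describes above — endpoints reporting telemetry over MQTT to a central console that spots configuration drift and pushes a fix back — here is a minimal, hypothetical sketch. It is not Ivanti's implementation: the broker address, topic names, baseline values, and remediation command are made-up placeholders, and it assumes the open-source paho-mqtt 1.x Python client.

import json
import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x

BROKER = "console.example.internal"        # hypothetical central-console broker
TELEMETRY_TOPIC = "endpoints/+/telemetry"  # each device publishes to endpoints/<id>/telemetry

# Expected configuration baseline (illustrative values only).
BASELINE = {"agent_version": "2.4.1", "disk_encrypted": True}

def detect_issues(report):
    """Compare one device's report against the baseline and flag drift or anomalies."""
    issues = []
    for key, expected in BASELINE.items():
        if report.get(key) != expected:
            issues.append("%s drifted: expected %r, got %r" % (key, expected, report.get(key)))
    if report.get("cpu_percent", 0) > 95:
        issues.append("sustained high CPU")
    return issues

def on_message(client, userdata, msg):
    """Runs for every telemetry message: detect issues, then publish a remediation command."""
    device_id = msg.topic.split("/")[1]
    report = json.loads(msg.payload)
    issues = detect_issues(report)
    if issues:
        # "Self-healing" step: ask the device agent to reapply its baseline.
        client.publish("endpoints/%s/commands" % device_id,
                       json.dumps({"action": "reapply_baseline", "reasons": issues}))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TELEMETRY_TOPIC)
client.loop_forever()

In a real product the detection step would be far richer (anomaly models, vulnerability feeds, patch state), but the loop is the pattern described in the interview: continuously listen to every device, detect and predict issues, and remediate before a user notices.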

Published Date : Jul 21 2020

Erik Brynjolfsson, MIT & Andrew McAfee, MIT - MIT IDE 2015 - #theCUBE


 

>> Announcer: Live from the Congress Centre in London, England, it's theCUBE, at MIT and the Digital Economy: The Second Machine Age. Brought to you by headline sponsor MIT.
>> Alright, we're back. Dave Vellante here with Erik Brynjolfsson and Andrew McAfee, who are back with us after the day; each of them gave a detailed presentation today related to the book. Gentlemen, welcome back, good to see you.
>> Good to see you again.
>> I want to start with you on a question, that last question that Andy got from a woman.
>> You're starting with me on a question that was asked of him?
>> Yes, and you'll see why when you find something you like. You dodged the question, by the way. And for the record, hanging out with you guys makes us smarter.
>> Thank you.
>> So the question was around education. She expressed real concern, particularly around education for younger people; I guess by the time they get to secondary education it's too late. You talked in the book about the three R's: we need to read, obviously we need to write, to be able to do arithmetic in our head.
>> Sure.
>> What's your take on that question?
>> You know, those basics are table stakes. I mean, you have to be able to do that kind of stuff. But the real payoff comes from creativity, doing something really new and original. The good news is that most people love being creative and original. You look at a kid playing, you know, whether they're two or three years old: you put some blocks in front of them and they start building, creating things. And our school system, as Andy was saying in his talk and in the questions, many of the schools are almost explicitly designed to tamp that down, to get people to conform, to get them all to be consistent. Which is exactly what Henry Ford needed for his factories, you know, to work on the assembly line. But now that machines can do that repetitive, consistent kind of work, it's time to let creativity flourish again. And that's what you've got to do on top of those basic skills.
>> So I have one, and it's pretty clear that that education model is really hard for some kids to accept. They just want to run around. They want to go express themselves. They want to poke at the world. That's not what that grid full of desks is designed to do.
>> We call that ADD now.
>> I follow. Yeah, I have one Montessori kid out of my four. Really, he's by far the most creative, the most autodidactic. You're a Montessori kid; did Maria Montessori have it right?
>> Look, I'm not an educational researcher. I am a Montessori kid. I think she got it right. And she was able to demonstrate that she could take kids out of the slums of Bologna who were, at the time, considered mentally defective; there was this notion that the reason the poor are poor is because they were just mentally insufficient. And she could show their learning and their progress. So I completely agree with Erik. All of our students need to be able to accomplish the basics, to read, to write, to do basic math. What Montessori taught me is you can get there via this completely kind of hippie, freeform route. And I'm really happy for that education.
>> Talk about you and your students, your brainstorming on things that people can do that computers can't.
>> Yeah.
>> This is an exercise that you do pretty regularly. What's that? How has that evolved a little?
>> We do it more systematically now; I'm almost always doing it when we're talking things over with each other.
It's a kind of dinner conversation we can't get away from. So we're hearing a lot, and you know, there are recurring patterns that emerge, and you heard some of them today: around interpersonal skills, around creativity, around coordination, physical coordination. What some of these have in common is that they're skills that we've evolved over literally, you know, hundreds of thousands or millions of years, and there are billions of neurons devoted to some of these skills: coordination, vision, interpersonal skills. Other skills, like arithmetic, are really very recent, and we don't have a lot of neurons devoted to them. So it's not surprising that machines can pick up those more recent skills more easily than the more innate ones. Now, over time, will machines be able to do more of those other skills? I suspect they probably will; exactly how long it will take, that's a question for the neuroscientists and the AI researchers.
>> Let me make that concrete: think about not just diagnosing a patient, but getting them to comply with the treatment regimen. Take your medicine. Eat better. Stop smoking. We know the compliance rates are terrible for demonstrably good ideas. How do we improve them? Is it a technology solution? A little bit. Is it an interpersonal solution? Absolutely. I think we need deeply empathetic, deeply capable people to help each other become healthier, become better people. The right program might come from an algorithm, but that algorithm, and the computer that spits it out, is going to be lousy at getting most people to comply. We need human beings for that.
>> So in the technology space, we've been evangelizing that people need to get rid of what we call the undifferentiated heavy lifting. And I wonder if there's an opportunity in our personal lives. You think about how much time we spend on, well, you know, what are we doing for dinner, when we're running the kids around, you know, how do I get dressed, all the different things we have here; there are studies on this, and sometimes we waste so much brain power on them. There's an opportunity in trying to get rid of these things. Welcome, Jetsons.
>> Actually, no, they didn't have these problems.
>> Technology can help us with some of that. I think people should actually help us with a lot of it. You know, I actually have a personal trainer, and he's one of the last people that I would ever exclude from my life, because he's the guy who can actually help me lead a healthier life. And I place so much value on that.
>> I like your metaphor of this undifferentiated stuff, that really it's not the stuff that makes you great, it's just stuff you have to do. And I remember having a conversation with folks at SAP, and they said, you know, we sure would like to brag about this, but we take away a lot of stuff that isn't what differentiates companies, the back office stuff: getting your basic bookkeeping, accounting, supply chain stuff done. And it's interesting, I think we could use the same thing for our personal lives. Let's get rid of that sort of underbrush of necessity stuff so we can focus on the things that we are uniquely good at.
>> Alright, so when I run out, when I need garbage bags or toilet paper, honestly, a drone should show up and drop that on my front steps.
>> So I wonder, when I look at the self-driving car that you've talked about, will we reach a point where not only do we trust computers in the car, the cars to drive themselves, but we've reached a point where we just don't
trust humans anymore, because self-driving cars are just so much safer and better than what we've got? Is that coming in the next twenty years?
>> I personally think so, and the first time is deeply weird and unsettling. I think both of us were a little bit terrified the first time we rode in the Google autonomous car and the Googler driving it hit the button and took his hands off the controls. That was a weird moment. I liken it to when I was learning to scuba dive. The very first breath you take underwater is deeply unsettling, because you're not supposed to be doing this. After a few breaths, it becomes background.
>> But you know, I was driving to the airport to come here, and I look in the lane to the left of me, and there's a woman, you know, texting, and I'd be much less terrified if she wasn't driving, if the computer was doing it, because then we could be more... that's the right way to think about it. I think the time will come, and it may not be that far away, where the norms shift exactly the other way around and it's considered risky to have a human at the wheel, and the safety that the insurance company will want is to have a machine there. You know, I think this is a temporary phase with new technology; we become frightened of it. When microwave ovens first came out, they were weird and wonderful. Now most of us think of them as really kind of boring and routine. The same thing is going to happen with self-driving. There have been accidents, that's the story, but none of them were the car's fault, of course, according to the stories.
>> What's clear is that they're safer than the human driver as of today, and they are only going to get safer. We're not evolving that quickly.
>> But you got the question: will the self-driving car drive in a snowstorm? We laughed, because we live in Boston. But your answer was, well, it'll start driving...
>> You know, eventually. You know, I think it's fair to say that there's a big difference: the first 90, 95, 99 percent of driving is something that's a lot easier. That last one percent, or one hundredth of one percent, becomes much, much harder. And right now, there was a car just last week that drove across the United States, but there were half a dozen times when it had to have a human intervene in particularly unusual situations. And I think, because of our norms and expectations, it won't be enough for a self-driving car to be safer than humans; we'll need it to be ten times safer, or something like that, maybe.
>> Like the chess example, maybe the ultimate combination is a combination of human and self-driving car.
>> Maybe, situation after situation. I think that's going to be the case, and I'll go back to medical diagnosis. At least for the short to medium term, I would like to have a pair of human eyes over the treatment plan that the completely digital diagnostician spits out. Maybe over time it will be clear that there are no flaws in that and we could go totally digital, but we can combine the two.
>> I think in most cases that's right, what you brought up. But you know, in the case of self-driving cars in particular, and other situations where humans have to take over for a machine that's failing in some way, like aircraft,
when the autopilot is doing things right, it turns out that that transition can be very, very rocky, and expecting a human to be on call, to be able to quickly grasp what's going on in the middle of a crisis or a freak-out, that's not reasonable; it isn't necessarily the best time to be switching over. So there's a real human factors issue there of how you design it, not just so the human can take over, but so you can make a kind of seamless transition. And that's not easy.
>> Okay, so maybe with self-driving cars that doesn't happen. But back to the medical example: maybe Watson will replace Dr. Welby, but not Dr. Oz.
>> The interaction, or any nurse or somebody who actually gets me to comply, again. But also, I do think that Dr. Watson can and should take over for people in the developing world who don't have access to First World medical care. They've got a smartphone. OK, we're going to be able to deliver absolutely top-shelf, world-class medical diagnostics to those people fairly quickly. Of course, we should...
>> Do that, and then combine it with a coach who gets people to take the prescription when they're supposed to, change their eating habits, or communities, or whatever else; you hear your peers are all losing weight, why aren't you?
>> I want to ask you something; we're coming up on time here, and you've been gracious with your time and your talks. You're very outspoken about a couple of things, which I would summarize as: Elon Musk, Bill Gates and Stephen Hawking, you're being paranoid; and there's no privacy on the Internet, so get over it.
>> I didn't say there's no privacy, no, and I think it's important to be clear on this. I think privacy is really important. I do think it's a right that we have, and we should have. What I don't want to do is have a bureaucrat define my privacy rights for me and start telling companies what they can and can't do as a result. What I'd much prefer instead is to say, look, if there are things that we know companies are doing that we do not approve of, let's deal with that situation, as opposed to trying to put the guardrails in place and fence off the different kinds of innovative growth, right?
>> I mean, there are two kinds of mistakes you can make. One is, you can let companies do things when you should have regulated them. The other is, you can regulate them preemptively when you really should have let them do things, and both kinds of errors are possible. Our sense, looking at what's happening in general, is that we've thrived where we allow more permissionless innovation: we allowed companies to do things and then went back and fixed things, rather than trying to lock down the past and the existing processes. So our leaning, in most cases, not every case, is to be a little more free, a little more open, recognizing that there will be mistakes. It's not going to be that we're perfectly guaranteed; there is a risk when you walk across the street. But go back and fix things at that point rather than preemptively defining exactly how things are going to play out.
Then then then relying on this. This sounds a little bit weird, but a combination of for profit companies and people with three choice that that's a really good guarantor of our freedoms and our rights. So you >> guys have a pretty good thing going. It doesn't look like strangle each other anytime soon. But >> how do you How do you decide who >> does one treat by how you operate with reading the book? It's like, Okay, like I think that was Andy because he's talking about Erica. I think that was Erica's. He's talking, >> but I couldn't tell you. I think it's hard for you to reverse engineer because it gets so co mingled over time. And, you know, I gave the example the end of the talk about humans and machines working together synergistically. I think the same thing is true with Indian me out. You may disagree, but I find that we are smarter when we work together so much smarter. Then when we work individually, we go and bring some things on the blackboard. And I had these aha moments that I don't think I would've had just sitting by myself and do I should be that ah ha moment to Andy. To me, it's actually to this Borg of us working together >> and fundamentally, these air bumper sticker things to say. If after working with someone, you become convinced that they respect you and that you could trust them and like Erik says that you're better off together, that you would be individually, it's a complete no brainer to >> keep doing the work together. Well, we're really humbled to be here. You guys are great contact. Everything is free and available. We really believe in that sort of economics. And so thank you very much for having us here. >> Well, it's just a real pleasure. >> All right, Right there, buddy. We'LL be back to wrap up right after this is Q relied from London. My tea.

Published Date : Apr 10 2015
