Christine Corbett Moran, Caltech | Open Source Summit 2017

>> [Voiceover] Live, from Los Angeles, it's theCUBE. Covering Open Source Summit North America 2017. Brought to you by the Linux Foundation and Red Hat.
>> Hello everyone, welcome back to our special Cube live coverage of the Linux Foundation's Open Source Summit North America here in LA. I'm John Furrier, your co-host with Stu Miniman. Our next guest is Christine Corbett Moran, Ph.D., an astronomy and astrophysics postdoctoral fellow at Caltech.
>> That's right, it's a mouthful.
>> Welcome to theCUBE. A mouthful, but you're also keynoting; you gave one of the talks on opening day today, after Jim Zemlin, on tech and culture and politics.
>> That's right, yeah.
>> Which I thought was fantastic. A lot of great notes there. Connect the dots for us, metaphorically speaking, between Caltech and tech and culture. Why did you take that theme?
>> Sure. So I've been involved in programming since I was an undergraduate in college. I studied computer science and was always attending more and more conferences: hacker cons, security conferences, that sort of stuff. Very early on, what attracted me to technology was not just the nitty-gritty nuts and bolts of being able to solve a hard technical problem. That was a lot of fun, but also the impact that it could have. So even as I went on a very academic track, I continued to make open source contributions, really seeking that kind of cultural impact. And it wasn't something that I was real vocal about talking about. I talked more about the technology side of things than the politics side of things. But in the past few years, I think with the rise of fake news, with the rise of various sorts of societal problems that we're seeing as a consequence of technology, I decided I was going to try to speak more to that end of things, so that we can focus, as a technology community, on what we are going to do with this enormous power that we have.
>> And looking at that, a couple of direct questions for you. It was an awesome talk; you got a lot in there.
You were riffing some good stuff there with Jim as well. But you had made a comment that you originally wanted to be a lawyer, you went to MIT, and you sort of got pulled in to the dark side.
>> That's right, yeah.
>> Into programming. As a former computer scientist myself, what gave you the bug? Take us through that moment. Was it that you just started coding and said, damn, I love coding? What was the moment?
>> Sure, so I was always talented in math and science. That was part of the reason why I was admitted to MIT and chose to go there. My late father was a lawyer. I didn't really have an example of a technologist in my life. So, to me, career-wise I was going to be a lawyer, but I was interested in technology. What kind of lawyer is that? A patent attorney. So that was my career path: MIT, some sort of engineering, then a patent attorney. I got to MIT and realized I didn't have to be an attorney. I could just do the fun stuff. For some people that's the fun part. For me it ended up being when I took my first computer science class. Something that was fun, that I was good at, and that I really got addicted to, the feedback loop of: you always have a problem you're trying to solve. It doesn't work, it doesn't work. Then you get it to work, and then it's great for a minute, and then there's a new problem to solve.
>> That's a great story. I think it was very inspirational. A lot of folks watching will be inspired by that. The other thing that inspired me in the keynote was your comment about code and culture.
>> [Christine] Yeah.
>> I love this notion that code is now at a point where open source is a global phenomenon. You mentioned Earth and space.
>> [Christine] Yeah.
>> You know, and all this sort of space is Linux-based now. But coding can shape culture. Explain what you mean by that, because I think it's one of those things that people might not see happening right now, but it is happening. You're starting to see more inclusionary roles, and the communities are changing.
Code is not just a tech thing. Explain what you mean by code shaping culture.
>> Well, we can already see that in terms of changing corporate culture. So, for example, 10 or 15, 20 years ago it might have been inconceivable to make contributions that might benefit your corporate competitor. And we all have competitors, whether that's a nation, the US having competitors, or whether that's your local sports rivalry. We all have competitors, but open source has really shown that no matter what entity you are, you can't do as much alone as you can if you share your contributions and benefit from people around the globe. So that's one big way I've seen corporate culture, and just everyday culture, change: people have recognized that whether it's science or corporate success, you can't do it alone. There's no lone genius. You really have to do it as a community.
>> As a collective too. You mentioned some of the ruling class, and you were kind of referring not just to the ruling class in open source, but also politics. Gerrymandering was a word you used. We don't hear that often at conferences, but the idea of having more people exposed creates more data. Talk about what you mean by that, because this is interesting. This truly is a democratization opportunity.
>> [Christine] Absolutely.
>> If not handled properly, it could go away.
>> Yeah, I think, I don't know if there are any Game of Thrones fans out there, but you know, at some point this season and in previous seasons, Daenerys Targaryen is there, and they're like, well, if you do this you're going to be the same evil person, just with a new face. I think there's a risk of that in the open source community: if it ends up just being a few people, it's the same oligarchy. The same sort of corruption, just a different face to it. I don't think open source will go that way, based on the people that I've met in the community.
It is something that we actively have to guard against, and we have to make sure that we have as many people contributing to open source, so that it's not just a few people who are capable of changing the world and have the power to decide whether it's going to be A or B, but as many people as possible.
>> Christine, the monetization of open source is always an interesting topic at these kinds of shows. You had an interesting piece talking about young people contributing to open source. It's not just, oh yeah, do it for free and expect them to do it. Same thing in academia a lot of times. Like, oh hey, you're going to do that research and participate and write papers, and the money has got to come from somewhere to help fund this. How does the money fit into this whole discussion of open source?
>> So I think that's been one of the big successes of open source, and we heard that from Jim as well today. It isn't, you know, somehow unattainable in terms of achieving value for society. When you do something of value, money is a reward for that. The only question is how to distribute that reward effectively to the community. What I see sometimes in the community is this myth of everyone in open source getting involved just for the fun of it, and there's a huge amount of that. I have done a bunch of contributions for free on the side, but I've always, in the end, gotten some sort of monetary reward for that down the line. And someone talked today about how that makes you more employable, et cetera. That has left me with the time and freedom to continue that development. I think there's a risk that a young person who is going into debt for college doesn't realize that that monetary reward will come, or finds it so out of sync with their current life situation that they're unable to get the time to develop the skills.
So, I don't think that money is a primary motivating factor for most people in the community, but certainly, as Linus said today as well, when you don't have to worry about money, that's when you do the really cool nitty-gritty things that might be a risk but that then grow to be that next big project.
>> It's an interesting comment you made about the US, how they potentially couldn't do Linux if it wasn't in the US. It opens up your eyes and you say, hmm, we've got to do better.
>> Yeah.
>> And so that brings up the whole notion of the radical comment. Open source has always been kind of radical, and then, you know, when I was growing up it was a tier-two alternative to the big guys. Now it's tier one. I think the stakes are higher, and the thing I'd like to get your comment or reaction to is: how does the community take it to the next level when it's bigger than the United States? You have China saying no more ICOs, no more virtual currencies. That's a potential issue; it's one data point among many other things that can happen on the global scale. Security, the Equifax hack, identity theft, truth in communities is now an issue, and there are more projects than ever. So I made a comment on Twitter: whose shoulders do we stand on, in the expression of standing on the shoulders of those before you?
>> [Christine] Yeah, you're standing on a sea.
>> So it's a discovery challenge of what do we do and how do we get to the truth. What are your thoughts on that?
>> That is a large question. I don't know if I can answer it in the short amount of time. So, to break it down a little bit. One of the issues is that we're in this global society, and we have different portions of it trying to regulate what's next in technology. For example, China with the ICOs, et cetera. One of the phrases I used in my talk was that the math was on the people's side, and I think that is still the case with a lot of the technologies that are distributed.
It's very hard for one particular government, or nation state, to say, hey, we're going to put this back in the box. It's Pandora's box. It's out in the open. So that's a challenge as well for China and others, the US: if you have some harmful scenario, how do you actually regulate that? I don't know how that's going to work out moving forward. I think it is the question in our community, how to go to the next level, which is another point that you brought up. One thing that Linus also brought up today is that one of the reasons why it's great to collaborate with corporations is that they often put kind of the finishing touches on a product, to really bring it to the level where people can engage with it easily. That kind of on-ramping to new technology becomes very easy, and that's because corporations are very incentivized monetarily to do it, whereas the open source community isn't necessarily incentivized to do it. Moreover, a lot of that work, that final 1% of a project for the polish, is so much more difficult. It's not the fun technical element. So a lot of open source contributors, myself included, aren't necessarily very excited about that. However, what we saw with Signal, which is a product from a non-profit, something that isn't necessarily for corporate gain, is that the final polish and making it very usable did mean that a lot more people are using the product. So in terms of we as a community, I think we have to figure out, while keeping our radical governance structure, how to get more and more projects to have that final polish. And that'll really take the whole community.
>> Let them benefit from it in a way that they're comfortable with. Now it's not a proprietary lock-in, and it's more that only 10% of most applications are uniquely differentiated on top of open source. A question, kind of a philosophical thought experiment, or just a philosophical question: I'll say astronomy and astrophysics is an interesting background.
You've got a world of connected devices, the IoT, Internet of Things, which includes people. So, you know, I'm sitting there looking at the stars: oh, that's the Apache Project, lots of stars in that one. You have these constellations of communities, if you will, out there, to kind of use the metaphor. And then you've got astrophysics, the Milky Way, a lot of gravity around me. The metaphor almost speaks to how communities work. So let's get your thoughts: how do astrophysics and astronomy relate to some of the dynamics in how self-governing things work?
>> I'd love to see that visualization, by the way, of the Apache Project and the Milky Way.
>> [John] Which one's the Big Dipper?
>> That sounds gorgeous, you guys should definitely pursue that.
>> John, you're going to find something at Caltech, you know, our next fellowship.
>> We can argue about who gets to be the Big Dipper or not, but you know.
>> I think some of the challenges are similar in the sciences, in that people initially get into it because it's something they're curious about. It's something they love, and that's an innate human instinct. People have always gazed up at the stars. People have always wondered how things work. How does your computer work? You know, let me figure that out. That said, ultimately, they need to eat and feed their families and that sort of stuff. And we often see in the astrophysics community incredibly talented people at some stage in their career leaving for some sort of corporate job. And retaining talent is difficult, because a lot of people are forced to move around the globe, to different centers in academia, and that lifestyle can be difficult. The pay often isn't as rewarding as it could be. So to make some sort of parallel between that community and the open source community: retaining talent in open source, if you want people to not necessarily work on open source only under Microsoft, under a certain corporation, but to kind of work more generally.
That is something where, ultimately, we have to distribute the rewards back to the community.
>> It's kind of interesting. The way I always thought of the role of the corporation in open source was that it was always trying to change the game. You know, you mentioned gerrymandering. The old model was: we've got to influence it, slow it down, so that we can control it.
>> So John, we've had people from around the globe, and even some that have made it to space, on theCUBE before. I don't know that we've ever had anybody that's been to the South Pole before on theCUBE. So Christine, maybe tell us a little about how technology works at the South Pole, and what can you tell our audience about it?
>> Sure. So I spent 10 and a half months at the South Pole. Not just Antarctica, but literally the middle of the continent, the geographic South Pole. There the US has a research base that houses up to about 200 people during the austral summer months, when it's warm, that is, maybe minus 20 degrees or so. During the cold winter months it gets completely dark, and planes have a very difficult time coming in and out, so they close off the station to a skeleton crew to keep the science experiments down there running. There are several astrophysical experiments, several telescopes, as well as many research projects, and that skeleton crew was what I was a part of. 46 people, and I was tasked with running the telescope down there, looking at some of the echoes of the Big Bang. I was basically a telescope doctor. So I was on call, much like a sysadmin might be. I was responsible for the kind of IT support for the telescope, but also, if something physically broke, kind of replacing that. And that meant I could be woken up in the middle of the night because of some kind of package update issue or anything like that, and sometimes I'd have to hike out in minus 100 degrees to fix it.
Oftentimes there was IT support on the station, and we did have internet running to the telescope, which was about a kilometer away. It took me anywhere from 20 to 30 minutes to walk out there. So if it didn't require on-site support, sometimes I could do the work in my pajamas to kind of fix it. So it was a kind of traditional computer support role in a very untraditional environment.
>> That's an IoT device, isn't it?
>> Yeah.
>> Stu and I are always interested in the younger generation, as we both have kids who are growing up in this new digital culture. What's your feeling in terms of the younger generation that's coming up? Because people going to school now are digital natives; courseware online isn't always the answer; people learn differently. Your thoughts on onboarding the younger generation, and on the inclusion piece, which is super important, whether it's women in tech and/or just getting more people into computer science. What are some of the things that you see happening that excite you, and what are some of the things that get you concerned?
>> Yeah, so I had the chance, I mentioned a little in my talk, to teach 12 high school students how to program this summer. Some of them had been through computer programming classes at their colleges or their high schools, some not. What I saw when I was in high school was a huge variety of competence in the high school teachers that I had. Some were amazing and inspiring. Others not, because in the US you need a degree in education, but not necessarily a degree in the field that you're teaching. I think that there's a huge lack of people capable of teaching the next generation who are working at the high school level. It's not that there's a huge lack of people who are capable; kind of anyone at this conference could sit down and help a high schooler get motivated and self-study. So I think teacher training is something that I'm concerned about.
In terms of things I'm very excited about: we're not quite there yet with the online courses, but the ability to acquire that knowledge online is very, very exciting. In addition, I think we're waking up as a society to the fact that a four-year college isn't necessarily the best preparation for every single field. For some fields it's very useful. For other fields, particularly engineering, maybe even computer science engineering, apprenticeships or practical experience could be as valuable, if not more valuable, for less expense. So I'm excited about new initiatives, these coding bootcamps. I think there's a difficulty in regulation, in that for a new coding bootcamp you don't know: is it just trying to get people's money? Is it really going to help their careers? So we're in a very frothy time there, but I think ultimately how it will shake out is that it's going to help people enter technology jobs quicker.
>> You know, there's a percentage of jobs that aren't even invented yet. So there's AI. You see self-driving cars. These things are easy indicators that, hey, society's changing.
>> Yeah. And it's also going to be helpful for professionals like us, older professionals who want to keep up in this ever-growing field. I don't necessarily want to go back for a second Ph.D., but I'll absolutely take an online course in something I didn't see in my undergrad.
>> I mean, you can get immersed in anything these days online. It's great; there's a lot of community behind it. Christine, thanks so much for sharing. Congratulations on a great keynote. Thanks for spending some time with us.
>> [Christine] Yeah, thanks for having me.
>> It's theCUBE, live coverage here in LA for Open Source Summit North America. I'm John Furrier, with Stu Miniman, and we'll be right back with more live coverage after this short break.

Published Date: Sep 11, 2017



Dave Jent, Indiana University and Aaron Neal, Indiana University | SuperComputing 22



(upbeat music)
>> Welcome back. We're here at Supercomputing 22 in Dallas. My name's Paul Gill, I'm your host. With me, Dave Nicholson, my co-host. And one thing that struck me about this conference, arriving here, was the number of universities that are exhibiting. I mean, big, big exhibits from universities. I've never seen that at a conference before. And one of those universities is Indiana University. Our two guests: Dave Jent, who's the AVP of Networks at Indiana University, and Aaron Neal, Deputy CIO at Indiana University. Welcome, thanks for joining us.
>> Thank you for having us.
>> Thank you.
>> I've always thought that the CIO job at a university has got to be the toughest CIO job there is, because you're managing this sprawling network, and people are doing all kinds of different things on it. You've got to secure it. You've got to make it performant. And it just seems to be a big challenge. Talk about the network at Indiana University and what you have done, particularly since the pandemic. How has that affected the architecture of your network? And what do you do to maintain the levels of performance and security that you need?
>> On the network side, one of the things we've done is keep in close contact with what the incoming students are looking for. It's a different environment than it was 10 years ago, when a student would come with maybe a phone, maybe one laptop. Today they're coming with multiple phones, multiple laptops, gaming devices. And the expectation that they have, to come onto a campus and plug all that stuff in, causes lots of problems for us. Just managing the security aspect of it, the capacity, the IP space required to handle six, seven devices per student when you have 35,000 students on campus, has always been a challenge. And keeping ahead of that, knowing what students are going to come in with, has been interesting. During the pandemic the campus was closed for a bit of time.
What we found was that our biggest challenge was keeping up with the number of people who wanted to VPN to campus. We had to buy additional VPN licenses so they could do their work and authenticate to the network. We doubled, maybe even tripled, our VPN license count. And that has settled down now that we're back on campus. But again, they came back with a vengeance: more gaming devices, more things to be connected, and into an environment that was a couple of years old, that we hadn't done much with. We had gone through a pretty good-sized network deployment of new hardware to try to get ready for them. And it's worked well, but it's always challenging to keep up with students.
>> Aaron, I want to ask you about security, because that really is one of your key areas of focus. And you're collaborating with counties and local municipalities, as well as other educational institutions. How is your security strategy evolving in light of some of the vulnerabilities of VPNs that became obvious during the pandemic, and this kind of profusion of new devices that Dave was talking about?
>> Yeah, so one of the things that we did several years ago was establish what we call OmniSOC, which is a shared security operations center, in collaboration with other institutions as well as research centers across the United States and in Indiana. And really what that is, is we took the lessons that we've learned and the capabilities that we've had within the institution, and looked to partner with those key institutions to bring that data in-house and utilize our staff, such that we can look for security threats and share that information across the other institutions, so that we can give each of those areas a heads-up and work with those institutions to address any kind of vulnerabilities that might be out there.
One of the other things that you mentioned is that we're partnering with Purdue and the Indiana Office of Technology on a grant to actually work with municipalities and county governments, to really assess their posture as it relates to security in those areas. It's a great opportunity for us to work together as institutions, as well as to work with the state in general, to increase our posture as it relates to security.
>> Dave, what brings IU to Supercomputing 2022?
>> We've been here for a long time. And I think one of the things that we're always interested in is: what's next? What's new? There are so many vendors here: network vendors, software vendors, hardware vendors, high performance computing suppliers. What is out there that we might be interested in? IU runs a large Cray system in Indiana called Big Red 200. And with any system, you procure it, you get it running, you operate it, and your next goal is to upgrade it. So what's out there that we might be interested in? That, I think, is why we come. We also like to showcase what we do at IU. If you come by the booth you'll see the OmniSOC; there's some video on that. The GlobalNOC, which I manage, supports a lot of the R&E institutions in the country. We talk about that. It's good to have a place for people to come and see us. If you stand by the booth long enough, people come and find you and want to talk about a project they have, or a collaboration they'd like to partner on. We had a guy come by a while ago wanting a job. Those are all good things having a big booth can do for you.
>> Well, so on that subject, in each of your areas of expertise and your purview, are you kind of interleaved with the academic side of things on campus? Do you include students? I mean, I would think it would be a great source of cheap labor for you, at least. Or is there kind of a wall between what you guys are responsible for and what students do?
>> Absolutely, we try to support faculty and students as much as we can.
And just to go back a little bit on the OmniSOC discussion. One of the things that we provide is internships for each of the universities that we work with. They have to sponsor at least three students every year and make that financial commitment. We bring them on site for three weeks. They learn alongside our other information security analysts and work in a real-world environment, gaining the skills to be able to go back to their institutions and do additional work there. So it's a great program for us to work with students. I think the other thing that we do is provide, obviously, the infrastructure that enables our faculty members to do the research that they need to do, whether that's through Big Red 200, our supercomputer, or just kind of the everyday infrastructure that allows them to do what they need to do. We have an environment on premise, called our Intelligent Infrastructure, through which we provide managed access to hardware and storage resources in a way that we know is secure, and they can utilize that environment to do virtually anything that they need in a server environment.
>> Dave, I want to get back to the GigaPOP, which you mentioned earlier; you're the managing director of the Indiana GigaPOP. What exactly is it?
>> Well, the GigaPOP, and there are a number of GigaPOPs around the country, is really the aggregation facility for Indiana and all of the universities in Indiana to connect to outside resources. The GigaPOP has connections to Internet2, the commodity internet, ESnet, and the Big Ten's BTAA network in Chicago. It's a way for all universities in Indiana to connect to a single source that allows them to connect nationally to research organizations.
>> And what are the benefits of having this collaboration of universities?
>> If you think of a researcher at Indiana who wants to do something with a researcher in Wisconsin, they both connect to their research networks in Wisconsin and Indiana, and they essentially have a direct connection. There's no commodity internet, there's no throttling of capacity. Both networks and the interconnects, because we use Internet2, are essentially unthrottled, so researchers have access to do anything they need to do. It's secure, it's fast, it's easy to use; in fact, so easy they don't even know that they're using it. We manage and organize the networks, and configure them, so that the path of least resistance is the path traffic will take. And that's nationally. There are lots of these that are interconnected in various ways. I do want to get back to the labor point, just for a moment. (laughs) Because...
>> You're here to claim you're not violating any labor laws, is that it?
>> I'm here to hopefully hire, to get more people interested in coming to IU.
>> Stop by the booth.
>> It's a great place to work.
>> Exactly.
>> We hire lots of interns, and in the network space, hiring really experienced network engineers is really hard to do; it's hard to attract people. And these days, when you can work from anywhere, you don't have to be in any particular place to work for anybody. We try to attract as many students as we can. And really, we're exposing them to an environment that exists in very few places: tens of thousands of wireless access points, big fast networks, interconnections with national and international networks. We support the NOAA network, which supports satellite systems and secure traffic. It really is a very unique experience, and you can come to IU, spend lots of years there, and never see the same thing twice.
We think we have an environment that's really a good place for people to come out of college or graduate school, work for some number of years, and hopefully stay at IU; but if not, leave for a good job and talk well about IU. In fact, the wireless network here at SC today was installed and is managed by the person who manages our campus wireless network, James Dickerson. That's the kind of opportunity we can provide people at IU. >> Aaron, I'd like to ask: you hear a lot about everything moving to the cloud these days, but in the HPC world I don't think that move is happening as quickly as it is in some areas. In fact, there's a good argument that some workloads should never move to the cloud. You're having to balance these decisions. Where are you on the thinking of what belongs in the data center and what belongs in the cloud? >> I think our approach has really been specific to what the needs are. As an institution, we've not pushed all our chips in on the cloud, whether for high performance computing or otherwise. It's really looking at what the specific need is and addressing it with the proper solution. We made an investment several years ago in an internal data center, and we're leveraging that through the Intelligent Infrastructure that I spoke about. But really it's addressing what the specific need is and finding the specific solution, rather than going all in in one direction or another. I don't know if Jetstream is something that you would like to bring up as well. >> By having our own data center and our own facilities, we're able to compete for NSF grants and work on projects that provide shared resources for the research community. Jetstream is a project that does that. Without a data center and without the ability to work on large projects, we don't have any of that. If you don't have that, then you're dependent on someone else. What we are proud of is that people come to IU and ask us if they can partner on our projects.
Without a data center and those resources, we are the ones who have to go out and ask, can we partner on your project? We'd like to be the leaders in that space. >> I wanted to double click on something you mentioned. A couple of things. Historically, IU has been closely associated with Chicago. You think of what students are planning to do when they graduate: maybe they're going to go home, but the center of gravity is Chicago. You mentioned, especially post pandemic, the idea that you can live anywhere. Not everybody wants to live in Manhattan or Santa Clara, and of course technology over decades has given us the ability to do things remotely, and IU is plugged into the globe; it doesn't matter where you are. But have you seen, either during or post pandemic, because we're really in the early stages of this, are you seeing people say, hey, thinking about their family, where do I want to live? Where do I want to raise my family? I'm in academia, and no, I don't want to live in Manhattan; hey, we can go to IU and we're plugged into the globe. And with students in California we see this: there are some schools on the central coast where people loved living when they were in college, but there was no economic opportunity there. Are you seeing a shift? Are houses in Bloomington becoming unaffordable because people are saying, you know what, I'm going to stay here? What does that look like? >> I mean, for our group, there are a lot of people who do work from home and have chosen to stay in Bloomington. We have had some people who for various reasons want to leave. We want to retain them, so we allow them to work remotely, and that has turned into a tool for recruiting. The kid who graduates from Caltech and doesn't want to stay in California: we now have an opportunity where he can move anywhere between here and there, and we can hire him to do the work. We love to have people come to Indiana.
We think it is a unique experience; Bloomington and Indianapolis are great places. But I think the reality is, we're not going to get everybody to come live here and be a Hoosier, so how do we get them to come and work at IU? It's in some ways disappointing when we don't have buildings full of people, and 40 panes in a Zoom or Teams window is not quite the same thing. But I think this is what we're going to have to figure out: how do we make this kind of environment work? >> Last question here, to give you a chance to put in a plug for Indiana University. For those data scientists and researchers who may be open to working somewhere else, why would they come to Indiana University? What's different about what you do from what every other academic institution does, Aaron? >> Yeah, I think a lot of it is what we just talked about today: from a networks perspective, we're plugged in globally. And if you look beyond the networks, I think there are tremendous opportunities for folks to come to Bloomington, experience some bleeding edge technology, and work with some very talented people. I've been amazed. I've been at IU for 20 years, and as I look at our peers across higher ed, well, I don't want to say they're not doing as well, but I do want to brag about how well we're doing in terms of organizationally addressing things like security in a centralized way that really puts us in a better position. We're just doing a lot of things that some of our peers have been catching up to over the last 10 or 12 years. >> And I think the sheer scale of IU goes unnoticed at times. IU has the largest medical school in the country and one of the largest nursing schools in the country, and people just overlook some of that. Maybe we need to do a better job of talking about it. But for those who are aware, there are a lot of opportunities in life sciences, healthcare, and the social sciences. IU has the largest logistics program in the world.
We teach more languages than anybody else in the world. The variety of things you can get involved with at IU, including networks, is I think pretty unparalleled. >> Well, you've made the case for high performance computing in the Hoosier State. Aaron, Dave, thanks very much for joining us and making a great case. >> Thank you. >> Thank you. >> We'll be back right after this short message. This is theCUBE. (upbeat music)

Published Date : Nov 16 2022



Fernando Brandao, AWS & Richard Moulds, AWS Quantum Computing | AWS re:Invent 2020


 

>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. >> Welcome back to theCUBE. This is virtual coverage of AWS re:Invent 2020. I'm John Furrier, your host. This is theCUBE Virtual; we're not in person this year, so we're doing the interviews remote. And this segment is going to build on the quantum conversation we had last year. Our guests are Richard Moulds, general manager of Amazon Braket and AWS quantum computing, and Fernando Brandao, head of quantum algorithms at AWS and Bren Professor of Theoretical Physics at Caltech. Fernando, thanks for coming on; Richard, thanks for joining us. >> Happy to be here. >> So, Fernando, first of all, love your title: quantum algorithms. That's the coolest title I've heard so far, and you're pretty smart because you're a professor of theoretical physics at Caltech, which I'd never be able to get into, but I wish I could someday. Thanks for coming on. Quantum has been quite the rage, and there's a lot of people talking about it. It's not ready for prime time; some say it's moving faster than others. But where are we on quantum right now? What are you seeing, Fernando? Where would you peg us in the evolution of where we are? >> Quantum computing is an emerging and rapidly developing field, both in terms of hardware development and in terms of identifying the most impactful use cases. So it's early days for everyone, and we have different players and different technologies that are being explored. It's early, but it's an exciting time to be doing quantum computing. And it's very interesting to see the interest in industry growing, and customers.
For example, customers of AWS are willing to take part in this journey with us in developing the technology. >> Awesome. Richard, last year we talked to Bill Vass about this, and he set expectations really well, I thought, in pretty much the classic Amazonian way: you make the announcement, then make a lot of progress. Give us the update on your end. You guys are now shipping; Braket is available. What's the update on your end? Werner mentioned it in his keynote this week as well. >> Yeah, it was great. I was just looking at your interview with Bill; that was when we launched the service, almost exactly a year ago this week. And we've come a long way. So as you mentioned, we've gone to general availability with the service now; that happened in August. So now a customer can log into the Braket console and start programming quantum computers. You know, there's tremendous excitement, obviously, as you mentioned and Fernando mentioned. Quantum computers, we think,
And, uh, it's, it's essential that customers can can plan for the future, you know, build their own internal resources, become experts, hire the right staff, figure out where it might impact their business and, uh, and potentially disrupt. >>So, uh, you know, in the past they're finding it hard to, to get involved. You know, these machines are very different, different technologies building in different ways of different characteristics. Uh, the tooling is very disparate, very fragmented. Historically, it's hard for companies to get access to the machines. These tend to be, you know, owned by startups or in, you know, physics labs or universities, very difficult to get access to these things, very different commercial models. Um, and, uh, as you, as you suggested, a lot of interests, a lot of hype, a lot of claims in the industry, customers want to cut through all that. They want to understand what's real, uh, what they can do today, uh, how they can experiment and, uh, and get started. So, you know, we see bracket as a catalyst for innovation. We want to bring together end-users, um, consultants, uh, software developers, um, providers that want to host services on top of bracket, try and get the industry, you know, rubbing along them. You spoke to lots of Amazonians. I'm sure you've heard the phrase innovation flywheel, plenty of times. Um, we see the same approach that we've used successfully in IOT and robotics and machine learning and apply that same approach to content, machine learning software, to quantum computing, and to learn, to bring it together. And, uh, if we get the tooling right, and we make it easy, um, then we don't see any reason why we can't, uh, you know, rapidly try and move this industry forward. And >>It was fun areas where there's a lot of, you know, intellectual computer science, um, technology science involved in super exciting. And Amazon's supposed to some of that undifferentiated heavy. 
>>That's what I am, you know, it's like, >>There's a Maslow hierarchy of needs in the tech industry. You know, people say, Oh, why five people freak out when there's no wifi? You know, you can't get enough compute. Right. So, you know, um, compute is one of those things with machine learning is seeing the benefits and quantum there's so much benefits there. Um, and you guys made some announcements at, at re-invent, uh, around BRACA. Can you share just quickly share some of those updates, Richard? >>Sure. I mean, it's the way we innovate at AWS. You know, we, we start simple and we, and we build up features. We listen to customers and we learn as we go along, we try and move as quickly as possible. So since going public in, uh, in, in August, we've actually had a string of releases, uh, pretty consistent, um, delivering new features. So we try to tie not the integration with the platform. Customers have told us really very early on that they, they don't just want to play with the technology. They want to figure out how to, how to envisage a production quantum computing service, how it might look, you know, in the context of a broad cloud platform with AWS. So we've, uh, we launched some integration with, uh, other AWS capabilities around security, managing limits, quotas, tagging resources, that type of thing, things that are familiar to, uh, to, to, to current AWS users. >>Uh, we launched some new hardware. Uh, all of our partners D-Wave launched some, uh, uh, you know, a 5,000 cubit machine, uh, just in September. Uh, so we made that available on bracket the same day that they launched that hardware, which was very cool. Um, you know, we've made it, uh, we've, we've made it easier for researchers. We've been, you know, impressed how many academics and researchers have used the service, not just large corporations. Um, they want to have really deep access to these machines. They want to program these things at a low level. 
So we launched some features, uh, to enable them to do their research, but reinvent, we were really focused on two things, um, simulators and making it much easier to use, uh, hybrid systems systems that, uh, incorporate classical compute, traditional digital computing with quantum machinery, um, in the vein that follow some of the liens that we've seen, uh, in machine learning. >>So, uh, simulators are important. They're a very important part of, uh, learning how to use concepts, computers. They're always available 24, seven they're super convenient to use. And of course they're critical in verifying the accuracy of the results that we get from quantum hardware. When we launched the service behind free simulator for customers to help debug their circuits and experiments quickly, um, but simulating large experiments and large systems is a real challenge on classical computers. You know, it, wasn't hard on classical. Uh, then you wouldn't need a quantum computer. That's the whole point. So running large simulations, you know, is expensive in terms of resources. It's complicated. Uh, we launched a pretty powerful simulator, uh, back in August, which we thought at the time was always powerful managed. Quantum stimulates circuit handled 34 cubits, and it reinvented last week, we launched a new simulator, which actually the first managed simulator to use tensor network technology. >>And it can run up to 50 cubits. So we think is, we think is probably the most powerful, uh, managed quantum simulator on the market today. And customers can flip easily between either using real quantum hardware or either of our, uh, stimulators just by changing a line of code. Um, the other thing we launched was the ability to run these hybrid systems. You know, quantum computers will get more, no don't get onto in a moment is, uh, today's computers are very imperfect, you know, lots of errors. 
Um, we working, obviously the industry towards fault-tolerant machines and Fernando can talk about some research papers that were published in that area, but right now the machines are far from perfect. And, uh, and the way that we can try to squeeze as much value out of these devices today is to run them in tandem with classical systems. >>We think of the notion of a self-learning quantum algorithm, where you use a classical optimization techniques, such as we see machine learning to tweak and tune the parameters of a quantum algorithm to try and iterate and converge on the best answer and try and overcome some of these issues surrounding errors. That's a lot of moving parts to orchestrate for customers, a lot of different systems, a lot of different programming techniques. And we wanted to make that much easier. We've been impressed with a, a, an open projects, been around for a couple of years, uh, called penny lane after the Beatles song. And, um, so we wanted to double down on that. We were getting a lot of positive feedback from customers about the penny lane talk it, so we decided to, uh, uh, make it a first class citizen on bracket, make it available as a native feature, uh, in our, uh, in our Jupiter notebooks and our tutorials learning examples, um, that open source project has very similar, um, guiding principles that we do, you know, it's open, it's cross platform, it's technology agnostic, and we thought he was a great fit to the service. >>So we, uh, we announced that and made it available to customers and, uh, and, and, uh, already getting great feedback. So, uh, you know, finishing the finishing the year strongly, I think, um, looking forward to 2021, you know, looking forward to some really cool technology it's on the horizon, uh, from a hardware point of view, making it easy to use, um, you know, and always, obviously trying to work back from customer problems. And so congratulations on the success. 
I'm sure it's not hard to hire people interested, at least finding qualified people it'd be different, but, you know, sign me up. I love quantum great people, Fernando real quick, understanding the relationship with Caltech unique to Amazon. Um, tell us how that fits into the, into this, >>Uh, right. John S no, as I was saying, it's it's early days, uh, for, for quantum computing, uh, and to make progress, uh, in abreast, uh, put together a team of experts, right. To work both on, on find new use cases of quantum computing and also, uh, building more powerful, uh, quantum hardware. Uh, so the AWS center for quantum computing is based at Caltech. Uh, and, and this comes from the belief of AWS that, uh, in quantum computing is key to, uh, to keep close, to stay close of like fresh ideas and to the latest scientific developments. Right. And Caltech is if you're near one computing. So what's the ideal place for doing that? Uh, so in the center, we, we put together researchers and engineers, uh, from computer science, physics, and other subjects, uh, from Amazon, but also from all the academic institutions, uh, of course some context, but we also have Stanford and university of Chicago, uh, among others. So we broke wrongs, uh, in the beauty for AWS and for quantum computer in the summer, uh, and under construction right now. Uh, but, uh, as we speak, John, the team is busy, uh, uh, you know, getting stuff in, in temporary lab space that we have at cottage. >>Awesome. Great. And real quick, I know we've got some time pressure here, but you published some new research, give a quick a plug for the new research. Tell us about that. >>Um, right. So, so, you know, as part of the effort or the integration for one company, uh, we are developing a new cubix, uh, which we choose a combination of acoustic and electric components. 
So this kind of hybrid Aquacel execute, it has the promise for a much smaller footprint, think about like a few microliters and much longer storage times, like up to settlements, uh, which, which is a big improvement over the scale of the arts sort of writing all export based cubits, but that's not the whole story, right? On six, if you have a good security should make good use of it. Uh, so what we did in this paper, they were just put out, uh, is, is a proposal for an architecture of how to build a scalable quantum computer using these cubits. So we found from our analysis that we can get more than a 10 X overheads in the resources required from URI, a universal thought around quantum computer. >>Uh, so what are these resources? This is like a smaller number of physical cubits. Uh, this is a smaller footprint is, uh, fewer control lines in like a smaller approach and a consistent, right. And, and these are all like, uh, I think this is a solid contribution. Uh, no, it's a theoretical analysis, right? So, so the, uh, the experimental development has to come, but I think this is a solid contribution in the big challenge of scaling up this quantum systems. Uh, so, so, so John, as we speak like, uh, data blessed in the, for quantum computing is, uh, working on the experimental development of this, uh, a highly adequacy architecture, but we also keep exploring other promising ways of doing scalable quantum computers and eventually, uh, to bring a more powerful computer resources to AWS customers. >>It's kind of like machine learning and data science, the smartest people work on it. Then you democratize that. I can see where this is going. Um, Richard real quick, um, for people who want to get involved and participate or consume, what do they do? Give us the playbook real quick. 
Uh, so simple, just go to the AWS console and kind of log onto the, to the bracket, uh, bracket console, jump in, you know, uh, create, um, create a Jupiter notebook, pull down some of our sample, uh, applications run through the notebook and program a quantum computer. It's literally that simple. There's plenty of tutorials. It's easy to get started, you know, classic cloud style right now from commitment. Jump in, start simple, get going. We want you to go quantum. You can't go back, go quantum. You can't go back to regular computing. I think people will be running concert classical systems in parallel for quite some time. So yeah, this is the, this is definitely not a one way door. You know, you go explore quantum computing and see how it fits into, uh, >>You know, into the, into solving some of the problems that you wanted to solve in the future. But definitely this is not a replacement technology. This is a complimentary technology. >>It's great. It's a great innovation. It's kind of intoxicating technically to get, think about the benefits Fernando, Richard, thanks for coming on. It's really exciting. I'm looking forward to keeping up keeping track of the progress. Thanks for coming on the cube coverage of reinvent, quantum computing going the next level coexisting building on top of the shoulders of other giant technologies. This is where the computing wave is going. It's different. It's impacting people's lives. This is the cube coverage of re-invent. Thanks for watching.

Published Date : Dec 16 2020



Mike Gilfix, IBM | AWS re:Invent 2020 Partner Network Day


 

>> Reporter: From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020. Special coverage sponsored by the AWS Global Partner Network. >> Hello, and welcome to theCUBE virtual and our coverage of AWS re:Invent 2020 and our special coverage of the APN partner experience. We are theCUBE virtual, and I'm your host, Justin Warren. Today I'm joined by Mike Gilfix, who is the Chief Product Officer for IBM Cloud Paks. Mike, welcome to theCUBE. >> Thank you. Thanks for having me. >> Now, Cloud Paks is a new thing from IBM. I'm not particularly familiar with it, but it's related to IBM's partnership with AWS. So maybe you could just start us off quickly by explaining: what is Cloud Paks, and what's your role as Chief Product Officer there? >> Well, Cloud Paks is sort of our next generation platform. What we've been doing is bringing the power of IBM software, really across the board, to a hybrid cloud environment: making it really easy for our customers to consume it wherever they want, however they choose to do it, with a consistent skillset, and making it really easy to get those things up and running and deliver value quickly. And this is part of IBM's hybrid approach. What we've seen is that organizations that can leverage the same skillset, and basically take those workloads and make them run where they need to, yield about a two and a half times ROI, and Cloud Paks sits at the center of that, running on the OpenShift platform. So they get consistent security, skills, and powerful software to run their business everywhere. And we've been partnering with AWS because we want to make sure that those customers that have made that choice can get access to those capabilities as easily and as fast as possible. >> Right. And the Cloud Paks are built on the Red Hat, now let me get this right, open hybrid cloud platform. So is that OpenShift? >> It is OpenShift, yes.
I mean, IBM is incredibly committed to open software, and OpenShift does provide that common layer. And the reason that's important is you want consistent security. You want to avoid lock-in, right? That gives you a very powerful platform, (indistinct) if you will, that can truly run anywhere with any workload. And we've been working very closely with AWS to make sure that is a premiere, first-class experience on AWS. >> Yes, so OpenShift on AWS is relatively new from IBM. So could you explain what OpenShift on AWS is, and how does it differ from the OpenShift that people may already be familiar with? >> Well, the kernel, if you will, is the same; it's the same sort of central open source software. But in working closely with AWS, we're now making those things available as simple services that you can quickly provision and run. And that makes it really easy for people to get started, while again carrying forward those same sorts of skillsets. So that's a key way in which you can gain that sort of consistency no matter where you're running that workload. And we've been investing in that integration, working closely with Amazon. >> Yeah, and we all know Red Hat's commitment to open source software and open ecosystems; Red Hat is rightly famous for it. And I am old enough to remember when it was a brand new thing, particularly in the enterprise, to allow open source to come in and have anything to do with workloads. And now it's all the rage, and people are running quite critical workloads on it. So what are you seeing in the adoption of open software within the enterprise? >> The adoption is massive. First, let me describe what's driving it. People want to tap into innovation, and the beauty of open source is that you're crowdsourcing, if you will, this massive community of developers that are creating an incredible amount of innovation at incredible speed.
And it's a great way to ensure that you avoid vendor lock-in. So enterprises of all types are looking to open solutions as a way, both of innovating faster and getting protection. And that commitment, is something certainly Red Hat has tapped into. It's behind the great success of Red Hat. And it's something that frankly is permeating throughout IBM in that we're very committed to driving this sort of open approach. And that means that, you know, we need to ensure that people can get access to the innovation they need, run it where they want and ensure that they feel that they have choice. >> And the choice I think is a key part of it that isn't really coming through in some of the narrative. There's a lot of discussion about how you should actually pick, should you go cloud? I remember when it was either you should stay on-site or should you go to cloud? And we had a long discussion there. Hybrid cloud really does seem to have come of age where it's a realistic kind of compromise is probably the wrong word, but it's a trade off between doing all the one thing or all another. And for most enterprises, that doesn't actually seem to be the choice that's actually viable for them. So hybrid seems like it's actually just the practical approach. Would that be accurate? >> Well our studies have shown that if you look statistically at the set of workload that's moved to cloud, you know something like 20% of workloads have only moved to cloud meaning the other 80% is experiencing barriers to move. And some of those barriers is figuring out what to do with all this data that's sitting on-prem or you know, these applications that have years and years of intelligence baked into them that can not easily be ported. And so organizations are looking at the hybrid approaches because they give them more choice. It helps them deal with fragmentation. Meaning as I move more workload, I have consistent skillset. 
It helps me extend my existing investments and bring it into the cloud world. And all those things again are done with consistent security. That's really important, right? Organizations need to make sure they're protecting their assets, their data throughout, you know leveraging a consistent platform. So that's really the benefit of the hybrid approach. It essentially is going to enable these organizations to unlock more workload and gain the acceleration and the transformative effect of cloud. And that's why it's becoming a necessity, right? Because they just can't get that 80% to move yet. >> Yeah and I've long said that the cloud is a state of mind rather than a particular location. It's more about an operational model of how you do things. So hearing that we've only got 20% of workloads have moved to this new way of doing things does rather suggest that there's a lot more work to be done. What, for those organizations that are just looking to do this now or they've done a bit of it and they're looking for those next new workloads, where do you see customers struggling the most and where do you think that IBM can help them there? >> Well,(indistinct) where are they struggling the most? First I think skills. I mean, they have to figure out a new set of technologies to go and transition from this old world to the new and at the heart of that is lots of really critical debates. Like how do they modernize the way that they do software delivery for many enterprises, right? Embrace new ways of doing software delivery. How do they deal with the data issues that arise from where the data sits, their obligations for data protection, what happens if the data spans multiple different places but you have to provide high quality performance and security. These are all parts of issues that, you know, span different environments. And so they have to figure out how to manage those kinds of things and make it work in one place. 
I think the benefit of partnering, you know, with Amazon is, clearly there's a huge customer base that's interested in Amazon. I think the benefit of the IBM partnership is, you know, we can help to go and unlock some of those new workloads and find ways to get that cloud benefit and help to move them to the cloud faster again with that consistency of experience. And that's why I think it's a good match partnership where we're giving more customers choice. We're helping them to unlock innovation substantially faster. >> Right. And so for people who might want to just get started without it, how would they approach this? People might have some experience with AWS, it's almost difficult not to these days, but for those who aren't familiar with the Red Hat on AWS with OpenShift on AWS, how would they get started with you to explore what's possible? >> Well, one of the things that we're offering to our clients is a service that we refer to as IBM garage. It's, you know, an engagement model if you will, within IBM, where we work with our clients and we really help them to do co-creation so help to understand their business problem or, you know, the target state of where they want their IT to get to. And in working with them in co-creation, you know, we help them to affect that transition. Let's say that it's about delivering business applications faster. Let's say it's about modernizing the applications they have or offering new services, new business models, again all in the spirit of co-creation. And we found that to be really popular. It's a great way to get started. We've leveraged design thinking and approach. They can think about the customer experience and their outcome. If they're creating new business processes, new applications, and then really help them to uplift their skills and, you know, get ready to adopt cloud technology and everything that they do. >> It sounds like this is a lot of established workloads that people already have in their organizations. 
It's already there, it's generating real money. It's not those experimental workloads that we saw early on which was a, well let's try this. Cloud is a fabulous way where we can run some experiments. And if it doesn't work, we just turn it off again. These sound like a lot more workloads are kind of more important to the business. Is that be true? >> Yeah. I think that's true. Now I wouldn't say they're just existing workloads because I think there's lots of new business innovation that many of our, you know, clients want to go and launch. And so this gives them an opportunity to do that new innovation, but not forget the past meaning they can bring it forward and bring it forward into an integrated experience. I mean, that's what everyone demands of a true digital business, right? They expect that your experience is integrated, that it's responsive, that it's targeted and personalized. And the only way to do that is to allow for experimentation that integrates in with the, you know, standard business processes and things that you did before. And so you need to be able to connect those things together seamlessly. >> Right. So it sounds like it's a transition more than creating new thing completely from scratch. It's well, look, we've done a lot of innovation over the past decade or so in cloud, we know what works but we still have workloads that people clearly know and value. How do we put those things together and do it in such a way that we maintain the flexibility to be able to make new changes as we learn new things. >> Yeah, leverage what you've got play to your strengths. I mean that's how you create speed. If you have to reinvent the wheel every time it's going to be a slow roll. >> Yeah and that does seem like an area where an organization probably at this point should be looking to partner with other people who have done the hard yards. They've already figured this out. 
Well, as you say, why can't we make all of these obvious areas yourself when you're starting from scratch, when there's a wealth of experience out there and particularly this whole ecosystem that exists around the open software? In fact maybe you could tell us a little bit about the ecosystem opportunities that are there because Red Hat has been part of this for a very long time. AWS has a very broad ecosystem as we're all familiar with being here at re:Invent yet again. How does that ecosystem play into what's possible? >> Well, let me explain why I think IBM brings a different dimension to that trio, right? IBM brings deep industry expertise. I mean, we've long worked with all of our clients, our partners on solving some of their biggest business problems and being embedded in the thing that they do. So we have deep knowledge of their enterprise challenges, deep knowledge of their business processes. deep knowledge of their business processes. We are able to bring that industry know how mixed with, you know, Red Hat's approach to an open foundational platform, coupled with, you know, the great infrastructure you can get from Amazon and, you know, that's a great sort of powerful combination that we can bring to each of our clients. And, you know, maybe just to bring it back a little bit to that idea, okay so what's the role in Cloud Paks in that? I mean, Cloud Paks are the kind of software that we've built to enable enterprises to run their essential business processes, right? In the central digital operations that they run everything from security to protecting their data or giving them powerful data tools to implement AI and you know, to implement AI algorithms in the heart of their business or giving them powerful automation capabilities so they can digitize their operations. And also we make sure those things are going to run effectively. 
It's those kinds of capabilities that we're bringing in the form of Cloud Paks. Think of that as the substrate that runs a digital business, that now can be brought through, right? Running on AWS infrastructure through this integration that we've done. >> Right. So basically taking things as a pre-packaged module, so that we can just grab that module, drop it in and start using it, rather than having to build it ourselves from scratch. >> That's right. And they can leverage those powerful capabilities and get focused on innovating the things that matter, right? So it's a huge accelerant to getting business value. >> And it does sound a lot easier than trying to learn how to do the complex sort of deep learning and linear algebra that's involved in machine learning. I have looked into it a bit, and trying to manage that sort of deep maths... I think I'd much rather just grab one off the shelf, plug it in and just use it. >> Yeah. It's also better than writing assembler code, which was some of my first programming experiences as well. So I think the software industry has moved on just a little bit since then. (chuckles) >> I think it has, and I do not miss the days of handwriting assembly at all. Sometimes for this (indistinct) reasons. But if we want to get things done, I think I'd much rather work in something a little higher level. (Mike laughing) So thank you very much for joining me. My guest Mike Gilfix there from IBM, sorry, from IBM Cloud. And this has been, sorry, go ahead. We'll cut that. Can we cut and reedit this outro? >> Cameraman: Yeah, you guys can, or you can just go ahead and just start over again. >> I'll just do, I'll just do the outro. Try it again. >> Cameraman: Yeah, sounds good. >> So thank you so much to my guest there, Mike Gilfix, Chief Product Officer for IBM Cloud Paks from IBM. This has been theCUBE's coverage of AWS re:Invent 2020 and the APN partner experience.
I've been your host, Justin Warren, make sure you come back and join us for more coverage later on.

Published Date : Nov 28 2020


Kazuhiro Gomi, NTT | Upgrade 2020 The NTT Research Summit


 

>> Narrator: From around the globe, it's theCUBE, covering the Upgrade 2020, the NTT Research Summit presented by NTT Research. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in Palo Alto studio for our ongoing coverage of the Upgrade 2020, it's the NTT Research conference. It's our first year covering the event, and it's actually the inaugural year for the event, so we're really, really excited to get into this. It's basic research that drives a whole lot of innovation, and we're really excited to have our next guest. He is Kazuhiro Gomi, he is the President and CEO of NTT Research. Kazu, great to see you. >> Hi, good to see you. >> Yeah, so let's jump into it. So this event, like many events, was originally scheduled I think for March at Berkeley; clearly COVID came along and you guys had to make some changes. I wonder if you can just share a little bit about your thinking in terms of having this event, getting this great information out, but having to do it in a digital way and kind of rethinking the conference strategy. >> Sure, yeah. So NTT Research, we started our operations about a year ago, July 2019, and I always wanted to show the world, to give an update of what we have done in the areas of basic and fundamental research. So we planned to do that in March, as you mentioned; however, the rest is to some extent history, we needed to cancel the event and then decided to do it this time of the year through virtual. Something we learned, however: not everything is bad. By doing this virtually we can certainly reach out to so many people around the globe at the same time. So we're taking, I think, trying to get the best out of it. >> Right, right, so you've got a terrific lineup. So let's jump into it a little bit.
So first thing, just about NTT Research: we're all familiar, if you've been around for a little while, with Bell Labs, and we're fortunate to have Xerox PARC up the street here in Palo Alto; these are kind of famous institutions doing basic research. People probably aren't as familiar, at least in the States, with NTT basic research. But when you think about real bottom-line basic research and how it contributes, ultimately it gets into products, and solutions, and health care, and all kinds of places. How should people think about basic research and its role in ultimately coming to market in products, and services, and all different things? Because you're getting way down into the weeds, into the really, really basic hardcore technology. >> Sure, yeah, so let me, just from my perspective, define basic research versus some other research and development. For us, basic research means that we don't necessarily have any product roadmap or commercialization roadmap; we just want to look at the fundamental core technology of things. And from the timescale perspective, obviously, we're not looking at something for next year or the next six months, that kind of thing. We are looking at five years, or sometimes longer than that, potentially 10 years down the road. But you mentioned Bell Labs and Xerox PARC. Yeah, well, there used to be such organizations in the United States; however, arguably those days have kind of gone, so that's what's going on in the United States. In Japan, NTT has done quite a bit of basic research over the years. And so we wanted to expand that, I think because in a lot of cases we can talk about the end of Moore's law, and it's kind of a scary time for that, and the energy consumption of IT. Some huge, fundamental change has to happen to sustain long-term development of these ideas, basically for the sake of human beings. >> Right, right.
>> So NTT sees that, and also we've been doing quite a bit of basic research in Japan. So we recognized this is the time to expand these activities, and as part of doing so we opened up a research lab in Silicon Valley, where certainly we can work better and more easily with the global talent in this field. So that's how we started this endeavor, like I said, last year. And so far it's tremendous progress that we have made, so that's where we are. >> That's great, so just a little bit more specific. So you guys are broken down into three labs as I understand it: you've got the PHI lab, which is Physics and Informatics, the CIS lab, Cryptography and Information Security, and the MEI lab, Medical and Health Informatics. And the conference is really laid out along those same tracks: day one is a whole lot of stuff, or excuse me, day two they run the Physics and Informatics day, the next day is really Cryptography and Information Security, and then the Medical and Health Informatics. So those are super interesting but very diverse kind of buckets of fundamental research. And you guys are attacking all three of those pillars. >> Yup, so day one, the general session, is where we cover all the topics, but just the whole general topics. I think some people, those who want to understand what NTT Research is all about, joining day one will be a great day to understand more holistically what we are doing. However, given the type of research topics that we are tackling, we need deep-dive conversations, very specific to each topic, by the specialists and the experts in each field. Therefore we have days two, three, and four for the specific topics that we're going to talk about. So that's the configuration of this conference. >> Right, right, and I love it. I just have to read a few of the session breakout titles 'cause I think they're just amazing and I always love learning new vocabulary words.
Coherent nonlinear dynamics and combinatorial optimization language multipliers, indistinguishability obfuscation from well-founded assumptions, fully deniable communications and computation. I mean, a brief history of the quasi-adaptive NIZKs, which I don't even know what that stands for. (Gomi laughing) Really some interesting topics. But the other thing that jumps out when you go through the sessions is the representation of universities and really the topflight university. So you've got people coming from MIT, CalTech, Stanford, Notre Dame, Michigan, the list goes on and on. Talk to us about the role of academic institutions and how NTT works in conjunction with academic institutions, and how at this basic research level kind of the commercial academic interests align and come together, and work together to really move this basic research down the road. >> Sure, so the working with academic, especially at the top-notch universities are crucial for us. Obviously, that's where the experts in each field of the basic research doing their super activities and we definitely need to get connected, and then we need to accelerate our activities and together with the entities researchers. So that has been kind of one of the number one priority for us to jumpstart and get some going. So as you mentioned, Jeff, that we have a lineup of professors and researchers from each top-notch universities joining to this event and talking at a generous, looking at different sessions. So I'm sure that those who are listening in to those sessions, you will learn well what's going on from the NTT's mind or NTT researchers mind to tackle each problem. But at the same time you will get to hear that top level researchers and professors in each field. So I believe this is going to be a kind of unique, certainly session that to understand what's it's like in a research field of quantum computing, encryptions, and then medical informatics of the world. >> Right. 
>> So that's, I am sure it's going to be a pretty great lineups. >> Oh, absolutely, a lot of information exchange. And I'm not going to ask you to pick your favorite child 'cause that would be unfair, but what I am going to do is I noticed too that you also write for the Forbes Technology Council members. So you're publishing on Forbes, and one of the articles that you publish relatively recently was about biological digital twins. And this is a topic that I'm really interested in. We used to do a lot of stuff with GE and there was always a lot of conversation about digital twins, for turbines, and motors, and kind of all this big, heavy industrial equipment so that you could get ahead of the curve in terms of anticipating maintenance and basically kind of run simulations of its lifetime. Need concept, now, and that's applied to people in biology, whether that's your heart or maybe it's a bigger system, your cardiovascular system, or the person as a whole. I mean, that just opens up so much interesting opportunities in terms of modeling people and being able to run simulations. If they do things different, I would presume, eat different, walk a little bit more, exercise a little bit more. And you wrote about it, I wonder if you could share kind of your excitement about the potential for digital twins in the medical space. >> Sure, so I think that the benefit is very clear for a lot of people, I would hope that the ones, basically, the computer system can simulate or emulate your own body, not just a generic human body, it's the body for Kazu Gomi at the age of whatever. (Jeff laughing) And so if you get that precise simulation of your body you can do a lot of things. Oh, you, meaning I think a medical professional can do a lot of thing. You can predict what's going to happen to my body in the next year, six months, whatever. 
Or if I'm feeling sick or whatever the reasons and then the doctor wants to prescribe a few different medicines, but you can really test it out a different kind of medicines, not to you, but to the twin, medical twin then obviously is safer to do some kind of specific medicines or whatever. So anyway, those are the kind of visions that we have. And I have to admit that there's a lot of things, technically we have to overcome, and it will take a lot of years to get there. But I think it's a pretty good goal to define, so we said we did it and I talked with a couple of different experts and I am definitely more convinced that this is a very nice goal to set. However, well, just talking about the goal, just talking about those kinds of futuristic thing, you may just end up with a science fiction. So we need to be more specific, so we have the very researchers are breaking down into different pieces, how to get there, again, it's going to be a pretty long journey, but we're starting from that, they're try to get the digital twin for the cardiovascular system, so basically the create your own heart. Again, the important part is that this model of my heart is very similar to your heart, Jeff, but it's not identical it is somehow different. >> Right, right. >> So we are looking on it and there are certainly some, we're not the only one thinking something like this, there are definitely like-minded researchers in the world. So we are gathered together with those folks and then come up with the exchanging the ideas and coming up with that, the plans, and ideas, that's where we are. But like you said, this is really a exciting goal and exciting project. >> Right, and I like the fact that you consistently in all the background material that I picked up preparing for this today, this focus on tech for good and tech for helping the human species do better down the road. 
In another topic, in other blog post, you talked about and specifically what are 15 amazing technologies contributing to the greater good and you highlighted cryptography. So there's a lot of interesting conversations around encryption and depending kind of commercialization of quantum computing and how that can break all the existing kind of encryption. And there's going to be this whole renaissance in cryptography, why did you pick that amongst the entire pallet of technologies you can pick from, what's special about cryptography for helping people in the future? >> Okay, so encryption, I think most of the people, just when you hear the study of the encryption, you may think what the goal of these researchers or researches, you may think that you want to make your encryption more robust and more difficult to break. That you can probably imagine that's the type of research that we are doing. >> Jeff: Right. >> And yes, yes, we are doing that, but that's not the only direction that we are working on. Our researchers are working on different kinds of encryptions and basically encryptions controls that you can just reveal, say part of the data being encrypted, or depending upon that kind of attribute of whoever has the key, the information being revealed are slightly different. Those kinds of encryption, well, it's kind of hard to explain verbally, but functional encryption they call is becoming a reality. And I believe those inherit data itself has that protection mechanism, and also controlling who has access to the information is one of the keys to address the current status. Current status, what I mean by that is, that they're more connected world we are going to have, and more information are created through IOT and all that kind of stuff, more sensors out there, I think. 
So it is great on the one side that we can do a lot of things, but at the same time there's a tons of concerns from the perspective of privacy, and securities, and stuff, and then how to make those things happen together while addressing the concern and the leverage or the benefit you can create super complex accessing systems. But those things, I hate to say that there are some inherently bringing in some vulnerabilities and break at some point, which we don't want to see. >> Right. >> So I think having those securities and privacy mechanism in that the file itself is I think that one of the key to address those issues, again, get the benefit of that they're connected in this, and then while maintaining the privacy and security for the future. >> Right. >> So and then that's, in the end will be the better for everyone and a better society. So I couldn't pick other (Gomi and Jeff laughing) technology but I felt like this is easier for me to explain to a lot of people. So that's mainly the reasons that I went back launching. >> Well, you keep publishing, so I'm sure you'll work your way through most of the technologies over a period of time, but it's really good to hear there's a lot of talk about security not enough about privacy. There's usually the regs and the compliance laws lag, what's kind of happening in the marketplace. So it's good to hear that's really a piece of the conversation because without the privacy the other stuff is not as attractive. And we're seeing all types of issues that are coming up and the regs are catching up. So privacy is a super important piece. But the other thing that is so neat is to be exposed not being an academic, not being in this basic research every day, but have the opportunity to really hear at this level of detail, the amount of work that's being done by big brain smart people to move these basic technologies along, we deal often in kind of higher level applications versus the stuff that's really going on under the cover. 
So really a great opportunity to learn more and hear from, and probably understand some, understand not all about some of these great, kind of baseline technologies, really good stuff. >> Yup. >> Yeah, so thank-you for inviting us for the first one. And we'll be excited to sit in on some sessions and I'm going to learn. What's that one phrase that I got to learn? The N-I-K-Z-T. NIZKs. (laughs) >> NIZKs. (laughs) >> Yeah, NIZKs, the brief history of quasi-adaptive NI. >> Oh, all right, yeah, yeah. (Gomi and Jeff laughing) >> All right, Kazuhiro, I give you the final word- >> You will find out, yeah. >> You've been working on this thing for over a year, I'm sure you're excited to finally kind of let it out to the world, I wonder if you have any final thoughts you want to share before we send people back off to their sessions. >> Well, let's see, I'm sure if you're watching this video, you are almost there for that actual summit. It's about to start and so hope you enjoy the summit and in a physical, well, I mentioned about the benefit of this virtual, we can reach out to many people, but obviously there's also a flip side of the coin as well. With a physical, we can get more spontaneous conversations and more in-depth discussion, certainly we can do it, perhaps not today. It's more difficult to do it, but yeah, I encourage you to, I think I encouraged my researchers NTT side as well to basic communicate with all of you potentially and hopefully then to have more in-depth, meaningful conversations just starting from here. So just feel comfortable, perhaps just feel comfortable to reach out to me and then all the other NTT folks. And then now, also that the researchers from other organizations, I'm sure they're looking for this type of interactions moving forward as well, yeah. >> Terrific, well, thank-you for that open invitation and you heard it everybody, reach out, and touch base, and communicate, and engage. 
And it's not quite the same as being physical in the halls, but that you can talk to a whole lot more people. So Kazu, again, thanks for inviting us. Congratulations on the event and really glad to be here covering it. >> Yeah, thank-you very much, Jeff, appreciate it. >> All right, thank-you. He's Kazu, I'm Jeff, we are at the Upgrade 2020, the NTT Research Summit. Thanks for watching, we'll see you next time. (upbeat music)

Published Date : Sep 29 2020

SUMMARY :

the NTT Research Summit of the Upgrade 2020, it's and you guys had to make some changes. and then decided to do this time and health care, and all kinds of places. of the cases that we can talk that the let's expand this and the MEI lab Medical and the experts in each field. and really the topflight university. But at the same time you will get to hear it's going to be a pretty great lineups. and one of the articles that so basically the create your own heart. researchers in the world. Right, and I like the fact and more difficult to break. is one of the keys to and security for the future. So that's mainly the reasons but have the opportunity to really hear and I'm going to learn. NIZKs. Yeah, NIZKs, the brief (Gomi and Jeff laughing) it out to the world, and hopefully then to have more in-depth, and really glad to be here covering it. Yeah, thank-you very the NTT Research Summit.



>> Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments or spins, with the total energy given by the expression shown at the bottom left of this slide. Here the spin variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N for worst-case instances at each N. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances. And it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. 
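The Ising energy and ground-state search just described can be made concrete in a minimal numerical sketch. This is not the speaker's code; the sign convention E(s) = -(1/2) s^T J s - h^T s and the brute-force search over all 2^N assignments are illustrative choices (brute force is only feasible for tiny N, which is exactly the exponential scaling the talk refers to):

```python
import itertools
import numpy as np

def ising_energy(s, J, h):
    """Total energy E(s) = -1/2 s^T J s - h^T s (one common sign convention)."""
    return -0.5 * s @ J @ s - h @ s

def brute_force_ground_state(J, h):
    """Exhaustively search all 2^N spin assignments -- feasible only for tiny N."""
    n = len(h)
    best_s, best_e = None, np.inf
    for bits in itertools.product([-1, 1], repeat=n):
        s = np.array(bits, dtype=float)
        e = ising_energy(s, J, h)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

# A small ferromagnetic instance: all couplings positive, no field.
J = np.ones((4, 4)) - np.eye(4)
h = np.zeros(4)
s_star, e_star = brute_force_ground_state(J, h)
# With all J_ij > 0, the aligned states (all +1 or all -1) minimize the energy.
```

Specifying (J, h) is what "an instance of the Ising problem" means in the talk; only the search strategy, not the energy function, changes between exact and heuristic solvers.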
Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best-known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic as opposed to worst-case problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-gigahertz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with N equal to 19,289, requiring approximately two days of run time on a single 2.4-gigahertz core. 
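The exact-versus-heuristic trade-off described here can be illustrated on a toy TSP. The greedy nearest-neighbor heuristic below is a generic stand-in chosen for brevity, not the solver from the benchmarking study; the point is only that the heuristic runs in polynomial time while the exact search is factorial, at the price of a possibly sub-optimal tour:

```python
import itertools
import math

def tour_length(tour, pts):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def exact_tsp(pts):
    """Brute force over all tours: O(N!) -- only viable for very small N."""
    n = len(pts)
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p, pts))
    return (0,) + best

def nearest_neighbor(pts):
    """Greedy heuristic: O(N^2), fast but with no optimality guarantee."""
    n = len(pts)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

pts = [(0, 0), (1, 0), (2, 1), (1, 2), (0, 1), (2, 3), (3, 1), (3, 3)]
opt = tour_length(exact_tsp(pts), pts)
heur = tour_length(nearest_neighbor(pts), pts)
# The heuristic tour can never be shorter than the optimum.
```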
Now, if we simple-mindedly extrapolate the root exponential scaling from the study beyond N of approximately 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the N equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900. This is an instance derived from 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single 2.4-gigahertz core. But the much larger so-called World TSP benchmark instance, with N equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. 
Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms. These machines, in general, are a novel class of information processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog physical or cyber-physical systems, in contrast to both more traditional engineering approaches that build Ising machines using conventional electronics and more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right. 
The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injection. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to the linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or perhaps mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes at the end of the pump ramp that can be read out as a binary string, giving a proposed solution of the Ising ground-state problem. 
This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory, namely a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead to both improvements of the core CIM algorithm and a pre-processing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition, and the steady states of the OPO above this threshold are essentially coherent states. 
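The pump-ramp computation described above can be caricatured with a heavily simplified classical (c-number, mean-field) model: soft-spin amplitudes x_i obey dx_i/dt = (p - 1 - x_i^2) x_i + eps * sum_j J_ij x_j while the pump p is ramped through threshold, and the final signs are read out as spins. This is an assumption-laden sketch (no noise, no measurement back-action, no quantum features of the real machine), not the actual CIM equations of motion:

```python
import numpy as np

def cim_pump_ramp(J, eps=0.1, steps=4000, dt=0.005, seed=0):
    """Euler-integrate the soft-spin dynamics while ramping the pump p: 0 -> 2.
    Returns sign(x) as the proposed Ising spin configuration."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-3 * rng.standard_normal(n)  # small noise seeds the bifurcation
    for k in range(steps):
        p = 2.0 * k / steps
        x = x + dt * ((p - 1.0 - x**2) * x + eps * (J @ x))
    return np.sign(x)

# Ferromagnetic couplings: the aligned collective mode has the lowest loss,
# so it reaches threshold first and all spins should end up agreeing.
J = np.ones((4, 4)) - np.eye(4)
spins = cim_pump_ramp(J)
```

The point of the sketch is the mechanism in the talk: below threshold the x_i are continuous "soft spins," and the cubic saturation quantizes them into binary values as the pump crosses threshold.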
There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this transition, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by a mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that depending on the sign of the coupling parameter alpha, when one OPO is lasing, it will inject a perturbation into the other that may interfere either constructively or destructively with the field that it is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or antiferromagnetic N-equals-two Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. 
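The sign-dependent phase selection just described can be checked in the same mean-field caricature for N = 2 (again an illustrative classical sketch, not the actual device model): with mutual injection alpha, the in-phase mode reaches threshold first for alpha > 0 and the anti-phase mode for alpha < 0.

```python
import numpy as np

def two_opo_phases(alpha, steps=4000, dt=0.005, seed=1):
    """Mean-field sketch of two coupled degenerate OPOs:
    dx_i/dt = (p - 1 - x_i^2) x_i + alpha * x_other, pump p ramped 0 -> 2."""
    rng = np.random.default_rng(seed)
    x = 1e-3 * rng.standard_normal(2)  # small noise seeds the phase choice
    for k in range(steps):
        p = 2.0 * k / steps
        x = x + dt * ((p - 1.0 - x**2) * x + alpha * x[::-1])
    return np.sign(x)

# alpha > 0: effective ferromagnetic coupling -> in-phase oscillation.
# alpha < 0: the anti-phase (opposite-sign) mode reaches threshold first.
s_ferro = two_opo_phases(+0.2)
s_anti = two_opo_phases(-0.2)
```

This is exactly the N = 2 ground-state readout the talk describes: the phase relation at the onset of lasing reports the ferromagnetic or antiferromagnetic ground state.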
Clearly, we can imagine generalizing this story to larger N; however, the story does not stay as clean and simple for all larger problem instances. To find a more complicated example, we only need to go to N equals four. For some choices of J_ij at N equals four, the story remains simple, like the N equals two case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated N-equals-four instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but sub-optimal minimum at large pump power. The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors seem to become more common at larger N, as for the N-equals-20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot. It can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. 
Of course, N equals 20 is still too small to be of interest for practical optimization applications. But the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit, we can also analyze fully quantum mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N equals 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, et cetera. At present we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So in closing, I should acknowledge the people who did the hard work on these things that I've shown. 
My group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI research labs. And I should acknowledge funding support from the NSF through the Coherent Ising Machines Expedition in Computing, and also from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much. >> I'd like to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, and how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. Let me acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and more recently the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter physics models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. 
So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model, which is the simple summation over the spins, where spins can be either up or down and the coupling is given by the J_ij. And the Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be an NP-hard problem. So it's computationally important because it's a representative of the NP problems, and NP problems are important because, first, they're hard on standard computers if you use a brute-force algorithm, and they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems, and hopefully it can provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is is a resonator with nonlinearity in it; we pump these resonators and we generate the signal at half the frequency of the pump. One photon of the pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. And if you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of the important characteristics of them. So I want to emphasize a little more on that, and I have this mechanical analogy, which is basically two simple pendulums. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, which is the length of the string, and by that modulation, which acts as a pump, I'm going to make an oscillation, a signal, which is at half the frequency of the pump. 
And I have two of them to show you that they can acquire these phase states. So they're still phase and frequency locked to the pump, but they can settle in either the zero or pi phase states. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, or up or down. And to implement the network of these resonators, we use the time-multiplexing scheme, and the idea is that we put pulses in the cavity. These pulses are separated by the repetition period that you put in, or T_R, and you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator 1 to 2, 2 to 3, and so on. If you look at the second delay, which is two times the repetition period, it couples 1 to 3, and so on. And if you have N minus one delay lines, then you can have any potential couplings among these synthetic resonators. And if I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right time, then I can have a programmable all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is then: having these OPOs, each of them can be either zero or pi, and I can arbitrarily connect them to each other. And then I start with programming this machine to a given Ising problem by just setting the couplings and setting the controllers in each of those delay lines. So now I have a network which represents an Ising problem. The Ising problem then maps to finding the phase state that satisfies the maximum number of coupling constraints. 
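The delay-line picture above can be sketched as assembling a coupling matrix: a delay line of length k·T_R couples pulse i to pulse i+k, and having all N-1 delays makes the network all-to-all. The unit coupling strength here is an illustrative simplification; in the machine, the modulators in each delay line would set the individual J_ij values.

```python
import numpy as np

def coupling_matrix(n_pulses, delays):
    """Each delay line of length k*T_R couples pulse i to pulse i+k.
    With all N-1 delays (k = 1 .. N-1), every pair of pulses is coupled."""
    J = np.zeros((n_pulses, n_pulses))
    for k in delays:
        for i in range(n_pulses - k):
            J[i, i + k] = J[i + k, i] = 1.0  # unit strength for illustration
    return J

# Delays of 1*T_R and 2*T_R on five pulses: nearest and next-nearest couplings.
J = coupling_matrix(5, [1, 2])
# All N-1 delays give an all-to-all network:
J_full = coupling_matrix(5, [1, 2, 3, 4])
```

This also shows why the physical size scales linearly with the number of pulses: the couplings live in the delay lines and modulation patterns, not in N^2 physical resonators.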
And the way it happens is that the Ising Hamiltonian maps to the linear loss of the network. And if I start adding gain by just putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. And we have been doing this for the past six or seven years, and I'm just going to quickly show you the transition, especially what happened in the first implementation, which was using a free-space optical system, then the guided-wave implementation in 2016, and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. So I just want to make this distinction here that the first implementation was an all-optical interaction. We also had an N-equals-16 implementation. And then we transitioned to this measurement-feedback idea, which I'll tell you quickly what it is. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using the measurement feedback, but I'm going to mostly focus on the all-optical networks, and how we're using all-optical networks to go beyond simulation of the Ising Hamiltonian, both on the linear and nonlinear side, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine, and we implemented a small N-equals-4 Max-Cut problem on the machine. So one problem for one experiment, and we ran the machine 1,000 times, we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. So then the measurement-feedback idea was to replace those couplings and the controller with a simulator. We basically simulated all those coherent interactions on an FPGA, and we recalculated the coherent pulse with respect to all those measurements, and then we injected it back into the cavity, and the nonlinearity still remains. 
So it is still a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system is preserving important information or not, or whether it's going to behave better computation-wise, and that's still a lot of ongoing study. But nevertheless, the reason that this implementation was very interesting is that you don't need the N minus one delay lines, you can just use one. Then you can implement a large machine, and then you can run several thousands of problems in the machine, and then you can compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part, which is, if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix multiplication scheme. And that's basically what gives you the Ising Hamiltonian modeling: the optical loss of this network corresponds to the Ising Hamiltonian. And if I just want to show you the example of the N-equals-4 experiment, with all those phase states and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses. And then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain. Then you start bringing up the gain so that it hits the loss, and then you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation, so you go to either the zero or pi phase state. And the expectation is that the network oscillates in the lowest possible loss state. 
There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about. I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. And the difference between looking at the topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different than the Ising Hamiltonian, and one of the biggest differences is that most of these topological Hamiltonians require breaking the time-reversal symmetry, meaning that you go from one spin on one side to another side and you get one phase, and if you go back you get a different phase. The other thing is that we're not just interested in finding the ground state; we're actually now interested in looking at all sorts of states, and looking at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-D chain of these resonators, which corresponds to the so-called SSH model in the topological world. We get the similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction and the theory. One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine, and that's something unique about this time-multiplexed implementation, so that we can actually look at the dynamics. And one example that we have looked at is that we can actually go through the transition of going from the topological to the standard nontrivial— 
I'm sorry, to the trivial behavior of the network. You can then look at the edge states, and you can also see the trivial end states and the topological edge states actually showing up in this network. We have just recently implemented a two-D network with the Harper-Hofstadter model, and we don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and dynamics, and we can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic non-classical and quantum nonlinear behaviors in these networks. Yeah, so I told you about the linear side mostly; let me just switch gears and talk about the nonlinear side of the network. And the biggest thing that I talked about so far in the Ising machine is this phase transition at threshold. So below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. And this is basically the mechanism of the computation in these OPOs, which is through this phase transition from below to above threshold. So one of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, and that's basically corresponding to the intensity of the driving pump. So it's really hard to imagine that it can go above threshold, or that you can have this phase transition happen, all in the quantum regime. And there are also some challenges associated with the intensity homogeneity of the network, which, for example, is if one OPO starts oscillating and then its intensity goes really high, then it's going to ruin this collective decision-making of the network, because of the intensity-driven phase transition nature. 
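The SSH band structure mentioned a moment ago can be computed from the standard two-band dispersion E(k) = ±|v + w·e^{ik}|, where v and w are the intra- and inter-cell hopping amplitudes. This is the textbook tight-binding formula, not the measured synthetic-resonator data from the talk; it shows the band gap 2|v - w| closing at the topological transition point v = w:

```python
import numpy as np

def ssh_bands(v, w, nk=512):
    """Two-band SSH dispersion E(k) = +/- |v + w e^{ik}| over the Brillouin zone."""
    k = np.linspace(-np.pi, np.pi, nk)
    e = np.abs(v + w * np.exp(1j * k))
    return k, e  # upper band; the lower band is -e

# The gap 2*|v - w| closes at the topological transition v = w.
_, e_gapped = ssh_bands(v=1.0, w=0.5)
_, e_critical = ssh_bands(v=1.0, w=1.0)
```

In the time-multiplexed network, tuning the effective v/w ratio through 1 is what realizes the trivial-to-topological transition and the appearance of edge states described above.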
So the question is: can we look at other phase transitions? Can we utilize them for computing, and can we bring them to the quantum regime? I'm going to specifically talk about the phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. And what is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by the quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences. This transition corresponds to a symmetry breaking: in the non-degenerate case the signal can acquire any of the phases on the circle, so it has a U(1) symmetry, and if you go to the degenerate case, that symmetry is broken and you only have the zero and pi phase states that I talked about. So now the question is: can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions we're also thinking about. And this phase transition is not just important for computing; it's also interesting for its sensing potential, and you can easily bring it below threshold and operate it in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, we can now see all sorts of more complicated and more interesting phase transitions in the spectral domain.
One of them is a first-order phase transition, which you get by just coupling two OPOs, and that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore, both in the classical and quantum regimes. And I should also mention that you can think about the couplings being nonlinear couplings as well, and that's another behavior that you can see, especially in the non-degenerate regime. So with that, I basically told you about these OPO networks, how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. And of course the motivation is, if you look at electronics and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements where we are now — with optics, we are probably very similar to 70 years ago, which is a tabletop implementation. And the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard, and also Marty Fejer at Stanford, and we could show that you can do the periodic poling in the thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in this nanophotonic periodically poled lithium niobate. And now we're working on building OPOs based on that kind of photonic thin-film lithium niobate.
These are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks. I also want to point out that the reason these nanophotonic platforms are actually exciting is not just that you can make large networks and make them compact in a small footprint. They also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO — can we have the quantum superposition of the zero and pi states that I talked about? — and the nanophotonic lithium niobate provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. So we're doing some theory on that, and we're confident that the type of nonlinearity-to-loss ratio that you can get with these platforms is actually much higher than what you can get with the existing platforms. And to go even smaller, we have been asking the question of what is the smallest possible OPO that you can make. Then you can think about really wavelength-scale resonators, adding the chi-two nonlinearity, and seeing how and when you can get the OPO to operate. And recently, in collaboration with USC and CREOL, we have demonstrated that you can use nano-lasers and get some spin-Hamiltonian implementations on those networks. So if you can build the OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. We have looked at these calculations and tried to estimate the threshold of the OPOs, say for a wavelength-scale resonator, and it turns out that it can actually be even lower than the type of bulk PPLN OPOs that we have been building in the past 50 years or so.
So we're working on the experiments, and we're hoping that we can actually make larger and larger scale OPO networks. So let me summarize the talk. I told you about the OPO networks and our work that has been going on on Ising machines and the measurement feedback, and I told you about the ongoing work on the all-optical implementations, both on the linear side and also on the nonlinear behaviors. And I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to thank you. >> [Next speaker] from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI Lab. I'm happy to share with you today some of the recent works that have been done either by me or by collaborators. The title of my talk is "A neuromorphic in silico simulator for the coherent Ising machine," and here is the outline. I would like to make the case that simulation in digital electronics of the CIM can be useful for better understanding or improving its function principles, by introducing some ideas from neural networks. This is what I will discuss in the first part; then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and a projection of the performance that can be achieved using a very-large-scale simulator in the third part, and finally I will talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper, and this comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation.
So in red here are the limitations of each of these hardware platforms. Interestingly, the FPGA-based systems, such as the Fujitsu Digital Annealer, the Toshiba bifurcation machine, or the recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability. And this is why, despite the unique advantages that some of these other hardware have — such as the quantum superposition in flux qubits, or the energy efficiency of memristors — FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at a high frequency — nor are they particularly energy efficient — but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-outs, and the long propagation delays of information within the system. In this respect, the FPGAs are interesting from the perspective of the physics of complex systems, and not only the physics of lasers and photons. So to put the performance of these various hardware in perspective, we can look at the computation done by the brain: the brain computes using billions of neurons, using only 20 watts of power, and it operates at a very, theoretically, slow frequency. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here,
by designing a large-scale simulator in silicon, in the bottom here, that can be used for testing better organization principles for the CIM. In this talk, I will discuss three neuro-inspired principles. The first is the asymmetry of connections, and the neural dynamics that are often chaotic because of that asymmetry. The second is local structure: neural networks are not composed of the repetition of always the same types of neurons — there is a local structure that is repeated, and here is the schematic of the micro-column in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in-silico simulation? First, about the two principles of asymmetry and hierarchical structure. We know that the classical approximation of the coherent Ising machine is analogous to rate-based neural networks — in the case of the Ising machine, this classical approximation can be obtained using the truncated Wigner approximation, for example — so the dynamics of both of these systems can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum of the omega_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the computed coupling term. And these dynamics, in both the CIM and neural-network cases, can be written as gradient descent on a potential function V, written here, and this potential function includes the Ising Hamiltonian.
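To make this concrete, here is a minimal sketch of that classical gradient dynamics — a toy Euler integration with my own illustrative parameters `p` and `eps`, not the paper's implementation:

```python
import numpy as np

def simulate_cim(J, p=1.5, eps=0.5, dt=0.01, steps=5000, seed=0):
    """Classical CIM amplitude dynamics,
        dx_i/dt = (p - 1 - x_i^2) * x_i + eps * sum_j J_ij x_j,
    i.e. gradient descent on
        V = sum_i (x_i^4/4 + (1 - p) * x_i^2/2) - (eps/2) * x.J.x,
    whose coupling term contains the Ising Hamiltonian."""
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(len(J))   # small random initial fluctuations
    for _ in range(steps):
        x += dt * ((p - 1 - x**2) * x + eps * (J @ x))
    return np.sign(x)

# Two ferromagnetically coupled spins align above threshold:
J = np.array([[0.0, 1.0], [1.0, 0.0]])
s = simulate_cim(J)
print(s[0] == s[1])   # True
```

Flipping the sign of the coupling matrix makes the two spins anti-align instead, which is the minimal illustration of how the couplings steer the amplitudes toward low Ising energy.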
So this is why it's natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and the h_i are the external fields of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process. But there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And this is why we propose to introduce a micro-structure into the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correcting variable. The addition of this micro-structure introduces asymmetry into the system, which in turn induces chaotic dynamics — a chaotic search rather than an annealing process — for finding the ground state of the Ising Hamiltonian. Within this micro-structure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the spins to become equal to a certain target amplitude a. This is done by modulating the strength of the Ising couplings — you see, the error variable e_i multiplies the Ising coupling here in the dynamics of each DOPO. The whole dynamics is then described by these coupled equations. Because the e_i do not necessarily take the same value for the different i, this introduces asymmetry into the system, which in turn creates chaotic dynamics, which I show here for solving an SK problem of a certain size, in which the x_i are shown here, the e_i are shown here, and the value of the Ising energy is shown in the bottom plots.
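A minimal sketch of these coupled equations — the parameter names `beta` and `a` are mine, and the exact modulation schemes from the papers are not reproduced here:

```python
import numpy as np

def cac_step(x, e, J, a, p=1.0, beta=0.2, dt=0.01):
    """One Euler step of the amplitude-control dynamics:
        dx_i/dt = (p - 1 - x_i^2) * x_i + e_i * sum_j J_ij x_j
        de_i/dt = -beta * e_i * (x_i^2 - a)
    Each error variable e_i modulates the Ising coupling of its spin so
    that every amplitude is pushed toward the common target a; since the
    e_i differ from spin to spin, the effective couplings become
    asymmetric."""
    dx = (p - 1 - x**2) * x + e * (J @ x)
    de = -beta * e * (x**2 - a)
    return x + dt * dx, e + dt * de

J = np.array([[0.0, 1.0], [1.0, 0.0]])
x, e = np.array([0.1, -0.05]), np.array([1.0, 1.0])
for _ in range(20000):
    x, e = cac_step(x, e, J, a=0.2)
print(np.round(x**2, 2))   # both amplitudes settle near the target 0.2
```

On this unfrustrated two-spin example the error variables simply equalize the amplitudes; on frustrated instances the same mechanism destabilizes local minima and produces the chaotic search shown in the plots.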
You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that we do not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. We have proposed in the past two different modulations of the target amplitude. The first one is a modulation that ensures the entropy production rate of the system becomes positive, and this forbids the creation of any nontrivial attractors. But in this work I will talk about another, heuristic modulation, which is given here, that works as well as the first modulation but is easier to implement on FPGA. So these coupled equations, which represent the continuous simulation of the coherent Ising machine with some error correction, can be implemented especially efficiently on an FPGA. And here I show the time that it takes to simulate the system; in red, you see the time that it takes to simulate the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 spins and error variables, equivalent to 500 DOPOs. So in FPGA, the nonlinear dynamics, which correspond to the degenerate optical parametric amplification, the OPA, of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.1 microseconds. And this is to be compared to what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a 1 GHz repetition rate through the optical nonlinearity, then we would require 0.5 microseconds to do this. So the simulation in FPGA can be at least as fast as a 1 GHz repetition-rate
pulsed-laser CIM. Then the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about 0.15 microseconds, at the same frequency. So for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on an FPGA by an adder tree, whose height scales logarithmically with the size of the system. But that is the case only if we had an infinite amount of resources on the FPGA. For larger problems of more than 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that I denote u here, and then the scaling becomes, for the nonlinear part, linear in N over u, and for the dot product, quadratic in N over u. Typically, for a low-end FPGA, the block size u of this matrix is about 100. So clearly we want to make u as large as possible, in order to maintain this scaling in log N for the number of clock cycles needed to compute the dot product, rather than the (N/u)-squared scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-in and fan-outs and long-distance data paths within the FPGA.
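The scaling argument can be captured with a back-of-the-envelope cycle model. This is purely illustrative — the real implementation's pipelining and routing details differ:

```python
import math

def adder_tree_depth(u):
    """Number of adder stages needed to sum u terms pairwise."""
    return math.ceil(math.log2(u)) if u > 1 else 0

def dot_product_cycles(N, u):
    """Rough cycle count for an N x N matrix-vector product when the
    FPGA sums u terms per cycle through a u-input adder tree: the matrix
    is split into ceil(N/u)^2 blocks streamed through the tree."""
    blocks = math.ceil(N / u) ** 2
    return blocks + adder_tree_depth(u)   # block stream + pipeline fill

# With unlimited resources (u = N) the cost is O(log N); with a fixed
# block size it degrades to O((N/u)^2):
for N in (512, 1024, 2048):
    print(N, dot_product_cycles(N, N), dot_product_cycles(N, 128))
```

Doubling N adds only one cycle in the u = N column but quadruples the block count in the fixed-u column, which is exactly why maximizing the adder-tree size pays off.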
So the solution for getting higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree. And this can be done by organizing the electrical components hierarchically within the FPGA, in the way shown here in this right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but this gives you an idea of why the hierarchical organization of the system becomes extremely important for getting good performance when simulating Ising machines. So instead of getting into the details of the FPGA implementation, I would like to give some benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper. Here I show results for solving SK problems — fully connected, random plus-one/minus-one spin-glass problems — and we use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to get the optimal solution of the SK problem with 99% success probability, against the problem size. In red here is the proposed FPGA implementation, and in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems, and in green here for noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. And so clearly you see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches.
So that's an interesting feature of the system. Next we can look at the real time-to-solution for these SK instances. So on this axis is the time-to-solution in seconds to find the ground state of SK instances with 99% success probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. And you see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristor crossbars — shown in blue here, which is very fast for small problem sizes but whose scaling is not good — and the same for the restricted Boltzmann machine implemented on FPGA proposed by a group in Berkeley recently, which again is very fast for small problem sizes but whose scaling is bad, so that it is worse than the proposed approach. So we can expect that for problem sizes larger than 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide, and another confirmation that the scheme scales well: we can find maximum-cut values on the G-set benchmarks better than the ones that have been previously found by any other algorithm, so they are the best-known cut values, to the best of our knowledge.
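For reference, the max-cut value reported on these benchmarks is the standard objective over plus/minus-one spins; a minimal evaluator looks like this:

```python
import numpy as np

def cut_value(J, s):
    """Weight of the cut induced by spins s in {-1,+1}^N on a graph with
    symmetric weight matrix J (zero diagonal):
        sum over i<j of J_ij * (1 - s_i * s_j) / 2."""
    return 0.25 * (J.sum() - s @ J @ s)

# 4-cycle: alternating spins cut all four edges.
J = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    J[i, j] = J[j, i] = 1.0
print(cut_value(J, np.array([1, -1, 1, -1])))   # 4.0
```

Up to an additive constant this is just the negative Ising energy, which is why a low-energy spin configuration found by the machine translates directly into a large cut.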
And so, as shown in this table in the paper, for instances 14 and 15 of the G-set in particular, we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to obtain them. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters. The tuning used here is very simple: it just depends on the degree of connectivity within each graph. And so these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems but at all types of graph Ising problems, such as max-cut problems in general. Given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of the adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation we are currently working on. So here you see the projection of the time-to-solution with 99% success probability for solving SK problems with respect to the problem size, compared to different existing Ising machines — in particular the Digital Annealer, shown in green here, the green line without dots — and we show two different hypotheses for these projections: either that the time-to-solution scales as an exponential of N, or that the time-to-solution scales as an exponential of the square root of N.
So it seems, according to the data, that the time-to-solution scales more like an exponential of the square root of N, and these projections show that we can probably solve SK problems of size 2000 spins — finding the real ground state of the problem with 99% success probability — in about 10 seconds, which is much faster than all the other proposed approaches. So, one of the future plans for this coherent Ising machine simulator: the first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, closer to the measurement-feedback CIM. And to do this, what is simulatable on the FPGA is the truncated quantum Gaussian model described in this paper, proposed by people in the NTT group. The idea of this model is that instead of having the very simple ODEs I showed previously, it includes paired ODEs that take into account not only the mean of the in-phase component but also its variance, so that we can take into account more quantum effects of the DOPO, such as the squeezing. And then we plan to make the simulator open access, for the members to run their instances on the system. There will be a first version in September that will be just based on simple command-line access to the simulator, and which will have just a classical approximation of the system — noisy mean-field annealing with binary weights and a Gaussian noise term. But then we will propose a second version that will extend the current Ising machine to a rack of FPGAs, in which we will add the more refined models — the truncated Wigner and the quantum Gaussian model I just talked about — and support real-valued weights for the Ising problems, and support the CIM. So we will announce later when this is available.
So we will announce later when this is available and and far right is working >>hard comes from Universal down today in physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. Also like to say that I look forward to collaborations with with a file lab and Yoshi and collaborators on the topics of this world. So today I'll briefly talk about our attempt to understand the fundamental limits off another continues time computing, at least from the point off you off bullion satisfy ability, problem solving, using ordinary differential equations. But I think the issues that we raise, um, during this occasion actually apply to other other approaches on a log approaches as well and into other problems as well. I think everyone here knows what Dorien satisfy ability. Problems are, um, you have boolean variables. You have em clauses. Each of disjunction of collaterals literally is a variable, or it's, uh, negation. And the goal is to find an assignment to the variable, such that order clauses are true. This is a decision type problem from the MP class, which means you can checking polynomial time for satisfy ability off any assignment. And the three set is empty, complete with K three a larger, which means an efficient trees. That's over, uh, implies an efficient source for all the problems in the empty class, because all the problems in the empty class can be reduced in Polian on real time to reset. As a matter of fact, you can reduce the NP complete problems into each other. You can go from three set to set backing or two maximum dependent set, which is a set packing in graph theoretic notions or terms toe the icing graphs. A problem decision version. This is useful, and you're comparing different approaches, working on different kinds of problems when not all the closest can be satisfied. You're looking at the accusation version offset, uh called Max Set. 
and the goal here is to find an assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications, if we had an efficient SAT solver, or NP-complete-problem solver, it would literally, positively influence thousands of problems and applications in industry and in science. I'm not going to read this, but it of course gives a strong motivation to work on these kinds of problems. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus one and plus one, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it is one; if it contains the variable in negated form, it is negative one. We then use this to formulate these products, called clause violation functions, one for every clause, which vary continuously between zero and one, and are zero if and only if the clause itself is true. Then, in order to define the dynamics, we define a dynamics in this N-dimensional hypercube where the search happens, and if solutions exist, they are sitting in some of the corners of this hypercube. So we define this energy potential, or landscape function, shown here, in such a way that it is zero if and only if all the clauses — all the K_m's — are zero, that is, all the clauses are satisfied, keeping these auxiliary variables a_m always positive. And therefore, what you do here is a dynamics that is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m's constant, it would get stuck in some local minimum.
However, what we do here is we couple it with a dynamics for the a_m's; we couple it to the clause violation functions, as shown here. And if you didn't have this a_m here — just the K_m, for example — you would essentially have positive feedback, with an increasing variable, but in that case you would still get stuck; it would still behave better than the constant version, but it would still get stuck. Only when you put in this a_m, which makes the dynamics in this variable exponential-like, only then does it keep searching until it finds a solution. And there is a reason for that, which I'm not going to talk about here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape. And this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is an invariant characteristic of the dynamics itself; in dynamical systems it is called the escape rate. The inverse of that is the timescale on which this dynamical system finds solutions, and you can see here some sample trajectories that are chaotic — because the system is nonlinear — but it is transient chaos, if there are solutions, of course, because eventually it converges to the solution. Now, in terms of performance: what is shown here, for a bunch of constraint densities — defined by M over N, the ratio between clauses and variables, for random 3-SAT problems — is, as a function of N, the monitored wall time, the wall-clock time, and it behaves quite well; it behaves polynomially, until you actually reach the SAT/UNSAT transition, where the hardest problems are found.
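The pieces described above — the clause matrix, the clause violation functions K_m, the gradient flow, and the exponentially growing auxiliary variables — can be sketched end to end on a toy instance. The step size and the small instance here are mine, purely for illustration:

```python
import numpy as np

def ctds_solve(C, dt=0.05, max_steps=20000, seed=1):
    """Continuous-time SAT dynamics: s in [-1,1]^N descends the landscape
    V = sum_m a_m * K_m^2, where K_m = prod over the literals i of clause m
    of (1 - c_mi * s_i)/2, while the auxiliary weights a_m grow
    exponentially on still-violated clauses."""
    M, N = C.shape
    rng = np.random.default_rng(seed)
    s, a = rng.uniform(-0.5, 0.5, N), np.ones(M)
    for _ in range(max_steps):
        grad, K = np.zeros(N), np.ones(M)
        for m in range(M):
            lits = np.nonzero(C[m])[0]
            factors = (1 - C[m, lits] * s[lits]) / 2
            K[m] = factors.prod()
            for idx, i in enumerate(lits):
                Kmi = np.prod(np.delete(factors, idx))  # K_m without factor i
                grad[i] += a[m] * C[m, i] * Kmi * K[m]  # -dV/ds_i
        if K.max() < 1e-4:                # every clause (nearly) satisfied
            return np.sign(s)
        s = np.clip(s + dt * grad, -1, 1)
        a = a * np.exp(dt * K)            # exponential auxiliary dynamics
    return None

# Tiny satisfiable 3-SAT instance (rows: clauses; +1 = variable, -1 = negation)
C = np.array([[-1, -1, -1], [1, -1, 1], [1, 1, -1], [-1, 1, 1]])
sol = ctds_solve(C)
# prints True when every clause contains a satisfied literal
print(all((C[m] * sol)[C[m] != 0].max() == 1 for m in range(len(C))))
```

Dropping the `a = a * np.exp(dt * K)` line reproduces the constant-a_m variant discussed above, which can stall in a local minimum instead of always escaping.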
But what's more interesting is if you monitor the continuous time t — the performance in terms of the analog, continuous time t — because that seems to be polynomial. And the way we show that is, we consider random 3-SAT for a fixed constraint density, to the right of the threshold, where it's really hard, and we monitor the fraction of problems that we have not been able to solve. We select thousands of problems at that constraint ratio and solve them with our algorithm, and we monitor the fraction of problems that have not yet been solved by continuous time t. And this, as you see, decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially — actually, as a power law. So if you combine these two, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint satisfaction problems, such as exact cover — because you can always transform them into 3-SAT, as we discussed before — Ramsey colorings, and so on, and on these problems even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because of what you have here: first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes physical wall-clock time, and that would be polynomial in scaling; but you have the other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, it would be an exponential cost. But this is some kind of trade-off between time and energy, and while I don't know how to generate time, I do know how to generate energy, so one could use energy for it.
But I know how to generate energy, so one could use that. There are other issues as well, especially if you try to do this on a digital machine, and other problems appear in physical devices too, as we discuss later. If you implement this on a GPU, you can get a speedup of two orders of magnitude, and you can also modify this to solve MaxSAT problems quite efficiently; you are competitive with the best heuristic solvers on problems from the 2016 MaxSAT competition. So this definitely seems like a good approach, but there are of course interesting limitations; interesting, I would say, because they make you think about what this means and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator when you solve this on a digital machine, using the same approach but now measuring the number of problems not yet solved after a given number of discrete integrator steps, you find that you have exponential discrete-time complexity, and of course this is a problem. If you look closely at what happens, even though the analog mathematical trajectory is the correct one, in discrete time the integrator advances very little; the state changes only in the third or fourth decimal place, while the step size fluctuates like crazy. The integration essentially freezes out, and this is because of the phenomenon of stiffness, which I'll say a bit more about later. It might look like an integration issue on digital machines that you could improve, and you could definitely improve it, but actually the issue is bigger than that.
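As an aside, the stiffness freeze-out described above can be reproduced in a few lines. This is a generic illustration with an arbitrary fast mode (lambda = 1000), not the solver's actual equations: an explicit integrator is forced to tiny steps to stay stable, while an implicit one is not.

```python
import numpy as np

lam = 1000.0          # a fast-decaying (stiff) mode: y' = -lam * y
y0, T = 1.0, 0.1      # true solution y(T) = exp(-lam*T), essentially zero

def explicit_euler(dt):
    y = y0
    for _ in range(int(T / dt)):
        y = y + dt * (-lam * y)      # stable only if |1 - lam*dt| < 1
    return y

def implicit_euler(dt):
    y = y0
    for _ in range(int(T / dt)):
        y = y / (1.0 + lam * dt)     # A-stable: decays for any dt > 0
    return y

# dt above the explicit stability limit 2/lam = 0.002: explicit Euler
# explodes, implicit Euler still decays toward the true solution.
print(abs(explicit_euler(0.005)))    # enormous
print(abs(implicit_euler(0.005)))    # tiny
```

The stability limit dt < 2/lam is exactly the "product of the Jacobian eigenvalue and the step size must stay in a bounded region" condition discussed later in the talk.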
It's deeper than that, because on a digital machine there is no time-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you run this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think maybe this wouldn't be an issue in an analog device, and to some extent that's true; analog devices can be orders of magnitude faster, but they suffer from their own problems, because they are not going to be exact either. Indeed, if you look at other systems, such as memristive machines, measurement-feedback machines, or oscillator networks, they all hinge on some ability to control your variables to arbitrarily high precision. In oscillator networks you want to read out across frequencies; in the case of CIMs you require identical pulses, which is hard to maintain, and they fluctuate and drift away from one another; and if you could control that, of course, you could control the performance. So one can ask whether this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978, a purely computer-science proof, which says that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you can solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have loss of precision. So the next question is, how does that affect the computation of hard problems? This is what we're after. Loss of precision means information loss, or entropy production.
So what you're really looking at is the relationship between the hardness of a problem and the cost of computing it. According to Schönhage, there is this left branch, which in principle could be polynomial time; but the question is whether that is achievable, or whether something more, shown on the right-hand side, always intervenes: there is always going to be some information loss, some entropy generation, that could keep you away from polynomial time. This is what we would like to understand, and I will argue that the source of this information loss is not just noise in a physical system; it is also of an algorithmic nature, so it is a questionable area to approach. Schönhage's result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: would such solvers exist in principle? If you look mathematically and precisely at what a solver does, would it have the right properties? I argue yes. I don't have a mathematical proof, but I have some arguments that this would be the case, and that it is the case for our CTDS solver: if you could calculate its trajectory losslessly, then it would solve NP-complete problems in polynomial continuous time. As a matter of fact, this is a slightly more delicate question, because time in ODEs can be rescaled however you want; what one really has to measure is the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamics and not of its parametrization. And we did that. A student of mine did that first, improving on the stiffness of the integration using implicit solvers and some smart tricks, so that you stay closer to the actual trajectory, and using the same approach: what fraction of problems can you solve within a given trajectory length?
You find that the required trajectory length scales polynomially with the problem size: we have polynomial-length complexity. That means our solver is both poly-length and, as defined, also poly-time as an analog solver. But viewed as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is that stiffness: every integrator has to discretize and truncate the equations, and it has to keep the integration within the so-called stability region for its scheme; the product of the eigenvalues of the Jacobian and the step size must stay within this region. If you use explicit methods, you want to stay inside this region, but for stiff problems some of the eigenvalues grow fast, and you are forced to reduce the step size so that the product stays in this bounded domain, which means you are forced to take smaller and smaller time steps, freezing out the integration, and I will show you that this is the case. Now, you can move to implicit solvers, which is a trick where the stability domain is actually on the outside. But what happens in that case is that some of the eigenvalues of the Jacobian, for stiff systems, start to move toward zero. As they move toward zero, they enter the instability region, so your solver tries to keep them out by increasing the step size; but if you increase the step size, you increase the truncation errors, so you get randomized in the large search space, and it's really not going to work out either. Now, one can introduce a theory, a language, for discussing computational complexity using the language of dynamical systems theory. I don't have time to go into this, but for hard problems there is a chaotic saddle
in the middle of the search space somewhere, and that dictates how the dynamics happens; invariant properties of that saddle are what determine the performance, among many other things. An important measure that we find helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Intuitively, it describes the rate at which the uncertainty contained in the insignificant digits of a trajectory flows toward the significant ones, as you lose information because errors are amplified into larger errors at an exponential rate, since you have positive Lyapunov exponents. But this is an invariant property: it is a property of the chaotic set itself, not of how you compute it, and it is really the interesting rate of accuracy loss for the dynamical system. As I said, in such a high-dimensional system you have both positive and negative Lyapunov exponents, as many in total as the dimension of the space, with the positive ones counted by the unstable-manifold dimensions and the negative ones by the stable-manifold directions. And there is an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, the escape rate that I already talked about. Now one can actually prove simple theorems, a back-of-the-envelope calculation. The idea is that you know the rate at which closely started trajectories separate from one another, so you can say: that is fine, as long as my trajectory finds the solution before nearby trajectories separate too quickly.
In that case, I can hope that if I start several closely spaced trajectories from some region of phase space, they all end up in the same solution, and that is this upper bound, this limit; and it really shows that the allowed initial separation has to be an exponentially small number. It depends on the N-dependence of the exponent here, which combines the information-loss rate with the solution-time performance. If this exponent has a linear or stronger N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. So this is the direction we are going in, and this formulation is applicable to all deterministic dynamical systems. I think we can expand this further, because there is a way of getting an expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about; it's a program one can try to pursue. So the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. Such systems can be more efficient by orders of magnitude than digital ones in solving NP-hard problems because, first of all, many of them avoid the von Neumann bottleneck, there is parallelism involved, and you can have a larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of the limits, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? I think that's the exciting part: to derive these limits.
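The separation-rate argument can be made concrete on the simplest chaotic system. This sketch estimates the largest Lyapunov exponent of the logistic map at r = 4, a standard textbook case rather than the solver itself; it is the same exponential-separation rate that the back-of-the-envelope bound above is built from, and by the Pesin relation it equals the metric entropy for this closed system.

```python
import numpy as np

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x):
# the average exponential rate at which nearby trajectories separate,
# lyap = lim (1/n) * sum log |f'(x_k)| along the orbit.
r = 4.0
x = 0.3
lyap = 0.0
n = 50000
for _ in range(n):
    lyap += np.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)| at the current point
    x = r * x * (1.0 - x)                      # iterate the map
lyap /= n
print(lyap)   # known analytic value for r = 4 is ln 2 ~ 0.693
```

Two trajectories a distance d apart grow to roughly d * exp(lyap * t); inverting that gives the exponentially small initial separation required to land in the same solution, which is exactly the bound sketched in the talk.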

Published Date : Sep 27 2020

Machine Learning Applied to Computationally Difficult Problems in Quantum Physics


 

>> My name is Franco Nori. It is a great pleasure to be here; I thank you for attending this meeting, and I'll be talking about some of the work we are doing within the NTT-PHI group. I would like to thank the organizers for putting together this very interesting event. The topics studied by NTT-PHI are very exciting and I'm glad to be part of this great team. Let me first start with a brief overview of just a few interactions between our team and other groups within NTT-PHI; after this brief overview I'm going to start talking about machine learning and neural networks applied to computationally difficult problems in quantum physics. The first question I would like to raise is the following: is it possible to have decoherence-free interaction between qubits? The solution proposed by a postdoc, a visitor, and myself some years ago was to study decoherence-free interaction between giant atoms made of superconducting qubits in the context of waveguide quantum electrodynamics. The theoretical prediction was confirmed by a very nice experiment performed by Will Oliver's group at MIT, published a few months ago in Nature under the title "Waveguide quantum electrodynamics with superconducting artificial giant atoms." This is the first joint MIT-Michigan Nature paper during this NTT-PHI grant period, and we're very pleased with it. I look forward to having additional collaborations like this one, also with other NTT-PHI groups. Another collaboration inside NTT-PHI regards the quantum Hall effect in rapidly rotating polariton condensates. This work is mainly driven by two people, Michael Fraser and Yoshihisa Yamamoto; they are the main driving forces of this project, and it has been great fun.
We're also interacting inside the NTT-PHI environment with the groups of Marandi at Caltech, McMahon at Cornell, and Oliver at MIT, and, as I mentioned before, Fraser and Yamamoto at NTT; others at NTT-PHI are also very welcome to interact with us. NTT-PHI is interested in various topics, including how to use neural networks to solve computationally difficult and important problems. Let us now look at one example of using neural networks to study computationally hard problems. Everything I'll be talking about today is mostly work in progress, to be extended and improved in the future. The first example I would like to discuss is topological quantum phase transitions retrieved through manifold learning, which is a variant of machine learning. This work was done in collaboration with Che, Gneiting, and Liu, all members of the group; a preprint is available on the arXiv. Some groups are studying quantum-enhanced machine learning, where machine learning is to be run on actual quantum computers to exploit exponential speed-ups and quantum error correction. We're not working on that kind of thing; we're doing something different: we're studying how to apply machine learning to quantum problems. For example, how to identify quantum phases and phase transitions, which I shall talk about right now; how to perform quantum state tomography in a more efficient manner, another work of ours which I'll show later on; and how to assist experimental data analysis, a separate project which we recently published but which I will not discuss today. Experiments can produce massive amounts of data, and machine learning can help us understand this huge tsunami of data. Machine learning can be either supervised or unsupervised. Supervised learning requires human-labeled data: here the blue dots have one label and the red dots have a different label,
and the question is whether new data corresponds to the blue category or the red category. Many introductions to machine learning use the example of identifying cats and dogs; this is the typical example. However, there are also cases that come with no labels. Then you're looking at the cluster structure, and you need to define a metric, a distance between the different points, to be able to correlate them and form these clusters. Manifold learning is ideally suited to problems that are nonlinear and unsupervised. If you use principal component analysis along this green axis here, which is the principal axis, you can identify a simple structure with a linear projection: when you project onto the axis here, you get the red dots in one area and the blue dots down here. But in general you could have red, green, yellow, and blue dots arranged in a complicated manner, and the correlations are better seen when you do a nonlinear embedding. In unsupervised learning the colors represent similarities, not labels, because there are no prior labels here. So we are interested in using machine learning to identify topological quantum phases, and this requires looking at the actual phases and their boundaries, starting from a set of Hamiltonians or wave functions. Recall that this is difficult to do because there is no symmetry breaking and there are no local order parameters, and in complicated cases you cannot compute the topological properties analytically, while numerically it is very hard. Machine learning is therefore enriching the toolbox for studying topological quantum phase transitions. Before our work, there were quite a few groups looking at supervised machine learning. The shortcoming is that you need prior knowledge of the system, and the data must be labeled for each phase; this is needed in order to train the neural networks.
More recently, in the past few years, there has been an increasing push toward unsupervised learning and nonlinear embeddings. One shortcoming we have seen is that they all use the Euclidean distance, which is a natural way to construct the similarity matrix, but we have shown that it is suboptimal: the Chebyshev distance provides better performance. The difficulty here is that detecting topological phase transitions is a challenge because there are no local order parameters. A few years ago we thought that machine learning might provide effective methods for identifying topological features, and in the past two years several groups have indeed been moving in this direction. We have shown that one type of machine learning, called manifold learning, can successfully retrieve topological quantum phase transitions in momentum and real space. We have also shown that if you use the Chebyshev distance between data points, as opposed to the Euclidean distance, you sharpen the characteristic features of these topological quantum phases in momentum space; afterwards, the so-called diffusion map or isometric map can be applied to implement the dimensionality reduction and to learn about these phases and phase transitions in an unsupervised manner. So this is a summary of this work on how to characterize and study topological phases. The examples we used are canonical, famous models like the SSH model, the QWZ model, and the quenched SSH model. We looked at momentum space and real space, and we found that the method works very well in all of these models; moreover, it provides implications and demonstrations for learning also in real space, where the topological invariants could be unknown or hard to compute.
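The Chebyshev-kernel embedding step can be sketched as follows. This is an illustrative toy with synthetic clusters and a median-bandwidth Gaussian kernel, not the paper's data or code: it builds the similarity matrix from the Chebyshev (L-infinity) distance advocated in the talk and takes the leading nontrivial eigenvector of the normalized kernel, which is the basic diffusion-map operation.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (30, 5)),
               rng.normal(1.0, 0.1, (30, 5))])    # two synthetic clusters

D = cdist(X, X, metric="chebyshev")               # max-coordinate distance
eps = np.median(D) ** 2                           # kernel bandwidth (heuristic)
W = np.exp(-D ** 2 / eps)                         # similarity matrix
d = W.sum(axis=1)
M = W / np.sqrt(np.outer(d, d))                   # symmetric normalization

vals, vecs = np.linalg.eigh(M)                    # eigenvalues in ascending order
coord = vecs[:, -2]                               # first nontrivial diffusion mode

labels = coord > 0     # the sign of this coordinate separates the clusters
print(labels[:5], labels[-5:])
```

Swapping `metric="chebyshev"` for `"euclidean"` in `cdist` is the one-line comparison behind the talk's claim that the L-infinity metric sharpens the features.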
So it provides insight in both momentum space and real space, and the capability of manifold learning is very good, especially when you have a suitable metric, in exploring topological quantum phase transitions. This is one area we would like to keep working on: topological phases and how to detect them. Of course there are other problems where neural networks can be useful for solving computationally hard and important problems in quantum physics, and one of them is quantum state tomography, which is important for evaluating the quality of state-production experiments. The problem is that quantum state tomography scales really badly: it is impossible to perform for, say, 20 qubits, and if you have 2,000 or more, forget it, it's not going to work. So here is a very important procedure, tomography, which cannot be done at scale because there is a computationally hard bottleneck. Machine learning is designed to efficiently handle big data, so the question we asked a few years ago is: can machine learning help us to overcome this bottleneck in quantum state tomography? This is a project called eigenstate extraction with neural-network tomography, with a student, Melkani, and a research scientist of the group, Clemens Gneiting; I'll be brief in summarizing it. The specific machine-learning paradigm is standard artificial neural networks, which have recently been shown, in the past couple of years, to be successful for tomography of pure states. Our approach is to carry this over to mixed states, by successively reconstructing the eigenstates of the mixed states. It is an iterative procedure where you slowly converge to the desired target state. If you wish to see more details, this has recently been published in Phys. Rev. A and was selected as an Editors' Suggestion; some of the referees liked it.
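The iterative eigenstate-extraction idea can be illustrated in plain linear algebra. This is not the paper's neural-network procedure, just the peel-off-the-dominant-eigenstate loop it builds on: find the dominant eigenstate of the density matrix by power iteration, subtract its contribution, and repeat.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real                # a random two-qubit mixed state
rho_orig = rho.copy()

states, weights = [], []
for _ in range(4):
    v = rng.normal(size=4) + 1j * rng.normal(size=4)
    for _ in range(5000):                # power iteration -> dominant eigenstate
        v = rho @ v
        v /= np.linalg.norm(v)
    w = (v.conj() @ rho @ v).real        # its eigenvalue = weight of this state
    states.append(v)
    weights.append(w)
    rho = rho - w * np.outer(v, v.conj())   # deflate, then find the next one

rho_rec = sum(w * np.outer(v, v.conj()) for w, v in zip(weights, states))
print(np.allclose(rho_rec, rho_orig, atol=1e-4))
```

In the paper the dominant-eigenstate step is done by a neural network trained on measurement data rather than by power iteration on a known matrix; the sketch only shows why successively extracted eigenstates reassemble the mixed state.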
So tomography is very hard to do, but it's important, and machine learning can help us do it using neural networks, achieving mixed-state tomography via iterative eigenstate reconstruction. Why is it so challenging? Because you're trying to reconstruct the quantum state from measurements. For a single qubit you have a few Pauli matrices, so there are very few measurements to make; but when you have N qubits, N appears in the exponent, the number of measurements grows exponentially, and this exponential scaling makes the computation prohibitively expensive for large system sizes. This exponential dependence on the number of qubits is the bottleneck: by the time you get to 20 or 24 qubits, it is impossible. It gets even worse: experimental data is noisy, so you need maximum-likelihood estimation in order to reconstruct the quantum state that fits the measurements best, and that, again, is expensive. There was a seminal work some time ago on ion traps where the post-processing for eight qubits took an entire week. Different ideas have been proposed, compressed sensing to reduce the number of measurements, linear regression, et cetera, but they all have problems and you quickly hit a wall. Indeed, an initial estimate was that tomography of a 14-qubit state would take centuries, and you cannot support a graduate student for a century, because you would need to pay their retirement benefits and it is simply complicated. So a team here, some time ago, looked at the question of how to do a full reconstruction of 14-qubit states within four hours (actually it was 3.3 hours), and many experimental groups told us it was a very popular paper to read and study because they wanted to do fast quantum state tomography; they could not support a student for one or two centuries.
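The scaling argument is easy to make concrete. For one qubit, linear-inversion tomography needs only the three Pauli expectation values, via rho = (I + <X>X + <Y>Y + <Z>Z)/2, while an N-qubit state has 4^N - 1 independent real parameters; the example state below is an arbitrary illustration.

```python
import numpy as np

# Single-qubit linear-inversion tomography from Pauli expectation values.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

rho_true = np.array([[0.7, 0.2 - 0.1j],
                     [0.2 + 0.1j, 0.3]])          # some valid example state

exps = [np.trace(rho_true @ P).real for P in (X, Y, Z)]
rho_rec = (I + sum(e * P for e, P in zip(exps, (X, Y, Z)))) / 2
print(np.allclose(rho_rec, rho_true))             # True: 3 numbers suffice

# The exponential wall: parameters needed for an N-qubit density matrix.
for n in (1, 8, 14, 20):
    print(n, 4 ** n - 1)
```

At N = 14 that is already about 2.7 x 10^8 parameters, which is why the week-long and centuries-long estimates quoted above appear so quickly.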
They wanted to get the results quickly. To get these density matrices you need to do these measurements, but with N qubits the number of expectation values goes like 4 to the N, as the number of Pauli strings becomes much bigger, and maximum likelihood makes it even more time consuming. This is the paper by the group in Innsbruck with the one-week post-processing, and there have been speed-ups by different groups down to hours, including 14-qubit tomography in four hours using linear regression. But the next question is: can machine learning help with quantum state tomography? Can it give us the tools for the next step, to improve it even further? The standard setup is this one here: for a neural network there are inputs X1, X2, X3, some weighting factors, and an output function phi, a nonlinear activation function that could be Heaviside, sigmoid, piecewise linear, logistic, or hyperbolic. This creates a decision boundary in input space where you get, say, the red dots on the left and the blue dots on the right, some separation between them. You could have two layers, three layers, or any number of layers, shallow or deep; this allows you to approximate any continuous function, and you can train on data via some cost-function minimization. There are different varieties of neural nets; we looked at a so-called restricted Boltzmann machine. "Restricted" means that the units within the input layer are not talking to each other, and the units within the output layer are not talking to each other. We got reasonably good results with an input layer and an output layer, no hidden layer, and the probability of finding a spin configuration given by the Boltzmann factor. So we tried to leverage pure-state tomography for mixed-state tomography, via an iterative process where you start here:
there are the mixed states in the blue area, with the pure-state boundary here. The initial state is here, and with the iterative process you get closer and closer to the actual mixed state; eventually, once you get here, you make the final jump inside. So you look at the dominant eigenstate, which is the closest pure state, compute some measurements, and run an iterative algorithm that makes you approach this desired state. After you do that you can compare results with data. We got data for four to eight trapped-ion qubits, where approximate W states were produced, and the dominant eigenstate is reliably recovered for N equal to four, five, six, seven, and eight ion qubits; for the eigenvalues we're still working, because we're getting some results which are not as accurate as we would like. So this is still work in progress, but for the states it is working really well. There is some cost scaling which is beneficial, going like N times R as opposed to N squared, and the most relevant information on the quality of the state production is retrieved directly. This works for flexible rank. So it is possible to extract the eigenstates with neural-network tomography; it is cost-effective and scalable, delivers the most relevant information about state generation, and is an interesting and viable use case for machine learning in quantum physics. More recently we have also been working on quantum state tomography using conditional generative adversarial networks, with a master's student, a PhD student, and two former postdocs. CGAN refers to these conditional generative adversarial networks. In this framework you have two neural networks which are essentially in a duel, competing with each other: one of them is called the generator and the other is called the discriminator, and they learn multi-modal models from the data.
We improved on this by adding a custom set of neural-network layers that enables the conversion of outputs from any standard neural network into a physical density matrix. So, to reconstruct the density matrix, the two networks, the generator and the discriminator, must train each other on data using standard gradient-based methods. We demonstrate that our quantum state tomography with the adversarial network can reconstruct the optical quantum state with very high fidelity, orders of magnitude faster, and from less data, than standard maximum-likelihood methods. We're excited about this. We also show that this quantum state tomography with these adversarial networks can reconstruct a quantum state in a single evaluation of the generator network if it has been pre-trained on similar quantum states, though this requires some additional training. All of this is still work in progress, with some preliminary results written up, but we're continuing. I would like to thank all of you for attending this talk, and thanks again for the invitation.

Published Date : Sep 26 2020

Networks of Optical Parametric Oscillators


 

>>Good morning, good afternoon, good evening, everyone. I should thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech. Today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, how we have been using them as Ising machines, and how we're pushing them toward quantum photonics. I should acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and more recently the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: a simple summation over spins, where the spins can be either up or down and the couplings are given by the J_ij. The Ising problem is: if you know the J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be NP-hard, so it's computationally important because it's representative of the NP problems, and NP problems are important because, first, they're hard on standard computers if you use a brute-force algorithm, and second, they're everywhere on the application side.
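To make the problem statement concrete, here is a minimal brute-force search for the Ising ground state (an illustration added here; the couplings are made up, not from the talk). The 2^N sweep over configurations is exactly the exponential cost on standard computers that motivates building a physical machine.

```python
import itertools

def ising_energy(spins, J):
    """H = -sum_{i<j} J[i][j] * s_i * s_j  (no external field)."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def brute_force_ground_state(J):
    """Try all 2^n spin configurations; feasible only for small n."""
    n = len(J)
    best = min(itertools.product([-1, 1], repeat=n),
               key=lambda s: ising_energy(s, J))
    return best, ising_energy(best, J)

# A tiny antiferromagnetic triangle: J_ij = -1 for every pair.
# The three constraints cannot all be satisfied (frustration),
# so the minimum energy is -1 rather than -3.
J = [[0, -1, -1],
     [-1, 0, -1],
     [-1, -1, 0]]
spins, energy = brute_force_ground_state(J)
print(spins, energy)
```

Even this three-spin example shows the structure of the problem: the machine's job is to find the sign pattern minimizing the same energy without enumerating all configurations.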
That's why there is this demand for making a machine that can target these problems and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is is a resonator with nonlinearity in it: we pump these resonators and we generate a signal at half the frequency of the pump. One photon of the pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. If you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. I want to emphasize a little more on that, and I have this mechanical analogy, which is basically two simple pendulums. They are parametric oscillators because I'm going to modulate a parameter of them in this video, namely the length of the string; that modulation acts as the pump, and the resulting oscillation is the signal, at half the frequency of the pump. And I have two of them to show you that they can acquire these phase states: they're still phase- and frequency-locked to the pump, but each can end up in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, or up or down. And to implement the network of these resonators, we use the time-multiplexing scheme: the idea is that we put pulses in the cavity, and these pulses are separated by the repetition period that you put in, or T_R.
And you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator one to two, two to three, and so on. If you look at the second delay, which is two times the repetition period, it couples one to three, and so on. If you have N minus one delay lines, then you can have all potential couplings among these synthetic resonators. And if I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right time, then I can have a programmable all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is to have these OPOs, each of which can be either zero or pi, and to arbitrarily connect them to each other. Then I start by programming this machine to a given Ising problem, just by setting the couplings with the controllers in each of those delay lines. So now I have a network which represents an Ising problem, and the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints. The way it happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transitions: what happened in the first implementation, which was using a free-space optical system; then the guided-wave implementation in 2016; and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines.
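The bookkeeping behind this delay-line scheme can be sketched in a few lines (a toy model added here, not control code for the actual machine): a delay of k repetition periods couples pulse i to pulse i + k, so delays 1 through N - 1 together reach every pair of the N synthetic resonators.

```python
def delay_line_pairs(n_pulses, delays):
    """Which synthetic-resonator pairs does each delay line couple?

    A delay of k repetition periods T_R couples pulse i to pulse i + k.
    """
    coupling = {}
    for k in delays:
        coupling[k] = [(i, i + k) for i in range(n_pulses - k)]
    return coupling

# With N pulses, delays 1 .. N-1 cover every pair (all-to-all coupling).
N = 5
pairs = delay_line_pairs(N, range(1, N))
all_pairs = {p for plist in pairs.values() for p in plist}
assert all_pairs == {(i, j) for i in range(N) for j in range(i + 1, N)}
print(pairs[1])  # the shortest delay couples nearest neighbors
```

The modulator in delay line k then sets J[i][i + k] at the moment pulse i passes, which is what makes the network programmable.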
So I just want to make the distinction here that the first implementation was an all-optical implementation; we also had an N equals 16 implementation, and then we transitioned to this measurement-feedback idea, which I'll tell you quickly what it is. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using the measurement feedback, but I'm going to mostly focus on the all-optical networks, how we're using all-optical networks to go beyond simulation of Ising Hamiltonians, both on the linear and nonlinear sides, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine. We implemented a small N equals 4 MAX-CUT problem on the machine, so one problem for one experiment, and we ran the machine 1000 times; we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. So then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulated all those coherent interactions on an FPGA, we replicated the coherent pulse with respect to all those measurements, and then we injected it back into the cavity, and the nonlinearity still remains. So it still is a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system is preserving the important information or not, or whether it's going to behave better computation-wise, and that's still a lot of ongoing study. But nevertheless, the reason that this implementation was very interesting is that you don't need the N minus one delay lines; you can just use one, and you can implement a large machine, and then you can run several thousands of problems in the machine, and then you can compare the performance from the computational perspective.
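To illustrate the flavor of the measurement-feedback loop (a discrete-time toy model added here under made-up parameters, not the FPGA system described in the talk), the sketch below "measures" the pulse amplitudes each round trip, computes the Ising coupling term digitally, and injects it back, with a cubic term standing in for gain saturation; the signs of the final amplitudes give the spins.

```python
import random

def measurement_feedback_ising(J, rounds=300, pump=0.1, eps=0.05, seed=1):
    """Toy loop: each round trip, measure the pulse amplitudes x,
    digitally compute the feedback sum_j J[i][j]*x[j] for each pulse,
    and inject it back. The -x**3 term models gain saturation."""
    rng = random.Random(seed)
    n = len(J)
    x = [rng.gauss(0, 0.01) for _ in range(n)]  # start from small noise
    for _ in range(rounds):
        feedback = [sum(J[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [xi + pump * xi - xi ** 3 + eps * fb
             for xi, fb in zip(x, feedback)]
    return [1 if xi > 0 else -1 for xi in x]

# Ferromagnetic link between spins 0 and 1, antiferromagnetic links to
# spin 2: the unfrustrated ground state has 0 and 1 aligned, 2 opposite.
J = [[0, 1, -1],
     [1, 0, -1],
     [-1, -1, 0]]
spins = measurement_feedback_ising(J)
print(spins)
```

The point of the scheme survives even in this cartoon: all the coupling arithmetic happens digitally between round trips, so no physical delay lines are needed.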
So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix multiplication scheme, and that's basically what gives you the Ising Hamiltonian. So the optical loss of this network corresponds to the Ising Hamiltonian. And if I just want to show you the example of the N equals 4 experiment, with all those phase states and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain; then you start bringing up the gain so that it hits the loss, and then you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation. So you go either to the zero or the pi phase state, and the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. And the difference between looking at the topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different than the Ising Hamiltonian.
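The gain-versus-loss bifurcation described here can be caricatured in a few lines (a toy round-trip map added for illustration, not a model from the talk): below threshold the amplitude decays to zero, and above threshold it settles onto one of two steady states of opposite sign, the analog of the zero and pi phase states.

```python
def opo_amplitude(pump, loss=1.0, steps=2000, x0=1e-6):
    """Iterate a minimal round-trip map x -> x + (pump - loss)*x - x**3.
    Below threshold (pump < loss) the amplitude decays to zero; above
    threshold it settles near +/- sqrt(pump - loss), with the sign
    (the phase state) selected by the initial fluctuation x0."""
    x = x0
    for _ in range(steps):
        x = x + (pump - loss) * x - x ** 3
    return x

below = opo_amplitude(pump=0.8)            # decays toward zero
above = opo_amplitude(pump=1.5)            # settles near +sqrt(0.5)
flipped = opo_amplitude(pump=1.5, x0=-1e-6)  # opposite phase state
```

In the network version, the "loss" term becomes state dependent through the couplings, which is exactly why the lowest-loss (ground-state) configuration is expected to reach threshold first.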
One of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one spin on one side to the other side you pick up one phase, and if you go back you get a different phase. The other thing is that we're not just interested in finding the ground state; we're actually now interested in looking at all sorts of states, and in looking at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-dimensional chain of these resonators, which corresponds to the so-called SSH model. In the topological work, we get the similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction and the theory. One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as we are running the machine, and that's something unique about this time-multiplexed implementation, so that we can actually look at the dynamics. One example that we have looked at is that we can actually go through the transition from the topological to the trivial behavior of the network. You can then look at the edge states, and you can see both the trivial states and the topological states actually showing up in this network. We have just recently implemented a two-dimensional network with the Harper-Hofstadter model; we don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics.
And we can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic classical and quantum nonlinear behaviors in these networks. So I told you mostly about the linear side; let me just switch gears and talk about the nonlinear side of the network. The biggest thing that I talked about so far in the Ising machine is this phase transition at threshold. Below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. This is basically the mechanism of the computation in these OPOs, which is through this phase transition from below to above threshold. So one of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, and that basically corresponds to the intensity of the driving pump. So it's really hard to imagine that you can have this phase transition happen all in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network. For example, if one OPO starts oscillating and its intensity goes really high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is, can we look at other phase transitions? Can we utilize them for computing, and can we also bring them to the quantum regime? I'm going to specifically talk about the phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity.
What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, you're going to have the phase locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by the quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case, the signal can acquire any of those phases on the circle, so it has a U(1) symmetry; if you go to the degenerate case, that symmetry is broken and you only have the zero and pi phase states. So now the question is, can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions that we're also thinking about. And this phase transition is not just important for computing; it's also interesting for its sensing potential, and you can easily bring it below threshold and just operate it in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, we can now see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs; that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore, both in the classical and quantum regimes. I should also mention that you can think about the couplings being nonlinear couplings as well.
And that's another behavior that you can see, especially in the non-degenerate regime. So with that, I basically told you about these OPO networks, how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. Of course, the motivation is: if you look at electronics, at what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements where we are now; where we are now with optics is probably very similar to 70 years ago, which is a tabletop implementation. And the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions that we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. So the work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard and also Marty Fejer at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in this nanophotonic periodically poled lithium niobate. And now we're working on building OPOs based on that kind of nanophotonic lithium niobate, and these are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks.
But I also want to point out that the reason these nanophotonic platforms are actually exciting is not just because you can make large networks and make them compact in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in OPOs: can we have the quantum superposition of the zero and pi phase states that I talked about? The nanophotonics would provide some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. So we're doing some theory on that, and we're confident that the ratio of nonlinearity to losses that you can get with these platforms is actually much higher than what you can get with other existing platforms. And to go even smaller, we have been asking the question of what is the smallest possible OPO that you can make. Then you can think about really wavelength-scale resonators, adding the chi-two nonlinearity, and seeing how and when you can get the OPO to operate. Recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if you can build OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. So we have looked at these calculations and tried to estimate the threshold of such OPOs, say for a microresonator, and it turns out that it can actually be even lower than the type of bulk PPLN OPOs that we have been building in the past 50 years or so. So we're working on the experiments, and we're hoping that we can actually make even larger and larger-scale OPO networks.
So let me summarize the talk. I told you about the OPO networks and our work that has been going on on Ising machines and the measurement feedback. I told you about the ongoing work on the all-optical implementations, both on the linear side and on the nonlinear behaviors. And I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to stop here, and thank you for your attention.

Published Date : Sep 21 2020


Brian Hall, AWS | AWS re:Invent 2019


 

>>Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >>Hello everyone, welcome to theCUBE's live coverage in Las Vegas for AWS re:Invent 2019. It's our seventh year of Cube coverage, watching the big wave of Amazon continue to pound the beach with more announcements. I'm John Furrier, extracting the signal from the noise, with my partner Dave Vellante. Our next guest is Brian Hall, vice president of product marketing for all of AWS. >>Brian, thanks for coming on theCUBE. >>It's really a pleasure to be here. >>We've had many conversations off camera around opportunities and innovation, and we watched Andy Jassy's keynote, which is a marathon: three hours, 30 announcements. He hit his mark, live music, well done. But he got a ton of stuff in there. Let's unpack the key points. Tell us what you think people should pay attention to: of all the announcements, what are the major areas that stand out as most notable that you want to highlight? >>Okay, I'll give you four areas that I think are most notable from the keynote. First is, we continue to be very focused on how we give the deepest and broadest platform for all the different things people want to be able to do with computing. We had big announcements around new EC2 instances that are based on custom-designed silicon that we built. One of them is called Inf1; these are instances that are focused on machine learning inference, where it turns out up to 90% of the cost for machine learning often is. And so we have a brand new set of instances that reduce costs by up to 90% for people doing inference in the cloud.
We also last year announced an Arm chip that we developed called Graviton, and today we announced Graviton2, and there are new instances running on Graviton2, including our general-purpose compute instances, our compute-intensive instances, and our high-memory instances, and people will get up to 40% price-performance improvement by using the instances that are based on them.
So, for instance, we're releasing in Los Angeles with availability now, and that's connected to the US West region. So all of the data backup redundancy application duplication of people want to be able to do could do be done, do the region. >>All right, So graviton processor got onto those early press reports that leaked out prior to reinvent. I noticed that didn't match kind of what was announced. Just clarify what the grab it on ship is doing. What was the key? Grab it on a piece of the news here >>s O gravitas to is a arm based process lor designed and built by a W s. It is powering three different instance. Types are for those who know the types the see instances am instances and are instances on dhe available starting today with M six, which is one of our general purpose computing platforms. And so it gives up to 40% better price performance. And there's a whole ecosystem of platforms and APS Little run unarmed today. >>Are you pushing the envelope on computer? Which is great you continue to do That's the core of jewels of AWS, which we love and storage and everything else. Warm story. I get that a second, but I want your thoughts on the stage maker. A lot of time was spent on stage maker kind of levels of the stack infrastructure, machine, learning stage maker and tools. And a I service is. But the big announcement was this new I d frame environments, not a framework. You're taking an environment like an i d for all the different frameworks. Where did this come from? How I mean so obvious. Now, looking back that no one has this this was a big party announcement. You explain this. >>Yeah. So what you're referring to is sage Makers studio. One of the things that people have really liked about sage maker is it takes the whole process of building a model training a model ended up deploying a model and gives you the steps to do it, but there it hasn't been brought together into one environment before. 
And so SageMaker Studio is an integrated development environment for machine learning that lets you spin up notebooks, run experiments, test how your models are performing, deploy your model, and detect if your model is drifting, all from one place, which gives me essentially a single dashboard for my whole machine learning workflow. >>What do you think the impact's going to be on this? Because if I'm just looking at the obvious awesomeness of it, it means anyone can start using machine learning without being a guru or a total math whiz. >>That's fundamentally a lot of what we're doing: trying to reduce the barrier for developers, or anyone who has a desire to start using machine learning, to be able to do that, and SageMaker Studio is just another way that we're doing it. Another one we announced on Sunday night, of course, is a machine-learning-powered musical keyboard. Everyone knew that was coming, right? That's just an example, like DeepRacer, where we're taking machine learning and making it immediately practical and even fun, and then giving people a way to start experimenting, so that they'll eventually become developers who are using machine learning for much bigger things. >>I have a question. As you simplify machine learning, people are concerned about explainability. You guys, I think, have some ways of helping people understand what's going on inside the algorithm, so that it's not a pure black box. Is that a correct interpretation? >>It is. We announced today SageMaker Experiments. One of the things about machine learning is that you're kind of constantly tuning the different variables that you're using in your model to understand what works and what doesn't, and that's all black box; it's really hard to tell. With SageMaker Studio, and Experiments in particular, now I can see how models perform differently based on tweaking variables, which starts making it much easier to explain what's happening.
>> I think you guys got it right, and he laid out the databases. Multiple databases, pick your database. It's okay to have multiple databases; just create some abstracted layers on top. I totally agree with that philosophy, and I think that's gonna be a nice haven for opportunity. Do you agree? >> It used to be that, because so much of running a database was the operational expertise it took, you wouldn't want too many databases, because that's that many database administrators and people doing the undifferentiated heavy lifting. Now, with the cloud, if you have a data set that's better suited for something like a workload in Cassandra, and we announced the managed Cassandra service today, you can just spin up that service, load your data, and start going. And so it creates a lot more opportunity. >> Talk about quantum, because you guys announced it yesterday, which is always a signal from Amazon. It didn't make the keynote cut, but it's a relevant quantum announcement. The joke was it was gonna be quantum supremacy messaging, but no, it's more of a humble approach from you guys: hey, we're gonna put some quantum out there, setting expectations on the horizon, not overplaying your hand. But you also have an institute with Caltech, a humble academic thing going on. What's the quantum conversation like inside Amazon? What can we expect? >> We're really excited about what quantum computing's going to be able to do for customers, and we say a lot at Amazon, about many things, that it's day one, which means it's really early. When we look at quantum, it's somewhere between zero and one, and we're not quite sure where. Suffice to say it's really early days. And so what we're doing is providing a platform, a partnership with Caltech to advance the state of the art, and then also a Quantum Solutions Lab to help customers start to experiment, to figure out how this might enable them to solve problems that they couldn't solve before.
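The "somewhere between zero and one" quip is, coincidentally, a decent description of a qubit in superposition. A toy single-qubit amplitude calculation, pure standard-library Python and not Amazon's quantum service or any real quantum SDK:

```python
# Toy single-qubit simulation: a Hadamard gate puts |0> into an equal
# superposition, literally "somewhere between zero and one". This is a
# plain-Python sketch, not Amazon Braket or any production SDK.
import math

def hadamard(state):
    a, b = state  # amplitudes of |0> and |1>
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

qubit = (1.0, 0.0)        # start in the definite |0> state
qubit = hadamard(qubit)   # now an equal superposition

probs = [amp ** 2 for amp in qubit]
print(probs)  # ~0.5 each: a 50/50 chance of measuring 0 or 1
```

Squaring the amplitudes gives measurement probabilities; after the gate, both outcomes are equally likely, which is the simplest example of the behavior quantum hardware exploits.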
>> So Andy talked in the keynote about where most of the spend still is. The early days of cloud were about, you know, infrastructure as a service: storage, compute, networking. And it seems like we're entering this era where data is really the driver, where you're applying analytics and machine learning. Data's everywhere, and it seems to be driving new forms of compute. It's not just in a stovepipe anymore. Do you see that emergence of new compute workloads? >> Yeah, we definitely do. And in particular, the way that people are starting to use data lakes, which is essentially a way of saying: hey, I have my data in one place, in a bunch of different formats, and I want different analytical tools, different machine learning tools, different applications to all be able to build on that same data. And once you do that, you start unlocking opportunities for different application developers and different lines of business to take advantage of it. >> Brian, thanks for coming on theCUBE. Really appreciate it. You're the VP of all product marketing, you get the keys to the kingdom, you kind of see what's going on. Take us home and finish the interview by talking about the best. Jassy saved the best for last: the Outposts GA and the 5G Wavelength announcement with the CEO of Verizon. I mean, that's gonna bring 5G to stadiums, for drones, immersive experiences. That's a big vision. >> Yeah, I think it hit home for people. People are rightfully excited about 5G for having faster connections, but the thing that we're also very excited about is the fact that all these devices will have much lower latency, and the ability to run interactive applications. Having AWS, with AWS Wavelength, hosted with the 5G providers is gonna give developers the chance to build. >> Brian Hall with AWS. I'm John Furrier with Dave Vellante.
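The data-lake idea described above, one shared copy of the data with many independent tools building on it, can be sketched in a few lines. The records and "tools" here are invented for illustration; a real lake would live in object storage rather than in memory:

```python
# Sketch of the data-lake pattern: one shared dataset, multiple
# independent consumers reading the same copy. Records and consumer
# functions are invented for illustration only.

orders = [  # the single shared dataset
    {"region": "us-west", "amount": 120.0},
    {"region": "us-west", "amount": 80.0},
    {"region": "eu",      "amount": 200.0},
]

def analytics_total_by_region(data):
    # One "analytical tool" reading the shared data.
    totals = {}
    for row in data:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

def ml_feature_mean_amount(data):
    # A different "tool" (feature engineering) over the same copy.
    return sum(r["amount"] for r in data) / len(data)

print(analytics_total_by_region(orders))  # {'us-west': 200.0, 'eu': 200.0}
print(ml_feature_mean_amount(orders))     # 133.33...
```

The design point is that neither consumer owns or copies the data; both build on the same store, which is what unlocks new tools without new data silos.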
We're here in theCUBE Studios, sponsored by Intel. A shoutout to Intel, our signature sponsor of theCUBE Studios, for supporting our mission of bringing the best content from events and extracting the signal from the noise. We'll be back with more after this short break.

Published Date : Dec 3 2019


VMworld Day 1 General Session | VMworld 2018


 

From Las Vegas, it's theCUBE, covering VMworld 2018. Brought to you by VMware and its ecosystem partners. Ladies and gentlemen, VMware would like to thank its global diamond sponsors and its platinum sponsors for VMworld 2018. With over 125,000 members globally, the VMware User Group connects VMware customers, partners, and employees to VMware information resources, knowledge sharing, and networking. To learn more, visit the [inaudible] booth in the Solutions Exchange or the VM Village, and become a part of the community today. This presentation includes forward-looking statements that are subject to risks and uncertainties. Actual results may differ materially as a result of various risk factors, including those described in the 10-Ks, 10-Qs, and 8-Ks VMware files with the SEC. Ladies and gentlemen, please welcome Pat Gelsinger. Welcome to VMworld. Good morning. Let's try that again. Good morning, and I'll just say it is great to be here with you today. I'm excited about the sixth year of being CEO. It was on this stage six years ago where Paul Maritz handed me the clicker, and that's the last he was seen. We have 20,000-plus here on site in Vegas, and, you know, on behalf of everyone at VMware, we're just thrilled that you would be with us, and it's a joy and a thrill to be able to lead such a community. We have a lot to share with you today, and we really think about it as a community. You know, it's my 23,000-plus employees, the souls that I'm responsible for, but it's our partners, the thousands, and we kicked off our partner day yesterday. But most importantly, the VMware community is centered on you. You know, we're very aware that this event would be nothing without you and our community, and the role that we play at VMware is to build these cool breakthrough innovations that enable you to do incredible things. You're the ones who take our stuff and do amazing things, all of you together.
We have truly changed the world over the last two decades, and it is two decades. You know, it's our anniversary: in 1998, five people started VMware, right? It was exactly 20 years ago, and we're just thrilled. I was thinking about this over the weekend, and it struck me: an anniversary, that's like old people. We're here, we're having our birthday, and it's a party, right? We can't have a drink yet, but next year, yeah, we're 20 years old, right? We can do that now. And I'll just say the culture of this community is something that truly is amazing. In my 38 years, 38 years in tech, and that sort of sounds like I'm getting old or something, the passion, the loyalty, the almost cult-like behavior that we see in this team of people is simply thrilling to us. And, you know, we put together a little video to sort of summarize the 20 years, some of that history, and some of the unique and quirky aspects of our culture. Let's watch that now. We knew we had something unique, and then we demonstrated that what was unique was also some of the reasons that we love VMware, you know, like the community out there. So great. The technology, I love it. VMware is solid and much needed, literally. I do love VMware. It's awesome. Super awesome. Pardon? There's always someone that wants to listen and learn from us, and we've learned so much from them as well. And we reached out to VMware to help us start building what that future world looks like. Since we're doing really cutting-edge stuff, there's really no better people to call. And VMware has been known for continuous innovation. There's no better way to learn how to do new things in IT than being with a company that's at the forefront of technology. What do you think? Don't you love that commitment? Hey, Ashley, you know, in the prep sessions for this, I thought, boy, what can I do to take my commitment to the next level?
And so, you know, coming in a couple days early, I went down the street to Bad Ass Tattoo. So it's time for all of us to take our commitment up a level, and sometimes what happens in Vegas, you take home. Thank you. VMware has had this unique role in the industry over these 20 years, and for that we've seen just incredible things happen over this period of time, and it's truly extraordinary what we've accomplished together. And as we think back, what VMware has uniquely been able to do is, I'll say, bridge across. We've seen time and again that these areas of innovation emerge and rapidly move forward, but then, as they become utilized by our customers, they create this natural tension, where the business wants the flexibility to work across these silos of innovation. And from the start of our history, we have collectively had this uncanny ability to bridge across these cycles of innovation. You know, act one was clearly the server generation. It may seem a little bit of an ancient memory now, but you remember, you used to walk into your data center and it looked like the Louvre, the museum of IT past, right? You had your old pSeries and your zSeries and your SPARCs and your PA-RISCs and your x86 clusters, and you had to decide, well, which architecture am I going to deploy and run this on? And we bridged across, and that was the magic of ESX. It just changed the industry when that occurred. And I sort of call the early days of ESX and vSphere the intelligence test: if you weren't using it, you failed. Because, yup, 10 servers become one, months become minutes. I still have people today who come up to me and reflect on their first experience of vSphere or VMotion, and it was like a holy moment in their lives and in their careers.
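The "10 servers become one" consolidation claim is, at bottom, a packing problem: many lightly loaded physical boxes fold onto a few well-utilized virtualization hosts. A minimal first-fit sketch, with invented utilization figures; real capacity planning is far more sophisticated than this:

```python
# First-fit consolidation sketch: pack VM CPU demands onto as few
# hosts as possible. The utilization numbers are invented for
# illustration; production placement logic is much more involved.

def consolidate(vm_loads, host_capacity=1.0):
    hosts = []  # each entry is the remaining capacity of one host
    for load in vm_loads:
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] = free - load  # fits on an existing host
                break
        else:
            hosts.append(host_capacity - load)  # provision a new host
    return len(hosts)

# Ten physical servers, each historically idling at ~8% CPU:
loads = [0.08] * 10
print(consolidate(loads))  # 1: ten servers become one
```

Since the combined demand (0.8) fits within one host's capacity, all ten workloads land on a single machine, which is exactly the consolidation economics the keynote describes.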
Amazing. And act two was BYOD. You know, can we bridge across these devices? Users wanted to be able to come in and say, I have my device and I'm productive on it; I don't want to be forced to use the corporate standard. And maybe more than anything, it was the power of the iPhone, introduced in 2007, and suddenly every employee said, this is exciting and compelling, I want to use it so I can be more productive. BYOD was the rage, and again it was a tough challenge, and once again VMware helped to bridge across that seemingly insurmountable challenge. And clearly our Workspace ONE community today is bridging across these silos, not just managing devices but truly enabling employee engagement and productivity. Maybe act three was the network. You know, for 30 years we were bound to this physical view of what the network would be, and in that network we were bound to specific protocols. We had to wait months for network upgrades, and firewall rules, once every two weeks we'd upgrade them. If you had a new application that needed a firewall rule, sorry, come back next month. You know, deep frustration among developers and CIOs. Everyone was ready to break the chains, and that's exactly what we did with NSX and Nicira. The day we acquired it, Cisco's stock dropped, and the industry realized that networking had changed in a fundamental way. It will never be the same again. Maybe act four was this idea of cloud migration. If we were here three years ago, it was student body right to the public cloud; everything is going there. And I remember I was meeting with a federal CIO, and he comes up to me and says, I tried for the last two years to replatform my 200 applications. I got two done. And all of a sudden the question was, how do I do cloud migration in an effective and powerful way?
Once again, we bridged across. We brought these two worlds together and eliminated this gap between private and public cloud, and we'll talk a lot more about that today. You know, maybe our next act is what we'll call the multi-cloud era. Because a recent survey by Deloitte said that the average business today is using eight public clouds, expected to become 10-plus public clouds. And as you're managing different tools, different teams, different architectures with those solutions, how do you again bridge across? This is what we will do in the multi-cloud era: we will help our community to bridge across and take advantage of these powerful cycles of innovation that are going on, but be able to use them across a consistent infrastructure and operational environment. And we'll have a lot more to talk about on this topic today. You know, maybe the last item to bridge across is the most important: people and profits. Too often we think about this as an either-or question, and as a business leader, am I worried about the people or the planet, right? Milton Friedman probably set us up for this issue decades ago when he said the sole purpose of a business is to make profits. You want to create a multi-decade dilemma for business leaders? Could I have both people and profits? Could I do well and do good? And particularly for technology, I think we don't have a choice but to think about these together. We are permeating every aspect of business and society; we have the responsibility to do both. And of all the things that VMware has accomplished, I think this might be the one that I'm most proud of: we have demonstrated, by vSphere and the hypervisor alone, that we have saved over 540 million tons of CO2 emissions. That is what you have done. Can you believe that? Five hundred forty million tons is enough to power 68 percent of all households for a year. Wow.
Thank you for what you have done. Thank you. Another translation of that: it's enough to drive a trillion miles in the average car, or you could go to and from Jupiter, just in case that was in your itinerary, a thousand times, right? It's just incredible what we have done. And as a result, I'll say we were thrilled to accept this recognition on behalf of you and what you have done: VMware was recognized as number 17 on the Fortune Change the World list last week. We really view it as accepting this honor on behalf of what you have done with our products and technology. Tech as a force for good: we believe that fundamentally that is our opportunity, if not our obligation. You know, fundamentally tech is neutral; we together must shape it for good. The printing press by Gutenberg in 1440, right? It was used to create mass education and learning materials; it also can be used for extremist propaganda. The technology itself is neutral. Our ecosystem has a critical role to play in shaping technology as a force for good. And as we think about that, tomorrow we'll have the opportunity to host a very special guest, and I really encourage you to be here, on time, tomorrow morning on this stage. In Sanjay's session we'll have Malala, the Nobel Peace Prize winner, and of course there will be a bit of extra security as you come in; you understand that. And I just encourage you not to be late, because we see tech being a force for good in everything that we do at VMware, and I hope you'll enjoy it. I'm quite looking forward to the session tomorrow. Now, as we think about the future, I like to put it in this context: the superpowers of tech. You know, 38 years in the industry, and I am so excited, because I think everything that we've done over the last four decades is creating a foundation that allows us to do more and go faster together.
We're unlocking game-changing opportunities that have not been available to any people in the history of humanity. We have these opportunities now, and I think about these four. Cloud: you have unimaginable scale. You can literally, with your Amex card, go rent 10,000 cores for $100 per hour. Or, if you have Michael's Amex card, we can rent a million cores for $10,000 an hour. Thanks, Michael. But we also know that we're in many ways just getting started, and we have tremendous issues to bridge across incompatible clouds. Mobile: unprecedented scale. Literally, your application can reach half the humans on the planet today. But we also know that the other half of humanity, those still in the lower income brackets, are less than five percent penetrated. And we have customer examples that are using mobile phones to raise impoverished farmers in Africa out of poverty, just by having a smartphone with proper crop information and field and weather guidance; that one tool alone is lifting them out of poverty. AI: you know, I really love the topic of AI. In 1986, I was the chief architect of the 80486. Some of you remember what that was. Yeah, you're my folk, right? And for those of you who don't, it was a really important chip at the time. And my marketing manager comes running into my office and says, Pat, Pat, we must make the 486 a great AI chip. This is 1986. What happened? Nothing. AI is today a 30-year overnight success, because the algorithms and the data have gotten so much bigger that we can produce results, that we can bring intelligence to everything. And we're seeing dramatic breakthroughs in areas like healthcare: radiology, new drugs, diagnosis tools, and designer treatments. We're just scratching the surface, but AI has so many gaps yet; we don't even, in many cases, know why it works, right?
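The two rental figures in the cloud example are internally consistent: both work out to the same per-core-hour rate, which is the point of elastic scale, more cores at the same unit price. A quick sanity check on that arithmetic:

```python
# Sanity check on the keynote's cloud-scale arithmetic: both rental
# examples resolve to the same per-core-hour rate.

small = 100 / 10_000          # $100/hour for 10,000 cores
large = 10_000 / 1_000_000    # $10,000/hour for 1,000,000 cores

print(small, large)  # 0.01 0.01: a penny per core-hour either way
```

Scaling up 100x changes the bill, not the unit price, which is what makes "rent a million cores" a linear extension of "rent ten thousand" rather than a different product.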
And we'll call that explainable AI. And edge and IoT: we're connecting the physical and the digital worlds as was never before possible. We're bridging technology into every dimension of human progress. Today we're largely just hooking up things, right? We have so much to do yet to make them intelligent: networked, secured, automated, patched, bringing world-class IT to IoT. But it's not just that these are superpowers individually. We really see that each one of them is a superpower in its own right, but they're making each other more powerful as well. Cloud enables mobile connectivity. Mobile creates more data. More data makes the AI better. AI enables more edge use cases, and more edge requires more cloud to store the data and do the computing, right? They're reinforcing each other. And with that, we know that we are speeding up, and these superpowers are reshaping every aspect of society, from healthcare to education to transportation to financial institutions. This is how it all comes together. Now, just a simple example: how many of you have ever worn a hardhat? Yeah, pretty boring thing, and it has one purpose, right? Keep things from smacking me in the head. Here's the modern hardhat. It's a complete heads-up display with AR and VR capabilities that gives the worker, safety workers or factory workers or supply people, the ability to see through walls, to understand what's going on inside of the equipment. I always wondered when I was a kid what it would be like to have X-ray vision. You know, some of my thoughts weren't good about why I wanted it, but I wanted it. Well, now you can have it. But imagine, in this environment, the complex application that sits behind it. You're accessing maybe 50-year-old building plans, right? You're accessing HVAC systems, but with modern AR and VR capabilities and new containerized displays. Think about that application.
You know, John Gage famously said the network is the computer. Pat today says the application is now a network, and typically a pretty complicated one. And this is the VMware vision: to make that kind of environment realizable in every aspect of our business and community. We simply have been on this journey: any device, any application, any cloud, with intrinsic security. And this vision has been consistent. For those of you who have been joining us for a number of years, you've seen this picture, but it's been slowly evolving as we've worked piece by piece to refine and extend this vision. We're going to use it as the compass for our discussion today as we walk through our conversation. And we're going to start with a focus on any cloud. As we think about this cloud topic, we see it as a multi-cloud world: hybrid cloud, public cloud, but increasingly we see edge and telco becoming clouds in their own right. We're not gonna spend time on it today, but this area of telco is an enormous opportunity for us and our community. You know, data centers and cloud today are over 80 percent virtualized. The telco network is less than 10 percent virtualized. Wow. An industry that's almost as big as our industry, entirely unvirtualized, although the technologies we've created here can be applied over there in telco, and we have an enormous buildout coming with 5G environments emerging. What an opportunity for us, a virgin market right next to us, and we're getting some early mega-wins in this area using the technologies that you have helped us curate in the market. So we're quite excited about this topic area as well. So let's look at this full view of the multi-cloud, any-cloud journey. We see that businesses are on a multi-cloud journey, and today we see this fundamentally in two paths: a hybrid cloud and a public cloud.
And these paths are complementary and coexisting, but today each is being driven by unique requirements and unique teams. Largely, the hybrid cloud is being driven by IT and operations, the public cloud more by developers and line-of-business requirements, and together it's a multi-cloud environment. So how do we deliver upon that? Let's start by digging in on the hybrid cloud aspect. As we think about the hybrid cloud, we've been talking about this subject for a number of years, and I want to give a very specific and crisp definition: the hybrid cloud is the public cloud and the private cloud cooperating with consistent infrastructure and consistent operations. Simply put, a seamless path to and from the cloud, where my workloads don't care if they're here or there. I'm able to run them in an agile, scalable, flexible, efficient manner across those two environments, whether it's my data center or someone else's, and bring them together. Making that work is the magic of the VMware Cloud Foundation. VMware Cloud Foundation brings together compute, vSphere and the core of why we are here, and combines it with networking and storage, delivered through a layer of management and automation. The rule of the cloud is: ruthlessly automate everything. We laid out this vision of the software-defined data center seven years ago, and we've been steadfastly working on it, and VMware Cloud Foundation provides this consistent infrastructure and operations with integrated lifecycle management, automation, and patching. VMware Cloud Foundation is the simplest path to the hybrid cloud, and the fastest way to get VMware Cloud Foundation is hyperconverged infrastructure, where we've combined integrated and validated hardware as a building block. Inside of this we have validated hardware, the vSAN Ready environments.
We have integrated appliances and cloud-delivered infrastructure: three ways that we deliver that integrated hyperconverged infrastructure solution. And we have by far the broadest ecosystem of partners to do it: a broad set of vSAN Ready Nodes from essentially everybody in the industry. Secondly, we have integrated appliances, the VxRail that we have co-engineered with our partners at Dell Technologies, and today, in fact, Dell is releasing the PowerEdge servers, a major step in blade servers, that again are going to be powering VxRail and VxRack systems. And we deliver hyperconverged infrastructure through a broader set of VMware cloud partners as well. At the heart of the hyperconverged infrastructure is vSAN, and simply put, vSAN has been the engine that's been moving rapidly to take over the entire integration of compute and storage and expand to more and more areas. We have incredible momentum: over 15,000 customers for vSAN today, and for those of you who joined us, we say thank you for what you have done with this product. Really amazing, with 50 percent of the Global 2000 using it. VMware vSAN and VxRail are clearly becoming the standard for how hyperconvergence is done in the industry. In our cloud partner programs, over 500 cloud partners are using vSAN in their solutions. And finally, it's the largest in HCI software revenue. Simply put, vSAN is the software-defined storage technology of choice for the industry, and we're seeing customers put it to work in amazing ways. VMware and Dell Technologies believe in tech as a force for good, and that it can have a major impact on the quality of life for every human on the planet, particularly for the most underdeveloped parts of the world, those that live on less than $2 per day. In fact, at this moment, 5 billion people worldwide do not have access to modern affordable surgery.
Mercy Ships is working hard to change the global surgery crisis. With greater than 400 volunteers, Mercy Ships operates the largest NGO hospital ship, delivering free medical care to the poorest of the poor in Africa. Let's hear from them now. When the ship shows up to port, literally people line up for days to receive state-of-the-art, life-changing, life-saving surgeries: tumors, limbs, disease, blindness, birth defects. But not only that, the personnel are educating and training the local healthcare providers with new skills and infrastructure so they can care for their own after the ship has left. Mercy Ships runs on VMware and Dell Technologies, with VxRail, Dell Isilon, and data protection. We are the IT platform for Mercy Ships. Mercy Ships is now building their next-generation ship, called Global Mercy, which will more than double its lifesaving capacity. It's the largest charity hospital ship ever. It will go live in 2020, serving Africa, and I personally plan on being there for its launch. It is truly amazing what they are doing with our technology. Thanks. So we see this picture of the hybrid cloud. We've talked about how we do that for the private cloud, so let's look over at the public cloud and dig into this a little bit more deeply. You know, we're taking this incredible power of the VMware Cloud Foundation and making it available for the leading cloud providers in the world. And with that, the partnership that we announced almost two years ago with Amazon, and on this stage last year we announced their first generation of products. There's no better example of the hybrid cloud. And for that, it's my pleasure to bring to the stage my friend, my partner, the CEO of AWS. Please welcome Andy Jassy. Thank you, Andy. You honor us with your presence, and it really is a pleasure to be able to come in front of this audience and talk about what our teams have accomplished together over the last year.
So, can you give us some perspective on that, Andy, and what customers are doing with it? Well, first of all, thanks for having me. I really appreciate it. It's great to be here with all of you. You know, the offering that we have together, VMware Cloud on AWS, is very appealing to customers because it allows them to use the same software they've been using to manage their infrastructure for years and be able to deploy it on AWS. And we see a lot of customer momentum and a lot of customers using it. You see it in every imaginable vertical business segment: in transportation, you see it with Stagecoach; in media and entertainment, you see it with Discovery Communications; in education, MIT and Caltech; in consulting, Accenture and Cognizant and DXC. You see it in every imaginable vertical business segment, and the number of customers using the offering is doubling every quarter. So people are really excited about it, and I think the number one use case we see so far, although there are a lot of them, is customers who are looking to migrate on-premises applications to the cloud. A good example of that is MIT. They're right now in the process of migrating; in fact, they just migrated 3,000 VMs from their data centers to VMware Cloud on AWS. This would have taken years to do in the past, but they did it in just three months. It was really spectacular, and they're just a fun company to work with, and the team there. But we're also seeing other use cases as well, and probably the second most common example is on-demand capabilities for things like disaster recovery. We have great examples of customers there, and one in particular is Brink's, right? You know those, the Brink's security trucks, the armored trucks coming by. They had a critical need to retire a secondary data center that they were using for DR, so we quickly built a DR protection environment for 600 VMs. You know, they migrated their mission-critical workloads, and voilà, stable and consistent DR. And now they're eliminating that site and looking at other migrations as well, at a rate of 10 to 15 percent. It was just a great deal. One of the things I believe, Andy, is that customers should never spend capital on DR ever again; with this kind of capability in place, that is just game changing. And, you know, obviously we've been working on expanding our reach. We promised to make the service available a year ago with the global footprint of Amazon, and now we've delivered on that promise. In fact today, or yesterday if you're an Aussie down under, we announced Sydney as well, and now we're in the US, Europe, and APJ. Yeah, it's really, I mean, it's very exciting. Of course, Australia is one of the most virtualized places in the world, and it's pretty remarkable how fast European customers have started using the offering too, in just the quarter it's been out there. And of the many requests customers have had, probably the number one request has been that we make the offering available in all the regions AWS has, and I can tell you, by the end of 2019 we'll largely be there, including with GovCloud. GovCloud, that's been huge for you guys. Yeah, it's a government-only region that we have, that a lot of federal government workloads live in, and we are pretty close to having the offering FedRAMP-authorized to operate, which is a big deal and a game changer for governments, because then they'll be able to use the familiar tools they use in VMware, not just to run their workloads on premises but also in the cloud, with the data privacy and security requirements they need. So it's a real game changer for government too. Yeah.
And as you can see by the picture here, basically before the end of next year, everywhere that you are and have an availability zone, we're going to be there. Yup. Yeah — let's get with it, okay? We're a team; go faster. Okay. And you know, it's not just making it available, but the pace of innovation — you guys have really taught us a few things in this respect. Since we went live in the Oregon region, we've been on a quarterly cadence of major releases. M2 was really about mission-critical at scale, and we added our second region. We added our Hybrid Cloud Extension with M3. We moved to the global rollout and launched in Europe with M4, where we really added a lot of these mission-critical governance aspects and started to attack all of the industry certifications. And today we're announcing M5, right. And with that, I think we have a little cool thing to show. Two of the most important priorities for customers are cost and performance, and so we have a couple of things to talk about today that I think hit both of those. On the storage side, we've combined the elasticity of Amazon Elastic Block Store, or EBS, with VMware's vSAN, and we've now provided a storage option that is very high capacity and much more cost-effective. You'll start to see this initially on the VMware Cloud on AWS R5 instances, which are compute instances that are memory-optimized, and so this will change the cost equation: you'll be able to use EBS by default, and it'll be much more cost-effective for storage- or memory-intensive workloads. It's something that you guys have asked for — it's been very frequently requested — and it hits preview today.
And then the other thing is that we've worked really hard together to integrate VMware's NSX along with AWS Direct Connect, to have private, even higher-performance connectivity between on premises and the cloud. So, very exciting new capabilities that show the deep integration between the companies. Yeah, and that aspect of deep integration is really the thing that we committed to. We have large engineering teams that are working literally every day on bringing these platforms together, fusing them in a deep and intimate way, so that we can deliver new services — just like elastic DRS and the EBS capability — really powerful capabilities. And that pace of innovation will continue. So next, maybe M6? I don't know — we'll see. All right. But we're continuing this torrid pace of innovation: completing all of the capabilities of NSX, full integration for all of the Direct Connect capabilities, really expanding that, improving licensing capabilities on the platform, and we'll be adding PKS on top for expanded developer capabilities. So — oh, thank you. And we're continuing this pace of innovation going forward, but I think we also have a few other things to talk about today, Andy. Yeah, I think we have some news that hopefully people here will be pretty excited about. We have a pretty big database business at AWS — it's both on the relational and the non-relational side, and the business is billions of dollars in revenue for us. And on the relational side:
We have a service called Amazon Relational Database Service, or Amazon RDS, that hundreds of thousands of customers use, because it makes it much easier for them to set up, operate, and scale their databases. So many companies now are operating in hybrid mode and will be for a while, and a lot of those customers have asked us: can you give us the ease of manageability of those databases, but on premises? So we talked about it, we thought about it, we worked with our partners at VMware, and I'm excited to announce, right now, Amazon RDS on VMware. That will bring all the capabilities of Amazon RDS to VMware's customers for their on-premises environments. What you'll be able to do is provision databases and scale the compute, the memory, or the storage for those database instances. You'll be able to patch the operating system or the database engines. You'll be able to create read replicas to scale your database reads, and you can deploy those replicas either on premises or in AWS. You'll be able to deploy in a high-availability configuration by replicating the data to different VMware clusters. You'll be able to create online backups that live either on premises or in AWS. And then, if you eventually want to move those databases to AWS, you'll be able to do so rather easily — you have a pretty smooth path. This is going to be available in a few months. It will be available for Oracle, SQL Server, MySQL, PostgreSQL, and MariaDB. I think it's very exciting for our customers, and I think it's also a good example of how we're continuing to deepen the partnership, listening to what customers want and then innovating on their behalf. Absolutely. Thank you, Andy.
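The managed-database capabilities described above — provision, scale, patch, add read replicas, back up — can be sketched as a toy lifecycle model. This is purely illustrative Python, not the actual Amazon RDS API; every class and method name here is hypothetical.

```python
# Illustrative sketch of the managed-database lifecycle described above:
# provision, scale, patch, add read replicas, back up. All names here are
# invented for illustration, not the real Amazon RDS API.

class ManagedDatabase:
    def __init__(self, identifier, engine, cpu=2, memory_gb=8, storage_gb=100):
        self.identifier = identifier
        self.engine = engine          # e.g. "mysql", "postgresql", "mariadb"
        self.cpu = cpu
        self.memory_gb = memory_gb
        self.storage_gb = storage_gb
        self.replicas = []            # read replicas, on premises or in AWS
        self.backups = []
        self.patch_level = 0

    def scale(self, cpu=None, memory_gb=None, storage_gb=None):
        # Scale compute, memory, or storage independently.
        if cpu: self.cpu = cpu
        if memory_gb: self.memory_gb = memory_gb
        if storage_gb: self.storage_gb = storage_gb

    def patch(self):
        # The managed service patches the OS and database engine for you.
        self.patch_level += 1

    def create_read_replica(self, location):
        # Replicas can live on premises or in AWS to scale reads.
        replica = f"{self.identifier}-replica-{len(self.replicas) + 1}@{location}"
        self.replicas.append(replica)
        return replica

    def backup(self, destination):
        # Online backups can land on premises or in AWS.
        self.backups.append(destination)

db = ManagedDatabase("warehouse-db", "postgresql")
db.scale(cpu=4, memory_gb=16)
db.patch()
db.create_read_replica("on-premises")
db.create_read_replica("aws")
db.backup("aws")
```

The point of the sketch is only the shape of the announcement: one management surface for provisioning, scaling, patching, replication, and backup, wherever the database actually lives.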
It is thrilling to see this, and as we said when we began the partnership, it was a deep integration of our offerings and our go-to-market, but also building this bidirectional hybrid highway to give customers the capabilities where they want them: cloud to on-premises, on-premises to the cloud. It really is a unique partnership that we've built, with the momentum we're feeling through our customer base and the cool innovations that we're doing. Andy, thank you so much. You guys — appreciate it. Yeah, we really have just seen incredible momentum, and as you might have heard from the earnings call we just finished for last quarter, we really saw customer momentum here accelerating. It's really exciting to see how customers are starting to do the hybrid cloud at scale, and with this, we're seeing the VMware Cloud Foundation available on Amazon and available on premises — very powerful. But it's not just the partnership with Amazon. We are thrilled to see the momentum of our VMware Cloud Provider Program; this idea of the VMware cloud providers has continued to gain momentum in the industry for over five years. Right. This program has now accumulated more than 4,200 cloud partners in over 120 countries around the globe. It gives you choice: your local providers' specialty offerings, some of your local trusted partners, giving you the greatest flexibility to choose cloud providers that meet your unique business requirements. And last year we launched a program called VMware Cloud Verified, identifying the most complete embodiment of the VMware Cloud Foundation offering by our cloud partners in this program. This logo lets you know that a provider has achieved the highest standard for cloud infrastructure, and that you can scale and deliver your hybrid cloud in partnership with them.
In particular, we've been thrilled to see the momentum that we've had with IBM as a huge partner. Our business with them has grown extraordinarily rapidly — triple digits — not just in customer count, which is now over 1,700, but also in the depth of customers moving large portions of their workloads. And as you see by the picture, we're very proud of the scope of our partnerships on a global basis: the highest standard of hybrid cloud for you, the VMware Cloud Verified partners. Now, when we come back to this picture, we're growing in our definition of what the hybrid cloud means. Through VMware Cloud Foundation we've been able to unify the private and the public cloud together as never before, but we're also seeing that many of you are interested in how to extend that infrastructure further and farther, and we'll simply call that the edge. How do we move data center resources and capacity closer to where the data is being generated and the operations need to be performed? Simply put: the edge. We'll dig into that a little bit more, but as we do, one of the things we offer today, with what we just talked about with Amazon and our VCPP partners, is that they can consume the full VMware Cloud Foundation as a service — but today we're only offering that in the public cloud. Project Dimension changes that: Project Dimension allows us to extend VMware Cloud Foundation, delivered as a service, across private, public, and the edge. Today we're announcing the tech preview of Project Dimension — VMware Cloud Foundation in a hyperconverged appliance. We've partnered deeply with Dell EMC and Lenovo as the first partners to bring this to the marketplace, built on that same proven infrastructure with a hybrid cloud control plane — so, literally, just like we're managing VMware Cloud today, we're able to do that for your on-premises environment.
Your small or remote office, or your edge infrastructure, gets that exact same as-a-service management and control plane: a complete VMware-operated, end-to-end environment. This is Project Dimension — taking the VCF stack, the full VMware Cloud Foundation stack, and making it available in the cloud, at the edge, and on premises as well, a powerful solution operated by VMware. Project Dimension gives us a fundamental building block in our approach to making customers even more agile, flexible, and scalable, and it's a key component of our strategy as well. So let's click into that edge a little bit more. We think about the edge in the following layers. First, the compute edge: how do we get the data, operations, and applications closer to where they need to be? If you remember, last year I talked about this pendulum swinging between centralization and decentralization — the edge is a decentralization force. We're also excited that we're moving to the edge with devices as well, and we're doing that in two ways: one with Workspace ONE for human-optimized devices, and the second with Project Pulse — VMware Pulse — for IoT devices. Today we're announcing Pulse 2.0, where you can consume it as a service, with integrated security, and we've now scaled Pulse to support 500 million devices. Isn't that incredible? I mean, this is getting to scale — billions and billions. And finally, networking is a key component. We're stretching the networking platform and evolving how that edge operates in a more cloud-like, as-a-service way, and this is where NSX SD-WAN with VeloCloud is such a key component of delivering edge network services. Taken together — the device side, the compute edge, and rethinking and evolving the networking layer — that is the VMware edge strategy in summary. We see businesses are on this multicloud journey, right?
How do we bring their private and public clouds together in the hybrid cloud? But they're also on a journey for how they work and operate across the public clouds. In the public cloud we have this torrid innovation — you know, Andy's here, and he's announcing, what, 1,500 new services? — extraordinary innovation, and the same is true for Azure, or Google, or IBM Cloud. But it also creates complexity, as we said. Businesses are using multiple public clouds — and how do I operate them? How do I make them work? How do I keep track of my accounts and users? That creates a set of cloud operations problems as well. How do you make it work, right? And for that, we see these common themes of cloud cost, compliance, and analytics that keep coming up, and we're seeing in our customers that a new role is emerging: the cloud operations role — the person who's figuring out how to make these multicloud environments work, and keeping track of who's using what and which data is landing where. Today, I'm thrilled to tell you that VMware is acquiring the leader in this space: CloudHealth Technologies. Thank you. CloudHealth Technologies today supports Amazon, Azure, and Google. They have some 3,500 customers, including some of the largest and most respected brands in the as-a-service industry, and it's a SaaS business with a rapidly expanding feature set. We will take CloudHealth and make it a fundamental platform and branded offering from VMware. We will add many of the other VMware components into this platform, such as our Wavefront analytics and our CloudCoreo compliance, and many of the other VMware products will become part of the CloudHealth suite of services. We will be enabling that through our enterprise channels as well as through our MSP and VCPP partners.
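The cloud-operations problem described above — keeping track of who's using what, and how much it costs, across several public clouds — can be illustrated with a minimal cost roll-up sketch. The data, field names, and function here are invented for illustration; this is not the CloudHealth API.

```python
# Minimal sketch of a multicloud cost roll-up of the kind a cloud
# operations team needs: take line items from several providers and group
# spend by team, so it can be tracked in one place. Data and field names
# are hypothetical.

line_items = [
    {"cloud": "aws",   "team": "retail",   "usd": 1200.0},
    {"cloud": "azure", "team": "retail",   "usd": 300.0},
    {"cloud": "gcp",   "team": "research", "usd": 450.0},
    {"cloud": "aws",   "team": "research", "usd": 150.0},
]

def cost_by_team(items):
    # Aggregate spend per team across all clouds.
    totals = {}
    for item in items:
        totals[item["team"]] = totals.get(item["team"], 0.0) + item["usd"]
    return totals

print(cost_by_team(line_items))  # {'retail': 1500.0, 'research': 600.0}
```

A real platform layers policy, anomaly detection, and per-cloud billing adapters on top of this kind of aggregation, but the core value is the single normalized view.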
Simply put, we will make CloudHealth the cloud operations platform of choice for the industry. I'm thrilled today to have Joe Kinsella, the CTO and founder, here. Joe, please stand up. Thank you, Joe. To you and your team of a couple hundred, mostly in Boston: welcome to the VMware family, the VMware community. It is a thrill to have you part of our team. Thank you, Joe. We're also announcing today — and you can think of this much like we had vRealize Operations and vRealize Automation — the complement to CloudHealth's operations: VMware Cloud Automation. Some of you might have heard of this in the past as Project Tango. Today we're announcing the initial availability of VMware Cloud Automation services: assemble and manage complex applications, automate their provisioning and cloud services, and manage them through a brokerage. With today's initial availability of Cloud Automation services and the acquisition of CloudHealth as a platform, VMware offers the most complete set of multicloud management tools in the industry — and we're going to do so much more. So we've seen this picture of the multicloud journey that our customers are on, and we're working hard to bridge across these worlds of innovation in the multicloud world. We're doing many other things — you're going to hear a lot at the show this year. We're also giving a tech preview of the VMware Cloud Marketplace for our partners and customers, and today Dell Technologies is announcing their Cloud Marketplace, providing self-service access to a portfolio of Dell EMC technologies. We're fundamentally in a unique position to accelerate your multicloud journey. So we've built out this "any cloud" piece, but right in the middle of any cloud is the network. And when we think about the network, we're just so excited about what we have done and what we're seeing in the industry, so let's click into this a little bit further.
We've gotten a lot done over the last five years in networking. Look at these numbers: 80 million switch ports have been shipped. We are now 10x larger than number two in software-defined networking. We have over 7,500 customers running on NSX, and maybe the stat that I'm most proud of: 82 percent of the Fortune 100 has now adopted NSX. You have made NSX the standard in software-defined networking. Thank you very much. Thank you. When we think about this journey that we're on, we started by saying, hey, we've got to break the chains inside of the data center, as we said, and then NSX became the software-defined networking platform. We started to deliver it through our cloud provider partners — IBM made a huge commitment to partner with us and deliver this to their customers. We then said, boy, we're going to make it fundamental to all of our cloud services, including AWS. We built this bridge called the Hybrid Cloud Extension. We said we're going to build it natively into what we're doing with telcos, and with Azure and Amazon as a service. We acquired the SD-WAN leader VeloCloud — right now the hottest product in VMware's portfolio — for the opportunity to fundamentally transform branch and wide-area networking, and we're extending it to the edge. Literally, the world has become this complex network. We have seen the world go from the old model defined by rigid boundaries — simply put, in a distributed world, hardware-defined networking cannot possibly work. We're empowering customers to secure their applications and data regardless of where they sit. And when we think of the Virtual Cloud Network, we say it's these three fundamental things: a cloud-centric networking fabric, with intrinsic security, and all of it delivered in software. The world is moving from data centers to centers of data, and they need to be connected — and NSX is the way that we will do that. Now, VMware is well known not just for talking but also for showing.
So no VMworld keynote is okay without great demonstrations, because you shouldn't believe me — only what we can actually show. And to do that, I'm going to have our CTO come on stage. I used to be a CTO, and the CTO is the certified smart guy; he's also known as the chief talking officer, and today he's my demo partner. Please welcome VMware CTO Ray O'Farrell to the stage. Good morning, Pat — how are you doing? It's great, Ray, and thanks so much for joining us. Now, I promised that we're going to show off some pretty cool stuff here. We've covered a lot already — are you up to the task? We're going to try to run through a lot of demos, we're going to do it fast, and you're going to have to keep me on time. If I ask an awkward question, slow me down. Okay — that's my fault if you run long. Okay, I got it. Let's jump right in. So, as a CTO I get to meet lots of customers. A few weeks ago I met the CIO of a large distribution company, and she described her IT infrastructure as consisting of a number of central data centers, but she also spoke of a large number of warehouses globally, and each of these has local hyperconverged compute and storage, primarily running surveillance and warehouse-management applications. And she posed me four questions. The first question she asked: how do I migrate one of these data centers to VMware Cloud on AWS? I want to get out of one of these data centers. Okay — sounds like exactly what Andy and I were just talking about, what you just spoke to a few moments ago. She also wanted to simplify the management of the infrastructure in the warehouses themselves — the aged, smaller data centers that she has out there. Her applications at the warehouses needed to run locally, but her developers wanted to develop using cloud infrastructure and cloud APIs — a little bit like the RDS announcement we just spoke about.
Her final question was looking to the future: make all this complicated management go away. I want to be able to focus on my applications — that's what my business is about. So give me some new ways to automate all of this infrastructure, from the edge to the cloud. Sounds pretty clear — can we do it? Yes we can. So we're going to dive right into the first of these demos: VMware Cloud on AWS, the best solution for accelerating the public cloud journey. Can we start the demo, please? What you're looking at here is one of those data centers, and you should be familiar with this product — it's the familiar vSphere client. You see it's got a bunch of virtual machines running in there; these are the virtual machines that we now want to migrate and move to VMC on AWS. So we're going to go through that migration right now, and to do that we use a product that you've seen already: HCX. However, HCX has gotten some cool new features since the last time we demoed it, probably on this stage last year. One of those in particular is how we do bulk migration — we want to move the data center en masse — and the concept here is Cloud Motion with vSphere Replication. What this does is replicate the underlying storage of the virtual machines using vSphere Replication, so when you want to do the final migration, it actually becomes a vMotion. That's what you see going on right here: the replication is in place, and now, when you actually want to move those virtual machines, what you do is a vMotion. The key thing to think about is that this is an actual vMotion — those VMs, as they're migrating, remain live, just as they would in a vMotion across one particular infrastructure. So you can do a complete application or data center migration with no downtime.
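The replicate-first, cut-over-later pattern described in the demo — storage is replicated continuously ahead of time, so the final move per VM is a short live migration — can be sketched abstractly. This is a toy model of the pattern, not HCX itself; all names here are invented.

```python
# Toy model of the bulk-migration pattern described above: the underlying
# storage is replicated in the background, so the final cutover for each VM
# is a short live move rather than a full copy. Names are invented; this is
# not the HCX API.

class VM:
    def __init__(self, name, disk_gb):
        self.name = name
        self.disk_gb = disk_gb
        self.replicated_gb = 0
        self.site = "on-prem"

    @property
    def in_sync(self):
        # Cutover is cheap only once the disk is fully replicated.
        return self.replicated_gb >= self.disk_gb

def replicate(vms):
    # Phase 1: background replication of the underlying storage.
    for vm in vms:
        vm.replicated_gb = vm.disk_gb

def cutover(vms):
    # Phase 2: per-VM live migration; only allowed once storage is in
    # sync, which is what keeps the VM running throughout the move.
    for vm in vms:
        assert vm.in_sync, f"{vm.name}: storage not yet replicated"
        vm.site = "vmc-on-aws"

fleet = [VM("web-01", 80), VM("db-01", 500)]
replicate(fleet)
cutover(fleet)
```

The design point the demo makes is exactly this two-phase split: the long, bandwidth-bound work happens before the cutover, so the cutover itself carries no downtime.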
It's a standard vMotion kind of experience. Wow, that is really impressive. That's correct. So, one of the other things to talk about here: as we are moving these virtual machines from the on-prem infrastructure to the VMC on AWS infrastructure — unfortunately, when we set up the cloud on VMC on AWS, we only set up four hosts, and that might not be enough, because she is going to move the whole infrastructure of that data center. Now, this is something you and Andy referred to briefly earlier: this concept of elastic DRS. What elastic DRS does is allow VMC on AWS to react to the workloads as they're being created and pulled onto that infrastructure, and automatically pull new hosts into the VMC infrastructure along the way. So what you're seeing here is essentially VMC growing the infrastructure to meet the needs of the workloads themselves. Very cool. Alongside elastic DRS, we also see the EBS capabilities — again, you guys spoke about this too. This is the ability to take the huge amount of storage that Amazon has in EBS and front it with vSAN: you get the same experience as vSAN, but with this enormous amount of storage capability behind it. Wow, that's incredible. I'm excited about this — this is going to enable customers to migrate faster and larger than ever before. Correct. Now, she had a series of questions, and her second question was: what about all those data centers and those edge applications that I did not move? This is where we introduce the project which you've heard of already tonight, called Project Dimension. What this does is give you the simplicity of VMware Cloud, but bring it out to the edge. What's basically going on here is that VMC on AWS is a service which manages your infrastructure in AWS.
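Elastic DRS as described above — the cluster automatically pulls in new hosts as workload demand grows — can be captured in a minimal scale-out policy sketch. The threshold and per-host capacity numbers here are hypothetical illustrations, not the real service's policy.

```python
# Minimal sketch of an elastic-DRS-style scale-out policy: when cluster
# utilization crosses a threshold, keep adding hosts until demand fits.
# The capacity and threshold numbers are hypothetical illustrations.

HOST_CAPACITY_VMS = 25      # assumed VMs a single host can serve
SCALE_OUT_THRESHOLD = 0.8   # add hosts above 80% utilization

def hosts_needed(current_hosts, running_vms):
    hosts = current_hosts
    # Grow one host at a time until utilization drops below the threshold.
    while running_vms / (hosts * HOST_CAPACITY_VMS) > SCALE_OUT_THRESHOLD:
        hosts += 1  # the service pulls another host into the cluster
    return hosts

# A lightly loaded 4-host SDDC stays at 4 hosts; absorbing a large
# migration grows it automatically.
print(hosts_needed(4, 70))   # 4
print(hosts_needed(4, 150))  # 8
```

A real policy also scales back in and damps oscillation, but the core idea in the demo is just this: capacity follows demand without the operator sizing the cluster up front.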
We now stretch that service out into your infrastructure — into your data center and to the edge — allowing us to manage that infrastructure in the same way. Once again, let's dive down into a demo and take a look at what this looks like. What you've got here is a familiar series of services available to you, one of which is Project Dimension. When you enter Project Dimension, you first get a view of all of the different infrastructure that you have available: your data centers, your edge locations. You can then dive deeply into one of these to get a closer look at what's going on. Here we're diving into one of these warehouses, and we see a problem: there's a networking problem going on in this warehouse. How do we know? We know because VMware is running this as a managed service. We are directly monitoring your infrastructure, and when we discover something going wrong, we automatically create the service request, so somebody is dealing with it. You have visibility into what's going on, but the VMware managed service is already chasing the problem for you. Oh, very good. So now we're seeing this dispersed infrastructure with Project Dimension — but what's running on it? Well, before we get to what's running on it, you've got another problem: if you're managing a lot of infrastructure like this, you need to keep it up to date, and once again this is where the VMware managed service kicks in. We manage that infrastructure in terms of patching it and updating it for you. As an example, when we release a security patch — here's one for the recent L1 Terminal Fault — the VMware managed service is already on it, making sure that your on-prem and edge infrastructure is up to date. Very good. Now, what's running? Okay.
So, what's running? We mentioned this case of software running at the edge infrastructure itself — workloads which are running locally in those edge locations. This is a surveillance application; you can see it here at the bottom, it says "warehouse safety monitor." This is an application which gathers images and then puts them in a database — you can see the MySQL database on top there. Now, this is where we leverage the technology you just learned about when Andy and Pat spoke about the ability to take RDS and run it on your on-prem infrastructure. The block of virtual machines at the moment are the RDS components from Amazon, running in your infrastructure or in your edge location. This gives you the ability to let your developers leverage and operate against those APIs, while the actual database and its infrastructure run on prem. You might be doing that for performance reasons, because of latency, or you might be doing it simply because this data center is not always connected to the cloud. When you take a look under the hood and see what's going on, what you actually see is vSphere — a modified version of vSphere. You see this new concept of a custom availability zone: that is the availability zone running on your infrastructure, which supports RDS. What's more interesting is when you flip back to the Amazon portal — which is typically what your developers are going to do — once again you see an availability zone in your Amazon portal. This is the availability zone running on your equipment in your data center. So we've truly taken that RDS infrastructure and moved it to the edge, so the developer sees what they're comfortable with and the infrastructure team sees what they're comfortable with, bridging those two worlds. Fabulous. Right. So the final question, of course, was: what's next?
How do I begin to look to the future and say: I want all of my infrastructure handled in an automated fashion? When you think about that, one of the questions is how we leverage new technologies such as AI and ML to do it. So what you've got here — sorry, we're running a little bit late — what you've got here is how we blend AI and ML with the power of what's in the data center itself. And we can do that. We're bringing you AI and ML, fusing them together as never before, to truly change how the data center operates. Correct. And it is this merging of these things together which is extremely powerful in my mind. This is a little bit like a self-driving vehicle. Think about a car driving down the street — a self-driving vehicle. It is consuming information from all of the environment around it: other vehicles, what's happening, everything down to the weather. But it also has a lot of built-in knowledge, built up through self-learning and training along the way. And we've been collecting lots of that data for decades — exactly — from all the infrastructure that we have, and we can now bring that to bear. So what we're focusing on here is a project called Project Magna. Project Magna leverages all of this infrastructure. What it does is help connect the dots across huge datasets, gaining deep insight across the stack — all the way from the application to the hardware and infrastructure, to the public cloud, and even the edge. It leverages hundreds of control points to optimize your infrastructure on KPIs of cost and performance, even user-specified policies. This is the use of machine language — I'm sorry, machine learning; I'm going back to my early days there, right — this is the use of machine learning and AI to fundamentally transform how these data centers operate.
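The optimization loop Magna is said to perform — tuning many control points against KPIs like cost and performance — can be illustrated with a toy greedy-search sketch. The knobs, bounds, and scoring function below are invented for illustration; this is not how Project Magna actually works internally.

```python
# Toy illustration of KPI-driven tuning over infrastructure "control
# points": a greedy search nudges each knob and keeps any change that
# improves the score. Knobs, bounds, and the scoring function are invented
# for illustration only.

def score(knobs):
    # Hypothetical KPI: more cache helps; read-ahead is best near 8.
    return knobs["cache_gb"] * 2 - abs(knobs["read_ahead"] - 8)

def tune(knobs, steps=20):
    knobs = dict(knobs)
    for _ in range(steps):
        improved = False
        for key in knobs:
            for delta in (-1, 1):
                trial = dict(knobs)
                trial[key] += delta
                # Keep the change only if it stays in bounds and improves
                # the KPI score.
                if 0 <= trial[key] <= 16 and score(trial) > score(knobs):
                    knobs, improved = trial, True
        if not improved:
            break  # converged: no single-knob change helps
    return knobs

best = tune({"cache_gb": 4, "read_ahead": 2})
print(best)  # {'cache_gb': 16, 'read_ahead': 8}
```

The real system would learn the scoring function from telemetry rather than have it written down, but the shape is the same: many small control points, continuously pushed toward a KPI target.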
The goal is true automation of your infrastructure, so you get to focus on the applications which really serve the needs of your business. Yeah. And maybe you could think about it like this: in the past we would have described the software-defined data center, but in the future we're calling it the self-driving data center. We are taking that same acronym and redefining it, because the self-driving data center is the deep infusion of AI and machine learning into the management and automation — into the storage, into the networking, into vSphere — redefining the SDDC. And with that, we believe this will fundamentally be an enormous advance in how customers can take advantage of new capabilities from VMware. Correct. And you're already seeing some of this in pieces of projects, such as some of the stuff we do in Wavefront; Project Magna is how we take this to a new level. So let's summarize what we've seen in the demos here, working through each of them very quickly. First of all, you saw VMware Cloud on AWS: how do I migrate an entire data center to the cloud with no downtime? Check. We saw Project Dimension: get the simplicity of VMware Cloud in the data center and manage it at the edge as a managed service. Check. Amazon RDS on VMware — cool demo — seamlessly deploy a cloud service to an on-premises environment, in this case RDS. Yes, we've got that one coming in M5. And then finally, Project Magna: what happens when you're looking to the future — how do we leverage AI and ML to self-optimize the virtual infrastructure? Well, how did Ray do as our demo guy? Thank you. Thanks. Right. Thank you.
So coming back to this picture, our GPS for the day: we've covered any cloud, so let's click into any application now. As we think about any application, we really view it as this breadth of the traditional, the cloud-native, and SaaS. Kubernetes is quickly, maybe spectacularly, becoming seen as the consensus way that containers will be managed and automated, as the framework for how modern app teams are looking at their next-generation environment, quickly emerging as key to how enterprises build and deploy their applications today. And containers are efficient, lightweight, and portable. They have lots of value for developers, but they also need to be run and operated, and they have many infrastructure challenges as well: managing automation, patch lifecycle updates, efficient rollout of new application services. All of that can be accelerated with containers, but we also have these infrastructure problems, and one thing we want to make clear is that the best way to run a container environment is on a virtual machine. In fact, every leader in public cloud runs their containers in virtual machines. Google, the creator and arguably the world leader in containers, runs them all in virtual machines, both for their internal IT and for what they run as GKE for external users. They just announced GKE On-Prem on VMware for their container environments. Google and all major clouds run their containers in VMs, and simply put, it's the best way to run containers. Through what we have done collectively, we have solved the infrastructure problems, and as we saw earlier, cool new container apps are also typically some combination of cool new services and legacy, existing environments. How do we bridge those two worlds? And today, as people rapidly move forward with containers and Kubernetes, we're seeing a certain set of problems emerge.
And Dan Kohn, the director of the CNCF, the Cloud Native Computing Foundation, the body for Kubernetes collaboration and the group that stewards the standardization of this capability, points out these four challenges: how do you secure them, how do you network them, how do you monitor them, and what do you do for the storage underneath them? Simply put, VMware is out to be, is working to be, is on our way to be the dial tone for Kubernetes. Now, some of you who are in your twenties might not know what that means, so go over to a gray-hair or come and see me afterward; we'll explain what dial tone means. Or, stated differently: the enterprise-grade standard for Kubernetes. And for that, we are working together with our partners at Google as well as Pivotal to deliver VMware PKS, Kubernetes as an enterprise capability. It builds on BOSH, the lifecycle engine that's foundational to the Pivotal offerings today. It builds on, and is committed to staying current with, the latest Kubernetes releases. It builds on NSX, the SDN for container networking, and on additional contributions we're making, like Harbor, the VMware open source contribution for the container registry. It packages those together and makes them available on hybrid cloud as well as public cloud environments. With PKS, operators can efficiently deploy, run, and upgrade their Kubernetes environments on the SDDC or on all major public clouds, while developers have the freedom to embrace and run their applications rapidly and efficiently. Simply put, PKS: the standard for Kubernetes in the enterprise. And underneath that, NSX is emerging as the standard for software-defined networking. But when we think about that quote on the challenges of Kubernetes today, we see that networking is one of the huge challenges underneath, and in a containerized world, things are changing even more rapidly.
My network environment is moving more quickly. NSX provides the environment to easily automate networking and security for rapid deployment of containerized environments. It fully supports VMware PKS, fully supports Pivotal Application Service, and we're also committed to fully support all of the major Kubernetes distributions, such as Red Hat, Heptio, and Docker as well. NSX: the only platform on the planet that can address the complexity and scale of container deployments. Taken together, VMware PKS: the production-grade Kubernetes for the enterprise, available on hybrid cloud, available on major public clouds. Now, let's not just talk about it again; let's see it in action. Please walk up to the stage with Ray: Wendy Cartee, the senior director of cloud native marketing for VMware. Thank you. Hi everybody. So we're going to talk about PKS, because more and more new applications are built using Kubernetes and using containers. With VMware PKS, we get to simplify the deployment and the operation of Kubernetes at scale. You're the expert on all of this, right? So can you take us through the scenario of how VMware PKS can really help a developer operating in a Kubernetes environment develop great applications, but also, from an administrator's point of view, how I can really handle things like networking, security, and those configurations? Sounds great. I'd love to dive into the demo here. Okay. Our demo is VMware PKS running Kubernetes on vSphere. Now, PKS has a lot of cool functions built in, one of which is NSX, and today what I'm going to show you is how NSX will automatically bring up network objects as Kubernetes namespaces are spun up. So we're going to start with the vSphere client, which has been extended to run PKS-deployed Kubernetes clusters. We're going to go into PKS instance one, and we see that there are five clusters running.
We're going to select one of the clusters, called application production, and we see that it is running NSX. Now, a cluster typically has multiple users, and users are assigned namespaces; these namespaces are essentially a way to provide isolation and dedicated resources to the users in that cluster. So we're going to check how many namespaces are running in this cluster. We've brought up the Kubernetes UI; we're going to click on namespaces, and we see that this cluster currently has four namespaces running. What we're going to do next is bring up a new namespace and show that NSX will automatically bring up the network objects required for that namespace. To do that, we're going to upload a YAML file, and your developer may use a kubectl command to do this as well. We're going to check the namespaces, and there it is: we have a new namespace called pks-rocks. Yeah. Okay. It's great, we have a new namespace, and now we want to make sure it has the network elements assigned to it, so we're going to go to the NSX manager and hit refresh, and there it is: pks-rocks has a logical router and a logical switch automatically assigned to it, and it's up and running. So I want to interrupt here, because you made this look so easy, and I'm not sure people realize the power of what happened here. The developer went in using the Kubernetes API infrastructure they're familiar with and added a new namespace, and behind the scenes PKS and NSX-T took care of the networking: a combination of NSX and of what we do in PKS to truly automate this function. Absolutely. So this means that if you are on the infrastructure operations side, you don't need to worry about your developers spinning up namespaces, because NSX will take care of bringing the networking up, and then bringing it back down when the namespace is no longer used. So Ray, that's not all.
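The pattern in this demo, namespace events triggering network automation, can be sketched as a toy controller. The object names are modeled on what the demo shows, but this is an in-memory illustration, not the NSX or Kubernetes API.

```python
# Toy controller: react to namespace lifecycle events by creating or
# tearing down network objects, in the spirit of the PKS/NSX demo.
# Hypothetical sketch only; no real NSX or Kubernetes calls are made.

class NetworkManager:
    def __init__(self):
        self.objects = {}                       # namespace -> network objects

    def on_namespace_added(self, ns):
        # NSX-style automation: each namespace gets a logical router + switch
        self.objects[ns] = {"logical_router": f"lr-{ns}",
                            "logical_switch": f"ls-{ns}"}

    def on_namespace_deleted(self, ns):
        self.objects.pop(ns, None)              # tear networking back down

mgr = NetworkManager()
mgr.on_namespace_added("pks-rocks")
print(mgr.objects["pks-rocks"]["logical_router"])   # lr-pks-rocks
```

In the real system, the controller would watch the Kubernetes API server for namespace events and drive the NSX manager; the toy above only captures the event-driven shape of that automation.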
Now, I was in operations before, and I know how hard it is for enterprises to roll out a new product without visibility. Right, so PKS takes care of those day-two operational needs as well. While it's running your clusters, it's also exporting metadata, so that your developers and operators can use Wavefront to gain deep visibility into the health of the cluster, as well as the resources consumed by the cluster. So here you see the Wavefront UI, and it's showing you the number of nodes running, active pods, inactive pods, et cetera. You can also dive deeper into the analytics and take a look at the information by namespace, so you see pks-rocks there, and you see the number of active nodes running as well as the CPU utilization and memory consumption of that namespace. So now pks-rocks is ready to run containerized applications and microservices. So you've just given us a quick highlight of a demo here, a little bit of what PKS is. Where can we learn more? We'd love to show you more; please come by the booth, where we have more cool functions running on PKS, and we'd love to have you come by. Excellent. Thank you, Wendy. Thank you. Yeah, so when we look at these types of workloads now running on vSphere, containers, Kubernetes, we also see a new type of workload beginning to appear, and these are workloads which are basically machine learning and AI. In many cases they leverage a new type of infrastructure: hardware accelerators, typically GPUs. What we're going to talk about here is how NVIDIA and VMware have worked together to give you the flexibility to run sophisticated VDI workloads, but also to leverage those same GPUs for deep learning inference workloads, also on vSphere. So let's dive right into a demo here.
What you're looking at here is your standard vRealize Operations product, and you see we've got two sets of applications: a VDI desktop workload and machine learning. The graph is showing what's happening with the VDI desktops. These are office workers leveraging these desktops every day, so of course the infrastructure is super busy during the daytime when they're in the office, but the green area shows it's not being used very heavily outside of those times. So let's take a look at what happens with the machine learning application. In this case, the organization leverages those available GPUs to run the machine learning operations outside the normal working hours. Let's take a little bit of a deeper dive into what the application is before we see what we can do from an infrastructure and configuration point of view. So, this machine learning application processes a vast number of images, and it categorizes these images, and as it does so, it puts each of them in a database, and you can see it's operating here relatively fast, leveraging some GPUs to do that. A typical image-processing type of machine learning problem. Now let's dive in and look at the infrastructure which is making this happen. First of all, we're going to look only at the VDI infrastructure here. I've got a bunch of these VDI applications running. What I want to do is move these so that I can make this image-processing application run a lot faster. Now, normally you wouldn't do this, but Pat insisted that we do this demo at 10:30 in the morning, when the office workers are in the office, so we're going to move all the VDI workloads over to the other cluster, and that's what you're seeing going on right now. As they move over to this other cluster, we are freeing up all of the infrastructure.
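The scheduling idea behind this demo, sharing a fixed GPU pool between daytime VDI and off-hours machine learning, can be sketched as a toy time-based allocator. The pool size and office hours below are invented for illustration and are not a vSphere API.

```python
# Toy time-based GPU scheduler (hypothetical numbers, not a vSphere feature).

TOTAL_GPUS = 4   # fixed GPU pool shared by both workloads
VDI_GPUS = 3     # the desktops' share during office hours

def allocate(hour):
    """Return (vdi_gpus, ml_gpus) for a given hour of the day (0-23)."""
    if 9 <= hour < 18:                       # office hours: desktops first
        return VDI_GPUS, TOTAL_GPUS - VDI_GPUS
    return 0, TOTAL_GPUS                     # off hours: ML gets the pool

print(allocate(10))  # (3, 1): ML trains on one leftover GPU mid-morning
print(allocate(22))  # (0, 4): all four GPUs go to ML overnight
```

The demo does this reallocation manually (moving VDI workloads off a cluster); the sketch just shows the time-windowed policy such sharing implies.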
The GPUs that the VDI workload was using here, we see them moving across, and now you've freed up that infrastructure. So now we want to look at the machine learning application itself and see how we can make use of that newly freed-up infrastructure. What we've got here is the application running using one GPU in a vSphere cluster, but I've got three more GPUs available now, because I've moved the VDI workloads. We simply modify the application, let it know that these are available, and you suddenly see an increase in processing capability because of the flexibility we've created in accessing those GPUs. So what you see here is that the same GPUs you use for VDI, which you probably have in your infrastructure today, can also be used to run sophisticated machine learning and AI types of applications on your vSphere infrastructure. So let's summarize what we've seen in the various demos in this section. First of all, we saw how VMware PKS simplifies the deployment and operation of Kubernetes at scale. We've also seen that, leveraging NVIDIA GPUs, we can now run the most demanding workloads on vSphere. When we think about all of these applications and these new types of workloads that people are running, I want to take one second to speak to another workload that we're seeing beginning to appear in the data center, and this is of course blockchain. We're seeing an increasing number of organizations evaluating blockchains for smart contract and digital consensus solutions. This technology is potentially going to play a critical role in how businesses will interact with each other, how they will work together. With Project Concord, which is an open source project that we're releasing today, you get the choice, performance, and scale of verifiable trust, which you can then bring to bear and run in the enterprise. But this is not just another blockchain implementation.
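For context on the consensus claim: Concord belongs to the Byzantine fault-tolerant family of protocols, where the classic sizing rule is that tolerating f faulty replicas requires at least 3f + 1 replicas in total. The arithmetic below is standard BFT theory, not taken from the Concord code.

```python
# Classic BFT replica sizing: n >= 3f + 1.

def min_replicas(faults):
    """Smallest cluster that tolerates `faults` Byzantine replicas."""
    return 3 * faults + 1

def max_faults(replicas):
    """Most Byzantine replicas an n-node cluster can tolerate."""
    return (replicas - 1) // 3

print(min_replicas(1))  # 4 replicas to survive one arbitrarily faulty node
print(max_faults(7))    # a 7-node cluster tolerates 2 such faults
```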
We have focused very squarely on making sure that this is good for enterprises. It focuses on performance; it focuses on scalability. We have seen examples where running consensus algorithms has taken over 80 days on some of the most common and widely used infrastructure in blockchain, and with Project Concord you can do that in two and a half hours. So I encourage you to check out this project on GitHub today. You'll also see lots of activity around the whole conference speaking about this. Now we're going to dive into another section, which is the any-device section, and for that I need to welcome Pat back up here. Thank you, Pat. Thanks, Ray. So, diving into the any-device piece of the puzzle: as we think about the superpowers that we have, maybe there is no area where they are more visible than in the any-device aspect of our picture. As we think about these superpowers, think about mobility, and how it's enabling new things like desktop-as-a-service; in the mobile area, this breadth of smartphones and devices; AI and machine learning that allow us to manage them and secure them; and this expanding envelope of devices at the edge that need to be connected, wearables, 3D printers, and so on. We've also seen increasing research that says engaged employees are at the center of business success. Engaged employees are the critical ingredient for digital transformation, and frankly this is how I run VMware. I have my device and my work, all my applications; every one of my 23,000 employees is running on our transformed Workspace ONE environment. Research shows that companies that give employees ready, anytime access are nearly three times more likely to be leaders in digital transformation, and that employees spend 20 percent of their time today on manual processes that can be automated.
The rate of team collaboration and the speed of decisions increase by 16 percent with engaged employees on modern devices. Simply put, this is a critical aspect of enabling your business. But you remember this picture, from the silos that we started with: each of these environments has its own tribal communities of management, security, and automation associated with it, and the complexity associated with these is mind-boggling. When we start to think about these, remember "I'm a PC, and I'm a Mac"? Well, now you have: I'm an iOS, I'm a Droid, and others; VDI; I'm now a connected printer; and I'm a connected watch. You remember Citrix management, and Good is now bad, and SCCM, a failed model, and VPNs, and Xanax. That chaos is now over. At the center of that is VMware Workspace ONE: get out of the business of managing devices, automate them from the cloud, but still have the enterprise-grade, secure, cloud-based analytics that bring new capabilities to this critical topic. You'll focus your energy on creating employee and customer experiences. New capabilities, like our AirLift, the new capability to help customers migrate from their SCCM environment to modern management, expanding the use of Workspace ONE Intelligence. Last year we announced the Chromebook and a partnership with HP, and today I'm happy to announce the next step in our partnerships, with Dell. Today we're announcing Dell Provisioning for VMware Workspace ONE as part of Dell's Ready to Work solutions. Dell is taking the next leap and bringing Workspace ONE into the core of their client offerings. The way you can think about this is, literally, Dell drop-ships a laptop to a new employee: day-one productivity. You give them their credential, and everything else is delivered by Workspace ONE: your image, your software, everything patched and upgraded. Transforming your business, beginning at that device experience that you give to your customer.
And again, we don't want to just talk about it; we want to show you how this works. Please walk to the stage with me, Renu, the head of our desktop products marketing. Thank you. So we just heard from Pat about how Workspace ONE, integrated with Dell laptops, is really set up to manage Windows devices. What we're broadly focused on here is how we get a truly modern management system for these devices, but one that has intelligence behind it, to make sure that we keep a good understanding of how to keep these devices always up to date and secure. Can we start the demo, please? So what we're seeing here is the front screen of Workspace ONE, and you see you've got multiple devices, a little bit like that demo that Pat showed. I've got iOS, Android, and of course I've got Windows. Renu, can you please take us through how Workspace ONE really changes the ability of an IT administrator to update and manage Windows in their environment? Absolutely. With Windows 10, Microsoft has finally joined the modern management party, and we are really excited about that. Now, the good news about modern management is the frequency of OS updates and how quickly they come out, because you can address all those security issues that are hitting our radar on a daily basis. But the bad news about modern management is also the frequency of those updates, because all of us IT admins have to test each and every one of our applications with that latest version, since we don't want to roll out an update that causes any problems. With Workspace ONE, we simply automate, and we provide you with the app compatibility information right out of the box, so you can now automate that update process. Let's take a quick look. Let's drill down further into the Windows devices. What we'll see is that only a small percentage of those devices are on the latest version of the operating system.
Now, that's not a good thing, because it might have an important security fix. Let's scroll down further and see what the issue is. We find that it's related to app compatibility: in fact, 38 percent of our devices are blocked from being upgraded, and the issue is app compatibility. Now, we were able to find that not by asking the admins to test each and every one of those apps; we combined Windows analytics data with app intelligence out of the box, and we provide that information right inside the console. Let's dig down further and see what those devices and apps look like. So, Renu, this is the part that I find most interesting. If I am a system administrator, at this point Workspace ONE is giving me a key piece of information: it says that if you proceed with this update, it's going to fail 85 percent of the time. So that's an important piece of information, but it's not only telling me that; it is telling me, roughly speaking, why it thinks it's going to fail. We've got a number of apps which are not ready to work with this new version, particularly the Mondo card sales lead tracker app. So what we need to do is get engineering to tackle the problems with this app and make sure that it's updated. So let's get fixing it. In order to fix it, we'll create an automation, and we can do this right out of the box. This automation will open up a Jira ticket right from within the console, to inform the engineers about the problem. Not just that: we can also flag and send a notification to the engineering manager, so that it's top of mind and they can get working on the fix right away. Let's go ahead and save that automation, and right here, Ray, you see: there's the automation that we just saved. So what's happening here is essentially that this update is now scheduled.
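The gating logic being described, holding an OS rollout while predicted app-compatibility failures are high and raising tickets for the offending apps, can be sketched like this. The threshold, app names, and ticket format are invented for illustration; no real Jira or Workspace ONE API is called.

```python
# Toy sketch of a compatibility-gated OS update (hypothetical logic).

def plan_update(apps, threshold=0.2):
    """Return ('blocked', tickets) or ('proceed', []) from app-compat data."""
    incompatible = [a for a in apps if not a["compatible"]]
    failure_rate = len(incompatible) / len(apps)
    if failure_rate > threshold:
        # Hold the rollout and open one (fake) ticket per broken app.
        tickets = [f"TICKET: fix {a['name']} for the new OS version"
                   for a in incompatible]
        return "blocked", tickets
    return "proceed", []

apps = [{"name": "mail client",    "compatible": True},
        {"name": "sales tracker",  "compatible": False}]
status, tickets = plan_update(apps)
print(status)  # blocked: 1 of 2 apps is incompatible, above the threshold
```

Once engineering fixes the flagged app and its `compatible` flag flips, the same check returns "proceed", mirroring how the demo unblocks the devices.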
We can go and update all those Windows devices, but Workspace ONE is holding the process from proceeding with that update, waiting for the engineers to update the app which is going to cause the problem. That's going to take them some time, right? So the engineers have been working on this, they have a fix, and let's go back and see what's happened to our devices. Going back into the OS updates, what we'll find is that we've now unblocked those devices from being upgraded; the 38 percent has drastically dropped down, and IT can rest easy knowing that all of the devices are compliant and on the latest version of the operating system. And again, this is just a snapshot of the power of Workspace ONE. To learn more and see more, I invite you all to join our EUC showcase keynote later this evening. Okay. So we've spoken about the presence of these new devices that IT needs to be able to manage and operate across everything that they do. But what we're also seeing is the emergence of a whole new class of computing device, and these are devices which we commonly speak of as being at the edge: embedded devices, or IoT. In many cases these will be in factories, they'll be in your automobiles, they'll be in buildings, controlling the building itself, air conditioning, et cetera, quite often in some form of industrial environment. There's something like this, where you've got a wind farm with compute embedded in each of these turbines. This is a new class of computing which needs to be managed and secured, and we think virtualization can do a pretty good job of that: a new virtualization frontier, right at the edge, for IoT and IoT gateways, and that's going to open up a whole new realm of innovation in that space. Let's dive down and take a look at the demo in this space. Well, let's do that.
What we're seeing here is a wind turbine farm, very different from the data centers we're used to, and all the compute infrastructure is being managed by vCenter. We see two edge gateway hosts, and they're running a very mission-critical safety watchdog VM right on there. Now, the safety watchdog VM is in FT mode, because it's collecting a lot of the important sensor data and running the mission-critical operations for the turbine. FT mode, or fault tolerance mode, is a pretty sophisticated virtualization feature allowing two applications to essentially run in lockstep, so if there's a failure, the second one takes over immediately. So this sophisticated virtualization feature can be brought all the way out to the edge. Exactly. So, just like in the data center, we want to perform an update. As we perform that update, the first thing we'll do is suspend FT on that safety watchdog. Next, we'll put host 205 into maintenance mode. Once that's done, we'll see the power of vMotion that we're all familiar with: we'll start to see all the virtual machines vMotion over to the second, backup host. Again, all the maintenance, all the updates, without skipping a heartbeat, without taking down any daily operations. So what we're seeing here is the basic power of virtualization being brought out to the edge: vMotion, maintenance mode, et cetera. Great, but what's the big deal? We've been doing that for years. Come on, what's the big deal? Well, you're on the edge. When you get to the edge, Pat, you're dealing with a whole new class of infrastructure. You're dealing with embedded systems and new types of CPUs and processors. This whole demo has been done on ARM64: virtualization brought to ARM64 for embedded devices. So we're doing this on ARM, on the edge. Correct, specifically focused on embedded, for edge OEMs. Okay. Now that's good. Thank you, Ray. Actually, we've got a summary here.
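The update choreography in this edge demo (suspend FT, enter maintenance mode, vMotion the VMs away, then patch) can be written down as a toy sequence. Host and VM names are invented, and this models only the order of operations, not the vSphere API.

```python
# Toy model of the edge maintenance sequence from the demo (hypothetical).

def maintenance_update(host, backup):
    """Drain `host` onto `backup` in the demo's order, returning the steps."""
    steps = [f"suspend FT on {host['ft_vm']}",
             f"enter maintenance mode on {host['name']}"]
    for vm in list(host["vms"]):
        backup["vms"].append(vm)            # vMotion: the VM never goes down
        steps.append(f"vMotion {vm} -> {backup['name']}")
    host["vms"] = []                        # source host is now empty
    steps.append(f"apply update on {host['name']}")
    return steps

host = {"name": "edge-205", "ft_vm": "safety-watchdog", "vms": ["vm1", "vm2"]}
backup = {"name": "edge-206", "vms": []}
for step in maintenance_update(host, backup):
    print(step)
```

The point the sketch captures is the ordering: fault tolerance is suspended first, every VM is live-migrated before the host is touched, and only then is the update applied, so daily operations never stop.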
Pat, just a second before you disappear; there's a lot to rattle off about what we've just seen. We've seen Workspace ONE cross-platform management, and we've also seen, of course, ESXi on ARM, bringing the power of ESXi to the edge on ARM64 embedded platforms. Okay. Thank you. Thanks. Now, we've seen a look at a customer who is taking advantage of everything that we just saw, and again, it's a story of a customer that is changing lives in a fundamental way. Let's see: Make-A-Wish. So, when a family gets the news that a child is sick, and it's a critical illness, it could be a life-threatening illness, the whole family is turned upside down. Imagine somebody comes to you and they say: what's the one thing you want that's in your heart? You tell us, and then we make that happen. So I was just calling to give you the good news that we're going to be able to grant Jackson a wish. Make-A-Wish is the largest wish-granting organization in the United States. Make-A-Wish was featured in a CBS 60 Minutes episode; interestingly, it got a lot of hits, but unfortunately for the IT team, the whole website crashed. Make-A-Wish is going through a program right now where we're centralizing technology and putting certain security standards in place at our chapters. So what you're seeing here is that we're configuring certain cloud services to make sure that they are always able to deliver on the mission, whether they have a local problem or not. As we continue to grow the partnership and work with VMware, it's enabling us to become more efficient in our processes and allows us to grant more wishes. There was a little girl; she had a two-year-old brother; she just wanted a puppy, and she was forthright: I want to name the puppy my name, so my brother will always have me. That, from a five-year-old. It's something we can't change, their medical outcome, but we can change their spiritual outcome, and we can transform their lives. Thank you.
Working together with you, we are truly making wishes come true. The last topic I want to touch on today, and maybe the most important to me personally, is security. Fundamentally, when we think about this topic of security, I'll say it's broken today, and we would just say that the industry got it wrong: we're bolting on, chasing bad. When we think about our security spend, we're spending more, and we're losing more. Every day we're investing more in this aspect of our infrastructure, and we're falling further behind. We believe that we have to have far fewer security products, and much more security. Fundamentally, if you think about the problem: we build infrastructure, generic infrastructure; we then deploy applications, all kinds of applications; and we're seeing all sorts of threats launched at us daily, tens of millions of them. Your simple virus scanner has tens of millions of rules, running and changing many times a day. We simply believe the security model needs to change. We need to move from bolted-on and chasing bad to an environment that has intrinsic security and is built to ensure good: this idea of built-in security. We are taking every one of the core VMware products, and we are building security directly into it. We believe that with this we can eliminate much of the complexity, many of the sensors and agents and boxes. Instead, they'll directly leverage the mechanisms in the infrastructure, and we're using that infrastructure to lock it down, to behave as we intended it to, to ensure good. On the user side, with Workspace ONE; on the network side, with NSX and microsegmentation; in storage, with native encryption; and on the compute side, with App Defense: we are building in security. We're not chasing threats or adding on; we're radically reducing the attack surface. When we look at our applications in the data center, you see this collection of machines running inside of it, right?
You know, typically running on vSphere, and those machines are increasingly connected through NSX. Last year we introduced a breakthrough security solution called App Defense. App Defense leverages the unique insight we get into the application, so that we can understand the application and map it into the infrastructure, and then you can take that understanding, that manifest of its behavior, and lock those VMs down to their intended behavior. And we do that without the operational and performance burden of agents and other backward-looking approaches to attack detection. We're shrinking the attack surface, not chasing the latest attack vector. And this idea of bolt-on versus chasing bad, you see it in the network. Machines have lots of connectivity, lots of applications running, and when something bad happens, it basically has unfettered access to move horizontally through the data center. Most of our security is north-south; most of the attacks are east-west. We introduced this idea of microsegmentation five years ago, and with it we're enabling organizations to secure their networks and separate sensitive applications and services as never before. The idea isn't new; it just was never practical before NSX. But we're not standing still. Our teams are innovating to leap beyond. What's next, beyond microsegmentation? We see it in three simple words: learn, lock, adapt. Imagine a system that can look into the applications and understand their behavior and how they should operate; we're using machine learning and AI, instead of chasing bad, to be able to ensure good. That system can then lock down the application's behavior, so the system consistently operates that way. But finally, we know we have a world of increasingly dynamic applications, and as we move to more containerized microservices, we know this world is changing, so we need to adapt. We need more automation, to adapt to the current behavior.
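The learn/lock/adapt cycle can be sketched as a tiny allowlist state machine. The `looks_benign` flag below stands in for the trained ML verdict that the product would supply; everything here is an illustrative toy, not App Defense's actual logic.

```python
# Toy learn / lock / adapt sketch (hypothetical, not the App Defense engine).

class BehaviorGuard:
    def __init__(self):
        self.allowed = set()    # learned manifest of intended behaviors
        self.locked = False

    def learn(self, behavior):
        """Learn: record observed behavior during the learning period."""
        if not self.locked:
            self.allowed.add(behavior)

    def lock(self):
        """Lock: freeze the manifest; new behavior now needs a verdict."""
        self.locked = True

    def check(self, behavior, looks_benign=False):
        """Adapt: admit drift an ML verdict calls normal, block the rest."""
        if behavior in self.allowed:
            return "allow"
        if looks_benign:                  # stand-in for the ML model's verdict
            self.allowed.add(behavior)    # adapt: update the policy in place
            return "allow"
        return "deny"                     # unknown and suspicious: block it

guard = BehaviorGuard()
guard.learn("svchost:port389")
guard.lock()
print(guard.check("svchost:port389"))     # allow: learned behavior
print(guard.check("svchost:port4444"))    # deny: never seen, no benign verdict
```

The "adapt" branch is what keeps false positives down for ever-changing applications: known-good drift updates the lock instead of raising an alarm.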
Today I'm very excited to have two major announcements that deliver on this vision. The first of those: vSphere Platinum. Our flagship VMware vSphere product now has App Defense built right in. Platinum will enable virtualization teams, yeah, go ahead, let's use it, Platinum will enable you, the virtualization teams, to make an enormous contribution to the security profile of your enterprise. You can see what a VM is for, its purpose, its behavior, and tell the system: that's what it's allowed to do. Dramatically reducing the attack surface, without impact on operations or performance. This capability is so powerful, so profound, that we want you to be able to leverage it everywhere, and that's why we're building it directly into vSphere: vSphere Platinum. I call it the burger and fries. Nobody leaves the restaurant without the fries; who would possibly run a VM in the future without turning security on? That's how we want this to work going forward: vSphere Platinum. And, as powerful as microsegmentation has been as an idea, we're taking the next step with what we call adaptive microsegmentation. We are fusing together App Defense and vSphere with NSX, to allow us to align the policies of the application through vSphere and the network. We can then lock down the network and the compute, and enable automation of the microsegment formation. Taken together: adaptive microsegmentation. But again, we don't want to just tell you about it; we want to show you. Please welcome to the stage Vijay, who heads our machine learning team for App Defense. Vijay, very good to have you; thanks for joining us. So, you know, I talked about this idea of being able to learn, lock, and adapt. Can you show it to us? Great, yeah, thank you. With vSphere Platinum, what we have done is put everything you need to learn, lock, and adapt right into the infrastructure. The next time you bring up your vSphere client, you'll actually see a difference right in there.
Let's go with that demo. There you go. And when you look at App Defense there, what you see is all your guest virtual machines and all your hosts, hundreds of them and thousands of virtual machines, enabled for App Defense. It's in there. And what that does is immediately get you visibility into the processes running on those virtual machines, and the risk. For the first time, think about it, for the first time you're looking at the infrastructure through the lens of an application. Here, for example, the e-commerce application: you can see the components that make up that application, how they interact with each other, the specific process, a specific IP address on a specific port. That's what you get. But so we're learning the behavior? Yes. Yeah, that's very good. But how do you make sure you only learn good behavior? Exactly. How do we make sure that it's not bad? We actually verify and ensure it's all good. We ensure that every process's reputation is verified. We ensure that the behavior is verified. Let's go to svchost, for example. This process can exhibit hundreds of behaviors across numerous machines. What we do here is actually verify each of those behaviors, with machine learning models that have been trained on millions of instances of good and bad behavior, and then automatically verify them for you. Okay, so we said learn. Simple. We learn. Now, lock. How does that work? Well, once you've learned the application, locking it is as simple as clicking on that verify-and-protect button, and then you can lock both the compute and the network, and it's done. So we've pushed those policies into NSX, microsegmentation has been established, and we've actually locked down the compute. And the operating system as well? Exactly. Let's first look at compute: we've protected the processes, and the behaviors are locked down to exactly what is allowed for that application. And we have baked in the policies and programmed your firewall.
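The learn-then-lock flow in the demo can be sketched as a small state machine: during a learning window every observed behavior goes into an allowlist; after the "verify and protect" click, the manifest is frozen and anything outside it raises an alert. This is a hedged sketch of the concept only; the class and method names are illustrative, not the App Defense API.

```python
# Sketch of "learn, lock": record behaviors while learning, then freeze the
# manifest and flag anything outside it. (Illustrative names, not VMware code.)

class BehaviorLock:
    def __init__(self):
        self.allowed = set()   # (process, remote_ip, port) tuples seen while learning
        self.locked = False

    def observe(self, process, remote_ip, port):
        """During learning, record the behavior; after locking, flag deviations."""
        behavior = (process, remote_ip, port)
        if not self.locked:
            self.allowed.add(behavior)
            return "learned"
        return "allowed" if behavior in self.allowed else "alert"

    def lock(self):
        """The 'verify and protect' step: freeze the learned manifest."""
        self.locked = True


lock = BehaviorLock()
lock.observe("svchost", "10.0.0.5", 389)            # learning period
lock.lock()
print(lock.observe("svchost", "10.0.0.5", 389))     # allowed
print(lock.observe("svchost", "203.0.113.9", 445))  # alert
```

In the real product the frozen manifest is also pushed into NSX as firewall policy, which is the "lock both the compute and network" step described above.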
This is NSX being configured automatically for you, with one single click. Very good. So we said learn, lock. Now, how does this adapt thing work? Well, as they say, change is the only constant, and modern applications change on a continuous basis. What we do is actually pretty simple. We look at every change as it comes in and determine whether it is good or bad. If it's good, we allow it and update the policies. If it's bad, we deny it. Let's look at an example. This process is exhibiting a behavior that we did not see during the learning period. Okay? So this machine has never behaved this way. But again, our machine learning models have seen thousands of instances of this process. They know this is normal; it talks on port 389 all the time. So what it's done is a few things: it's lowered the criticality of the alarm. Okay, so no false positive? Exactly, the bane of security operations, false positives. And it has gone and updated the locks on compute and network to allow for that behavior, and the application continues to work. Okay, so we can learn and adapt and act right through the compute and the network. What about the client? Well, we do that with Workspace ONE Intelligence to protect and manage the end-user endpoint. Workspace ONE Intelligence and NSX actually work together to protect your entire data center infrastructure. But don't believe me, you can watch it for yourself tomorrow in Tom Corn's keynote. You want to be there at 1:00 PM; be there or be nowhere. I love it. Thank you, Vijay. Great job. Thank you so much. So this idea of intrinsic security and ensuring good is, we believe, fundamentally changing how security will be delivered in the enterprise in the future, and changing the entire security industry. We've covered a lot today. I'm thrilled to stand on this stage before this community that truly has been at the center of changing the world of technology over the last couple of decades in IT.
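The adapt step just described is a simple decision loop: when a locked workload exhibits a previously unseen behavior, a trained classifier decides whether to extend the allowlist (and lower the alarm's criticality) or deny the behavior. The sketch below is a hedged illustration of that loop; the `classify` stand-in and all names are assumptions, not the actual App Defense models.

```python
# Sketch of "adapt": classify each unseen behavior, then either update the
# policy or deny. (Illustrative only; the real classifier is a trained model.)

def adapt(allowed, behavior, classify):
    """Return the action taken for an observed behavior on a locked workload."""
    if behavior in allowed:
        return "allow"
    if classify(behavior) == "good":
        allowed.add(behavior)          # update policy, lower alarm criticality
        return "allow-and-update"
    return "deny"                      # block and raise a high-criticality alarm


# Stand-in classifier: treats LDAP (389) and HTTPS (443) traffic as normal.
classify = lambda b: "good" if b[2] in {389, 443} else "bad"

allowed = set()
print(adapt(allowed, ("svchost", "10.0.0.5", 389), classify))    # allow-and-update
print(adapt(allowed, ("svchost", "10.0.0.5", 389), classify))    # allow (now in policy)
print(adapt(allowed, ("svchost", "203.0.113.9", 4444), classify))  # deny
```

The second call shows why the demo's false-positive scenario goes quiet: once the classifier has vouched for the port-389 behavior, the updated policy admits it without an alarm.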
We've talked about this idea of the superpowers of technology, and as they accelerate, the huge demand for what you do. You know, in the same way we together created this idea of the virtual infrastructure admin, think about all the jobs that we are spawning from the discussion we had today, the new skills, the new opportunities for each one of us in this room: quantum programmer, machine learning engineer, IoT and edge expert. We're on the cusp of so many new capabilities, and we need you and your skills to do that. The skills that you possess, the abilities that you have to work across these silos of technology, will enable tomorrow. I'll tell you, I am now 38 years in the industry, and I've never been more excited, because together we have the opportunity to build on the things that collectively we have done over the last four decades and truly have a positive global impact. These are hard problems, but I believe together we can successfully extend the lifespan of every human being. I believe together we can eradicate chronic diseases that have plagued mankind for centuries. I believe we can lift the remaining 10 percent of humanity out of extreme poverty. I believe that we can reskill every worker in the age of the superpowers. I believe that we can give a modern education to every child on the planet, even in the poorest of slums. I believe that together we can reverse the impact of climate change. I believe that together we have the opportunity to make these a reality. I believe this possibility is only possible together, with you. I ask you: please have a wonderful VMworld. Thanks for listening. Happy 20th birthday. Have a great time.

Published Date : Aug 28 2018
