
Search Results for CME:

George Elissaios, AWS | AWS re:Invent 2021


 

>>Yeah. Hey, everyone, welcome to theCUBE's continuous coverage of AWS re:Invent 2021. I'm Lisa Martin with John Furrier. We're running one of the industry's largest and most important hybrid tech events with AWS and its massive ecosystem of partners. Right now there are two live CUBE sets, two remote sets, and over 100 guests on the program, and we're pleased to welcome back one of our alumni to talk about the next generation of cloud innovation. George Elissaios joins John and me, the director of product management for EC2 Edge at AWS. George, welcome to the program.

>>Glad to be here in person. Great to be here in person. Awesome to be here in person, finally.

>>One of the things that is very clear is the AWS flywheel of innovation, and there was no slowdown with what's happened in the last 22 months. Amazing announcements, new leadership. We talked a little bit about 5G yesterday, but let's talk more about that. Everyone is excited about 5G, consumers and businesses. What's going on?

>>So, yeah, I wanted to talk to you today about the new service that we launched called AWS Private 5G. Essentially, it's a service that allows any AWS customer to build their own private 5G network, and what we try to do with the service is make it that simple and cost effective for anyone, without any telco experience or expertise, to build their own private 5G network. You just go to your AWS console and describe the parameters for your network, simple stuff like: where do you want it to be located, the throughput, the number of devices. AWS will build a plan for your network and ship you everything that you need. You just plug it together, turn it on, and the network automatically configures itself. All you've got to do is pop the SIM cards that we send you into your mobile devices, and you have a private 5G network working on your premises.

>>One of the things that we know and love about AWS is its customer obsession, its focus on the customers, that whole flywheel of all the innovation that comes out. As Adam was saying yesterday, to the customers: we deliver this, but you wanted more; we deliver this, but you wanted more. Talk to me a little bit about some of the customer catalysts for Private 5G.

>>Actually, one of the good examples is where we are right now. More and more AWS customers need to connect an increasing number of devices, and these devices become more data hungry; they need to push data around. They also become more and more wireless, right? So when you are trying to connect devices on the manufacturing floor, be it sensors, connected trucks, forklifts, or in a convention center, look at how many devices there are around us. When you're trying to connect these devices with a wired network, you quickly run into physical problems; it's hard to lay cable everywhere. Customers try to use Wi-Fi for many of these use cases, but as the number of devices grows into the thousands and you need to push more and more data around, you quickly reach the limitations of Wi-Fi technology, and Wi-Fi is also not really great at covering really open, large spaces. So these customers, think of college campuses, convention centers, manufacturing floors, what they need to be able to do is leverage the power of mobile networks. However, doing that by yourself is pretty hard.
So that's what we aim to enable here: we are aiming to enable these customers to build, very easily and cost effectively, their own private 5G networks.

>>Okay, George, so I have to ask, I'm truly curious. I love this announcement because it brings together kind of the edge story, but also, I'm a bandwidth lover. I love more broadband: faster, cheaper, and more broadband. How does it work? Take me through the use case of what I need to deploy. Do I need to have a backhaul connection? What does that look like? Is there a certain bandwidth requirement? How big is the footprint? What's the radius? Just walk me through, how do I roll this out?

>>Yeah, sure. Some of that actually depends on your requirements, right? How big, how much of a space do you want to cover? We are in preview right now, so we're shipping you the simplest configuration, which is basically these things called small cells, you know, radio units and antennas. All you have to do is connect them to your local network that has internet access. These things connect, automatically call home to the cloud, and basically integrate and build up your whole network. All you need is that internet connection, and they know what to do. Now, how big can the network be? You can make it pretty big; you can cover hundreds of thousands of square feet with cellular networks, with mobile networks. The bigger the space you want to cover, the more of these radio units we're going to ship you.

>>Classic wireless radios.

>>Yes.

>>You light up the area with 5G, connected to the network. That's your choke point, the bigness of the pipe.

>>The bigger the pipe, exactly. I mean, well, there are two things to consider here. There is local connectivity, so devices talking to each other, and there is connectivity back to somewhere else, like the internet or the cloud. There are use cases, for example, let's say video feeds that you want to push up to do some inference in the cloud. In those use cases you're basically pushing all of the data up; there's no east-west connectivity locally, and that's where our simplest configuration works best. There are other use cases where there is a lot of connectivity and devices talk to each other locally, like in this place, for example. In those cases we can ship you that second configuration, where we actually ship you managed hardware, AWS-managed hardware, on premises, and that runs the smarts of the network and allows all of your data traffic to remain local.

>>Is that Wavelength, Outposts, or both?

>>It's a different configuration of AWS Private 5G. It's a managed service; we take care of it. It has a pricing model which is very customer friendly because, like most AWS services, you can start with no upfront fees, and you can scale and pay as you scale.

>>Because it's designed to deploy easily.

>>Yep, it deploys easily.

>>On the footprint, I'm just curious: is it a pole, is it like an antenna, what does it look like?

>>Yeah, well, the antenna is, you know, the small cell; they call them small cells in cellular land. They're about this big, and you can hide them. There is actually a demo in the Venetian of the Private 5G service, so you can see it in action, but yeah, that thing can cover 10,000 square feet, just one of them.
So you can...

>>Go out and put a 5G network downtown and be like the king.

>>You could, yes. You could have your own private network. You could monetize that next.

>>On theCUBE.

>>Great stuff.

>>So in terms of industries adopting this, you gave us some examples: convention centers, campuses, universities. I'm just curious, given the amount of acceleration that we've seen in every industry in the last 22 months, where organizations must become digital, they depend on that for their livelihood, and we saw all these pivots, right? Twenty-two months ago it was: how do we survive this, how do we thrive? Consumers now, whether an individual consumer or an enterprise, have this expectation that we're going to be able to communicate no matter where we are, 24 by 7, whether it's healthcare or financial services. I'm just curious if you're seeing any industries in particular that you think are really primed for this private 5G.

>>Yeah. So manufacturing is a really great example, because you have to cover large spaces, you have thousands of devices, sensors, etcetera, and using other solutions like Wi-Fi does not give you the depth of capabilities, like, for example, advanced security capabilities, or even capabilities to prioritize traffic from some devices over others, which is what a 5G network can do for you. It also involves large spaces, both indoors and outdoors. Actually, Amazon is a really great example of using this. We're working with Amazon fulfillment centers; these are the warehouses that fulfill your orders when you order online. They are a mix of indoor and outdoor space, and you can think, I don't know if you've seen pictures or videos, there are robots running around, there are sensors everywhere, there are packing lines, etcetera. All of these things, in order to operate performantly, but also securely and safely for the people around them, need to be well connected at a very high reliability rate, right? So Amazon fulfillment centers are actually using AWS Private 5G to connect all of these devices. The really key thing here is you don't have to go drop 1,000 of the access points we're talking about; you can probably cover your space with five or ten of these. So your operational expenses and your maintenance go down, and there is less interruption of your normal operations; you don't have to stop your manufacturing line for someone to come in and fix your Wi-Fi access point.

>>It's great for campuses, like college campuses.

>>College campuses are a great one. We've worked with college campuses, including CMU, in the past, you know, with some of our partners, to deploy.

>>That's where you have these distributed antenna systems, or whatever they call them, to amplify and get extra coverage; this seems to be a good fit for that. You mentioned the preview: how do people get involved? Is there a criteria? How is it going to be available? Do they get priority?

>>People are ready to jump in. Take us through the program. What's the plan?

>>So currently we're in that preview mode, so we're shipping this small configuration, the simpler configuration. You can sign up on the AWS website, and we're scaling our operations and our supply chain, because this involves hardware, etcetera.
We're going to go to general availability, GA, over the next few months, and we'll have both configurations open. So I encourage everyone who is interested: go to the AWS website and sign up. We're anxious to get this into customers' hands, because we're getting overwhelmingly positive feedback on what we've built.

>>This is transformative. I mean, clearly what you're talking about here is going to transform industries and help organizations transform themselves and outpace the competitors in the rearview mirror that aren't going to be able to take advantage of this. We're on the show floor; we've got lots of people here. Where can people actually go and see this preview, test it out?

>>There is an actual demo in the Venetian. Sorry, I can't remember the room; I think it's on the third floor, where the meeting rooms are, outside room 35 or so, if anyone wants to go.

>>We're going to stop by at lunchtime.

>>Yes, yeah, you can see it in action. And, you know, you could see a future where, you look around, there are thousands of devices here, and you could power all of these devices with a single cell and really scale the throughput.

>>On the 5G, just curious, the range is better than Wi-Fi?

>>The range is better, outdoors obviously, or in factories.

>>What's the throughput?

>>It depends on the spectrum that you choose, and that's actually a really good segue. The service that we built is spectrum agnostic. Right now we're using it on what we call CBRS spectrum, which is free for all; you can use it yourself. But customers can also bring their own spectrum, and we're working with a batch of CSPs, operators, to build advanced bundles where you can run this on licensed spectrum, going up the spectrum into what they call millimeter wave.

>>Could a spectrum owner bring their own license?

>>You could. So a telco, right, you could be a telco, bring your own spectrum and work with us as a partner. Or, actually, some manufacturing customers have purchased rights to small spectrum bands, so they can use those in combination with this service to deploy. So, to your original question, as you go back up the spectrum you can drive more and more throughput; it's not unheard of to drive one gig.

>>So the low-hanging fruit is the use cases that have a critical need for edge connectivity: manufacturing, certainly retail, or wherever it helps do the deployment.

>>We can see this being applicable, because you can start super small, even to branch offices, right? Let's say, I was talking to a customer yesterday, they have all these branch offices. They don't even want to have IT on site; they just want something that sets up very quickly and easily, that they can manage centrally, and it just connects.

>>Could I use fixed wireless, a shot to the Wavelength zone, in order to have backhaul wirelessly too?

>>Oh, yes, actually, we are planning to. You know, I talked about where the smarts of the network live: they can live in a Region, they can live in Local Zones, and they can live in a Wavelength Zone. So we're combining more and more of these products as well.
And edge computing, obviously, is an obvious thing that we should be working on.

>>Incredible work, George, that you and the team have done transforming industries. And I don't know, I have a feeling there might be a CUBE 2.0 in this. Would it be 2.0?

>>Oh, John.

>>He's ready.

>>George, thank you so much for joining me today.

>>It's great to be here. Thanks for having me.

>>For John Furrier, I'm Lisa Martin. You're watching theCUBE, the global leader in live tech coverage.
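As an editorial aside for readers who want to see the shape of the inputs George describes (where the network lives, the throughput target, the device count): the sketch below is a hedged illustration only. It is not the AWS Private 5G console or API; every name and the sizing rule in it are assumptions, apart from the roughly 10,000 square feet per small cell mentioned in the interview.

```python
# Illustrative model only -- not the AWS Private 5G API. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class PrivateNetworkRequest:
    site_name: str                 # e.g. "demo-floor" (hypothetical label)
    coverage_sq_ft: int            # area the small cells must cover
    target_throughput_mbps: int    # desired aggregate throughput
    device_count: int              # SIM-equipped devices to connect

    def estimated_small_cells(self, sq_ft_per_cell: int = 10_000) -> int:
        """Rough sizing: ~one small cell per 10,000 sq ft, per the interview."""
        return max(1, -(-self.coverage_sq_ft // sq_ft_per_cell))  # ceiling division

order = PrivateNetworkRequest("demo-floor", 35_000, 200, 150)
print(order.estimated_small_cells())  # -> 4
```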

Published Date : Dec 2 2021



Data Drivers Snowflake's Award Winning Customers


 

>>Hi, everyone, and thanks for joining us today for our session on the 2020 Data Drivers Award winners. I'm excited to be here today with you. I'm Alise Bergeron, vice president of product marketing for Snowflake. These awards are intended to recognize companies and individuals for using Snowflake's Data Cloud to drive innovation and impact in their organizations. Before we start our conversations, I want to quickly congratulate all of our award winners. First, in the business awards, our Data Driver of the Year is Cisco. Our Machine Learning Master is Uniper. Our Data Sharing Leader is Rakuten. Our Data Application of the Year is Observe, and our Data for Good award goes to DoorDash. For the individual and team awards, we first have the Chief Digital Officer of PACCAR. We have the director of cybersecurity and data science at Comcast winning our Data Science Manager of the Year award. For Data Pioneer of the Year, we have Faisal KP, senior manager of enterprise data services at Pizza Hut. And lastly, we have our Best Data Team award going to McKesson, led by Jimmy Herff, data and analytics platform leader. Huge congratulations to all of these winners; it was very difficult to pick them from an amazing set of nominations. So now let's dive into our conversations. We'll start with the Data Driver of the Year. Representing Cisco today is Robbie, director of data platform, data and analytics.

>>Let me welcome everybody. A few years ago, Cisco used to be a company making decisions partly with data and partly with gut feel, because the data was stored in multiple places, it wasn't handled consistently, and things like that. So we really understood what the challenge was in the organization, we defined a data strategy, we put a few plans in place, and it is working very well. But what is more important is how we provide the data to data scientists and the data community in Cisco, making it available on highly available, scalable, and elastic platforms. That's where Snowflake came into the picture really well for us, along with the other data strategies we have had in place. More importantly, data democratization was key, along with simplification. With some of the technologies we used in the past, our clients needed to worry about the technologies involved; for example, we used to manage Hadoop before we migrated. Snowflake solved all of these problems for us with ease, really helping enable data-driven decisions in our system.

>>In the Data Sharing Leader category, Rakuten was our winner. We have Mark Stange-Tregear, VP of analytics, here to share their story.

>>I want to thank Snowflake for the award, and it's an honor to be here today. The ease of use of Snowflake has allowed projects to move forward, and innovation to move forward, in a way that it simply couldn't have done on old Hadoop systems or other platforms. And I think the same is true for us on a lot of similar topics, but also in the data sharing space; data sharing is a part of innovation.
Like, I think, most of the tech companies, we work with, certainly, our business partners and merchants, but also with a range of other service providers, other technology vendors, and other companies that we strategically share data with, to the benefit of their service, or to allow data modeling, advanced data collaboration, or strategic business deals using the data and evaluated with the data. But I think if you look pre-Snowflake, you would see a lot of time, effort, and money going into just establishing that data connection, which often involved substantial investments in technology, data pipelines, risk evaluation, hashing, encryption, and security. What we found with Snowflake's sharing functionality is not that we can eliminate those concerns, but that the technology just supports the ability to share data securely, easily, and quickly in a way that we could never do previously.

>>Now we have a really inspiring winner of the Data for Good award: DoorDash, with their Project Dash initiative. Here to speak about their work is Akshat Nair, engineering manager.

>>Thank you so much to Snowflake for recognizing us for this initiative. For those of you who don't know, DoorDash is the logistics technology platform company that connects people with the best in their cities, and Project Dash, our flagship social impact program, uses the DoorDash logistics platform to tackle challenges like hunger and food waste. It was launched in 2018, and over the first two years, in partnership with food recovery organizations, we powered the delivery of over 2 million pounds of surplus food from businesses to hunger relief agencies across the U.S. and Canada. And simply due to COVID, with the tremendous need, we ramped up how much we were able to power: the delivery of an estimated 5.8 million meals to food-insecure communities and frontline workers across 48 states, with 3.5 million of these meals delivered since March. We do all of our analysis for our business functions, from product development to sales and social impact, in Snowflake, and the numbers I just provided actually came from Snowflake. We have used it to provide various forms of reporting to our government and nonprofit partners, and with Snowflake we can help them understand the impact, analyze trends, and ensure compliance in cases where we are supporting efforts for agencies like FEMA or USDA. Lastly, our team is really excited to be recognized by Snowflake for using data for good. It has reminded us to continue doubling down on our commitment to using our product and expertise to partner with the communities we operate in. Thank you again.

>>The winner of the Machine Learning Master award is Uniper Energy. Their data innovation leader is here on behalf of Uniper.

>>Hello, everyone. Thanks for having me here, it's really a pleasure, and we are really proud to get this award. It means a lot for Uniper. It's a huge recognition of our effort over the last couple of years as part of our journey, and also a celebration of our success. Now, for Uniper it would not have been possible to start looking at advanced analytics techniques without having a solid data foundation in place, and that's where we invested a lot in our cloud data platform, backed by Snowflake.
Having this platform allowed us to employ advanced analytics techniques, combining market data, fundamental data, and different other sources of data like weather, and extracting new trends, new signals, that basically help us to partly, or even in some cases fully, automate some trading strategies. And we believe this will be really fundamental for the future of trading in our company, and we will definitely invest in this area in the future.

>>Our Data Application of the Year is Observe. This award recognizes the most innovative data-driven application built on Snowflake, and representing Observe today is their CEO, Jeremy Burton.

>>Let me just echo the thanks from the other folks on the call. I mean, Snowflake's separation of storage and compute, I can't overstate what a really big deal it is. It means that we can ingest and store data really for the price of Amazon S3, and we're in a category where vendors have historically charged for the volume of data ingested, so you can imagine this really represents huge savings. In addition, and maybe on a more technical note, Snowflake's elastic architecture really enables us to direct queries appropriately based on the complexity of the query. So small queries, or simple queries, we can direct to extra-small warehouses, and complex queries we can direct to, you know, a 4XL, or I think even a 6XL is either there or on its way. The key thing is that users are not sitting around waiting for results to appear, regardless of the query complexity. So, really, the separation of storage and compute and the elastic architecture are a really big deal for us.

>>Turning to the Data Pioneer of the Year award, I'm excited to be here with Faisal KP, senior manager of enterprise data services at Pizza Hut.

>>First of all, thank you, Snowflake, for giving us this wonderful award; I think it means a lot for us in terms of validating what we're doing. I think we were one of the earlier adopters of Snowflake. We saw the vision of Snowflake, you know, storage and compute separation and all the goodies, right from back in 2017, I believe. What Snowflake enabled us to do is actually get the scale with very little manpower needed to run the entire system. So on Super Bowl day, we have the entire crew, literally in a boardroom, where right from the CMO and most of the C-level folks on down, everyone is sitting and watching what is happening in the system, and we have to do a lot of real-time analytics during that time. With Snowflake, we use the elasticity of the platform, and we use their solutions like Snowpipe to basically automate the data ingestion coming through various channels, from e-commerce, from the stores, everything simultaneously. As soon as the program is done, we can scale down to our normal volume, which means we can save a lot of cost. So definitely, Snowflake has been a game changer for us in terms of how we provide real-time analytics. Our systems are used by thousands of restaurants throughout the country and by hundreds of franchisees, so the scale is something we have achieved with a lot of agility and success.

>>In the category of the Data Science Manager of the Year award, we have the director of cybersecurity and data science at Comcast.

>>So thank you for having me, and thank you for this wonderful award.
So one of the biggest challenges you see in the cybersecurity space is the tremendous amount of data that we have to compute every day to find the gold in the haystack. One of the big challenges we overcame by using Snowflake was how to go, as my other counterparts on the panel have said, from the operational overhead of maintaining a large data store to more of a results-driven and data-focused environment. And, you know, part of that journey was really the tremendous leadership at Comcast saying: we want to get through our day-to-day lives by relying less on operational work and more on answering questions. So over the last year we've really put Snowflake at the center of our ecosystem, knowing that its elastic platform and its ability to scale infinitely have given us the ability to dream big and use it to drive our cybersecurity. And while it's not traditionally used for cybersecurity, we're starting to see the benefits right away, and the beauty of the Snowflake ecosystem is that we're now able to enable folks who don't traditionally have big data skills, but who have standard SQL skills, and they can still work in the Snowflake platform. So the transition to cloud has been very powerful for us as an organization, but I think the end story, the real takeaway, is that by moving our security operation to the cloud we've been able to enable more people and get the results they were looking for. You know, as other people have said, people hate to wait, so the scale of Snowflake really shines.

>>Now let's hear from our Data Executive of the Year, the Chief Digital Officer of PACCAR.

>>Thank you very much, Snowflake, for this really incredible recognition and honor of the work we're doing at PACCAR. The first step in this process was for us to develop an enterprise-grade data platform in the cloud, capable of managing every aspect of data at scale. This platform includes Snowflake as our analytics data warehouse, amongst many other technologies that we use for ingestion of data, data processing, data governance, transactional needs, and others. This platform, once developed, has really helped us leverage data across the broad set of PACCAR systems and applications globally very efficiently, and is enabling PACCAR, as a result, to enhance every aspect of its business with data.

>>A big congratulations again to all of the winners of the 2020 Data Drivers Awards. Thanks so much for joining us for a great conversation, and we hope that you enjoy the rest of the Data Cloud Summit.

Published Date : Nov 19 2020



4-video test


 

>>Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines.

So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with the total energy given by the expression shown at the bottom left of this slide. Here the spin variables take binary values, the matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins n, for worst-case instances at each n.

Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms that are designed to find absolute optima.

To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root-exponential for n up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with n ranging from 131 to 744,710. Instances from this library with n between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with n greater than or equal to 14,233 remain unsolved exactly by any means.
Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with n equal to 19,289, requiring approximately two days of run time on a single core at 2.4 GHz. Now, if we simple-mindedly extrapolate the root-exponential scaling from the study of instances up to n of roughly 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster for the n = 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has n equal to 85,900; this is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation normalized to a single 2.4-GHz core. But the much larger so-called World TSP benchmark instance, with n equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%.

Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of which types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms in the practice of solving hard optimization problems. There thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur a high cost to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm; this has certainly been pinpointed by researchers in the field as a circumstance that must be addressed.

So adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms.
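For reference, since the slide carrying the energy expression is not reproduced in this transcript, the Ising objective described at the top of the talk is conventionally written (up to the usual sign conventions) as:

```latex
E(\sigma) \;=\; -\sum_{i<j} J_{ij}\,\sigma_i\,\sigma_j \;-\; \sum_i h_i\,\sigma_i,
\qquad \sigma_i \in \{-1,+1\},
\qquad \sigma^{\star} \;=\; \operatorname*{arg\,min}_{\sigma \in \{-1,+1\}^{N}} E(\sigma).
```

An instance is specified by the matrix J and vector h, and the ground-state problem is the search for the minimizing assignment σ*.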
These machines, in general, are a novel class of information processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or opto-electronic platforms to enable near-term construction of large-scale prototypes that exploit post-CMOS information dynamics.

The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well.

In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or perhaps mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground-state problem.

This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory, namely a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances.

Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described.
We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information.

If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then, as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or antiferromagnetic n = 2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase.

Clearly, we can imagine generalizing this story to larger n; however, the story doesn't stay as clean and simple for all larger problem instances. To find a more complicated example, we only need to go to n = 4. For some choices of J_ij at n = 4 the story remains simple, like the n = 2 case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated n = 4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but suboptimal minimum at large pump power.
The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin; the basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors seem to become more common at larger n, as for the n = 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot. It can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-n examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp.

Of course, n = 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we are able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-n limit we can also analyze fully quantum mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of n equal to 10^4, 10^5, even 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger n. Our initial approach to characterizing CIM behavior in the large-n regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, etcetera. At present we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints.

So in closing, I should acknowledge the people who did the hard work on the things that I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI research labs. I should also acknowledge funding support from the NSF via the Coherent Ising Machines Expedition in Computing, as well as from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it, thanks very much.
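To make the pump-ramp picture in this talk concrete, here is a minimal, purely classical mean-field sketch of the basic CIM algorithm as described above: soft amplitudes below threshold, linearized measurement-feedback coupling proportional to J, and a sign readout once the pump has been ramped up. The equation of motion, the parameter values, and the quadratic saturation term are illustrative assumptions for this sketch, not the experimental model or the speakers' code.

```python
# Schematic, noiseless c-number sketch of the basic CIM pump-ramp algorithm.
import numpy as np

def cim_ground_state_sketch(J, h=None, steps=2000, dt=0.01, eps=0.1, seed=0):
    """Ramp the pump through threshold and read spins out as amplitude signs."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    h = np.zeros(n) if h is None else h
    x = 1e-3 * rng.standard_normal(n)            # near-vacuum initial amplitudes
    for t in range(steps):
        p = 1.5 * t / steps                      # pump ramped from 0 through threshold
        feedback = eps * (J @ x + h)             # linearized Ising-coupling injection
        dx = (-1.0 + p - x**2) * x + feedback    # loss, parametric gain, gain saturation
        x = x + dt * dx
    return np.sign(x)

# Example: 4-spin antiferromagnetic ring; ground states are the alternating patterns.
J = np.array([[0, -1, 0, -1],
              [-1, 0, -1, 0],
              [0, -1, 0, -1],
              [-1, 0, -1, 0]], dtype=float)
spins = cim_ground_state_sketch(J)
energy = -0.5 * spins @ J @ spins                # Ising energy with h = 0
print(spins, energy)                             # alternating spins, energy -4.0
```

On this small, non-frustrated example the dominant eigenvector of J is the alternating configuration, so the first mode to reach threshold already encodes the ground state, which is exactly the well-behaved "first bifurcated critical point" scenario described for easy instances above.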
>>I'd like to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. I want to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT.

So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics, where some of the biggest examples are metamaterials, which are arrays of small resonators. More recently there is the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed matter physics models in photonics, and if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators.

So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: the simple summation over the spins, where spins can be either up or down and the couplings are given by J_ij. The Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be an NP-hard problem, so it's computationally important because it's representative of the NP problems, and NP problems are important because, first, they're hard on standard computers if you use a brute-force algorithm, and second, they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems and hopefully provide some meaningful computational benefit compared to standard digital computers.

So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator, and what it is is a resonator with nonlinearity in it. We pump these resonators and we generate a signal at half the frequency of the pump: one photon of the pump splits into two identical photons of signal, and they have some very interesting phase and frequency locking behaviors. If you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. I want to emphasize this a little more, and I have this mechanical analogy, which is basically two simple pendulums, but they are parametric oscillators because I'm going to modulate a parameter of them in this video, namely the length of the string. By that modulation, which plays the role of the pump, I'm going to make an oscillation, a signal, which is at half the frequency of the pump. And I have two of them, to show you that they can acquire these phase states: they're still phase and frequency locked to the pump, but they can each end up in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, or up or down.
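Restating the mapping just described in symbols (an editorial addition, not a slide from the talk):

```latex
\varphi_i \in \{0,\pi\}, \qquad \sigma_i \;\equiv\; \cos\varphi_i \;\in\; \{+1,-1\},
```

so each above-threshold degenerate-OPO phase state encodes one binary Ising spin.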
To implement the network of these resonators, we use a time-multiplexed scheme, and the idea is that we put N pulses in the cavity. These pulses are separated by the repetition period, T_R, and you can think about these pulses in one resonator as N temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator 1 to 2, 2 to 3, and so on. If you look at the second delay, which is two times the repetition period, it couples 1 to 3, 2 to 4, and so on. And if you have N - 1 delay lines, then you can have any potential coupling among these synthetic resonators. If I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I have a programmable, all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses (a short sketch of this delay-line bookkeeping appears at the end of this section).

So the idea of the OPO-based Ising machine is: having these OPOs, each of which can be either zero or pi, I can arbitrarily connect them to each other, and I start by programming the machine to a given Ising problem, just by setting the couplings, setting the controllers in each of those delay lines. Now I have a network which represents an Ising problem. The Ising problem then maps to finding the phase state that satisfies the maximum number of coupling constraints, and the way it happens is that the Ising Hamiltonian maps to the linear loss of the network. If I start adding gain, by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state.

We have been doing this for the past six or seven years, and I'm just going to quickly show you the transition: especially what happened in the first implementation, which used a free-space optical system, then the guided-wave implementation in 2016, and then the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make the distinction here that the first implementation was an all-optical interaction; we also had an n = 16 implementation. Then we transitioned to this measurement-feedback idea, which I'll quickly describe. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using the measurement feedback, but I'm going to mostly focus on the all-optical networks: how we're using all-optical networks to go beyond simulation of the Ising Hamiltonian, both on the linear and the nonlinear side, and also how we're working on miniaturization of these OPO networks.

So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine. We implemented a small n = 4 Max-Cut problem on the machine, so one problem for one experiment, and we ran the machine 1,000 times, we looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controllers with a simulator: we basically simulate all those coherent interactions on an FPGA, and we replicate the coherent pulse injections based on all those measurements.
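As promised above, here is a small illustrative sketch (not the FPGA firmware or the experimental control code) of why N - 1 programmable delay lines are enough for all-to-all coupling: if the modulator in the delay of k round trips is set to a value a[k][i] when pulse i passes, the total injected feedback is exactly a matrix-vector product J x, with J[i][(i - k) mod N] = a[k][i]. The array names and the circular indexing convention are assumptions made for this sketch.

```python
# Illustrative mapping between delay-line modulator settings and a coupling matrix J.
import numpy as np

def feedback_via_delay_lines(x, a):
    """x: pulse amplitudes (length N); a: modulator settings, shape (N-1, N)."""
    N = len(x)
    fb = np.zeros(N)
    for k in range(1, N):                      # one delay line per k = 1 .. N-1
        for i in range(N):
            fb[i] += a[k - 1][i] * x[(i - k) % N]
    return fb

def modulators_from_J(J):
    """Invert the mapping: choose a[k][i] = J[i][(i - k) mod N]."""
    N = J.shape[0]
    a = np.zeros((N - 1, N))
    for k in range(1, N):
        for i in range(N):
            a[k - 1][i] = J[i][(i - k) % N]
    return a

J = np.random.default_rng(1).standard_normal((5, 5))
np.fill_diagonal(J, 0.0)                       # no self-coupling
x = np.random.default_rng(2).standard_normal(5)
assert np.allclose(feedback_via_delay_lines(x, modulators_from_J(J)), J @ x)
```

The point of the sketch is only the bookkeeping: summing one programmable, time-varying tap per delay line reproduces an arbitrary off-diagonal J, which is why the hardware size grows with N while the programmable connectivity is all-to-all.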
Then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulate all those coherent interactions on an FPGA, replicate the coherent pulse based on all those measurements, and then inject it back into the cavity. The nonlinearity still remains, so it is still a nonlinear dynamical system, but the linear side is all simulated. There are lots of questions about whether this system preserves the important information or not, or whether it behaves better computationally — that is still a lot of ongoing study — but the reason this implementation is very interesting is that you don't need the N minus one delay lines; you can use just one. Then you can implement a large machine, run several thousands of problems on it, and compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix-multiplication scheme, and that is basically what gives you the Ising Hamiltonian modeling — the optical loss of this network corresponds to the Ising Hamiltonian. To show the example of the N equals 4 experiment and all those phase states and the histogram that we saw: you can actually calculate the loss of each of those states, because all the interferences in the beam splitters and the delay lines give you different losses, and then you see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what it does is that it provides the gain; then you start bringing up the gain so that it hits the loss, and you go through the gain saturation, or the threshold, which gives you this phase bifurcation: you go to either the zero or the pi phase state. The expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. If you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. The difference between looking at topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian — one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one site to another you pick up one phase, and if you go back you pick up a different phase — and the other thing is that we're not just interested in finding the ground state; we're now interested in looking at all sorts of states, and at the dynamics and behaviors of all these states in the network. So we started with the simplest implementation, which is a 1D chain of these resonators, corresponding to the so-called SSH model. In this topological model we get a similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction and the theory.
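For reference, the bulk SSH dispersion being compared against is E(k) = ±|t1 + t2·e^{ik}|; the sketch below (standard textbook formula, with hopping amplitudes chosen arbitrarily) shows the gap closing at t1 = t2, which separates the trivial and topological regimes discussed next. In the time-multiplexed network the measured quantity is a loss spectrum rather than an energy, but the band shape is the same.

```python
import numpy as np

def ssh_bands(t1, t2, num_k=201):
    """Bulk bands of the SSH chain: E(k) = +/- |t1 + t2 * exp(i k)|."""
    k = np.linspace(-np.pi, np.pi, num_k)
    e = np.abs(t1 + t2 * np.exp(1j * k))
    return k, np.stack([-e, e])

# The gap 2*|t1 - t2| closes at t1 = t2, separating the trivial and topological phases
for t1, t2 in [(1.0, 0.5), (1.0, 1.0), (0.5, 1.0)]:
    _, bands = ssh_bands(t1, t2)
    print(f"t1={t1}, t2={t2}: band gap = {2 * bands[1].min():.3f}")
```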
One of the interesting things about the time-multiplexing implementation is that you have the flexibility of changing the network while you are running the machine — that is something unique about it — so we can actually look at the dynamics. One example we have looked at: we can go through the transition from the topological to the trivial behavior of the network. You can then look at the edge states, and you can see both the trivial end states and the topological edge states actually showing up in this network. We have also just recently implemented a 2D network with the Harper-Hofstadter model — I don't have the results here — but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics, and we can also think about adding nonlinearity, in both the classical and quantum regimes, which is going to give us a lot of exotic classical and quantum nonlinear behaviors in these networks. So I have told you mostly about the linear side; let me switch gears and talk about the nonlinear side of the network. The biggest thing I have talked about so far in the Ising machine is this phase transition at threshold: below threshold we have squeezed states in these OPOs, and if you increase the pump, we go through this intensity-driven phase transition and get the phase states above threshold. This is basically the mechanism of the computation in these OPOs — this transition from below to above threshold. One characteristic of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical, coherent states, and that basically corresponds to the intensity of the driving pump; so it is really hard to imagine having this phase transition happen entirely in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes really high, it is going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is: can we look at other phase transitions, can we utilize them for computing, and can we bring them to the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain: the transition from the so-called degenerate regime, which is what I have mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I discussed; in the non-degenerate regime, however, the phase is mostly dominated by quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case the signal can acquire any phase on the circle, so it has a U(1) symmetry, and if you go to the degenerate case, that symmetry is broken and you only have the zero and pi phase states.
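The intensity-driven threshold behavior described above can be illustrated with the standard normalized mean-field equation for a single degenerate OPO amplitude, dx/dt = (p − 1)x − x³ (a common simplification; the actual device equations differ in detail): below threshold the amplitude decays to zero, above threshold it settles on ±√(p − 1), i.e. one of the two phase states that differ by pi.

```python
def dopo_steady_state(p, x0=1e-3, dt=0.01, steps=20000):
    """Single degenerate OPO amplitude, dx/dt = (p - 1)*x - x**3.

    Below threshold (p < 1) the amplitude decays to zero; above threshold it
    bifurcates to +/- sqrt(p - 1), the two phase states that differ by pi.
    """
    x = x0
    for _ in range(steps):
        x += dt * ((p - 1.0) * x - x**3)
    return x

for p in (0.5, 1.5, 2.0):
    # tiny positive or negative initial fluctuation standing in for vacuum noise
    print(p, dopo_steady_state(p, x0=+1e-3), dopo_steady_state(p, x0=-1e-3))
```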
So now the question is: can we utilize this phase transition — which is a phase-driven phase transition — for a similar computational scheme? That is one of the questions we are thinking about. And this phase transition is not just important for computing; it is also interesting for its sensing potential, and you can easily bring it below threshold and operate it in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, you can now see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs, and it is a very abrupt phase transition compared to the single-OPO transition. If you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore both in the classical and quantum regimes. I should also mention that the couplings themselves can be nonlinear couplings, and that is another behavior you can see, especially in the non-degenerate regime. So with that, I have basically told you about these OPO networks, how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and behaviors in both the classical and quantum regimes. I now want to switch gears and tell you a little bit about the miniaturization of these OPO networks. The motivation, of course: if you look at electronics and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers with on the order of thousands of nonlinear elements to the billions of nonlinear elements we have now — where we are with optics is probably very similar to 70 years ago: a table-top implementation. The question is, how can we utilize nanophotonics? I'm going to briefly show you the two directions we are working on: one is based on lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard and Marty Fejer at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of very highly efficient nonlinear processes happening in this nanophotonic, periodically poled lithium niobate. Now we are working on building OPOs based on that kind of thin-film lithium niobate photonics, and these are some examples of the devices we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. That is not the only way of making large networks, but I also want to point out that the reason these nanophotonic platforms are exciting is not just that you can make large networks and make them compact in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO: can we have the quantum superposition of the zero and pi states that I talked about?
The nanophotonic lithium niobate platform provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement you can get in these waveguides. We are doing some theory on that, and we are confident that the ratio of nonlinearity to loss you can get with these platforms is actually much higher than with the existing platforms. And to go even smaller, we have been asking what the smallest possible OPO is that you can make: you can think about really wavelength-scale resonators, add the chi(2) nonlinearity, and see how and when you can get the OPO to operate. Recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks, so if we can build the OPOs, we know there is a path for implementing OPO networks at such a nanoscale. We have looked at these calculations and tried to estimate the threshold of such OPOs, say for a wavelength-scale resonator, and it turns out it can actually be even lower than for the type of bulk PPLN OPOs that we have been building for the past 50 years or so. So we are working on the experiments, and we are hoping that we can make even larger and larger scale OPO networks. Let me summarize the talk: I told you about the OPO networks and our work on Ising machines and measurement feedback; I told you about the ongoing work on the all-optical implementations, both on the linear side and on the nonlinear behaviors; and I also told you a little bit about the efforts on miniaturization, going down to the nanoscale. With that, I would like to thank you. >>Hi everyone, I am from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI Lab. I'm happy to share with you today some of the recent works that have been done either by me or by collaborators. The title of my talk is "A neuromorphic in-silico simulator for the coherent Ising machine." Here is the outline: I would like to make the case that the simulation of the CIM in digital electronics can be useful for better understanding or improving its functioning principles, by introducing some ideas from neural networks — this is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation, in the second part, and projections of the performance that can be achieved using a very large-scale simulator, in the third part, and finally talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper. This comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation. In red here are the limitations of each of these hardware platforms. Interestingly, the FPGA-based systems — such as the Fujitsu Digital Annealer, the Toshiba bifurcation machine, or a recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley — offer a good compromise between speed and scalability.
And this is why, despite the unique advantages that some of these other hardware platforms have — such as the coherent superposition in flux qubits, or the energy efficiency of memristors — FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency — they are not particularly energy efficient — but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-out, and the long propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, rather than the physics of the individual devices. To put the performance of these various hardware platforms in perspective, we can look at the computation achieved by the brain: the brain computes using billions of neurons, using only about 20 watts of power, and it operates, theoretically speaking, very slowly. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silico, in the bottom here, that can be used for testing better organization principles for the CIM. In this talk I will discuss three neuro-inspired principles: the asymmetry of connections, which makes neural dynamics often chaotic; the local structure — neural networks are not composed of a repetition of always the same types of neurons, but there is a local structure that is repeated, and here is a schematic of the micro-column in the cortex; and lastly the hierarchical organization of connectivity — connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in-silico simulation? First, about the two principles of asymmetry and local structure. We know that the classical approximation of the coherent Ising machine is analogous to rate-based neural networks. In the case of the Ising machines, the classical approximation can be obtained, for example, using a mean-field treatment, so that the dynamics of both systems can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part — the degenerate optical parametric amplification — and the sum over J_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the computed coupling term. These dynamics, in both the CIM and neural networks, can be written as gradient descent of a potential function V, written here, and this potential function includes the Ising Hamiltonian.
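A minimal sketch of that classical picture, in one common normalized form (the parameter names p and eps are mine, and the coupling is applied directly rather than through measurement and feedback):

```python
import numpy as np

def simulate_cim(J, p=1.1, eps=0.1, dt=0.01, steps=5000, seed=0):
    """Classical (mean-field) CIM: dx_i/dt = (p-1)*x_i - x_i**3 + eps*sum_j J_ij*x_j.

    This is gradient descent on a potential that contains the Ising Hamiltonian;
    the signs of x at the end are read out as the Ising spins.
    """
    rng = np.random.default_rng(seed)
    x = 1e-3 * rng.standard_normal(J.shape[0])   # small noise as the initial condition
    for _ in range(steps):
        x += dt * ((p - 1.0) * x - x**3 + eps * (J @ x))
    return np.sign(x)

# Frustrated 3-spin antiferromagnet: the dynamics settle in one of the degenerate ground states
J = np.array([[0., -1., -1.],
              [-1., 0., -1.],
              [-1., -1., 0.]])
s = simulate_cim(J)
print(s, "Ising energy:", -0.5 * s @ J @ s)
```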
So this is why it is natural to use this type of dynamics to solve the Ising problem, in which the Omega_ij are the Ising couplings and the h_i are the external fields of the Ising Hamiltonian. Note that this potential function can only be defined if the Omega_ij are symmetric. The well-known problem of this approach is that the potential function V we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process; but there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. This is why we propose to introduce a local structure into the system, where one analog spin — one DOPO — is replaced by a pair of one analog spin and one error-correcting variable. The addition of this local structure introduces an asymmetry into the system, which in turn induces chaotic dynamics: a chaotic search, rather than an annealing process, for the ground state of the Ising Hamiltonian. Within this local structure, the role of the error variable is to control the amplitude of the analog spins — to force the amplitude of the x_i to become equal to a certain target amplitude a — and this is done by modulating the strength of the Ising coupling: the error variable e_i multiplies the Ising coupling term here in the dynamics of each DOPO. The whole dynamics is then described by these coupled equations. Because the e_i do not necessarily take the same value for different i, this introduces an asymmetry into the system, which in turn creates chaotic dynamics, which I show here for solving a certain instance of an SK problem: the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plots. You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics do not get stuck in any of them; moreover, other types of attractors that can eventually appear, such as limit-cycle or chaotic attractors, can also be destabilized using the modulation of the target amplitude. We have proposed two different modulations of the target amplitude in the past: the first one ensures that a certain production rate of the system becomes positive, which forbids the creation of any nontrivial attractors; but in this work I will talk about another, simpler modulation, given here, that works as well as the first one but is easier to implement on an FPGA. So these coupled equations, which represent the simulation of the coherent Ising machine with some error correction, can be implemented especially efficiently on an FPGA.
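A sketch of those coupled equations with the error variables, in the normalized form I take them to have (dx_i/dt = (p−1)x_i − x_i³ + e_i·Σ_j J_ij x_j, de_i/dt = −β(x_i² − a)e_i); the parameter names are assumptions, and the target amplitude a is simply held fixed here rather than modulated as in the scheme described above.

```python
import numpy as np

def simulate_cim_with_error_correction(J, p=0.9, beta=0.3, a=1.0,
                                       dt=0.01, steps=20000, seed=0):
    """CIM with per-spin error variables forcing |x_i| toward a target amplitude a.

    dx_i/dt = (p - 1)*x_i - x_i**3 + e_i * sum_j J_ij * x_j
    de_i/dt = -beta * (x_i**2 - a) * e_i
    Because each e_i modulates its own coupling strength, the effective coupling
    matrix is no longer symmetric, and the dynamics become a chaotic search rather
    than a plain gradient descent. The best Ising configuration seen is returned.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-3 * rng.standard_normal(n)
    e = np.ones(n)
    best_s, best_energy = np.ones(n), np.inf
    for _ in range(steps):
        x += dt * ((p - 1.0) * x - x**3 + e * (J @ x))
        e += dt * (-beta * (x**2 - a) * e)
        s = np.where(x >= 0, 1.0, -1.0)
        energy = -0.5 * s @ J @ s
        if energy < best_energy:
            best_s, best_energy = s, energy
    return best_s, best_energy

J = np.array([[0., -1., 1.],
              [-1., 0., -1.],
              [1., -1., 0.]])
print(simulate_cim_with_error_correction(J))
```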
Here I show the time it takes to simulate this system: in red you see the time it takes to compute the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 analog spins and error variables, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics corresponding to the degenerate optical parametric amplification — the OPA of the CIM — can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.1 microseconds. And this is to be compared to what can be achieved in the measurement-feedback CIM: if we want 500 time-multiplexed DOPOs with a one-gigahertz repetition rate, then we would require 0.5 microseconds to do this, so the simulation on the FPGA can be at least as fast as a one-gigahertz repetition-rate pulsed-laser CIM. Then the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about one microsecond, so for problem sizes larger than 500 spins the dot product clearly becomes the bottleneck. This can be seen by looking at the scaling of the time — the number of clock cycles it takes to compute either the nonlinear optical part or the dot product — with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in order log(N), because computing the dot product involves summing all the terms of the product, which is done on the FPGA by an adder tree whose height scales logarithmically with the size of the system. But that is only if we had infinite resources on the FPGA; for larger problems of more than 100 spins, we usually need to decompose the matrix into smaller blocks with a block size denoted U here, and then the scaling becomes, for the nonlinear parts, linear in N over U and, for the dot products, of order (N over U) squared. Typically, for a low-end FPGA, the block size of this matrix is about 100. So clearly we want to make U as large as possible in order to maintain the log(N) scaling of the number of clock cycles needed to compute the product, rather than the N-squared scaling that occurs if we decompose the matrix into smaller blocks. The difficulty with these larger blocks is that having a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing the logic components hierarchically within the FPGA, in the way shown in this right panel here, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but just to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance for the simulator of the Ising machine.
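A very rough, illustrative cycle-count model of that trade-off (the constants and the treatment of pipelining are invented for illustration; only the trend — logarithmic cost with a full-row adder tree versus roughly (N/U)² block passes — reflects the argument above):

```python
import math

def matvec_cycles(N, U):
    """Very rough cycle-count model for one N x N matrix-vector product on an FPGA.

    If the adder tree spans a full row (U >= N), latency is dominated by the tree
    depth, about log2(N) cycles. Otherwise the matrix is streamed in (N/U)^2 blocks
    of size U x U, at roughly log2(U) + 1 cycles per block. Real designs pipeline
    these passes, so only the trend is meaningful, not the constants.
    """
    if U >= N:
        return math.ceil(math.log2(N))
    blocks = math.ceil(N / U) ** 2
    return blocks * (math.ceil(math.log2(U)) + 1)

for n in (100, 500, 2000):
    print(n, {u: matvec_cycles(n, u) for u in (100, 500, 2000)})
```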
Instead of getting into the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper here. Here I show results for solving SK problems — fully connected, random plus-or-minus-one spin-glass problems — and we use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to reach the optimal solution of the SK problem with 99 percent success probability, plotted against the problem size. In red here is the proposed FPGA implementation; in blue is the number of matrix-vector products necessary for the CIM without error correction to solve these SK problems; and in green, noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. You clearly see that the scaling of the number of matrix-vector products necessary to solve this problem has a better exponent than these other approaches, so that is an interesting feature of the system. Next, we can look at the real time-to-solution for these SK instances: this shows the time-to-solution in seconds to find a ground state of SK instances, with 99 percent probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. You see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can be orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristors — very fast for small problem sizes, in blue here, but with poor scaling — and the same for the restricted Boltzmann machine implemented on an FPGA, proposed by a group in Berkeley recently, which is also very fast for small problem sizes but whose scaling is bad, so that it does worse than the proposed approach. So we can expect that for problem sizes larger than 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide for another confirmation that the scheme scales well: we can find maximum-cut values on the benchmark G-set that are better than the values previously found by any other algorithm — they are the best-known cut values, to the best of our knowledge — which is shown in this table here. In particular, for instances 14 and 15 of this G-set we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to do this, which is a very common benchmark. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters: the tuning used here is very simple; it just depends on the degree of connectivity within each graph. So these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems and dense problems, but at all types of sparse graph Ising problems and MAX-CUT problems in general.
Given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of the adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future, based on the implementation we are currently working on. Here you see projections for the time-to-solution, with 99 percent success probability, for solving these SK problems with respect to the problem size, compared with different such Ising machines, in particular the digital annealer, shown in green here — the green line without dots. We show two different hypotheses for these projections: either the time-to-solution scales as an exponential of N, or it scales as an exponential of the square root of N. It seems, according to the data, that the time-to-solution scales more like an exponential of the square root of N, and these projections show that we can probably solve SK problems of size 2000 spins — finding the real ground state of the problem with 99 percent success probability — in about 10 seconds, which is much faster than all the other proposed approaches. So, the future plans for this coherent Ising machine simulator: the first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. What can be simulated on the FPGA for this is the quantum Gaussian model that is described in this paper, proposed by people in the NTT group. The idea of this model is that, instead of having the very simple ODEs I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase component but also its variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. Then we plan to make the simulator open-access, so that the members can run their own instances on the system. There will be a first version in September that will be based on simple command-line access to the simulator, and which will have just the classical approximation of the system, without noise terms, with binary weights, and without the measurement term; then we will propose a second version that extends the current Ising machine to a rack of FPGAs, in which we will add the more refined models — the truncated model and the quantum Gaussian model I just talked about — and support real-valued weights for the Ising problems, and support for the measurement. We will announce later when this is available. >>I come from the physics department of the University of Notre Dame, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I'd also like to say that I look forward to collaborations with the PHI Lab, and with Yoshi and collaborators, on the topics of this workshop. So today I'll briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving using ordinary differential equations. But I think the issues that we raise on this occasion actually apply to other analog approaches as well, and to other problems as well.
I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables and M clauses, each a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem from the NP class, which means you can check in polynomial time the satisfiability of any assignment. And 3-SAT is NP-complete, with k equal to three or larger, which means an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in NP can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set — which is set packing in graph-theoretic terms — or to the decision version of the Ising spin-glass problem. This is useful when you are comparing different approaches that work on different kinds of problems. When not all the clauses can be satisfied, you look at the optimization version of SAT, called MAX-SAT, and the goal there is to find the assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete problem solver, it would literally positively influence thousands of problems and applications in industry and in science. I'm not going to read this, but it of course gives a strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. Instead of working with zeros and ones, we work with minus one and plus one, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it is plus one; if it contains the variable in negated form, it is minus one. We then use this to formulate products, called clause violation functions, one for every clause, which vary continuously between zero and one, and which are zero if and only if the clause itself is true. Then, in order to define the dynamics — a dynamics in this N-dimensional hypercube where the search happens, and where any solutions that exist sit in some of the corners of the hypercube — we define this energy potential, or landscape function, shown here, in such a way that it is zero if and only if all the clause violation functions K_m are zero, that is, all the clauses are satisfied, keeping these auxiliary variables, the a_m, always positive. Therefore, what you have here is a dynamics that is essentially a gradient descent on this potential-energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum; what we do instead is couple them to the clause violation functions, as shown here. If you didn't have this a_m here — if you had just the K_m, for example — you would essentially have positive feedback, an increasing variable, but in that case you would still get stuck: it does better than the constant version, but it still gets stuck. Only when you put in this a_m, which makes the dynamics in this variable exponential-like, does it keep searching until it finds a solution, and there is a reason for that which I'm not going to discuss here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape. And this is what works.
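A compact sketch of these dynamics as I understand them from the description above (clause matrix entries in {−1, 0, +1}, clause violation functions K_m, gradient descent of V = Σ_m a_m K_m² in the analog variables, and exponentially growing a_m); the 2^(−k_m) normalization, the clipping to the hypercube, the time step, and the toy instance are my own choices for illustration.

```python
import numpy as np

def ctds_sat_solver(C, t_max=200.0, dt=0.01, seed=0):
    """Continuous-time dynamical-system SAT solver (sketch of the dynamics described above).

    C is the M x N clause matrix with entries in {-1, 0, +1}.
    K_m(s) = 2**(-k_m) * prod_i (1 - C[m, i] * s[i])  is zero iff clause m is satisfied.
    ds_i/dt = sum_m 2 * a_m * C[m, i] * K_mi * K_m    (gradient descent of V = sum_m a_m K_m^2)
    da_m/dt = a_m * K_m                               (exponentially growing clause weights)
    where K_mi is K_m with the factor containing variable i left out.
    """
    rng = np.random.default_rng(seed)
    M, N = C.shape
    k = (C != 0).sum(axis=1)
    s = rng.uniform(-0.99, 0.99, N)
    a = np.ones(M)
    for step in range(int(t_max / dt)):
        assignment = np.where(s >= 0, 1, -1)
        if all((C[m] * assignment > 0).any() for m in range(M)):
            return assignment, step * dt              # every clause has a true literal
        ds = np.zeros(N)
        K = np.empty(M)
        for m in range(M):
            idx = np.nonzero(C[m])[0]
            factors = 1.0 - C[m, idx] * s[idx]
            K[m] = 2.0 ** -k[m] * factors.prod()
            for j, i in enumerate(idx):
                K_mi = 2.0 ** -k[m] * np.prod(np.delete(factors, j))
                ds[i] += 2.0 * a[m] * C[m, i] * K_mi * K[m]
        s = np.clip(s + dt * ds, -1.0, 1.0)           # keep the search inside the hypercube
        a += dt * a * K
    return None, t_max                                # not solved within t_max

# Tiny 3-SAT instance: (x1 or x2 or x3) & (~x1 or x2 or ~x3) & (x1 or ~x2 or x3)
C = np.array([[ 1,  1,  1],
              [-1,  1, -1],
              [ 1, -1,  1]])
print(ctds_sat_solver(C))
```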
Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it is a hyperbolic dynamical system, which means that if you take any domain of the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself; dynamical-systems people call it the escape rate. The inverse of that is the time scale on which you find solutions with this dynamical system, and you can see here some sample trajectories that are chaotic — because the system is nonlinear — but it is transient chaos, of course, because eventually they converge to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities defined by M over N, the ratio between clauses and variables, for random SAT problems, as a function of N, is the wall-clock time, and it behaves quite well — it behaves polynomially until you actually reach the SAT-UNSAT transition, where the hardest problems are found. But what's more interesting is if you monitor the performance in terms of the analog continuous time t, because that seems to be polynomial. The way we show that is: we consider random 3-SAT for a fixed constraint density, to the right of the threshold, where it is really hard, and we monitor the fraction of problems that we have not been able to solve. We select thousands of problems at that constraint ratio, solve them with our algorithm, and monitor the fraction of problems that have not yet been solved by continuous time t. As you see, this decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially — actually as a power law. If you combine these two, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially with the problem size: you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover — because you can always transform them into 3-SAT, as we discussed before — Ramsey-type problems, coloring; and on these problems even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because, first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes physical wall-clock time, and that would have polynomial scaling; but you have the other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, then it would be an exponential cost altogether.
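That exponential cost can be read off directly from the auxiliary-variable dynamics: with the exponential-like growth described above — which I take to be da_m/dt = a_m K_m, so treat the exact form as a reconstruction — each clause weight integrates its own violation history:

```latex
% Growth of the auxiliary clause weights (K_m >= 0 along the trajectory):
\[
  \frac{\mathrm{d}a_m}{\mathrm{d}t} = a_m\,K_m\bigl(\mathbf{s}(t)\bigr)
  \quad\Longrightarrow\quad
  a_m(t) = a_m(0)\,\exp\!\left(\int_0^{t} K_m\bigl(\mathbf{s}(\tau)\bigr)\,\mathrm{d}\tau\right).
\]
```

Any physical quantity that represents a_m directly (a current or a voltage, say) can therefore grow exponentially while clauses stay unsatisfied, which is the energy side of the trade-off discussed next.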
So this is some kind of trade-off between time and energy: I don't know how to generate time, but I do know how to generate energy, so one could use that. But there are other issues as well, especially if you're trying to do this on a digital machine — and other problems appear in physical devices too, as we discuss later. If you implement this on GPUs, you can get an order of two magnitudes of speedup, and you can also modify this to solve MAX-SAT problems quite efficiently: you are competitive with the best heuristic solvers — these are the winner problems of the 2016 MAX-SAT competition. So this definitely seems like a good approach, but there are of course interesting limitations — I would say interesting, because they make you think about what this means and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator when you solve this on a digital machine — you're using some kind of integrator, and you use the same approach, but now you measure the number of problems you have not solved within a given number of discrete steps taken by the integrator — you find that you have exponential discrete-time complexity, and of course this is a problem. And if you look closely at what happens: even though the analog mathematical trajectory is the smooth curve here, if you monitor what happens in discrete time, the integrator advances very little per step — we are down in the third or fourth decimal place — and the step size fluctuates like crazy, so it is really as if the integration freezes out. This is because of the phenomenon of stiffness, which I'll talk a little bit more about later. It might look like an integration issue on digital machines that you could improve — and you definitely can improve it — but actually the issue is bigger than that; it's deeper than that, because on a digital machine there is no time-to-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think that maybe this wouldn't be an issue in an analog device, and to some extent that is true — analog devices can be orders of magnitude faster — but they also suffer from their own problems, because they are not going to be perfect either, and that affects those solvers as well. Indeed, if you look at other systems — like the measurement-feedback Ising machines, or the oscillator and laser networks discussed in the other talks — they all hinge on some ability to control your variables with arbitrarily high precision: in certain networks you want to read out across frequencies; in the case of CIMs you require identical pulses, which are hard to keep identical, and they fluctuate and shift away from one another, and if you could control that, of course, you could control the performance. So one can ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage, from 1978.
Who says that it's a purely computer science proof that if you are able toe, compute the addition multiplication division off riel variables with infinite precision, then you could solve any complete problems in polynomial time. It doesn't actually proposals all where he just chose mathematically that this would be the case. Now, of course, in Real warned, you have also precision. So the next question is, how does that affect the competition about problems? This is what you're after. Lots of precision means information also, or entropy production. Eso what you're really looking at the relationship between hardness and cost of computing off a problem. Uh, and according to Sean Hagar, there's this left branch which in principle could be polynomial time. But the question whether or not this is achievable that is not achievable, but something more cheerful. That's on the right hand side. There's always going to be some information loss, so mental degeneration that could keep you away from possibly from point normal time. So this is what we like to understand, and this information laws the source off. This is not just always I will argue, uh, in any physical system, but it's also off algorithm nature, so that is a questionable area or approach. But China gets results. Security theoretical. No, actual solar is proposed. So we can ask, you know, just theoretically get out off. Curiosity would in principle be such soldiers because it is not proposing a soldier with such properties. In principle, if if you want to look mathematically precisely what the solar does would have the right properties on, I argue. Yes, I don't have a mathematical proof, but I have some arguments that that would be the case. And this is the case for actually our city there solver that if you could calculate its trajectory in a loss this way, then it would be, uh, would solve epic complete problems in polynomial continuous time. Now, as a matter of fact, this a bit more difficult question, because time in all these can be re scared however you want. So what? Burns says that you actually have to measure the length of the trajectory, which is a new variant off the dynamical system or property dynamical system, not off its parameters ization. And we did that. So Suba Corral, my student did that first, improving on the stiffness off the problem off the integrations, using implicit solvers and some smart tricks such that you actually are closer to the actual trajectory and using the same approach. You know what fraction off problems you can solve? We did not give the length of the trajectory. You find that it is putting on nearly scaling the problem sites we have putting on your skin complexity. That means that our solar is both Polly length and, as it is, defined it also poorly time analog solver. But if you look at as a discreet algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is because off all these stiffness, every integrator has tow truck it digitizing truncate the equations, and what it has to do is to keep the integration between the so called stability region for for that scheme, and you have to keep this product within a grimace of Jacoby in and the step size read in this region. If you use explicit methods. You want to stay within this region? 
Uh, but what happens that some off the Eigen values grow fast for Steve problems, and then you're you're forced to reduce that t so the product stays in this bonded domain, which means that now you have to you're forced to take smaller and smaller times, So you're you're freezing out the integration and what I will show you. That's the case. Now you can move to increase its soldiers, which is which is a tree. In this case, you have to make domain is actually on the outside. But what happens in this case is some of the Eigen values of the Jacobean, also, for six systems, start to move to zero. As they're moving to zero, they're going to enter this instability region, so your soul is going to try to keep it out, so it's going to increase the data T. But if you increase that to increase the truncation hours, so you get randomized, uh, in the large search space, so it's it's really not, uh, not going to work out. Now, one can sort off introduce a theory or language to discuss computational and are computational complexity, using the language from dynamical systems theory. But basically I I don't have time to go into this, but you have for heart problems. Security object the chaotic satellite Ouch! In the middle of the search space somewhere, and that dictates how the dynamics happens and variant properties off the dynamics. Of course, off that saddle is what the targets performance and many things, so a new, important measure that we find that it's also helpful in describing thesis. Another complexity is the so called called Makarov, or metric entropy and basically what this does in an intuitive A eyes, uh, to describe the rate at which the uncertainty containing the insignificant digits off a trajectory in the back, the flow towards the significant ones as you lose information because off arrows being, uh grown or are developed in tow. Larger errors in an exponential at an exponential rate because you have positively up north spawning. But this is an in variant property. It's the property of the set of all. This is not how you compute them, and it's really the interesting create off accuracy philosopher dynamical system. A zay said that you have in such a high dimensional that I'm consistent were positive and negatively upon of exponents. Aziz Many The total is the dimension of space and user dimension, the number off unstable manifold dimensions and as Saddam was stable, manifold direction. And there's an interesting and I think, important passion, equality, equality called the passion, equality that connect the information theoretic aspect the rate off information loss with the geometric rate of which trajectory separate minus kappa, which is the escape rate that I already talked about. Now one can actually prove a simple theorems like back off the envelope calculation. The idea here is that you know the rate at which the largest rated, which closely started trajectory separate from one another. So now you can say that, uh, that is fine, as long as my trajectory finds the solution before the projective separate too quickly. In that case, I can have the hope that if I start from some region off the face base, several close early started trajectories, they kind of go into the same solution orphaned and and that's that's That's this upper bound of this limit, and it is really showing that it has to be. It's an exponentially small number. What? It depends on the end dependence off the exponents right here, which combines information loss rate and the social time performance. 
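The entropy relation referred to in this passage — connecting the metric (Kolmogorov–Sinai) entropy, the positive Lyapunov exponents, and the escape rate κ — is, in the form I believe is meant (a reconstruction, not the speaker's exact formula):

```latex
% Escape-rate form of the Pesin-type identity for a transiently chaotic (open) system:
\[
  h_{\mathrm{KS}} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa ,
  \qquad
  \tau_{\text{solution}} \sim \kappa^{-1} .
\]
```

Here κ is the escape rate of the non-solution (transiently chaotic) set, whose inverse sets the time scale on which solutions are found, as discussed earlier.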
So these, if this exponents here or that has a large independence or river linear independence, then you then you really have to start, uh, trajectories exponentially closer to one another in orderto end up in the same order. So this is sort off like the direction that you're going in tow, and this formulation is applicable toe all dynamical systems, uh, deterministic dynamical systems. And I think we can We can expand this further because, uh, there is, ah, way off getting the expression for the escaped rate in terms off n the number of variables from cycle expansions that I don't have time to talk about. What? It's kind of like a program that you can try toe pursuit, and this is it. So the conclusions I think of self explanatory I think there is a lot of future in in, uh, in an allo. Continue start computing. Um, they can be efficient by orders of magnitude and digital ones in solving empty heart problems because, first of all, many of the systems you like the phone line and bottleneck. There's parallelism involved, and and you can also have a large spectrum or continues time, time dynamical algorithms than discrete ones. And you know. But we also have to be mindful off. What are the possibility of what are the limits? And 11 open question is very important. Open question is, you know, what are these limits? Is there some kind off no go theory? And that tells you that you can never perform better than this limit or that limit? And I think that's that's the exciting part toe to derive thes thes this levian 10.



Neuromorphic in Silico Simulator For the Coherent Ising Machine


 

>>Hi everyone, This system A fellow from the University of Tokyo before I thought that would like to thank you she and all the stuff of entity for the invitation and the organization of this online meeting and also would like to say that it has been very exciting to see the growth of this new film lab. And I'm happy to share with you today or some of the recent works that have been done either by me or by character of Hong Kong Noise Group indicating the title of my talk is a neuro more fic in silica simulator for the commenters in machine. And here is the outline I would like to make the case that the simulation in digital Tektronix of the CME can be useful for the better understanding or improving its function principles by new job introducing some ideas from neural networks. This is what I will discuss in the first part and then I will show some proof of concept of the game in performance that can be obtained using dissimulation in the second part and the production of the performance that can be achieved using a very large chaos simulator in the third part and finally talk about future plans. So first, let me start by comparing recently proposed izing machines using this table there is adapted from a recent natural tronics paper from the Village Back hard People. And this comparison shows that there's always a trade off between energy efficiency, speed and scalability that depends on the physical implementation. So in red, here are the limitation of each of the servers hardware on, Interestingly, the F p G, a based systems such as a producer, digital, another uh Toshiba purification machine, or a recently proposed restricted Bozeman machine, FPD eight, by a group in Berkeley. They offer a good compromise between speed and scalability. And this is why, despite the unique advantage that some of these older hardware have trust as the currency proposition influx you beat or the energy efficiency off memory sisters uh P. J. O are still an attractive platform for building large theorizing machines in the near future. The reason for the good performance of Refugee A is not so much that they operate at the high frequency. No, there are particle in use, efficient, but rather that the physical wiring off its elements can be reconfigured in a way that limits the funding human bottleneck, larger, funny and phenols and the long propagation video information within the system in this respect, the f. D. A s. They are interesting from the perspective, off the physics off complex systems, but then the physics of the actions on the photos. So to put the performance of these various hardware and perspective, we can look at the competition of bringing the brain the brain complete, using billions of neurons using only 20 watts of power and operates. It's a very theoretically slow, if we can see. And so this impressive characteristic, they motivate us to try to investigate. What kind of new inspired principles be useful for designing better izing machines? The idea of this research project in the future collaboration it's to temporary alleviates the limitations that are intrinsic to the realization of an optical cortex in machine shown in the top panel here. By designing a large care simulator in silicone in the bottom here that can be used for suggesting the better organization principles of the CIA and this talk, I will talk about three neuro inspired principles that are the symmetry of connections, neural dynamics. Orphan, chaotic because of symmetry, is interconnectivity. The infrastructure. 
No neck talks are not composed of the reputation of always the same types of non environments of the neurons, but there is a local structure that is repeated. So here's a schematic of the micro column in the cortex. And lastly, the Iraqi co organization of connectivity connectivity is organizing a tree structure in the brain. So here you see a representation of the Iraqi and organization of the monkey cerebral cortex. So how can these principles we used to improve the performance of the icing machines? And it's in sequence stimulation. So, first about the two of principles of the estimate Trian Rico structure. We know that the classical approximation of the Cortes in machine, which is a growing toe the rate based on your networks. So in the case of the icing machines, uh, the okay, Scott approximation can be obtained using the trump active in your position, for example, so the times of both of the system they are, they can be described by the following ordinary differential equations on in which, in case of see, I am the X, I represent the in phase component of one GOP Oh, Theo F represents the monitor optical parts, the district optical parametric amplification and some of the good I JoJo extra represent the coupling, which is done in the case of the measure of feedback cooking cm using oh, more than detection and refugee A then injection off the cooking time and eso this dynamics in both cases of CME in your networks, they can be written as the grand set of a potential function V, and this written here, and this potential functionally includes the rising Maccagnan. So this is why it's natural to use this type of, uh, dynamics to solve the icing problem in which the Omega I J or the Eyes in coping and the H is the extension of the rising and attorney in India and expect so. >>Not that this potential function can only be defined if the Omega I j. R. A. Symmetric. So the well known problem of >>this approach is that this potential function V that we obtain is very non convicts at low temperature, and also one strategy is to gradually deformed this landscape, using so many in process. But there is no theorem. Unfortunately, that granted convergence to the global minimum of there's even 20 and using this approach. And so this is >>why we propose toe introduce a macro structure the system or where one analog spin or one D o. P. O is replaced by a pair off one and knock spin and one error on cutting. Viable. And the addition of this chemical structure introduces a symmetry in the system, which in terms induces chaotic dynamics, a chaotic search rather than a >>learning process for searching for the ground state of the icing. Every 20 >>within this massacre structure the role of the ER variable eyes to control the amplitude off the analog spins to force the amplitude of the expense toe, become equal to certain target amplitude. A Andi. This is known by moderating the strength off the icing complaints or see the the error variable e I multiply the icing complain here in the dynamics off UH, D o p o on Then the dynamics. The whole dynamics described by this coupled equations because the e I do not necessarily take away the same value for the different, I think introduces a >>symmetry in the system, which in turn creates chaotic dynamics, which I'm showing here for solving certain current size off, um, escape problem, Uh, in which the exiled from here in the i r. From here and the value of the icing energy is shown in the bottom plots. 
And you see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the ground state. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics do not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using a modulation of the target amplitude. We have proposed in the past two different modulations of the target amplitude: the first one is a modulation that ensures that the entropy production rate of the system becomes positive, which forbids the creation of any nontrivial attractors. But in this work I will talk about another, heuristic modulation, which is given here, that works as well as the first one but is easier to implement on the FPGA. These coupled equations, which represent the classical simulation of the coherent Ising machine with error correction, can be implemented especially efficiently on an FPGA. Here I show the time that it takes to simulate the system: in red, the time it takes to compute the x_i term, the e_i term, the dot product, and the Ising energy, for a system with 500 analog spins, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics corresponding to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles, which corresponds to about 0.1 microseconds. This is to be compared with what can be achieved in the measurement-feedback CIM, in which, if we want 500 time-multiplexed DOPOs at a 1 GHz repetition rate through the optical fiber cavity, we would require 0.5 microseconds to do this. So the simulation on the FPGA can be at least as fast as a fiber-based CIM with a 1 GHz repetition rate. Then the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about one microsecond. So for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on the FPGA by an adder tree whose height scales logarithmically with the size of the system. But that is only the case if we had an infinite amount of resources on the FPGA; for larger problems of more than 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that I denote mu here, and then the scaling becomes linear in N/mu for the nonlinear part and (N/mu) squared for the dot product. Typically, for a low-end FPGA, the block size mu of this matrix is about 100. So clearly we want to make mu as large as possible in order to keep the log N scaling of the number of clock cycles needed to compute the dot product, rather than the quadratic scaling that occurs when we decompose the matrix into smaller blocks. But the difficulty with larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution for getting higher performance from a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of the adder tree, and this can be done by organizing the logical components within the FPGA hierarchically, as shown in the right panel here, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I am not going into the details of how this is implemented on the FPGA; the point is to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance from a simulator of an Ising machine. So instead of getting into the details of the FPGA implementation, I would like to give a few benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper, here and here. I show results for solving SK problems, that is, fully connected, random plus-or-minus-one spin-glass problems, and we use as a metric the number of matrix-vector products, since that is the bottleneck of the computation, needed to reach the optimal solution of the SK problem with 99 percent success probability, plotted against the problem size. In red here is the proposed FPGA implementation; in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems; and in green, noisy mean-field annealing, whose behavior is similar to that of the coherent Ising machine. You see that the scaling of the number of matrix-vector products necessary to solve this problem has a better exponent than these other approaches, so that is an interesting feature of the system. Next we can look at the real time to solution. For these SK instances, this figure shows the time to solution in seconds needed to find the ground state with high success probability for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent, for example, breakout local search in orange and simulated annealing in purple. You see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can be orders of magnitude faster than the other state-of-the-art approaches. Moreover, the relatively good scaling of the time to solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield network implemented on memristors, shown in blue here, which is very fast for small problem sizes but whose scaling is not good, and the same thing for the restricted Boltzmann machine implemented on FPGA proposed recently by the group in Berkeley, which is again very fast for small problem sizes but whose scaling is worse than the proposed approach, so that we can expect that for problem sizes larger than, let's say, 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide: another confirmation that the scheme scales well is that we can find maximum-cut values on the G-set benchmark that are better than the cut values that had been previously found by any other algorithm, so they are the best known cut values to the best of our knowledge, as shown in this table in the paper. In particular, for instances 14 and 15 of the G-set we can find better cut values than previously known, and we can find these cut values about 100 times faster than the state-of-the-art algorithm on CPU used to do this. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters; the tuning used here is very simple and just depends on the degree of connectivity within each graph. So these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems but at all types of graph Ising problems, such as the max-cut problems that occur in many applications. Given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA, carefully routing the logical components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. Here you see a projection of the time to solution, with 99 percent success probability, for solving SK problems with respect to the problem size, compared to different published Ising machines, in particular the Digital Annealer, shown in green here, the green line without dots. We show two different hypotheses for these projections: either that the time to solution scales as an exponential of N, or that the time to solution scales as an exponential of the square root of N. It seems, according to the data, that the time to solution scales more like an exponential of the square root of N, and from this projection we can expect that we could probably solve SK problems of 2000 spins, finding the real ground state of the problem with 99 percent success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, about the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. To do this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper and proposed by people in the NTT group. The idea of this model is that, instead of the very simple ODEs I have shown previously, it includes paired ODEs that take into account not only the mean of the in-phase component of each DOPO but also its variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. And then we plan to make the simulator open access for the members to run their instances on the system. There will be a first version in September that will be just based on simple command-line access to the simulator, and which will have just the classical approximation of the system, with no noise term, binary weights, and no Zeeman term. Then we will propose a second version that will extend the current Ising machine to a rack of eight FPGAs, in which we will add the more refined models, the truncated Wigner and the Gaussian quantum model I just talked about, and which will support real-valued weights for the Ising problems and support the Zeeman term. We will announce later when this is available; Farah is working hard to get the first version available sometime in September. Thank you all, and we will be happy to answer any questions that you have.
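As a rough illustration of the clock-cycle argument made earlier in the talk (an O(log N) adder tree when resources are unlimited, versus roughly (N/mu) squared block passes when the matrix must be tiled), here is a small back-of-the-envelope sketch. The cost model is an assumption made purely for illustration; it is not the speaker's FPGA resource model.

import math

def mvm_cycles(n, block=None):
    """Rough cycle count for one n x n matrix-vector product.

    block=None : idealized unlimited resources, one adder tree of height log2(n).
    block=mu   : the matrix is processed as ceil(n/mu)^2 tiles, each reduced by
                 a tree of height log2(mu).
    """
    if block is None:
        return max(1, math.ceil(math.log2(n)))
    tiles = math.ceil(n / block) ** 2
    return tiles * max(1, math.ceil(math.log2(block)))

for n in (100, 500, 1000, 2000):
    print(n, mvm_cycles(n), mvm_cycles(n, block=100))

The widening gap between the two columns as n grows is the motivation for the single large, hierarchically placed adder tree discussed in the talk.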

Published Date : Sep 24 2020


Real Time Emotion Detection Using EEG With Real Time Noise Reduction


 

>>Hello. Nice to meet you. My name is [inaudible]; I am a professor at a university in Japan. Today I want to introduce my research, whose title is "Real-time emotion detection using EEG with real-time noise reduction." First of all, I want to introduce myself. My major is system identification and signal processing, for noise removal and for biomedical signals, among other things, and a common technique is mathematical modeling using system identification methods. So today's topic is EEG modeling of kansei under heavy noise; we call this technique kansei modeling. Now, what is kansei? Kansei is a Japanese word, because these studies started first in Japan. Kansei is similar to emotion and sensibility, but quite different: emotion and sensibility are innate abilities, whereas kansei is acquired after birth. So we focus on this kansei using brain signals. As for brain signals, there are many ways to measure the brain, for example CT, MRI, MEG, EEG, optical topography, and functional MRI, and by using these devices we have three areas of research: the neural engineering area for applications, including neuromarketing; the neuroscience area for understanding the mechanisms; and the medical area for treatment. So it is very important to choose the device depending on the purpose. So what data can be obtained? In the case of EEG, we can see the activity of neurons at the scalp. In the case of NIRS, we can obtain the level of oxygen in the blood. In the case of MEG, we can see the activity of neurons without scalp contact. In the case of positron emission tomography, we can get the activity of receptors contactlessly, and if we use fMRI, we can measure the amount of blood, also contactlessly. These devices are shown in these figures. So our motivation is to get a kansei equation using a model obtained by system identification with noise removal. The second motivation is to realize a simple and small kansei extraction system using EEG information; when we use fMRI, it is large-scale, expensive, and restrictive for the user, so it is not practical. So we focus on EEG, because EEG is small, inexpensive, unrestrictive, and easy to use. EEG is an electrical potential measured from the scalp, and the detected data is translated to the frequency domain. In the frequency domain, roughly 0.2 to 4 Hz we call the delta wave, 4 to 6 Hz the theta wave, 6 to 14 Hz the alpha wave, and 14 to 26 Hz the beta wave. In the conventional method, if we want to detect deep sleep, we use the delta wave, and in the case of light sleep we use the theta wave. But this is only a sensitivity-based method, so we cannot use it for everything; the actual accuracies are under 20%. So we need to define the equations ourselves, and we call this technique kansei modeling. This is the block diagram of the kansei model: this part is for the noise, and this part is for the mathematical model. We calculate the transfer function like this; this is the discrete-time model, and this is the continuous-time model. Then we rewrite this part into a discrete-time model, so we can describe this part like this. The first part and the second part are calculated by the Pade approximation, so we can get this augmented model.
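As an illustration of the frequency-band decomposition described above, here is a minimal sketch. The band edges follow the approximate values quoted in the talk (delta up to about 4 Hz, theta 4 to 6 Hz, alpha 6 to 14 Hz, beta 14 to 26 Hz), and the use of SciPy's Butterworth filter, the sampling rate, and the synthetic test signal are assumptions rather than details of the speaker's own system.

import numpy as np
from scipy.signal import butter, filtfilt

# Approximate bands quoted in the talk (Hz); exact edges are an assumption.
BANDS = {"delta": (0.2, 4.0), "theta": (4.0, 6.0),
         "alpha": (6.0, 14.0), "beta": (14.0, 26.0)}

def band_powers(eeg, fs=128.0, order=4):
    """Return the mean power of a single-channel EEG trace in each band."""
    powers = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        powers[name] = float(np.mean(filtfilt(b, a, eeg) ** 2))
    return powers

# Example with synthetic data: a 10 Hz (alpha-range) tone plus noise.
fs = 128.0
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print(band_powers(eeg, fs))

The conventional method the speaker criticizes stops at comparing such band powers; the kansei model instead feeds the measurements into an identified dynamical model, as the talk goes on to describe.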
Then we rewrite this part by using the transfer-function transformation, so we write the augmented model like this. The alpha and beta of the inverse model are defined by this equation, each coefficient is calculated by this equation, and then we estimate the alpha and beta parameters using a recursive least-squares algorithm. We call this identification method the self-tuning identification method. This is an example of stress modeling. First of all, we decide how to gather the data, for example a stress task: we have subjects move small beads with tweezers for one hour, the last 10 minutes are used as the stress period, and we measure the cortisol from saliva. We acquire the EEG and we measured 8,000 data samples from 17 subjects. Now, in the case of simple EEG devices, there are many simple devices in the world, like these, and for each of them we calculate the signal-to-noise ratio, the signal meaning a medical-grade EEG system, so each device gets an SNR. We investigated 58 kinds of devices, and almost all of them are noisy devices. I am also often asked which device is best, and my answer is: anything. Our skill is the signal processing; as long as raw data can be obtained from the device, no matter which device you use, you get the same result. Our novelty is at the signal-processing level, and our system is built from 17 subjects' data for each situation. So my answer is: anything. We applied this system to a real product, which we call the Kansei Analyzer. In the Kansei Analyzer you can see the kansei in real time: interest, stress, sleepiness, concentration, and liking. We then combined this Kansei Analyzer with a camera system and made a neuro-camera system, so let me show it. This is the EEG system, and we can get the kansei by using the iPhone; we combine it with the camera system through the iPhone camera, and if the kansei is higher than 60%, the moment is automatically recorded, like this. So whenever we wear the EEG device, we can see the kansei continuously, without being aware of it, and finally we combine each kansei, like in this movie, so we can see one day's kansei as a movie. This is an example. The next example is neuromarketing using the Kansei Analyzer. This is an advertisement, but we do not know which is the best selling point, so we analyze the commercial using the Kansei Analyzer. We can get the real-time kansei, and we can see each situation one by one like this: this is the interest level, and we can see the moments of high interest like this, and those moments are recorded automatically. The next one is a real application: product design. >>A Japanese professor has come up with a new technology she claims can read minds. She says the brainwave analysis system will help businesses better understand their customers' needs. Workers at a major restaurant chain are testing a menu item that is being developed. This device measures brain waves from the frontal lobes of people who try the product. An application analyzes five feelings: how much they like something, and their interest, concentration, stress and sleepiness. The new menu item is a cheese souffle topped with kiwi, orange and other fruit. The app checks the reaction of a person who sees the souffle for the first time. Please open your eyes.
When she sees the souffle, the like and interest feelings surge on the graph. This proves the dessert is visually appealing. Now please try it. After the first bite, the like level goes up to 60; that shows she likes how the dessert tastes. After another bite, the like level reaches 80; she really enjoys the taste of the souffle. It scores high in terms of both looks and taste, but there's an unexpected problem: when she tries to scoop up the fruit, the stress level soars to 90. I didn't know where to put the spoon; I felt it was a little difficult to eat. It turned out it was difficult to scoop up the fruit with a small spoon, so people at the restaurant chain are thinking of serving this souffle with a fork instead. We could not have noticed the difference without the device; we can measure emotional changes in minute detail in real time. This is a printing and design firm in Tokyo. It designs direct mail and credit card application forms. The company is using the brainwave analyzing system to improve the layout of its products; the idea is to make them easier to read. During this test, the subject wears an eye-tracking device to record where she's looking, in addition to the brainwave analyzing device. Her eye movements are shown by the red dots on the screen, and stress levels are indicated on the graph on the left. Please fill out the form. This is a credit card application form. Right after she turns her eyes to this section, her stress level shoots up. It was difficult to read, as each line contained 60 characters, so they decided to divide the section in two, cutting the length of the lines in half. This system is very useful for us; we can offer differentiated service to our clients by providing science-based solutions using the brainwave analyzer. >>Okay, so now we can construct kansei detection like this: concentration, interest, sleepiness, stress, liking, and comfortable versus uncomfortable, and at present we are also working on other emotions, such as liking, comfort, satisfaction, and achievement. So finally, to conclude this presentation: we introduced our research, we constructed the kansei equation, we demonstrated the real-time signal processing, and we applied the proposed method to a real product, which we named the Kansei Analyzer. This is the first in the world. That's all. Thank you so much.
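As a sketch of the least-squares identification step mentioned in the talk, here is a minimal recursive least-squares (RLS) estimator for a linear-in-parameters model. Regressing a kansei score on EEG band powers, the forgetting factor, and the parameter dimension are illustrative assumptions; this is not the speaker's published self-tuning algorithm.

import numpy as np

class RecursiveLeastSquares:
    """RLS estimation of y[k] = phi[k]^T theta + noise, with forgetting factor lam."""

    def __init__(self, n_params, lam=0.99):
        self.theta = np.zeros(n_params)       # parameter estimate
        self.P = np.eye(n_params) * 1e3       # estimate covariance (large = vague prior)
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Example: recover weights mapping four band powers to a target score.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.2, 1.0, 0.3])
rls = RecursiveLeastSquares(n_params=4)
for _ in range(500):
    phi = rng.random(4)                       # e.g. delta/theta/alpha/beta band powers
    y = phi @ true_w + 0.01 * rng.standard_normal()
    est = rls.update(phi, y)
print("estimated weights:", np.round(est, 3))

The appeal of a recursive estimator in this setting is that the model can keep adapting online as new EEG samples arrive, which matches the talk's emphasis on real-time operation under heavy noise.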

Published Date : Sep 21 2020


Jennifer Chronis, AWS | AWS Public Sector Online


 

>>From around the globe, it's theCUBE, with digital coverage of AWS Public Sector Online, brought to you by Amazon Web Services. Everyone, welcome back to theCUBE's virtual coverage of the AWS Public Sector Online Summit, which is also virtual. I'm John Furrier, host of theCUBE, with a great interview, here remotely, with Jennifer Chronis, who is the general manager for the DoD account at Amazon Web Services. Jennifer, welcome to theCUBE, and great to have you over the phone. I know we couldn't get the remote video because of your location, but glad to have you via your voice. Thanks for joining us. >>Well, thank you very much, John. Thanks for the opportunity. >>The Department of Defense has been a big part of the conversation over the past couple of years, one of many examples of the agencies modernizing. And here at the Public Sector Summit online, one of your customers, the Navy with their ERP, is featured. This really kind of encapsulates this modernization of the public sector. So tell us about what they're doing and their journey. >>Sure, absolutely. So Navy ERP, which is Navy Enterprise Resource Planning, is the Department of the Navy's financial system of record. It's built on SAP, and it provides financial, acquisition, and management information to Navy commands and Navy leadership, essentially to keep the Navy running and to increase the effectiveness and efficiency of Navy support to the warfighter. It handles about $70 billion in financial transactions each year and has over 72,000 users across six Navy commands, and they expect the number of users to double over the next five years. So essentially, this program was in a situation where their on-premises infrastructure was end of life and they were facing an expensive tech upgrade in 2019. They had infrastructure that was hard to scale and prone to system outages, data analytics were too slow to enable decision making, and users actually referred to it as a fragile system. And so the Navy made the decision last year to migrate the ERP system to the AWS Cloud, along with SAP and SAP NS2, SAP National Security Services. So it's a great use case of a government organization modernizing in the cloud, and we're really happy to have them speaking at the summit this year. >>Now, was this a new move for the Navy, moving to the cloud? A lot of people are end of life in their data centers; we're certainly seeing that in the public sector, from education on, as organizations modernize. So is this a new move for them? And what kind of information does this affect? I mean, SAP, is it just financial data, or operational data as well? What's the move about, was it new, and what kind of data is impacted? >>Sure. Yeah, well, the Navy actually issued a Cloud First policy in November of 2017, so they've been at it for a while, moving lots of different systems of different sizes and shapes to the cloud. But this migration really marked the first significant enterprise business system for the Navy to move, and actually the largest business system to migrate to the cloud across the DoD to date. Essentially, what Navy ERP does is modernize and standardize Navy business operations, everything from timekeeping to ordering missile and radar components for Navy weapons systems. So it's really a comprehensive system, and as I said, the migration to AWS GovCloud marks the Navy's largest cloud migration to date.
And so this essentially puts the movement and documentation of some $70 billion worth of parts and goods into one accessible space, so the information can be shared, analyzed, and protected more uniformly. And what's really exciting about this, and you'll hear it from the Navy at the summit, is that they were actually able to complete this migration in just under 10 months, which was nearly half the time it was originally expected to take given its size and complexity. So it's a really, really great story. >>That's huge. I mean, these things used to take years; well, that was the minicomputer era. I'm old enough to remember, like, oh, it's going to be a two-year process. Ten months is pretty spectacular. I've got to ask, what are some of the benefits that they're seeing in the cloud? Has it changed roles and responsibilities? What's some of the impact that they're seeing, or expecting to see quickly? >>Yeah, I'd say there's been a really big impact for the Navy across probably four different areas: decision making, better customer experience, improved security, and then disaster recovery. Let me dive into each of those a little bit. Moving the system to the cloud has really allowed the Navy to make more timely and informed decisions, as well as to conduct advanced analytics that they weren't able to do as efficiently in the past. As an example, pulling financial reports and running advanced analytics on the old system used to take them around 20 hours, and now Navy ERP is able to pull these reports in less than four hours, obviously allowing them to run the reports more frequently and more efficiently. This has led to an overall better customer experience and enhanced decision making, and they've also been able to deploy their first self-service business intelligence capabilities, so they've put the capability of using these advanced analytics in the hands of the actual users. They've also experienced improved security. We talk a lot about the security benefits of migrating to the cloud, but it's given them the opportunity to increase their data protection, because now there's only one base of data to protect instead of multiple copies across a whole host of traditional computing hardware. And then finally, they've implemented true disaster recovery by adopting a dual strategy, putting data in both AWS GovCloud East and GovCloud West. They were the first in the Navy to do that, to provide them with true disaster recovery. >>So full GovCloud, plus the edge piece. That brings up a question. I love all this tactical edge, military kind of DoD thinking; the agility makes total sense, and I've been following that for a couple of years now. Is this the business side of it, the business operations, or is there a tactical edge military component here, or both? Or is that next ahead for the Navy? >>Yeah, you know, I think there will ultimately be both. The Navy's big challenge right now is audit readiness. So what they're focusing on next is migrating all of these financial systems into one general ledger for audit readiness, which has never been done before. Audit readiness across the DoD has really been problematic.
So the next thing that they're focusing on in their journey is not only consolidating to one financial ledger, but also bringing on new users from working capital fund commands across the Navy onto this one platform that is secure and stable, rather than the fragile system that was previously in place. So we expect over time, once all of the systems migrate, that Navy ERP is going to double in size and have more users, and the infrastructure will already be in place. We are also seeing use of the tactical edge capabilities in other parts of the Navy; there are really exciting programs where the Navy is making use of our Snowball and Snowball Edge capabilities, and Navy ERP made use of these as part of their migration. >>I saw the Snowcone is out, so there's a snow theme there; that was news Jassy tweeted. You know, it's interesting to see the progression, and you mentioned the audit readiness. The pattern of cloud is implementing the business model: infrastructure as a service, platform as a service, and SaaS. And on the business side, you've got to get that foundational infrastructure, audit readiness, and monitoring, then the platform, and then ultimately the applications. So it's a really good indicator that this is happening much faster. So congratulations. But I want to bring that back to the DoD generally, because this is the big surge: infrastructure, platform, SaaS. One of the other sessions at the Public Sector Summit here on the DoD is about the cybersecurity maturity model, which gets into this notion of baselining a foundation and building on top. What is this all about, the CMMC? What does it mean? >>Yeah, well, I'll tell you, I think most people know that our U.S. defense industrial base, what we call the DIB, has experienced and continues to experience an increasing number of cyber attacks. Every year, the loss of sensitive information and intellectual property across the United States costs billions. And really, it's our national security: there are many examples where weapons systems and sensitive information have been compromised, the F-35 Joint Strike Fighter, the C-17, the MQ-9 Reaper. All of these programs have, unfortunately, experienced some loss of sensitive information. So to address this, the DoD has put in place the CMMC, which is the Cybersecurity Maturity Model Certification framework. It's a mouthful, but it's really designed to ensure that the defense industrial base, and all of the contractors that are part of the defense supply chain network, are protecting federal contract information and controlled unclassified information, and that they have the appropriate levels of cybersecurity in place to protect against advanced persistent threats. So in CMMC there are essentially five levels, with various processes and practices at each level. And this is important not only to us as a company but also to all of our partners and customers, because with new programs, defense industrial base and supply chain companies will be required to achieve a certain CMMC certification level based on the sensitivity of the program's data. So it's a really important initiative for the DoD, and it's really a great way for us to help. >>Jennifer, thanks so much for taking the time to come on the phone, I really appreciate it. I know there's so much going on with the DoD and Space Force. Final question, real quick: take a minute to share what trends within the DoD you're watching around this modernization.
>>Yeah, well, it has been a really exciting time to be serving our customers in the DoD, and I would say there are a couple of things that we're really excited about. One is the move to the tactical edge that you've talked about, using cloud out at the tactical edge. We're really excited about capabilities like the AWS Snowball Edge, which helped Navy ERP get to the cloud more quickly, but also, as you mentioned, the AWS Snowcone, which is an even smaller, military-grade edge computing and data transfer device that weighs just under five pounds and fits inside a standard mailbox or even a small backpack. It's a really cool capability for our DoD warfighters. Another thing we're watching closely is the DoD's adoption of artificial intelligence and machine learning. The DoD has really shown that it's pursuing deeper integration of AI and ML into mission-critical and business systems, with organizations like the Joint Artificial Intelligence Center, the JAIC, and the Army AI Task Force helping accelerate the use of cloud-based AI to really improve warfighting abilities. And then finally, what I'd say we're really excited about is the fact that the DoD is starting to build new mission-critical systems in the cloud, born in the cloud, so to speak. Systems and capabilities like ABMS in the Air Force, the Advanced Battle Management System, are being constructed and created as born-in-the-cloud systems. So we're really excited about those things, and we think that continued adoption at scale of cloud computing across the DoD is going to ensure that our military and our nation maintain our technological advantage and really deliver on mission-critical systems. >>Jennifer, thanks so much for sharing that insight. General manager at Amazon Web Services handling the Department of Defense: super important transformation efforts going on across government modernization, and certainly the DoD is leading the effort. Thank you for your time. This is theCUBE's coverage here. I'm John Furrier, your host, for the AWS Public Sector Summit online. It's theCUBE Virtual; we're doing the remote interviews, getting all the content, and sharing it with you. Thank you for watching.

Published Date : Jun 30 2020


Rob Esker & Matt Baldwin, NetApp | KubeCon + CloudNativeCon NA 2019


 

>>Live from San Diego, California, it's theCUBE, covering KubeCon and CloudNativeCon, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >>Welcome back. This is theCUBE's fourth year of coverage at KubeCon + CloudNativeCon. We're here in San Diego, it's 2019. I'm Stu Miniman, my co-host for this afternoon is Justin Warren, and we're happy to welcome two guests from the newly minted platinum member of the CNCF, NetApp. Sitting to my right is Matt Baldwin, who is the director of Cloud Native and Kubernetes Engineering, and sitting to his right is Rob Esker, who leads product strategy for Kubernetes and is also a board member of the CNCF. Thank you both for joining us. All right, so maybe start with you. I've got plenty of history with NetApp, and what I've been hearing from NetApp the last few years is that the core has always been software, and it is a multi-cloud world. I've been hearing this message for a while; where is the cloud-native and Kubernetes piece going? Of course there have been some acquisitions, and NetApp is continuing to go through its transformation, if you will. So help us understand NetApp's positioning in this ecosystem. >>In Kubernetes? Yes. Okay, so what we're doing is we're building a product to launch and manage cloud-native workloads on top of Kubernetes. We've solved the infrastructure problem, and that's kind of the old problem; we're bored to death talking about that problem. What we're trying to do is provide a single pane of glass to manage on-premises workloads and off-premises workloads. So that's what we're trying to do. We're saying it's now more about the app taxonomy in Kubernetes, and then what type of tooling you build to manage that application in Kubernetes. That's what we're building right now; that's where we're headed with hybrid. >>There's a piece of it, though, that does draw from the historical strength of NetApp, of course. So we're building, and essentially already have in market, a capability that allows you to deploy Kubernetes in an agnostic way, using pure, open, unmodified Kubernetes on all of the major public clouds, but also on premises. And over time, and some of this is already evident, you'll see it married to the storage and data management capabilities that we draw from the historical NetApp and that we're starting to deploy into those public clouds, >>with the idea that you should be able to take a project, a project being a namespace, a namespace having a certain application in it, so you have multiple deployments. I should be able to protect that namespace or that project, be able to move it, and the data goes with it. So it's very data-aware; that's what we're trying to do with our software, make it very data-aware and have that aligned with apps inside of Kubernetes. >>So maybe step back for a second. One of the things we've heard a few times at this show before, and they talked about it in the keynote this morning, is that it is project over company when it comes to the CNCF. So it's about the ecosystem; the CNCF tries not to be opinionated, so it's okay for multiple projects to fit in the same space. Now, you're moving up to the platinum sponsor level as a participant here, and NetApp's got lots of history in participating in and driving standards, helping move where the industry's going. Where does NetApp
see its position in participating in the foundation and participating in this ecosystem? >>Yeah, great question, actually; it's one of my favorite topics. The way we look at it is that projects, to the extent they become ubiquitous, often define a standard, a de facto standard, not necessarily ratified by some standards body. And so we're very interested in making sure that, in a scenario where you would employ the standard, from a technology integration perspective our capabilities can operate as an implementation behind the standard. So you get the distinguishing qualities of our capabilities, our products and our services, vis-a-vis, or in the context of, the standard. We're not trying to take you down a walled-garden path on a proprietary journey, if you will. We would rather actually compel you to work with us on the basis of the value, not by operating off a proprietary set of interfaces. Kubernetes, broadly, we perceive as a de facto standard at this point. There's still some work to be done on rounding out the edges, a lot of it underway this week, and it's definitely the case that there's a new appeal to making this more approachable by, pardon the expression, mere mortals. We think we can offer some help in that respect as well. >>Yeah, for us it's usability, right? I mean, that's the reason I started StackPointCloud: there was a usability problem with Kubernetes, I had a usability problem. That's how I'm looking at the landscape. I look at all the projects inside the CNCF, and I look at my role, our role, as: how do we tie these together, and how do we make these very, very usable to the users? How we're engaging with the community is to try to align, basically, pure upstream projects and create a usability layer on top of that. But we don't ever want to say we're going to fork these projects; we're going to contribute back into them. >>That's one concern that I have heard from customers; I was speaking with some of them yesterday. One of the concerns was that when you add that manageability onto the base Kubernetes layer, vendors often become rather opinionated about the way they think is a good way to do that, when you're trying to maintain compatibility across the ecosystem. Some customers are saying, well, I actually don't want to be too closely welded to any one vendor; part of the benefit of Kubernetes is that I can move my workloads around. So how do you navigate that? What is the right level of opinion to have, and which part should actually just be part of a common base? >>It should be along the lines of best practices; that's how we do it. Let's take network policy, for example: applying a sane default network policy to every namespace, defining a sane default pod security policy, building a cluster in a best-practices fashion with security turned on and hardening done the way you would have done it already as a user. So we're not locking you in in any way there, and I'm not trying to carry any type of opinion in the product. What we're trying to do is homogenize your experience across all of this ecosystem, so that you don't have to think about, I'm now building a cluster on top of Amazon, so I've got to worry about how I manage this on Amazon. I don't want you to think about those providers anymore, right?
And then on top of that infrastructure, I want to have a way that you're thinking about managing the applications in those environments in the exact same way, so I'm scaling and protecting an application on premises in the identical way I'm doing it in the cloud. >>So if it's the same everywhere, what's the value that you're providing that means I should choose your option over something else? >>So we do have, and this is where it matters, controllers that live inside of the clusters and manage this stuff for the users. You could rebuild what we're doing, but you would have to roll it all by hand. And we don't stand in the way of your operations either: if we go down, you don't go down. We do have controllers, we're using CRDs, and our management technology, our controllers, are just watching for workloads to come into the environment, and then we show that in the interface. But you could just walk away as well if you wanted to. >>There's also a constellation of other services that we're building around this experience; they do draw, again, from some of the storage and data management capabilities. So stateful sets, your traditional workloads that want to interact with or transact data against a block or a shared file system: we're providing capabilities for sophisticated qualities of persistence that can exist in all of those same public clouds. Moreover, over time we're going to be on premises as well, and we're going to be able to actually move, migrate, place, and cache per policy, putting your persistent data with your workload as you move, migrate, scale, burst, or repatriate, whatever the model is, as you move across and between clouds. >>Okay. How far down that pathway do you think we are? Because one criticism of Kubernetes is that a lot of the tooling that we're used to from more traditional ways of operating this kind of infrastructure isn't really there yet, hence the question about how we actually need to make this easy to use. How far down that pathway are we? >>Well, I would argue that the tooling I've built has already solved some of those problems, so I think we're pretty far down the path. Now, what we haven't done is open-source all my tools, right, to make it easier on everybody else. >>NetApp's got strong partnerships across the cloud platforms. I had a chance to interview George at the Google Cloud event; new partner of the year, I believe, for some of that stuff. Help us understand how the team interacts with the public clouds. You look at Anthos and Azure Arc, and of course Amazon has many different ways you can do your container management piece there. Talk a little bit about those relationships, and how it works both with those partners and then across those partners. >>Yeah, wow, how much time do we have? There are certainly a lot of facets to that, but drawing from the Google experience: we just announced the general availability of Cloud Volumes ONTAP, so the ability to stand up and manage your own ONTAP instance in Google's cloud. Likewise, we've announced the general availability of the Cloud Volumes Service, which gives you a managed, as-a-service experience of a shared file system on demand in Google; I believe that was either today or yesterday in London, and I'll blame the time zones for not knowing what day it was. But the point is that's now generally available.
Some of those capabilities are going to be able to be connected to our ability, from NKS, to deploy an on-demand Kubernetes cluster and deploy applications from a marketplace experience in a common way, not just with Google, but with Azure and with Amazon. So, frankly, the story doesn't differ all that much from one cloud to the next; the endeavor is to provide common capabilities across all of them. It's also the case that we have people who are very opinionated, who want to live only in Google, or Microsoft, or Amazon, and we're trying to deliver a rich experience for those folks as well, even if they don't value the agnostic multi-cloud experience. >>Yeah, and Matt, I'm sure you have a viewpoint on this, but it's that skill set that's really challenging. I was at the Microsoft show, and you've got people there; it's not just about .NET, they're embracing and opening up all of these environments. But people tend to stay with the environment they're used to, and for multi-cloud to be a reality it needs to be a little bit easier for me to go between them. We're making progress, but there's work to do. So I know you're building tools and everything, but what more do we need to do? What are some of the areas you're hopeful about a year from now? >>For me, it's coming down to the data side. I need to be able to say that when I turn on data services inside of Kubernetes, that workload can go anywhere, right? Because as a developer, say I'm running in production on Amazon, but maybe I'm doing tests locally in my bare-metal environment. I want to be able to sync down some of the data I'm working with in production to my test environment. That stuff's missing; no one is doing that right now, and that's where we're headed. That's the path; that's where we're headed. >>Yeah, I'm glad you brought that up, actually, because one of the things I feel like I heard a little bit last year, and is validated more this year, is that we're talking a little bit more to the application developer, because Kubernetes is a piece of the infrastructure, it's the kernel there. So how do we make sure we're bridging between what the app developer needs while still making sure that the infrastructure is taken care of? Because storage and networking are still hard. >>It is, yeah. I mean, I'm thinking more along the lines of app developers personally than infrastructure at this point, because I can give you a cluster in three minutes, right? So I don't really have to worry about that problem. We also put Istio on top of the clusters, so we're trying to create this whole narrative where you can manage that environment on day one and day two. But that's for an IT manager, right? And so inside of our product, the way I'm addressing this is that you have personas: you have an IT manager, who does these things and can set limits, and the developer, who's building the applications or the services and pushing them up into the environment. They need to have a sense of freedom, right?
And so on that side of the house, I'm trying not to break them out of their tooling. Part of our product ties into Git; we have CD, you know? So you just do a git push, a git commit to a branch, and we can target multiple clusters, right? But at no point does the developer actually draft YAML or anything; we basically create the container for you, write the deployment, and bring it online. And I feel like there are these lines: the IT guys need to be able to say, I need to create the guardrails for the devs, but I don't want to make it seem like I'm creating guardrails for the devs, because the devs don't like that. That's how I'm balancing it. >>Okay, because that has always been the tension: there's a lot of talk about DevOps, but when you talk to application developers, they don't want to have anything to do with infrastructure. They just want to program to an API and get things done; they would like the infrastructure to be seamless. >>Yeah, and what we also give them is service dashboards. Because as a developer, now you're in charge of your QA: you're writing your tests, you're pushing, and your CI is going to CD you onto your service in production, right? So we're delivering dashboards as well for the services the developers are running, so they can dig in and say, oh, here's an issue, or here's where the issue is probably going to be, I'm going to go fix this. We're trying to create that type of scenario for the developer and for the IT manager. >>A slightly different angle on it, if I understand the question correctly: part of the complexity of infrastructure is something we're also addressing with a deterministic, sort of easy-button capability; perhaps you're familiar with our HCI product, which we kind of expand as hybrid cloud infrastructure. The intention is to make it a simple private cloud capability, and indeed our NetApp Kubernetes Service operates directly off of it; it's a big part of how we deliver cloud services from it. The point is that if you're that application developer and you want that effective as-a-service on-prem experience, the endeavor with our NetApp HCI product is to give you that sort of easy-button experience, because you didn't really want to be a storage admin or a network admin, and you didn't want to be mired in the details of infra. So that's obviously work in progress, but we think we're definitely headed in the right direction >>for them. >>Yeah, it just seems that a lot of enterprises want to have the cloud-like experience, but they want to be able to bring it home; we're seeing that a lot more. >>So this is like a turnkey cloud on premises, and paired with that we can do things like the same autoscaling. Take the dynamic nature of Kubernetes, right: I have a base cluster size of four worker nodes, but my workload is maybe going to need more nodes, so my autoscaler is going to increase the size of my cluster and decrease it, right? Pretty much everybody can only do that in the public cloud; I can do that in public cloud and on premises now, and that's what we're trying to deliver. And that's cool stuff, I think. >>There are a lot of advantages for enterprises operating in that way, because I have IT people where I can go and hire them and say, we need you to operate this gear, and you've already done it elsewhere.
You can do it in cloud, you can do it on site. I can now run my operations the same way no matter where my applications live, which saves me a lot of money on training costs and development costs, and generally makes for a much smoother and more seamless experience.

>> So, Rob, we would just love your takeaway on NetApp's participation here at the event, and what you want people to take away from the show this year.

>> It's certainly the case that we're doing a lot of great work, and we'd like people to become aware of it. NetApp, of course, is not, and I think we've talked about this in other contexts, strictly a storage and data management company only. We do draw from the strength of that, but we're providing full-stack capabilities in a way that's interconnected with the public clouds. Something like our NetApp Kubernetes Service is really the foundational glue, in many ways, for how we deliver the application runtime, but over time we'll build a constellation of data-centric capabilities around that as well.

>> I would just love to get your viewpoint as someone who, you know, built a company in this ecosystem. There are so many startups here. Give us kind of that founder viewpoint of being in this sort of ecosystem.

>> So that is how I came into the ecosystem at the beginning, and I have to say it does feel different at this point. I'm going to speak as Matt here, not as NetApp. My thinking has always been that it feels a lot like when you're a really big fan of a rock band, right? You go to a local club, and we all get to know each other at that local club; there's maybe 500 of us or 1,000 of us. Then that band gets signed to Warner Brothers and goes to the top, and now there are 20,000 people or 12,000 people. That's how it feels to me right now, I think. But what I like about it is that it just shows the power of the community: it's now at a point where it's drawing in, like, whole cities, not just a small tribe of people, right? And I think that's a very powerful thing about this community. And all the, what do they call them, the Kubernetes summits that they're doing, we didn't have any of those back when we first got going. I mean, it was tough to fill the room, you know. Now we can fill the room, and it's amazing. And what I like seeing is people moving past the problems with Kubernetes itself and moving into, like, what other problems can I solve on top of Kubernetes, you know? So you're starting to see all these really exciting startups doing really neat things, and I really like this vendor hall, because you get to see all the new guys. There's a lot of stuff going on, and I'm excited to see where the community goes in the next five years. We've gone from 0 to 60 insanely fast. You guys were at the original KubeCon, I think?

>> Well, it's our fourth year doing theCUBE at this show, but absolutely, we've watched the early days. You know, I'm not supposed to mention OpenStack at this show, but we remember talking to JJ and some of the early people there, and we interviewed Craig McLuckie back in the Google days, right? So yeah, we've been fortunate to be in here really from day zero, and there's definitely great energy. Congrats so much on the progress; really appreciate the updates. As you said, we've reached a certain state, and you're just adding more value on top of this whole environment.
>> We're, like, in junior high now, right? We were in grade school for a few years.

>> All right, Matt, Rob, thank you so much for the update. Hopefully there's no awkward dance tonight for the junior-high crowd. For Justin Warren, I'm Stu Miniman, and we're back with more coverage here from KubeCon CloudNativeCon 2019 in San Diego. Thank you for watching theCUBE.
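To make the "commit to a branch and target multiple clusters" workflow and the common multi-cloud deployment idea discussed above a little more concrete, here is a minimal sketch using the Kubernetes Python client to apply one Deployment across several clusters by iterating over kubeconfig contexts. The context names, image, and namespace are placeholders, and this shows only the general pattern; it is not how NKS itself is implemented.

    # Illustrative sketch only: push one Deployment spec to several clusters by
    # switching kubeconfig contexts. Context names, image, and namespace are
    # hypothetical placeholders; this shows the general pattern, not NKS internals.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    CONTEXTS = ["gke-prod", "aks-prod", "eks-prod"]   # placeholder kubeconfig contexts

    def build_deployment(name, image, replicas=3):
        """Build the one Deployment object we will apply everywhere."""
        labels = {"app": name}
        container = client.V1Container(name=name, image=image)
        template = client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[container]))
        spec = client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels=labels),
            template=template)
        return client.V1Deployment(
            api_version="apps/v1", kind="Deployment",
            metadata=client.V1ObjectMeta(name=name, labels=labels),
            spec=spec)

    def deploy_everywhere(name, image, namespace="default"):
        """Create (or update) the same Deployment in every target cluster."""
        body = build_deployment(name, image)
        for ctx in CONTEXTS:
            config.load_kube_config(context=ctx)   # load credentials for this cluster
            apps = client.AppsV1Api()
            try:
                apps.create_namespaced_deployment(namespace=namespace, body=body)
            except ApiException as err:
                if err.status == 409:              # already exists: replace in place
                    apps.replace_namespaced_deployment(
                        name=name, namespace=namespace, body=body)
                else:
                    raise
            print(f"applied {name} to context {ctx}")

    if __name__ == "__main__":
        deploy_everywhere("demo-service", "registry.example.com/demo-service:1.0")

In a pipeline along the lines Baldwin describes, a step like this would be triggered by a commit to a branch rather than run by hand, with the container image built and the manifest generated for the developer.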

Published Date : Nov 21 2019

SUMMARY :

Stu Miniman and Justin Warren wrap up theCUBE's coverage of KubeCon CloudNativeCon NA 2019 in San Diego with Matt Baldwin and Rob Esker of NetApp. The conversation covers delivering Kubernetes in a cloud-agnostic way, deploying on-demand clusters and applications consistently across Google, Azure, and Amazon with NetApp Kubernetes Service; cloud data services such as Cloud Volumes Service and the missing ability to sync data between production and test environments; a Git-driven CD workflow with service dashboards for developers and guardrails set by IT managers; bringing public-cloud behaviors like autoscaling to on-premises NetApp HCI; and how the Kubernetes community has grown from a room that was hard to fill at the first KubeCon to an event drawing tens of thousands.

John Hartigan, Intiva Health | Blockchain Unbound 2018


 

>> Announcer: Live from San Juan, Puerto Rico, it's theCUBE covering Blockchain Unbound. Brought to you buy Blockchain Industries. (upbeat music) >> Hello everyone, welcome to our exclusive coverage here in Puerto Rico with theCUBE on the ground for extensive two days of coverage for Blockchain Unbound in Puerto Rico where all the action is. It's a global conference where investors, entrepreneurs, thought leaders are all coming together to check out the future and set the agenda for Blockchain cryptocurrency and the decentralized internet. My next guest is John Hartigan, Executive Vice President in Intiva Health. Welcome to theCUBE. >> Thank you. >> So we were talking yesterday with Hash-Craft, CTO, you guys are part of that ecosystem, you guys are doing some of these things with health. Take a minute to explain what you guys are working on and your value proposition. >> Sure, so, Intiva Health is a career and credential management platform for physicians and all licensed medical professionals, and it streamlines and automates the credential management process that they have to go through every time that they either change positions or take on temporary work. And the Hash-Craft integration is allowing us to do instantaneous credential verification. Currently the state of affairs in the granting of privileges at a particular hospital or a facility can take literally weeks and in some cases months to complete. It's a very analog process, and with our integration with Hash-Craft, it will take seconds. >> So I was watching The New York Times today, an our Wall Street Journal article about verification of work history. This Blockchain is certainly a good example of that, but you're now getting it into more of health, what is the use case, what's the low hanging fruit that you guys are going after with your solution, and how does that evolve and how you see that evolving? >> Well, so, like I mentioned, the current verification process for the granting of privileges in a hospital setting, it is pretty much unchanged since the 1950s. The internet helps a lot but what you're talking about is somebody getting a credential paper file with 25 or 30 documents, and opening the file and picking up the phone and calling, and verifying the reputation and provenance of that particular physician. And it's truly a bureaucratic nightmare. It's red tape to the nth degree. And so that represents thousands and hundreds of thousands of hours and billions and billions of dollars in waste that could be reallocated to better patient care for example. >> The big use case we're seeing education, the workplace, but now healthcare. I see a perfect storm for innovation. Healthcare is not known for moving fast. >> John: Correct. >> HIPAA regulations in the past couple decades really put a damper on data sharing for privacy reasons. At that time it seemed like a good call. Has things like HIPAA, has the cloud computing model opened up new avenues for health because everyone wants great healthcare, but the data is stuck in some silo, database. >> Database, absolutely. >> That's the problem. >> That's absolutely a problem. >> So what's your reaction to that? >> So the approach that we're seeing a lot of organizations take is they are attempting to go after the EHRs and the EMRs, the Electronic Health Records for Patients. Of course that is something that needs to be fixed. However the medical space is truly influenced, the main stakeholders are the physicians. 
They sit on all the committees, they run all the budgets, they make the policy. So it's imperative that we address the physicians and get their buy into any kind of significant change. And what you're seeing now is states, as well as other organizations including the federal medical board, the Federal Association of Medical Boards, as well as the State of Illinois, Wyoming is here, as a matter of fact, representing, and they are all looking at Blockchain solutions for this verification problem for the medical space and remaining HIPAA compliant. >> Let's talk about security because hospitals and healthcare organizations have been really good targets for ransomware. >> John: Absolutely. >> And so we're seeing that mainly because their IT systems have been kind of ancient in some cases, but they're right in the target of, they don't have a lot of IT support. One of the things about Blockchain, it makes these things immutability. So is that something that is on the radar, and how is, I mean, not necessarily ransomware, that's one example of many security issues 'cause you got Internet of Things, you have a slew of cloud-edge technologies-- >> John: Yes. >> That are emerging, that opened up a surface area for a text. So what's your thoughts on that? >> So, as you mentioned, the traditional models have been layered on top of each other overtime. It's a patchwork situation. And because it's a patchwork situation, there is vulnerabilities all over the place, in facilities a lot of times. And besides that, the medical space is probably 10 years behind the times when it comes to technology, maybe five at a minimum. The model that we're using, you mentioned earlier that there are siloed information in these different facilities and hospitals, and that's absolutely true. So all of that information, you have facility A, facility B, facility C, they all have information on one particular provider or physician, but they don't talk to each, and that information is at different levels of accuracy and timeliness, you mentioned time and date stamps. So our model works where the information follows the provider, okay, it's all built around the provider themselves, and then the individual facilities can tap into that information, and also they can influence the information, they can update it. So everybody will then be talking to each other in an anonymous fashion around the one provider updating that information and making it the most accurate in the market, and we get away from the old SaaS model. >> Before we deep dive in here, I'm going to ask you one more thing around as you walked into healthcare providers and then the healthcare industry, you're a different breed, you have Blockchain, you got different solution, the conversation that they're having is, let's put a data leg out there, again, centralized data leg. ISPs are doing that. We know with cybersecurity, any time you have centralized data resources, it's just an easier target to hack. >> John: Correct. >> So it's clear that centralized is not going to be the ideal architecture, and this entire movement is based upon the principles of decentralized data. >> John: Yes. >> So what's it like when you go in there? It must be like, do you have like three heads to them? Or are you like a martian, you're like speaking some foreign language? I mean what is it like, are there people receptive to what you talk about? Talk about some of the experiences you had when you walked in the door and knocked on the front door and walked in and talked to them. 
>> So it is an interesting situation. When I speak with CEOs and when I speak with COOs, they understand that they're vulnerable when it comes to their data, and they understand how expensive it is if, for example, if they have a HIPAA breach, it's $10,000 per occurrence. Now that means if somebody texts patient information to somebody else on a normal phone, that $10,000 every time that happens, okay. And so if it's a major data breach, and a record of files if they have 50,000 files lost, I mean it could be a killing, a business killing event under the right circumstances. So I tried to educate them about-- >> Do they look at Blockchain as a solution there? Or are they scratching their heads, kicking the tires? What's the reaction? >> They're interested, they don't understand exactly how we can apply Blockchain, and we're trying to educate them as to how that is, we are capable of doing so. We're explaining about the vast security improvements by decentralizing the information, and they are receptive, they're just reticent because they're very, tend to be more conservative. So as these organizations like the State of Illinois and the Federal Association of Medical Boards, as they start to adopt the hospitals and facilities, they're starting to look in and oh say, "Hey, this is a real thing, "and there may be a real application here." >> Talk about your business, you market, you go on after obviously healthcare, product specifically in the business model, where are you guys? How big are you? Are you funded? Are you doing an ICO? How are you using token economics? How is it working? Give us a status on the company. >> Sure, so, we've been in business for approximately two years. We're a funded startup out of Austin, Texas. We are born actually out of a practice management company which is an important point because a technology company trying to solve this problem would really struggle because there is a lot of bureaucracy, there's a lot of nuance in how the system operates because it is evolved overtime. So that gives us a very significant advantage. We have an operating platform that has been out for a little over a year now, and we have thousands and thousands of physicians and other licensed medical professionals that use the platform now. >> Are they paying customers or are they just users? >> No, so the model works like this, it's free to the providers, it's also free to the facilities and medical groups, and so we allow that platform, that utility for them to use. How we monetize is we have other curated goods and services for the providers along their career journey. So, for example, continuing medical education. All providers are required to take so many units a year, and we have a very robust online library of CME. And we also have partnerships with medical malpractice organizations. >> So it's a premium model. You get them using the platform. >> Correct, that's right. >> Where does tokens fit in? Where does the cryptocurrency fit in? Do you have a token as a utility, obviously, it's a utility token. I mean explain the model. >> Correct. Yeah so we just announced last Friday. in South by Southwest that we are launching a token, a utility token, and it'll go on sale April 19th. 
And basically how it works is the providers, the physicians will earn tokens by taking actions in the platform that update their data for example, or if they look for a job on our platform, or if they do different tasks in the platform that improve the veracity of their data, and then they will be able to use those tokens to purchase the continuing medical education courses, travel courses, medical malpractice insurance, a number of different resources. >> Token will monitor behavior, engage behavior, and then a two-sided marketplace for clearing house. >> Exactly. >> How does the token go up in value? >> We have multiple partners that are involved, so the partners will be also purchasing advertising time, or it's a sponsorship model, so they'll be able to sponsor within the platform. So the more partners we bring in, the more providers we have, the value-- >> So suppliers, people who want to reach those guys. So >> Exactly. >> You get the coins, you see who's doing what. You get a vibe on who's active and then >> Exactly. That's a signal to potential people who want to buy coins. >> Yeah, and when we announced that we were doing this token, we had multiple partners that we have been in business with for the last two years, saying, "We want in, we want to do this, "we want to get involved." Oh another thing that we're doing with the token, we have an exclusive relationship with the National Osteoporosis Foundation, and we put forth to them that we would like to set them up with a crypto wallet so that they can accept donations, and then we would also match those donations up to a certain point that they receive in crypto. So we want to help our organizations, our not-for-profits by facilitating crypto acceptance. >> So talk about your relationship with Hash-Craft. It's two days old but it's been around for two years, they announced a couple days ago. It got good feedback, a lot of developers are using it. It's not a theorem but that's the compatibility to a theorem. You're betting on that platform. How long have you worked with these guys, and why the bet on Hash-Craft? >> So we were looking at Blockchain Technologies about two years ago because we realized, as you mentioned earlier, the security issues we have. We have to be very aware of the type of data that we're holding. So at the time though, there were significant issues with speed, significant issues with storage, and how it would work by actually putting a credential packet into Blockchain, and the technology frankly just wasn't there, and so we started looking for alternatives. Thankfully we were in Texas, and we happened to run into Hash-Craft, and they explained what they were doing, and we thought this must be too good to be true. It checked off all of our boxes. And we had multiple conversations about how we would actually execute an integration into our current platform with Hash-Craft. So we've been in talks with them for, I think, a little over five or six months, and we will actually, it looks like be one of the very first applications on the market integrating Hash-Craft. >> It's interesting, they don't really have a Blockchain-based solution, it's a DAG, a directed acyclic graphic model. Did that bother you guys? You don't care, it's plumbing. I mean does it matter? >> So actually the way that it is established, it has all of the benefits of Blockchain, and none of the fat and sugar, so to speak. I mean there are a number of things that they do that Blockchain-- >> You mean performance issues and security? 
>> Performance, speed is a big one, but also fairness on the date and timestamps, because with the verification system, you have to prove, you have to be able to prove and show that this date and timestamp is immutable, and that it has been established in a fair manner. And they have been able to solve that problem, where the Blockchain model, there is still some question about, if you have some bad actors in there, they can significantly influence the date and timestamps. And that was very significant for our model. >> Alright, well, congratulations. What's next for the company? What are you guys doing? What's the plan, what's the team like? Well, excited obviously. What's next? >> So we are going to be announcing some very big partnerships that we've established here late spring. I was hoping to do it here now, however we've-- >> Come on, break it out then. >> I would like to but I have to be careful. So we have some big partnerships we're going to be announcing, and of course we have the token sale coming up so there'll be a big-- >> Host: When is that sale happening? >> So it starts April 19th, and it'll run for about six weeks. >> What's the hard cap and soft cap? >> Yeah, we prefer not to talk about that, but let's say, soft cap, about 12 million. And we have some interested parties that want to do more, and so we're looking at what our best options are as far as setting the value to the token, and what the partnerships that are going to significantly impact it will be. >> Well, great job, congratulations. One of the big concerns to this market is scams versus legit, and you're starting to see clearly that this is a year, flight to quality, where real businesses are tokenizing for real reasons, to scale, provide value. You guys are a great example of that. Thanks for sharing that information. >> We're really excited, and it's very exciting to bring this to the healthcare space which is, as we said, conservative and somewhat traditional. And we believe that we will be setting the standard moving forward for primary source verification. >> And you can just summarize the main problem that you solve. >> Yeah, it is that analog primary source verification of the credential documents, and when our platform goes live, we will literally be putting hours of time a day, something like eight hours back into the providers' lives, and back to the money of that, associated with that back to their pockets, which we hope translates into better patient care. >> So verification trust and they save time. >> John: Absolutely. >> It's always a good thing when you can reduce the steps to do something, save time, make it easy. That's a business model of success. >> Absolutely and more secure. >> John Hatigan, who's with Intiva, Executive Vice President from Austin, Texas here in Puerto Rico for theCUBE coverage. Day Two of two days of live coverage here in Puerto Rico, I'm John Furrier with theCUBE host. We'll be back with more live coverage after this short break. (upbeat music)
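As a rough, self-contained sketch of the verification model described above, hash a credential document once, record an immutable timestamped entry, and let any facility re-verify in well under a second, the Python below uses an in-memory append-only list to stand in for the distributed ledger, and it also credits the provider for the action, echoing the token mechanics Hartigan mentions. All function names, field names, IDs, and reward amounts here are hypothetical illustrations, not Intiva's or Hash-Craft's actual APIs.

    # Illustrative sketch only. An append-only in-memory list stands in for the
    # distributed ledger; field names, reward amounts, and IDs are made up.
    import hashlib
    import time

    LEDGER = []                      # append-only verification records
    TOKEN_BALANCES = {}              # provider_id -> utility-token balance

    def fingerprint(document: bytes) -> str:
        """SHA-256 digest of a credential document (license, board cert, etc.)."""
        return hashlib.sha256(document).hexdigest()

    def record_credential(provider_id: str, credential_type: str, document: bytes) -> dict:
        """Verify once at the source, then publish the fingerprint for reuse."""
        entry = {
            "provider_id": provider_id,
            "credential_type": credential_type,
            "sha256": fingerprint(document),
            "recorded_at": time.time(),          # a consensus timestamp on a real ledger
        }
        LEDGER.append(entry)                     # prior entries are never modified
        # Credit the provider for an action that improves the quality of their data.
        TOKEN_BALANCES[provider_id] = TOKEN_BALANCES.get(provider_id, 0) + 5
        return entry

    def verify_credential(provider_id: str, credential_type: str, document: bytes) -> bool:
        """A facility checks a submitted document against the recorded fingerprint."""
        digest = fingerprint(document)
        return any(e["provider_id"] == provider_id
                   and e["credential_type"] == credential_type
                   and e["sha256"] == digest
                   for e in LEDGER)

    if __name__ == "__main__":
        license_pdf = b"...bytes of a state license document..."
        record_credential("provider-001", "state_license", license_pdf)
        print("verified:", verify_credential("provider-001", "state_license", license_pdf))
        print("tokens:", TOKEN_BALANCES["provider-001"])

The point of the fingerprint is that facility A, facility B, and facility C can all check the same published record instead of each re-collecting and phoning through the paper file, which is where the weeks of granting privileges go today.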

Published Date : Mar 17 2018

SUMMARY :

John Furrier talks with John Hartigan, Executive Vice President at Intiva Health, on day two of Blockchain Unbound in San Juan, Puerto Rico. Intiva Health is a career and credential management platform for physicians and licensed medical professionals, and its integration with Hash-Craft's distributed ledger aims to cut credential and privilege verification from weeks of analog phone calls down to seconds, with immutable, fairly ordered timestamps and data that follows the provider rather than sitting in siloed facility databases. Hartigan also covers HIPAA breach costs, interest from state boards and the Federal Association of Medical Boards, the platform's free-to-provider model monetized through curated services such as continuing medical education and malpractice coverage, and a utility token going on sale April 19th that providers earn through platform actions and redeem for those services, alongside matched crypto donations to the National Osteoporosis Foundation.

John Lockwood, Algo Logic Systems | Super Computing 2017


 

>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing '17, brought to you by Intel. (electronic music) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at Denver, Colorado at Super Computing 2017. 12,000 people, our first trip to the show. We've been trying to come for awhile, it's pretty amazing. A lot of heavy science in terms of the keynotes. All about space and looking into brain mapping and it's heavy lifting, academics all around. We're excited to have our next guest, who's an expert, all about speed and that's John Lockwood. He's the CEO of Algo-Logic. First off, John, great to see you. >> Yeah, thanks Jeff, glad to be here. >> Absolutely, so for folks that aren't familiar with the company, give them kind of the quick overview of Algo. >> Yes, Algo-Logic puts algorithms into logic. So our main focus is taking things are typically done in software and putting them into FPGAs and by doing that we make them go faster. >> So it's a pretty interesting phenomenon. We've heard a lot from some of the Intel execs about kind of the software overlay that now, kind of I guess, a broader ecosystem of programmers into hardware, but then still leveraging the speed that you get in hardware. So it's a pretty interesting combination to get those latencies down, down, down. >> Right, right, I mean Intel certainly made a shift to go on into heterogeneous compute. And so in this heterogeneous world, we've got software running on Xeons, Xeon Phis. And we've also got the need though, to use new compute in more than just the traditional microprocessor. And so with the acquisition of Altera, is that now Intel customers can use FPGAs in order to get the benefit in speed. And so Algo-Logic, we typically provide applications with software APIs, so it makes it really easy for end customers to deploy FPGAs into their data center, into their hosts, into their network and start using them right away. >> And you said one of your big customer sets is financial services and trading desk. So low latency there is critical as millions and millions and millions if not billions of dollars. >> Right, so Algo-Logic we have a whole product line of high-frequency trading systems. And so our Tick-To-Trade system is unique in the fact that it has a sub-microsecond trading latency and this means going from market data that comes in, for example on CME for options and futures trading, to time that we can place a fix order back out to the market. All of that happens in an FPGA. That happens in under a microsecond. So under a millionth of second and that beats every other software system that's being used. >> Right, which is a game change, right? Wins or losses can be made on those time frames. >> It's become a must have is that if you're trading on Wall Street or trading in Chicago and you're not trading with an FPGA, you're trading at a severe disadvantage. And so we make a product that enables all the trading firms to be playing on a fair, level playing field against the big firms. >> Right, so it's interesting because the adoption of Flash and some of these other kind of speed accelerator technologies that have been happening over the last several years, people are kind of getting accustomed to the fact that speed is better, but often it was kind of put aside in this kind of high-value applications like financial services and not really proliferating to a broader use of applications. 
I wonder if you're seeing that kind of change a little bit, where people are seeing the benefits of real time and speed beyond kind of the classic high-value applications? >> Well, I think the big change that's happened is that it's become machine-to-machine now. And so humans, for example in trading, are not part of the loop anymore and so it's not a matter of am I faster than another person? It's am I faster than the other person's machine? And so this notion of having compute that goes fast has become suddenly dramatically much more important because everything now is going to machine versus machine. And so if you're an ad tech advertiser, is that how quickly you can do an auction to place an ad matters and if you can get a higher value ad placed because you're able to do a couple rounds of an auction, that's worth a lot. And so, again, with Algo-Logic we make things go faster and that time benefit means, that all thing else being the same, you're the first to come to a decision. >> Right, right and then of course the machine-to-machine obviously brings up the hottest topic that everybody loves to talk about is autonomous vehicles and networked autonomous vehicles and just the whole IOT space with the compute moving out to the edge. So this machine-to-machine systems are only growing in importance and really percentage of the total compute consumption by far. >> That's right, yeah. So last year at Super Computing, we demonstrated a drone, bringing in realtime data from a drone. So doing realtime data collection and doing processing with our Key Value Store. So this year, we have a machine learning application, a Markov Decision Process where we show that we can scale-out a machine learning process and teach cars how to drive in a few minutes. >> Teach them how to drive in a few minutes? >> Right. >> So that's their learning. That's not somebody programming the commands. They're actually going through a process of learning? >> Right, well so the Key Value Store is just a part of this. We're just the part of the system that makes the scale-outs that runs well in a data center. And so we're still running the Markov Decision Process in simulations in software. So we have a couple Xeon servers that we brought with us to do the machine learning and a data center would scale-out to be dozens of racks, but even with a few machines though, for simple highway driving, what we can show is we start off with, the system's untrained and that in the Markov Decision Process, we reward the final state of not having accidents. And so at first, the cars drive and they're bouncing into each other. It's like bumper cars, but within a few minutes and after about 15 million simulations, which can be run that quickly, is that the cars start driving better than humans. And so I think that's a really phenomenal step, is the fact that you're able to get to a point where you can train a system how to drive and give them 15 man years of experience in a matter of minutes by the scale-out compute systems. >> Right, 'cause then you can put in new variables, right? You can change that training and modify it over time as conditions change, throw in snow or throw in urban environments and other things. >> Absolutely, right. And so we're not pretending that our machine learning, that application we're showing here is an end-all solution. But as you bring in other factors like pedestrians, deer, other cars running different algorithms or crazy drivers, is that you want to expose the system to those conditions as well. 
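As an aside, here is a toy illustration of the kind of accident-penalizing Markov Decision Process Lockwood describes: a tabular Q-learning loop on a three-lane "highway" where the only meaningful signal is avoiding collisions. The lane count, rewards, and hyperparameters are made up for illustration, and it also adds a small per-step survival reward for stability; it is a sketch of the general idea, not Algo-Logic's system or its key-value-store scale-out.

    # Toy sketch: tabular Q-learning where crashing is penalized and surviving is
    # rewarded. All parameters are made up; this is not Algo-Logic's workload.
    import random
    from collections import defaultdict

    LANES = 3
    ACTIONS = (-1, 0, +1)                 # steer left, keep lane, steer right
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
    Q = defaultdict(float)                # Q[(own_lane, obstacle_lane), action]

    def drive(own, action):
        """Apply a steering action and clamp to the edges of the road."""
        return min(max(own + action, 0), LANES - 1)

    def run_episode(steps=20, epsilon=EPSILON, learn=True):
        """Return True if the car survives the episode without a collision."""
        own = random.randrange(LANES)
        for _ in range(steps):
            obstacle = random.randrange(LANES)          # obstacle ahead this step
            state = (own, obstacle)
            if random.random() < epsilon:               # explore occasionally
                action = random.choice(ACTIONS)
            else:                                       # otherwise act greedily
                action = max(ACTIONS, key=lambda a: Q[state, a])
            own = drive(own, action)
            crashed = (own == obstacle)
            if learn:
                if crashed:
                    target = -1.0                       # the accident penalty
                else:
                    # Small reward for staying safe plus the discounted value of a
                    # sampled next state (the standard Q-learning bootstrap).
                    next_state = (own, random.randrange(LANES))
                    target = 1.0 / steps + GAMMA * max(Q[next_state, a] for a in ACTIONS)
                Q[state, action] += ALPHA * (target - Q[state, action])
            if crashed:
                return False
        return True

    if __name__ == "__main__":
        random.seed(0)
        baseline = sum(run_episode(learn=False, epsilon=1.0) for _ in range(1000))
        for _ in range(5000):                           # training: explore and update Q
            run_episode()
        trained = sum(run_episode(learn=False, epsilon=0.0) for _ in range(1000))
        print(f"random policy:  {baseline}/1000 collision-free episodes")
        print(f"learned policy: {trained}/1000 collision-free episodes")

Going from this nine-state toy to realistic highway driving is exactly where the scale-out simulation capacity he mentions comes in: the loop stays the same while the state space and the number of simulated episodes explode.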
And so one of the questions that came up to us was, "What machine learning application are you running?" So we're showing all 25 cars running one machine learned application and that's incrementally getting better as they learn to drive, but we could also have every car running a different machine learning application and see how different AIs interact with each other. And I think that's what you're going to see on the highway as we have more self-driving cars running different algorithms, we have to make sure they all place nice with each other. >> Right, but it's really a different way of looking at the world, right, using machine learning, machine-to-machine versus single person or a team of people writing a piece of software to instruct something to do something and then you got to go back and change it. This is a much more dynamic realtime environment that we're entering into with IOT. >> Right, I mean the machine-to-human, which was kind of last year and years before, were, "How do you make interactions "between the computers better than humans?" But now it's about machine-to-machine and it's,"How do you make machines interact better "with other machines?" And that's where it gets really competitive. I mean, you can imagine with drones for example, for applications where you have drones against drones, the drones that are faster are going to be the ones that win. >> Right, right, it's funny, we were just here last week at the commercial drone show and it's pretty interesting how they're designing the drones now into a three-part platform. So there's the platform that flies around. There's the payload, which can be different sensors or whatever it's carrying, could be herbicide if it's an agricultural drone. And then they've opened up the STKs, both on the control side as well as the mobile side, in terms of the controls. So it's a very interesting way that all these things now, via software could tie together, but as you say, using machine learning you can train them to work together even better, quicker, faster. >> Right, I mean having a swarm or a cluster of these machines that work with each other, you could really do interesting things. >> Yeah, that's the whole next thing, right? Instead of one-to-one it's many-to-many. >> And then when swarms interact with other swarms, then I think that's really fascinating. >> So alright, is that what we're going to be talking about? So if we connect in 2018, what are we going to be talking about? The year's almost over. What are your top priorities for next year? >> Our top priorities are to see. We think that FPGA is just playing this important part. A GPU for example, became a very big part of the super computing systems here at this conference. But the other side of heterogeneous is the FPGA and the FPGA has seen almost, just very minimal adoption so far. But the FPGA has the capability of providing, especially when it comes to doing network IO transactions, it's speeding up realtime interactions, it has an ability to change the world again for HPC. And so I'm expecting that in a couple years, at this HPC conference, that what we'll be talking about, is the biggest top 500 super computers, is that how big of FPGAs do they have. Not how big of GPUs do they have. >> All right, time will tell. Well, John, thanks for taking a few minutes out of your day and stopping by. >> Okay, thanks Jeff, great to talk to you. >> All right, he's John Lockwood, I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching. >> Bye. 
(electronic music)

Published Date : Nov 14 2017

SUMMARY :

Jeff Frick talks with John Lockwood, CEO of Algo-Logic Systems, at Supercomputing 2017 in Denver, Colorado. Algo-Logic puts algorithms into logic, moving functions that are typically done in software onto FPGAs: its tick-to-trade system turns CME market data into orders in under a microsecond, and with Intel's acquisition of Altera, FPGAs join Xeons and Xeon Phis in heterogeneous computing. Lockwood argues that machine-versus-machine competition, from trading to ad auctions to drones, makes latency decisive; describes a demo that pairs a key-value store with a Markov Decision Process to teach simulated cars to drive in minutes by rewarding accident-free outcomes; and predicts that the biggest top-500 supercomputers will come to be measured by how much FPGA capacity they carry.
