
Search Results for exascale:

The Spaceborne Computer | Exascale Day


 

>> Narrator: From around the globe, it's theCUBE, with digital coverage of Exascale Day. Made possible by Hewlett Packard Enterprise. >> Welcome everyone to theCUBE's celebration of Exascale Day. Dr. Mark Fernandez is here. He's the HPC technology officer for the Americas at Hewlett Packard Enterprise, and he's a developer of the spaceborne computer, which we're going to talk about today. Mark, welcome. It's great to see you. >> Great to be here. Thanks for having me. >> You're very welcome. So let's start with Exascale Day. It's on 10/18, of course, which is 10 to the power of 18. That's a one followed by 18 zeros. I joke all the time that it takes six commas to write out that number. (Mark laughing) But Mark, why don't we start? What's the significance of that number? >> So it's a very large number. And in general, we've been marking the progress of our computational capabilities in thousands. So exascale is a thousand times faster than where we are today. We're in an era today called the petaflop era, which is 10 to the 15th. And prior to that, we were in the teraflop era, which is 10 to the 12th. I can kind of understand 10 to the 12th, and I can kind of discuss that with folks, 'cause that's a trillion of something, and we know a lot of things that are in trillions, like our national debt, for example. (Dave laughing) But a billion billion is an exascale, and it will give us a thousand times more computational capability than we have in general today. >> Yeah, so when you think about going from terascale to petascale to exascale, we're not talking about a single order of magnitude, we're talking about a much more substantial improvement. And that's part of the reason why it sort of takes so long to achieve these milestones. I mean, it kind of started back in the sixties and seventies and then... >> Yeah. >> We've been in the petascale era now for more than a decade, if I'm correct. >> Yeah, correct. We got there in 2007. And each of these increments is an extra comma, that's the way to remember it. So we want to add an extra comma and get to the exascale era. So yeah, like you say, we entered the current petaflop scale in 2007. Before that was the terascale, or teraflop, era, and that was in 1997. So it took us 10 years to get that far, but it's going to take us 13 or 14 years to get to the next one. >> And when we say flops, we're talking about floating point operations, the number of calculations that can be done in a second. I mean, talk about not being able to get your head around it, right? Is that what we're talking about here? >> Correct. Scientists, engineers, weather forecasters and others use real numbers and real math. And that's how you want to rank that performance: based upon those real numbers multiplied times each other. And so that's why they're floating point numbers. >> When I think about supercomputers, I can't help but remember the man I consider the father of supercomputing, Seymour Cray. Cray, of course, is a company that Hewlett Packard Enterprise acquired. And he was kind of an eclectic fellow. I mean, maybe that's unfair, but he was an interesting dude, and very committed to his goal of really building the world's fastest computers. When you look back on the industry, how do you think about its development over the years? >> So one of the events that stands out in my mind is when I was working for the Naval Research Lab outside Stennis Space Center in Mississippi, and we were doing weather modeling, and we got a Cray supercomputer.
And there was a party when we were able to run a two-week prediction in under two weeks. So the scientists and engineers had the math to solve the problem, but the computers of the day would take longer than just sitting and waiting and looking out the window to see what the weather was like. So when we could make a two-week prediction in under two weeks, there was a celebration. And that was in the eighties, early nineties. And now you see that we get weather predictions in eight hours, four hours, and your morning folks will get you down to an hour. >> I mean, if you think about the history of supercomputing, it's really striking to consider the challenges and the efforts, as we were just talking about, I mean, a decade-plus to get to the next level. And you see this coming to fruition now, and we're saying exascale likely in 2021. So what are some of the innovations in science, in medicine or other areas, you mentioned weather, that'll be introduced as exascale computing is ushered in? What should people expect? >> So we kind of alluded to one, and weather affects everybody, everywhere. So we can get better weather predictions, which help everybody every morning before you get ready to go to work or travel, et cetera. And again, storm predictions, hurricane predictions, flood predictions, forest fire predictions, those types of things affect everybody, every day. Those will get improved with exascale. In terms of medicine, we're able to take genetic information and attempt to map that to more drugs quicker than we have in the past. So we'll be able to have drug discovery happening much faster with an exascale system out there. And to some extent that's happening now with COVID and all the work that we're doing now. And we realize that we're struggling with these current computers to find these solutions as fast as everyone wants them. And exascale computers will help us get there much faster in the future in terms of medicine. >> Well, and of course, as you apply machine intelligence and AI and machine learning to the applications running on these supercomputers, that just takes it to another level. I mean, people used to joke that you can't predict the weather, and clearly we've seen that get much, much better. Now it's going to be interesting to see with climate change. That's another wildcard variable, but I'm assuming the scientists are taking that into consideration. I mean, they've actually been pretty accurate about the impacts of climate change, haven't they? >> Yeah, absolutely. And the climate change models will get better with exascale computers too. And hopefully we'll be able to build confidence among the public and the politicians in those results, with these better, more powerful computers. >> Yeah, let's hope so. Now let's talk about the spaceborne computer and your involvement in that project. Your original spaceborne computer went up on a SpaceX reusable rocket. The destination, of course, was the International Space Station. Okay, so what was the genesis of that project and what was the outcome? >> So we were approached by a longtime customer, NASA Ames. And NASA Ames says its mission is to model rocket launches and space missions and the return to earth. And they had the foresight to realize that their supercomputers here on earth could not do that mission when we got to Mars. And so they wanted to plan ahead and they said, "Can you take a small part of our supercomputer today and just prove that it can work in space?
And if it can't, figure out what we need to do to make it work, et cetera." So that's what we did. We took hardware identical to what's present at NASA Ames. We put it on a SpaceX rocket with no special preparations for it in terms of hardware or anything of that sort, no special hardening, because we want to take the latest technology with us just before we head to Mars. I tell people you wouldn't want to get in the rocket headed to Mars with a flip phone. You want to take the latest iPhone, right? And all of the computers on board current spacecraft are about the 2007 era that we were talking about. So we want to take something new with us. We got the spaceborne computer on board. It was installed in the ceiling, because in space there's no gravity and you can put computers in the ceiling. And we immediately made a computer run, and we produced a trillion calculations a second, which got us into the teraflop range. The first teraflop in space was pretty exciting. >> Well, that's awesome. I mean, so this is the ultimate example of edge computing. >> Yes. >> You mentioned you wanted to see if it could work, and it sounds like it did. I mean, there was obviously a long elapsed time to get it up and running, 'cause you have to get it up there. But it sounds like once you did, it was up and running very quickly, so it did work. But what were some of the challenges that you encountered, maybe some of the learnings, in terms of getting it up and running? >> So it's really fascinating. Astronauts are really cool people, but they're not computer scientists, right? So they see a cord, they see a place to plug it in, they plug it in, and of course we're watching live on the video, and they plugged it in the wrong spot. So (laughs) Mr. Astronaut, can we back up and follow the procedure more carefully and get this thing plugged in correctly? They're not computer technicians used to installing a supercomputer. So we were able to get the system packaged for the shake, rattle and roll and G-forces of launch on the SpaceX. We were able to give the astronauts instructions on how to install it and get it going. And we were able to operate it here from earth and get some pretty exciting results. >> So our supercomputers are so easy to install even an astronaut can do it. I don't know. >> That's right. (both laughing) Here on earth we have what we call customer replaceable units. And we had to replace a component. And we looked at our instructions that are tried and true here on earth for the average Joe, a customer, to do that, and realized that without gravity, we're going to have to update this procedure. And so we renamed it an astronaut replaceable unit, and it worked just fine. >> Yeah, you can't really send an SE out to space to fix it, can you? >> No, sir. (Dave laughing) You have to have very careful instructions for these guys, but they're great. It worked out wonderfully. >> That's awesome. Let's talk about spaceborne two. Now that's on schedule to go back to the ISS next year. What are you trying to accomplish this time? >> So in retrospect, spaceborne one was a proof of concept. Can we package it up to fit on SpaceX? Can we get the astronauts to install it? And can we operate it from earth? And if so, how long will it last? And do we get the right answers? 100% mission success on that. Now with spaceborne two, we're going to release it to the community of scientists, engineers and space explorers and say, "Hey, this thing is rock solid, it's proven. Come use it to improve your edge computing."
We'd like to preserve the network downlink bandwidth for all that imagery, all that genetic data, all that other data, and process it on the edge, as the whole world is moving to now. Don't move the data; let's compute at the edge. And that's what we're going to do with spaceborne two. >> And so what's your expectation for how long the project is going to last? What does success look like in your mind? >> So spaceborne one was given a one-year mission just to see if we could do it, but the idea then was planted: it's going to take about three years to get to Mars and back. So if you're successful, let's see if this computer can last three years. And so we're going up February 1st, if we go on schedule, and we'll be up two to three years, and as long as it works, we'll keep computing and computing on the edge. >> That's amazing. I mean, I feel like when I started in the industry, it was almost like there was a renaissance in supercomputing. You certainly had Cray and you had all these other companies; you remember Thinking Machines, and Convex spun out and tried to do a mini supercomputer. And you had really a lot of venture capital, and then things got quiet for a while. I feel like now, with all this big data and AI and all the use cases that you talked about, we're seeing another renaissance in supercomputing. I wonder if you could give us your final thoughts. >> Yeah, absolutely. So we've got the generic, like you said, floating point operations. We've now got specialized image processing processors, and we have specialized graphics processing units, GPUs. So all of the scientists and engineers are looking at these specialized components and bringing them together to solve their missions at the edge faster than ever before. So this heterogeneity of computing is coming together to make humanity a better place. >> And how are you going to celebrate Exascale Day? You got a special cocktail you're going to shake up, or what are you going to do? >> It's five o'clock somewhere on 10/18, and I'm a Parrothead fan. So I'll probably have a margarita. >> There you go, all right. Well Mark, thanks so much for sharing your thoughts on Exascale Day. Congratulations on your next project, spaceborne two. Really appreciate you coming to theCUBE. >> Thank you very much, I've enjoyed it. >> All right, you're really welcome. And thank you for watching everybody. Keep it right there. This is Dave Vellante for theCUBE. We're celebrating Exascale Day. We'll be right back. (upbeat music)
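To make the tera-to-peta-to-exa jumps Dr. Fernandez describes concrete, here is a rough back-of-the-envelope sketch in Python. The 10^21-operation workload is an arbitrary assumption chosen for illustration, not a figure from the interview; the point is simply how much wall-clock time each "extra comma" buys.

```python
# Rough scale comparison for the eras discussed above.
# The 1e21-operation workload is an assumed, illustrative number.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

eras = {
    "teraflop era (1997)": 1e12,   # sustained floating point ops per second
    "petaflop era (2007)": 1e15,
    "exaflop era (~2021)": 1e18,
}

workload_ops = 1e21  # assumed fixed job size: 10**21 floating point operations

def human(seconds: float) -> str:
    """Express a duration in the largest convenient unit."""
    if seconds >= SECONDS_PER_YEAR:
        return f"{seconds / SECONDS_PER_YEAR:,.1f} years"
    if seconds >= 86400:
        return f"{seconds / 86400:,.1f} days"
    if seconds >= 3600:
        return f"{seconds / 3600:,.1f} hours"
    return f"{seconds / 60:,.1f} minutes"

for era, flops in eras.items():
    print(f"{era}: {human(workload_ops / flops)}")

# Output: ~31.7 years at a sustained teraflop, ~11.6 days at a petaflop,
# and ~16.7 minutes at an exaflop. Same job, three commas apart.
```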

Published Date : Oct 16 2020


Exascale – Why So Hard? | Exascale Day


 

from around the globe it's thecube with digital coverage of exascale day made possible by hewlett packard enterprise welcome everyone to the cube celebration of exascale day ben bennett is here he's an hpc strategist and evangelist at hewlett-packard enterprise ben welcome good to see you good to see you too son hey well let's evangelize exascale a little bit you know what's exciting you uh in regards to the coming of exoskilled computing um well there's a couple of things really uh for me historically i've worked in super computing for many years and i have seen the coming of several milestones from you know actually i'm old enough to remember gigaflops uh coming through and teraflops and petaflops exascale is has been harder than many of us anticipated many years ago the sheer amount of technology that has been required to deliver machines of this performance has been has been us utterly staggering but the exascale era brings with it real solutions it gives us opportunities to do things that we've not been able to do before if you look at some of the the most powerful computers around today they've they've really helped with um the pandemic kovid but we're still you know orders of magnitude away from being able to design drugs in situ test them in memory and release them to the public you know we still have lots and lots of lab work to do and exascale machines are going to help with that we are going to be able to to do more um which ultimately will will aid humanity and they used to be called the grand challenges and i still think of them as that i still think of these challenges for scientists that exascale class machines will be able to help but also i'm a realist is that in 10 20 30 years time you know i should be able to look back at this hopefully touch wood look back at it and look at much faster machines and say do you remember the days when we thought exascale was faster yeah well you mentioned the pandemic and you know the present united states was tweeting this morning that he was upset that you know the the fda in the u.s is not allowing the the vaccine to proceed as fast as you'd like it in fact it the fda is loosening some of its uh restrictions and i wonder if you know high performance computing in part is helping with the simulations and maybe predicting because a lot of this is about probabilities um and concerns is is is that work that is going on today or are you saying that that exascale actually you know would be what we need to accelerate that what's the role of hpc that you see today in regards to sort of solving for that vaccine and any other sort of pandemic related drugs so so first a disclaimer i am not a geneticist i am not a biochemist um my son is he tries to explain it to me and it tends to go in one ear and out the other um um i just merely build the machines he uses so we're sort of even on that front um if you read if you had read the press there was a lot of people offering up systems and computational resources for scientists a lot of the work that has been done understanding the mechanisms of covid19 um have been you know uncovered by the use of very very powerful computers would exascale have helped well clearly the faster the computers the more simulations we can do i think if you look back historically no vaccine has come to fruition as fast ever under modern rules okay admittedly the first vaccine was you know edward jenner sat quietly um you know smearing a few people and hoping it worked um i think we're slightly beyond that the fda has rules 
and regulations for a reason and we you don't have to go back far in our history to understand the nature of uh drugs that work for 99 of the population you know and i think exascale widely available exoscale and much faster computers are going to assist with that imagine having a genetic map of very large numbers of people on the earth and being able to test your drug against that breadth of person and you know that 99 of the time it works fine under fda rules you could never sell it you could never do that but if you're confident in your testing if you can demonstrate that you can keep the one percent away for whom that drug doesn't work bingo you now have a drug for the majority of the people and so many drugs that have so many benefits are not released and drugs are expensive because they fail at the last few moments you know the more testing you can do the more testing in memory the better it's going to be for everybody uh personally are we at a point where we still need human trials yes do we still need due diligence yes um we're not there yet exascale is you know it's coming it's not there yet yeah well to your point the faster the computer the more simulations and the higher the the chance that we're actually going to going to going to get it right and maybe compress that time to market but talk about some of the problems that you're working on uh and and the challenges for you know for example with the uk government and maybe maybe others that you can you can share with us help us understand kind of what you're hoping to accomplish so um within the united kingdom there was a report published um for the um for the uk research institute i think it's the uk research institute it might be epsrc however it's the body of people responsible for funding um science and there was a case a science case done for exascale i'm not a scientist um a lot of the work that was in this documentation said that a number of things that can be done today aren't good enough that we need to look further out we need to look at machines that will do much more there's been a program funded called asimov and this is a sort of a commercial problem that the uk government is working with rolls royce and they're trying to research how you build a full engine model and by full engine model i mean one that takes into account both the flow of gases through it and how those flow of gases and temperatures change the physical dynamics of the engine and of course as you change the physical dynamics of the engine you change the flow so you need a closely coupled model as air travel becomes more and more under the microscope we need to make sure that the air travel we do is as efficient as possible and currently there aren't supercomputers that have the performance one of the things i'm going to be doing as part of this sequence of conversations is i'm going to be having an in detailed uh sorry an in-depth but it will be very detailed an in-depth conversation with professor mark parsons from the edinburgh parallel computing center he's the director there and the dean of research at edinburgh university and i'm going to be talking to him about the azimoth program and and mark's experience as the person responsible for looking at exascale within the uk to try and determine what are the sort of science problems that we can solve as we move into the exoscale era and what that means for humanity what are the benefits for humans yeah and that's what i wanted to ask you about the the rolls-royce example that you gave it wasn't i 
if i understood it wasn't so much safety as it was you said efficiency and so that's that's what fuel consumption um it's it's partly fuel consumption it is of course safety there is a um there is a very specific test called an extreme event or the fan blade off what happens is they build an engine and they put it in a cowling and then they run the engine at full speed and then they literally explode uh they fire off a little explosive and they fire a fan belt uh a fan blade off to make sure that it doesn't go through the cowling and the reason they do that is there has been in the past uh a uh a failure of a fan blade and it came through the cowling and came into the aircraft depressurized the aircraft i think somebody was killed as a result of that and the aircraft went down i don't think it was a total loss one death being one too many but as a result you now have to build a jet engine instrument it balance the blades put an explosive in it and then blow the fan blade off now you only really want to do that once it's like car crash testing you want to build a model of the car you want to demonstrate with the dummy that it is safe you don't want to have to build lots of cars and keep going back to the drawing board so you do it in computers memory right we're okay with cars we have computational power to resolve to the level to determine whether or not the accident would hurt a human being still a long way to go to make them more efficient uh new materials how you can get away with lighter structures but we haven't got there with aircraft yet i mean we can build a simulation and we can do that and we can be pretty sure we're right um we still need to build an engine which costs in excess of 10 million dollars and blow the fan blade off it so okay so you're talking about some pretty complex simulations obviously what are some of the the barriers and and the breakthroughs that are kind of required you know to to do some of these things that you're talking about that exascale is going to enable i mean presumably there are obviously technical barriers but maybe you can shed some light on that well some of them are very prosaic so for example power exoscale machines consume a lot of power um so you have to be able to design systems that consume less power and that goes into making sure they're cooled efficiently if you use water can you reuse the water i mean the if you take a laptop and sit it on your lap and you type away for four hours you'll notice it gets quite warm um an exascale computer is going to generate a lot more heat several megawatts actually um and it sounds prosaic but it's actually very important to people you've got to make sure that the systems can be cooled and that we can power them yeah so there's that another issue is the software the software models how do you take a software model and distribute the data over many tens of thousands of nodes how do you do that efficiently if you look at you know gigaflop machines they had hundreds of nodes and each node had effectively a processor a core a thread of application we're looking at many many tens of thousands of nodes cores parallel threads running how do you make that efficient so is the software ready i think the majority of people will tell you that it's the software that's the problem not the hardware of course my friends in hardware would tell you ah software is easy it's the hardware that's the problem i think for the universities and the users the challenge is going to be the software i think um it's going to have 
to evolve you you're just you want to look at your machine and you just want to be able to dump work onto it easily we're not there yet not by a long stretch of the imagination yeah consequently you know we one of the things that we're doing is that we have a lot of centers of excellence is we will provide well i hate say the word provide we we sell super computers and once the machine has gone in we work very closely with the establishments create centers of excellence to get the best out of the machines to improve the software um and if a machine's expensive you want to get the most out of it that you can you don't just want to run a synthetic benchmark and say look i'm the fastest supercomputer on the planet you know your users who want access to it are the people that really decide how useful it is and the work they get out of it yeah the economics is definitely a factor in fact the fastest supercomputer in the planet but you can't if you can't afford to use it what good is it uh you mentioned power uh and then the flip side of that coin is of course cooling you can reduce the power consumption but but how challenging is it to cool these systems um it's an engineering problem yeah we we have you know uh data centers in iceland where it gets um you know it doesn't get too warm we have a big air cooled data center in in the united kingdom where it never gets above 30 degrees centigrade so if you put in water at 40 degrees centigrade and it comes out at 50 degrees centigrade you can cool it by just pumping it round the air you know just putting it outside the building because the building will you know never gets above 30 so it'll easily drop it back to 40 to enable you to put it back into the machine um right other ways to do it um you know is to take the heat and use it commercially there's a there's a lovely story of they take the hot water out of the supercomputer in the nordics um and then they pump it into a brewery to keep the mash tuns warm you know that's that's the sort of engineering i can get behind yeah indeed that's a great application talk a little bit more about your conversation with professor parsons maybe we could double click into that what are some of the things that you're going to you're going to probe there what are you hoping to learn so i think some of the things that that are going to be interesting to uncover is just the breadth of science that can be uh that could take advantage of exascale you know there are there are many things going on that uh that people hear about you know we people are interested in um you know the nobel prize they might have no idea what it means but the nobel prize for physics was awarded um to do with research into black holes you know fascinating and truly insightful physics um could it benefit from exascale i have no idea uh i i really don't um you know one of the most profound pieces of knowledge in in the last few hundred years has been the theory of relativity you know an austrian patent clerk wrote e equals m c squared on the back of an envelope and and voila i i don't believe any form of exascale computing would have helped him get there any faster right that's maybe flippant but i think the point is is that there are areas in terms of weather prediction climate prediction drug discovery um material knowledge engineering uh problems that are going to be unlocked with the use of exascale class systems we are going to be able to provide more tools more insight [Music] and that's the purpose of computing you know it's not that 
it's not the data that that comes out and it's the insight we get from it yeah i often say data is plentiful insights are not um ben you're a bit of an industry historian so i've got to ask you you mentioned you mentioned mentioned gigaflop gigaflops before which i think goes back to the early 1970s uh but the history actually the 80s is it the 80s okay well the history of computing goes back even before that you know yes i thought i thought seymour cray was you know kind of father of super computing but perhaps you have another point of view as to the origination of high performance computing [Music] oh yes this is um this is this is one for all my colleagues globally um you know arguably he says getting ready to be attacked from all sides arguably you know um computing uh the parallel work and the research done during the war by alan turing is the father of high performance computing i think one of the problems we have is that so much of that work was classified so much of that work was kept away from commercial people that commercial computing evolved without that knowledge i uh i have done in in in a previous life i have done some work for the british science museum and i have had the great pleasure in walking through the the british science museum archives um to look at how computing has evolved from things like the the pascaline from blaise pascal you know napier's bones the babbage's machines uh to to look all the way through the analog machines you know what conrad zeus was doing on a desktop um i think i think what's important is it doesn't matter where you are is that it is the problem that drives the technology and it's having the problems that requires the you know the human race to look at solutions and be these kicks started by you know the terrible problem that the us has with its nuclear stockpile stewardship now you've invented them how do you keep them safe originally done through the ascii program that's driven a lot of computational advances ultimately it's our quest for knowledge that drives these machines and i think as long as we are interested as long as we want to find things out there will always be advances in computing to meet that need yeah and you know it was a great conversation uh you're a brilliant guest i i love this this this talk and uh and of course as the saying goes success has many fathers so there's probably a few polish mathematicians that would stake a claim in the uh the original enigma project as well i think i think they drove the algorithm i think the problem is is that the work of tommy flowers is the person who took the algorithms and the work that um that was being done and actually had to build the poor machine he's the guy that actually had to sit there and go how do i turn this into a machine that does that and and so you know people always remember touring very few people remember tommy flowers who actually had to turn the great work um into a working machine yeah super computer team sport well ben it's great to have you on thanks so much for your perspectives best of luck with your conversation with professor parsons we'll be looking forward to that and uh and thanks so much for coming on thecube a complete pleasure thank you and thank you everybody for watching this is dave vellante we're celebrating exascale day you're watching the cube [Music]
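One of the more concrete points in the conversation above is the cooling arithmetic: warm water in at 40 degrees centigrade, out at 50, rejected to outside air that stays below 30. The short Python sketch below turns that into a rough flow-rate estimate. The megawatt figures are assumed loads chosen for illustration ("several megawatts" is all the conversation gives us), and this is a back-of-the-envelope model, not a description of any particular HPE system.

```python
# Back-of-the-envelope warm-water cooling estimate, using the
# 40 degC in / 50 degC out figures mentioned above. The heat loads
# below are assumed values, not the specification of any real machine.

SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K), roughly constant in this range

def required_flow_kg_per_s(heat_load_w: float,
                           t_in_c: float = 40.0,
                           t_out_c: float = 50.0) -> float:
    """Mass flow needed so the water carries away heat_load_w watts:
    P = m_dot * c * (t_out - t_in), solved for m_dot."""
    return heat_load_w / (SPECIFIC_HEAT_WATER * (t_out_c - t_in_c))

for megawatts in (1, 5, 20):  # assumed loads spanning "several megawatts"
    flow = required_flow_kg_per_s(megawatts * 1e6)
    # one kilogram of water is roughly one litre
    print(f"{megawatts:>2} MW -> about {flow:,.0f} litres of water per second")

# A 1 MW load with a 10 K temperature rise needs roughly 24 L/s of water;
# a 20 MW exascale-class load needs nearly 480 L/s, which is why power
# and cooling are treated as first-order design constraints.
```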

Published Date : Oct 16 2020


Harnessing the Power of Sound for Nature – Soundscape Ecological Research | Exascale Day 2020


 

>> From around the globe, it's theCUBE, with digital coverage of Exascale Day. Made possible by Hewlett Packard Enterprise. >> Hey, welcome back everybody Jeff Frick here with theCUBE. We are celebrating Exascale Day. 10, 18, I think it's the second year of celebrating Exascale Day, and we're really excited to have our next guest and talk about kind of what this type of compute scale enables, and really look a little bit further down the road at some big issues, big problems and big opportunities that this is going to open up. And I'm really excited to get in this conversation with our next guest. He is Bryan Pijanowski the Professor of Landscape and Soundscape Ecology at Purdue University. Bryan, great to meet you. >> Great to be here. >> So, in getting ready for this conversation, I just watched your TED Talk, and I just loved one of the quotes. I actually got one of quote from it that's basically saying you are exploring the world through sound. I just would love to get a little deeper perspective on that, because that's such a unique way to think about things and you really dig into it and explain why this is such an important way to enjoy the world, to absorb the world and think about the world. >> Yeah, that's right Jeff. So the way I see it, sound is kind of like a universal variable. It exists all around us. And you can't even find a place on earth where there's no sound, where it's completely silent. Sound is a signal of something that's happening. And we can use that information in ways to allow us to understand the earth. Just thinking about all the different kinds of sounds that exist around us on a daily basis. I hear the birds, I hear the insects, but there's just a lot more than that. It's mammals and some cases, a lot of reptiles. And then when you begin thinking outside the biological system, you begin to hear rain, wind, thunder. And then there's the sounds that we make, sounds of traffic, the sounds of church bells. All of this is information, some of it's symbolic, some of it's telling me something about change. As an ecologist that's what I'm interested in, how is the earth changing? >> That's great and then you guys set up at Purdue, the Purdue Center for Global Soundscapes. Tell us a little bit about the mission and some of the work that you guys do. >> Well, our mission is really to use sound as a lens to study the earth, but to capture it in ways that are meaningful and to bring that back to the public to tell them a story about how the earth kind of exists. There's an incredible awe of nature that we all experience when we go out and listen into to the wild spaces of the earth. I've gone to the Eastern Steppes of Mongolian, I've climbed towers in the Paleotropics of Borneo and listened at night. And ask the question, how are these sounds different? And what is a grassland really supposed to sound like, without humans around? So we use that information and bring it back and analyze it as a means to understand how the earth is changing and really what the biological community is all about, and how things like climate change are altering our spaces, our wild spaces. I'm also interested in the role that people play and producing sound and also using sound. So getting back to Mongolia, we have a new NSF funded project where we're going to be studying herders and the ways in which they use sonic practices. 
They use a lot of sounds as information sources about how the environment is changing, but also how they relate back to place and to heritage a special sounds that resonate, the sounds of a river, for example, are the resonance patterns that they tune their throat to that pay homage to their parents that were born at the side of that river. There's these special connections that people have with place through sound. And so that's another thing that we're trying to do. In really simple terms, I want to go out and, what I call it sounds rather simple, record the earth-- >> Right. >> What does that mean? I want to go to every major biome and conduct a research study there. I want to know what does a grassland sound like? What is a coral reef sound like? A kelp forest and the oceans, a desert, and then capture that as baseline and use that information-- >> Yeah. >> For scientific purposes >> Now, there's so much to unpack there Bryan. First off is just kind of the foundational role that sound plays in our lives that you've outlined in great detail and you talked about it's the first sense that's really activated as we get consciousness, even before we're born right? We hear the sounds of our mother's heartbeat and her voice. And even the last sense that goes at the end a lot of times, in this really intimate relationship, as you just said, that the sounds represent in terms of our history. We don't have to look any further than a favorite song that can instantly transport you, almost like a time machine to a particular place in time. Very, very cool. Now, it's really interesting that what you're doing now is taking advantage of new technology and just kind of a new angle to capture sound in a way that we haven't done before. I think you said you have sound listening devices oftentimes in a single location for a year. You're not only capturing sound, the right sound is changes in air pressure, so that you're getting changes in air pressure, you're getting vibration, which is kind of a whole different level of data. And then to be able to collect that for a whole year and then start to try to figure out a baseline which is pretty simple to understand, but you're talking about this chorus. I love your phrase, a chorus, because that sound is made up of a bunch of individual inputs. And now trying to kind of go under the covers to figure out what is that baseline actually composed of. And you talk about a bunch of really interesting particular animals and species that combine to create this chorus that now you know is a baseline. How did you use to do that before? I think it's funny one of your research papers, you reach out to the great bird followers and bird listeners, 'cause as you said, that's the easiest way or the most prolific way for people to identify birds. So please help us in a crowdsource way try to identify all the pieces that make this beautiful chorus, that is the soundscape for a particular area. >> Right, yeah, that's right. It really does take a team of scientists and engineers and even folks in the social sciences and the humanities to really begin to put all of these pieces together. Experts in many fields are extremely valuable. They've got great ears because that's the tools that they use to go out and identify birds or insects or amphibians. What we don't have are generalists that go out and can tell you what everything sounds like. And I'll tell you that will probably never ever happen. That's just way too much, we have millions of species that exist on this planet. 
And we just don't have a specific catalog of what everything sounds like, it's just not possible or doable. So I need to go out and discover and bring those discoveries back that help us to understand nature and understand how the earth is changing. I can't wait for us to eventually develop that catalog. So we're trying to develop techniques and tools and approaches that allow us to develop this electronic catalog. Like you're saying this chorus, and it doesn't necessarily have to be a species specific chorus, it can be a chorus of all these different kind of sounds that we think relate back to this kind of animal or that kind of animal based upon the animals instrument-- >> Right, great. >> And this is the sound. >> Now again, you know, keep it to the exascale theme, right? You're collecting a lot of data and you mentioned in one of the pieces I've dug up, that your longest study in a single location is 17 years. You've got over 4 million recordings. And I think you said over 230 years if you wanted to listen to them all back to back. I mean, this is a huge, a big data problem in terms of the massive amount of data that you have and need to run through an analysis. >> Yeah, that's right. We're collecting 48,000 data points per second. So that's 48 kilohertz. And then so you multiply everything and then you have a sense of how many data points you actually have to put them all together. When you're listening to a sound file over 10 minutes, you have hundreds of sounds that exist in them. Oftentimes you just don't know what they are, but you can more or less put some kind of measure on all of them and then begin to summarize them over space and time and try to understand it from a perspective of really science. >> Right, right. And then I just love to get your take as you progress down this kind of identification road, we're all very familiar with copyright infringement hits on YouTube or social media or whatever, when it picks up on some sound and the technology is actually really sophisticated to pick up some of those sound signatures. But to your point, it's a lot easier to compare against the known and to search for that known. Then when you've got this kind of undefined chorus that said we do know that there can be great analysis done that we've seen AI and ML applied, especially in the surveillance side on the video-- >> Right. >> With video that it can actually do a lot of computation and a lot of extracting signal from the noise, if you will. As you look down the road on the compute side for the algorithms that you guys are trying to build with the human input of people that know what you're listening to, what kind of opportunities do you see and where are we on that journey where you can get more leverage out of some of these technology tools? >> Well, I think what we're doing right now is developing the methodological needs, kind of describe what it is we need to move into that new space, which is going to require these computational, that computational infrastructure. So, for example, we have a study right now where we're trying to identify certain kinds of mosquitoes (chuckling) a vector-borne mosquitoes, and our estimates is that we need about maybe 900 to 1200 specific recordings per species to be able to put it into something like a convolutional neural network to be able to extract out the information, and look at the patterns and data, to be able to say indeed this is the species that we're interested in. 
So what we're going to need and in the future here is really a lot of information that allow us to kind of train these neural networks and help us identify what's in the sound files. As you can imagine the computational infrastructure needed to do that for data storage and CPU, GPU is going to be truly amazing. >> Right, right. So I want to get your take on another topic. And again the basis of your research is really all bound around the biodiversity crisis right? That's from the kind of-- >> Yeah. >> The thing that's started it and now you're using sound as a way to measure baseline and talk about loss of species, reduced abundancies and rampant expansion of invasive species as part of your report. But I'd love to get your take on cities. And how do you think cities fit the future? Clearly, it's an efficient way to get a lot of people together. There's a huge migration of people-- >> Right. >> To cities, but one of your themes in your Ted Talk is reconnecting with nature-- >> Yeah. >> Because we're in cities, but there's this paradox right? Because you don't want people living in nature can be a little bit disruptive. So is it better to kind of get them all in a tip of a peninsula in San Francisco or-- >> Yeah. >> But then do they lose that connection that's so important. >> Yeah. >> I just love to get your take on cities and the impacts that they're have on your core research. >> Yeah, I mean, it truly is a paradox as you just described it. We're living in a concrete jungle surrounded by not a lot of nature, really, honestly, occasional bird species that tend to be fairly limited, selected for limited environments. So many people just don't get out into the wild. But visiting national parks certainly is one of those kinds of experience that people oftentimes have. But I'll just say that it's getting out there and truly listening and feeling this emotional feeling, psychological feeling that wraps around you, it's a solitude. It's just you and nature and there's just no one around. >> Right. >> And that's when it really truly sinks in, that you're a part of this place, this marvelous place called earth. And so there are very few people that have had that experience. And so as I've gone to some of these places, I say to myself I need to bring this back. I need to tell the story, tell the story of the awe of nature, because it truly is an amazing place. Even if you just close your eyes and listen. >> Right, right. >> And it, the dawn chorus in the morning in every place tells me so much about that place. It tells me about all the animals that exist there. The nighttime tells me so much too. As a scientist that's spent most of his career kind of going out and working during the day, there's so much happening at night. Matter of fact-- >> Right. >> There's more sounds at night than there were during the day. So there is a need for us to experience nature and we don't do that. And we're not aware of these crises that are happening all over the planet. I do go to places and I listen, and I can tell you I'm listening for things that I think should be there. You can listen and you can hear the gaps, the gaps and that in that chorus, and you think what should be there-- >> Right. >> And then why isn't it there? And that's where I really want to be able to dig deep into my sound files and start to explore that more fully. 
>> It's great, it's great, I mean, I just love the whole concept of, and you identified it in the moment you're in the tent, the thunderstorm came by, it's really just kind of changing your lens. It's really twisting your lens, changing your focus, because that sound is there, right? It's been there all along, it's just, do you tune it in or do you tune it out? Do you pay attention? Do not pay attention is an active process or a passive process and like-- >> Right. >> I love that perspective. And I want to shift gears a little bit, 'cause another big environmental thing, and you mentioned it quite frequently is feeding the world's growing population and feeding it-- >> Yeah. >> In an efficient way. And anytime you see kind of factory farming applied to a lot of things you wonder is it sustainable, and then all the issues that come from kind of single output production whether that's pigs or coffee or whatever and the susceptibility to disease and this and that. So I wonder if you could share your thoughts on, based on your research, what needs to change to successfully and without too much destruction feed this ever increasing population? >> Yeah, I mean, that's one of the grand challenges. I mean, society is facing so many at the moment. In the next 20 years or so, 30 years, we're going to add another 2 billion people to the planet, and how do we feed all of them? How do we feed them well and equitably across the globe? I don't know how to do that. But I'll tell you that our crops and the ecosystem that supports the food production needs the animals and the trees and the microbes for the ecosystem to function. We have many of our crops that are pollinated by birds and insects and other animals, seeds need to be dispersed. And so we need the rest of life to exist and thrive for us to thrive too. It's not an either, it's not them or us, it has to be all of us together on this planet working together. We have to find solutions. And again, it's me going out to some of these places and bringing it back and saying, you have to listen, you have to listen to these places-- >> Right. >> They're truly a marvelous. >> So I know most of your listening devices are in remote areas and not necessarily in urban areas, but I'm curious, do you have any in urban areas? And if so, how has that signature changed since COVID? I just got to ask, (Bryan chuckling) because we went to this-- >> Yeah. >> Light switch moment in the middle of March, human activity slowed down-- >> Yeah. >> In a way that no one could have forecast ever on a single event, globally which is just fascinating. And you think of the amount of airplanes that were not flying and trains that we're not moving and people not moving. Did you have any any data or have you been able to collect data or see data as the impact of that? Not only directly in wherever the sensors are, but a kind of a second order impact because of the lack of pollution and the other kind of human activity that just went down. I mean, certainly a lot of memes (Bryan chuckling) on social media of all the animals-- >> Yeah. >> Come back into the city. But I'm just curious if you have any data in the observation? >> Yeah, we're part of actually a global study, there's couple of hundred of us that are contributing our data to what we call the Silent Cities project. It's being coordinated out of Europe right now. 
So we placed our sensors out in different areas, actually around West Lafayette area here in Indiana, near road crossings and that sort of thing to be able to kind of capture that information. We have had in this area here now, the 17 year study. So we do have studies that get into areas that tend to be fairly urban. So we do have a lot of information. I tell you, I don't need my sensors to tell me something that I already know and you suspect is true. Our cities were quiet, much quieter during the COVID situation. And it's continued to kind of get a little bit louder, as we've kind of released some of the policies that put us into our homes. And so yes, there is a major change. Now there have been a couple of studies that just come out that are pretty interesting. One, which was in San Francisco looking at the white-crowned sparrow. And they looked at historical data that went back something like 20 years. And they found that the birds in the cities were singing a much softer, 30% softer. >> Really? >> And they, yeah, and they would lower their frequencies. So the way sound works is that if you lower your frequencies that sound can travel farther. And so the males can now hear themselves twice as far just due to the fact that our cities are quieter. So it does have an impact on animals, truly it does. There was some studies back in 2001, during  the September, the 9/11 crisis as well, where people are going out and kind of looking at data, acoustic data, and discovering that things were much quieter. I'd be very interested to look at some of the data we have in our oceans, to what extent are oceans quieter. Our oceans sadly are the loudest part of this planet. It's really noisy, sound travels, five times farther. Generally the noise is lower frequencies, and we have lots of ships that are all over the planet and in our oceans. So I'd really be interested in those kinds of studies as well, to what extent is it impacting and helping our friends in the oceans. >> Right, right, well, I was just going to ask you that question because I think a lot of people clearly understand sound in the air that surrounds us, but you talk a lot about sound in ocean, and sound as an indicator of ocean health, and again, this concept of a chorus. And I think everybody's probably familiar with the sounds of the humpback whale right? He got very popular and we've all seen and heard that. But you're doing a lot of research, as you said, in oceans and in water. And I wonder if you can, again, kind of provide a little bit more color around that, because I don't think you people, maybe we're just not that tuned into it, think of the ocean or water as a rich sound environment especially to the degree as you're talking about where you can actually start to really understand what's going on. >> Yeah, I mean, some of us think that sound in the oceans is probably more important to animals than on land, on the terrestrial side. Sound helps animals to navigate through complex waterways and find food resources. You can only use site so far underwater especially when it gets to be kind of dark, once you get down to certain levels. So there many of us think that sound is probably going to be an important component to measuring the status of health in our oceans. >> It's great. Well, Bryan, I really enjoyed this conversation. I've really enjoyed your Ted Talk, and now I've got a bunch of research papers I want to dig into a little bit more as well. 
>> Okay.(chuckling) >> It's a fascinating topic, but I think the most important thing that you talked about extensively in your Ted Talk is really just taking a minute to take a step back from the individual perspective, appreciate what's around us, hear, that information and I think there's a real direct correlation to the power of exascale, to the power of hearing this data, processing this data, and putting intelligence on that data, understanding that data in a good way, in a positive way, in a delightful way, spiritual way, even that we couldn't do before, or we just weren't paying attention like with what you know is on your phone please-- >> Yeah, really. >> It's all around you. It's been there a whole time. >> Yeah. (both chuckling) >> Yeah, Jeff, I really encourage your viewers to count it, just go out and listen. As we say, go out and listen and join the mission. >> I love it, and you can get started by going to the Center for Global Soundscapes and you have a beautiful landscape. I had it going earlier this morning while I was digging through some of the research of Bryan. (Bryan chuckling) Thank you very much (Bryan murmurs) and really enjoyed the conversation best to you-- >> Okay. >> And your team and your continued success. >> Alright, thank you. >> Alright, thank you. All right, he's Bryan-- >> Goodbye. >> I'm Jeff, you're watching theCUBE. (Bryan chuckling) for continuing coverage of Exascale Day. Thanks for watching. We'll see you next time. (calm ambient music)
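To give a sense of the data volumes behind the "48,000 data points per second" figure in the interview above, here is a rough Python sketch. The 16-bit single-channel encoding is an assumption made for illustration; real deployments may use different bit depths, channel counts or compression.

```python
# How much raw data does one continuously running 48 kHz sensor produce
# in a year? The 16-bit mono PCM encoding is an assumption made for
# illustration; it is not a detail given in the interview.

SAMPLE_RATE_HZ = 48_000            # samples per second, as quoted above
BYTES_PER_SAMPLE = 2               # assumed: 16-bit, single channel
SECONDS_PER_YEAR = 365.25 * 24 * 3600

samples_per_year = SAMPLE_RATE_HZ * SECONDS_PER_YEAR
bytes_per_year = samples_per_year * BYTES_PER_SAMPLE

print(f"samples per sensor-year: {samples_per_year:.2e}")            # ~1.5e12
print(f"raw audio per sensor-year: {bytes_per_year / 1e12:.1f} TB")  # ~3.0 TB

# Multiply by hundreds of sensors across every major biome, plus the
# labeled recordings needed to train classifiers (roughly 900 to 1,200
# recordings per species were mentioned for the mosquito work), and the
# storage and compute requirements quickly reach supercomputing scale.
```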

Published Date : Oct 16 2020


Making AI Real – A practitioner’s view | Exascale Day


 

>> Narrator: From around the globe, it's theCUBE with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise. >> Hey, welcome back, Jeff Frick here with theCUBE, coming to you from our Palo Alto studios for the ongoing coverage and celebration of Exascale Day, 10/18, October 18th, for 10 to the 18th, a 10 with 18 zeros. It's all about big, powerful, giant computing and computing resources and computing power. And we're excited to invite back our next guest; she's been on before. She's Dr. Arti Garg, head of advanced AI solutions and technologies for HPE. Arti, great to see you again. >> Great to see you. >> Absolutely. So before we jump into Exascale Day, I was just looking at your LinkedIn profile. It's such an interesting career. You've done time at Lawrence Livermore, you've done time in the federal government, you've done time at GE and in industry. I'd just love it if you can share a little bit of your perspective going from hardcore academia to some government positions, then into industry as a data scientist, and now, originally with Cray and now HPE, looking at it really from more of a vendor side. >> Yeah. So I think in some ways I'm like a lot of people who've had the title of data scientist somewhere in their history, in that there's no single path to really working in this industry. I come from a scientific background. I have a PhD in physics, so that's where I started working with large data sets. I think of myself as a data scientist from before the term data scientist was a term. And I think it's an advantage to have been able to see this explosion of interest in leveraging data to gain insights, whether that be into the structure of the galaxy, which is what I used to look at, or whether that be into maybe new types of materials that could advance our ability to build lightweight cars or safety gear. It allows you to take a perspective where you not only understand what the technical challenges are, but also what the implementation challenges are, and why it can be hard to use data to solve problems. >> Well, I'd just love to get your perspective, because you are into data, you chose that as your profession, and you probably run with a whole lot of people who are also like-minded in terms of data. As an industry and as a society, we're trying to get people to do a better job of making data-based decisions and getting away from their gut and actually using data. I wonder if you can talk about the challenges of working with people who don't come from such an intense data background, to get them to understand the value of a more data-driven decision-making process, or that it's just worth the effort, because it's not easy to get the data and cleanse the data, and trust the data and get the right context. Working with people who don't come from that background and aren't so entrenched in that point of view, what surprises you? How do you help them? What can you share in terms of helping everybody get to be a more data-centric decision maker? >> So I would actually rephrase the question a little bit, Jeff, and say that actually I think people have always made data-driven decisions. It's just that in the past we maybe had less data available to us, or the quality of it was not as good.
And so as a result most organizations have developed organize themselves to make decisions, to run their processes based on a much smaller and more refined set of information, than is currently available both given our ability to generate lots of data, through software and sensors, our ability to store that data. And then our ability to run a lot of computing cycles and a lot of advanced math against that data, to learn things that maybe in the past took, hundreds of years of experiments in scientists to understand. And so before I jumped into, how do you overcome that barrier? Just I'll use an example because you mentioned, I used to work in industry I used to work at GE. And one of the things that I often joked about, is the number of times I discovered Bernoulli's principle, in data coming off a GE jet engines you could do that overnight processing these large data but of course historically that took hundreds of years, to really understand these physical principles. And so I think when it comes to how do we bridge the gap between people who are adapt at processing large amounts of data, and running algorithms to pull insights out? I think it's both sides. I think it's those of us who are coming from the technical background, really understanding the way decisions are currently made, the way process and operations currently work at an organization. And understanding why those things are the way they are maybe their security or compliance or accountability concerns, that a new algorithm can't just replace those. And so I think it's on our end, really trying to understand, and make sure that whatever new approaches we're bringing address those concerns. And I think for folks who aren't necessarily coming from a large data set, and analytical background and when I say analytical, I mean in the data science sense, not in the sense of thinking about things in an abstract way to really recognize that these are just tools, that can enhance what they're doing, and they don't necessarily need to be frightening because I think that people who have been say operating electric grids for a long time, or fixing aircraft engines, they have a lot of expertise and a lot of understanding, and that's really important to making any kind of AI driven solution work. >> That's great insight but that but I do think one thing that's changed you come from a world where you had big data sets, so you kind of have a big data set point of view, where I think for a lot of decision makers they didn't have that data before. So we won't go through all the up until the right explosions of data, and obviously we're talking about Exascale day, but I think for a lot of processes now, the amount of data that they can bring to bear, is so dwarfs what they had in the past that before they even consider how to use it they still have to contextualize it, and they have to manage it and they have to organize it and there's data silos. So there's all this kind of nasty processes stuff, that's in the way some would argue has been kind of a real problem with the promise of BI, and does decision support tools. So as you look at at this new stuff and these new datasets, what are some of the people in process challenges beyond the obvious things that we can think about, which are the technical challenges? 
>> So I think that you've really hit on, something I talk about sometimes it was kind of a data deluge that we experienced these days, and the notion of feeling like you're drowning in information but really lacking any kind of insight. And one of the things that I like to think about, is to actually step back from the data questions the infrastructure questions, sort of all of these technical questions that can seem very challenging to navigate. And first ask ourselves, what problems am I trying to solve? It's really no different than any other type of decision you might make in an organization to say like, what are my biggest pain points? What keeps me up at night? or what would just transform the way my business works? And those are the problems worth solving. And then the next question becomes, if I had more data if I had a better understanding of something about my business or about my customers or about the world in which we all operate, would that really move the needle for me? And if the answer is yes, then that starts to give you a picture of what you might be able to do with AI, and it starts to tell you which of those data management challenges, whether they be cleaning the data, whether it be organizing the data, what it, whether it be building models on the data are worth solving because you're right, those are going to be a time intensive, labor intensive, highly iterative efforts. But if you know why you're doing it, then you will have a better understanding of why it's worth the effort. And also which shortcuts you can take which ones you can't, because often in order to sort of see the end state you might want to do a really quick experiment or prototype. And so you want to know what matters and what doesn't at least to that. Is this going to work at all time. >> So you're not buying the age old adage that you just throw a bunch of data in a data Lake and the answers will just spring up, just come right back out of the wall. I mean, you bring up such a good point, It's all about asking the right questions and thinking about asking questions. So again, when you talk to people, about helping them think about the questions, cause then you've got to shape the data to the question. And then you've got to start to build the algorithm, to kind of answer that question. How should people think when they're actually building algorithm and training algorithms, what are some of the typical kind of pitfalls that a lot of people fall in, haven't really thought about it before and how should people frame this process? Cause it's not simple and it's not easy and you really don't know that you have the answer, until you run multiple iterations and compare it against some other type of reference? >> Well, one of the things that I like to think about just so that you're sort of thinking about, all the challenges you're going to face up front, you don't necessarily need to solve all of these problems at the outset. But I think it's important to identify them, is I like to think about AI solutions as, they get deployed being part of a kind of workflow, and the workflow has multiple stages associated with it. The first stage being generating your data, and then starting to prepare and explore your data and then building models for your data. But sometimes I think where we don't always think about it is the next two phases, which is deploying whatever model or AI solution you've developed. And what will that really take especially in the ecosystem where it's going to live. 
If is it going to live in a secure and compliant ecosystem? Is it actually going to live in an outdoor ecosystem? We're seeing more applications on the edge, and then finally who's going to use it and how are they going to drive value from it? Because it could be that your AI solution doesn't work cause you don't have the right dashboard, that highlights and visualizes the data for the decision maker who will benefit from it. So I think it's important to sort of think through all of these stages upfront, and think through maybe what some of the biggest challenges you might encounter at the Mar, so that you're prepared when you meet them, and you can kind of refine and iterate along the way and even upfront tweak the question you're asking. >> That's great. So I want to get your take on we're celebrating Exascale day which is something very specific on 1018, share your thoughts on Exascale day specifically, but more generally I think just in terms of being a data scientist and suddenly having, all this massive compute power. At your disposal yoy're been around for a while. So you've seen the development of the cloud, these huge data sets and really the ability to, put so much compute horsepower against the problems as, networking and storage and compute, just asymptotically approach zero, I mean for as a data scientist you got to be pretty excited about kind of new mysteries, new adventures, new places to go, that we just you just couldn't do it 10 years ago five years ago, 15 years ago. >> Yeah I think that it's, it'll--only time will tell exactly all of the things that we'll be able to unlock, from these new sort of massive computing capabilities that we're going to have. But a couple of things that I'm very excited about, are that in addition to sort of this explosion or these very large investments in large supercomputers Exascale super computers, we're also seeing actually investment in these other types of scientific instruments that when I say scientific it's not just academic research, it's driving pharmaceutical drug discovery because we're talking about these, what they call light sources which shoot x-rays at molecules, and allow you to really understand the structure of the molecules. What Exascale allows you to do is, historically it's been that you would go take your molecule to one of these light sources and you shoot your, x-rays edit and you would generate just masses and masses of data, terabytes of data it was each shot. And being able to then understand, what you were looking at was a long process, getting computing time and analyzing the data. We're on the precipice of being able to do that, if not in real time much closer to real time. And I don't really know what happens if instead of coming up with a few molecules, taking them, studying them, and then saying maybe I need to do something different. I can do it while I'm still running my instrument. And I think that it's very exciting, from the perspective of someone who's got a scientific background who likes using large data sets. 
There's just a lot of possibility in what exascale computing allows us to do, from the standpoint that I don't have to wait to get results, and I can either simulate much bigger systems, say galaxies, and really compare that to my data on galaxies or universes if you're an astrophysicist, or I can simulate much smaller, finer details of a hypothetical molecule and use that to predict what might be possible from a materials or drug perspective, just to name two applications that I think exascale could really drive. >> That's really great feedback, just shortening that compute loop. We had an interview earlier where someone was talking about when the biggest workload you had to worry about was the end of the month when you're running your financials, and I was like, wouldn't it be nice for that to be the biggest job we have to worry about? But I think we saw some of this in animation, in the movie business, where the rendering, whether it's a full animation movie or just something with heavy-duty 3D effects, when you can get those dailies back to the artist, as you said, while you're still working, or closer to when you're working, versus having this huge kind of compute delay, it just changes the workflow dramatically, and the pace of change and the pace of output, because you're not context switching as much and you can really get back into it. That's a super point. I want to shift gears a little bit and talk about explainable AI. This is a concept that a lot of people hopefully are familiar with. With AI, you build the algorithm, it's in a box, it runs and it kicks out an answer. And one of the things that people talk about is that we should be able to go in and pull that algorithm apart to know why it came out with the answer that it did. To me this just sounds really, really hard, because it's smart people like you that are writing the algorithms, the inputs and the data that feed that thing are super complex, and the math behind it is very complex. And we know that the AI trains and can change over time; as you train the algorithm it gets more data, and it adjusts itself. So is explainable AI even possible? Is it possible to some degree? Because I do think it's important, and my next question is going to be about ethics, to know why something came out. And the other piece that becomes so much more important is that we use that output not only to drive human-based decisions that need some more information, but increasingly we're moving it over to automation. So now you really want to know: why did it do what it did? Explainable AI, share your thoughts. >> It's a great question, and it's obviously a question that's on a lot of people's minds these days. I'm actually going to revert back to what I said earlier, when I talked about Bernoulli's principle, and just the fact that sometimes when you do throw an algorithm at data, the first thing it will find is probably some known law of physics. And so I think that really thinking about what we mean by explainable AI also requires us to think about what we mean by AI. These days AI is often used synonymously with deep learning, which is a particular type of algorithm that is not very analytical at its core. And what I mean by that is, other types of statistical machine learning models have some underlying theory of the population of data that you're studying, whereas deep learning doesn't; it kind of just learns whatever pattern is sitting in front of it.
And so there is a sense in which, if you look at other types of algorithms, they are inherently explainable, because you're choosing your algorithm based on what you think is the sort of ground truth about the population you're studying. As for whether we're going to get to explainable deep learning, I think that's kind of challenging, because you're always going to be in a position where deep learning is designed to just be as flexible as possible, to sort of throw more math at the problem, because there may be things that your simpler model doesn't account for. However, deep learning could be part of an explainable AI solution if, for example, it helps you identify the important so-called features to look at, the important aspects of your data. So I don't know, it depends on what you mean by AI. But are you ever going to get to the point where you don't need humans interpreting outputs and making some set of judgments about what a set of computer algorithms processing data think? I don't want to say I know what's going to happen 50 years from now, but I think it'll take a little while to get to the point where you don't have to apply some subject matter understanding and some human judgment to what an algorithm is putting out. >> It's really interesting. We had Dr. Robert Gates on a few years ago at another show, and he talked about how the only guns in the U.S. military, if I'm getting this right, that are automatic, that will go based on what the computer tells them to do and start shooting, are on the Korean border. But short of that, there's always a person involved before anybody hits a button, which begs a question, because we've seen this on the big data kind of curve, I think Gartner has talked about it, as we move up from descriptive analytics to diagnostic analytics, predictive, then prescriptive, and then hopefully autonomous. So I wonder, you're saying we're still a little ways off, and that last little bump is going to be tough to overcome to get to true autonomy? >> I think so, and it's going to be very application dependent as well. The DMZ is an interesting example to use, because that is obviously also a very mission-critical example, but in general I think that you'll see autonomy. You already do see autonomy in certain places where I would say the stakes are lower. So if I'm going to have some kind of recommendation engine that suggests, if you liked this sweater, maybe you'll like that one, the risk of getting that wrong, and so of fully automating it, is a little bit lower, because the risk is you don't buy the sweater; I lose a little bit of income, I lose a little bit of revenue as a retailer. But the risk of whether I make that turn, because I'm in an autonomous vehicle, is much higher. So I think that you will see the progression up that curve being highly dependent on what's at stake, with different degrees of automation. That being said, you will also see, in certain places where it's either really expensive or humans aren't doing a great job, that you may actually start to see some mission-critical automation. But those would be the places where you're seeing it. And actually, I think that's one of the reasons why you see a lot more autonomy in the agriculture space than you do in the sort of passenger vehicle space: because there's a lot at stake, and it's very difficult for human beings to drive large combines.
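To make the feature-importance idea in the explainable AI exchange above concrete, here is a minimal, hypothetical sketch of one common technique, permutation importance, run on synthetic data with scikit-learn. It only illustrates the general approach Arti alludes to; it is not a description of any HPE tooling, and the data and feature names are made up.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for "a flexible model trained on lots of data".
    X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and measure how much held-out accuracy drops;
    # a large drop means the model leaned heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: {score:.3f}")

The model itself stays as flexible and opaque as before; the human judgment Arti describes comes in when someone looks at which features carry the signal and decides whether that makes sense for the problem.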
>> Plus they have a controlled environment. I've interviewed Caterpillar; they're doing a ton of stuff with autonomy, because they control the field where those things are operating, and whether it's a field or a mine, it's actually fascinating how far they've come with autonomy. But let me switch to a different industry that I know is closer to your heart, having looked at some of your other interviews, and let's talk about diagnosing disease. If we take something specific like reviewing X-rays, which also brings in computer vision and computer vision algorithms, the computer can see things faster and do a lot more comparisons than a human doctor potentially can, and hopefully, in this whole signal-to-noise conversation, elevate the signal for the doctor to review and suppress the noise that's really not worth their time. It can also review a lot of literature and hopefully bring a broader perspective of potential diagnoses for a set of symptoms. You said before that both your folks are physicians, and there's a certain kind of magic, a nuance, almost a more childlike exploration, to try to get the algorithm, if you will, to think outside the box. I wonder if you can share that synergy between using computers and AI and machine learning to do really arduous, nasty things, like going through lots and lots and lots of X-rays, and how that helps a doctor who's got a whole different set of experience, a whole different kind of empathy, a whole different type of relationship with that patient than just a bunch of pictures of their heart or their lungs. >> I think that one of the things, and this kind of goes back to the question of AI for decision support versus automation, is that what AI can do, and what we're pretty good at these days with computer vision, is picking up on subtle patterns, especially if you have a very large data set. So if I can train on lots of pictures of lungs, it's a lot easier for me to identify the pictures that somehow are not like the other ones. And that can be helpful, but then to really interpret what you're seeing and understand: is it actually a bad-quality image? Is it some kind of medical issue? And what is the medical issue? I think that's where you need to bring in a lot of different types of knowledge and a lot of different pieces of information, and right now I think humans are a little bit better at doing that. Some of that's because I don't think we have great ways to train on sort of sparse datasets, I guess. And the second part is that human beings might have 40 years of training their model, maybe 50 years of training their model, as opposed to six months or something with sparse information. That's another thing human beings have: their lived experience, and the data that they bring to bear on any type of prediction or classification is actually more than just what they saw in their medical training. It might be the people they've met, the places they've lived, what have you. And I think it's that part, that sort of broader set of learning, and how things that might not seem related might actually be related to your understanding of what you're looking at, where I think we've got a ways to go from an artificial intelligence perspective.
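A minimal, hypothetical sketch of the "not like the other ones" idea in the answer above: fit a model only on examples of the well-represented class and score new samples by how unusual they look. This uses scikit-learn's IsolationForest on random synthetic feature vectors purely as an illustration; it is not a medical imaging pipeline and not something the interview describes anyone building.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Stand-in feature vectors: pretend each row summarizes one image.
    ordinary_cases = rng.normal(loc=0.0, scale=1.0, size=(500, 16))
    new_cases = np.vstack([
        rng.normal(loc=0.0, scale=1.0, size=(5, 16)),   # should look ordinary
        rng.normal(loc=4.0, scale=1.0, size=(5, 16)),   # should look unusual
    ])

    # Fit only on "ordinary" examples, then flag whatever does not resemble them.
    detector = IsolationForest(n_estimators=200, random_state=0).fit(ordinary_cases)
    labels = detector.predict(new_cases)   # +1 = looks ordinary, -1 = flag for review
    print(labels)

The flagged cases are the decision-support part: the model surfaces the unusual images, and the interpretation stays with a physician.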
But let's shift gears a little bit. I know you're interested in emerging technology to support this effort, and there's so much going on in terms of, kind of the atomization of compute store and networking to be able to break it down into smaller, smaller pieces, so that you can really scale the amount of horsepower that you need to apply to a problem, to very big or to very small. Obviously the stuff that you work is more big than small. Work on GPU a lot of activity there. So I wonder if you could share, some of the emerging technologies that you're excited about to bring again more tools to the task. >> I mean, one of the areas I personally spend a lot of my time exploring are, I guess this word gets used a lot, the Cambrian  explosion of new AI accelerators. New types of chips that are really designed for different types of AI workloads. And as you sort of talked about going down, and it's almost in a way where we were sort of going back and looking at these large systems, but then exploring each little component on them, and trying to really optimize that or understand how that component contributes to the overall performance of the whole. And I think one of the things that just, I don't even know there's probably close to a hundred active vendors in the space of developing new processors, and new types of computer chips. I think one of the things that that points to is, we're moving in the direction of generally infrastructure heterogeneity. So it used to be when you built a system you probably had one type of processor, or you probably had a pretty uniform fabric across your system you usually had, I think maybe storage we started to get tearing a little bit earlier. But now I think that what we're going to see, and we're already starting to see it with Exascale systems where you've got GPUs and CPUs on the same blades, is we're starting to see as the workloads that are running at large scales are becoming more complicated. Maybe I'm doing some simulation and then I'm running I'm training some kind of AI model, and then I'm inferring it on some other type, some other output of the simulation. I need to have the ability to do a lot of different things, and do them in at a very advanced level. Which means I need very specialized technology to do it. And I think it's an exciting time. And I think we're going to test, we're going to break a lot of things. I probably shouldn't say that in this interview, but I'm hopeful that we're going to break some stuff. We're going to push all these systems to the limit, and find out where we actually need to push a little harder. And I some of the areas I think that we're going to see that, is there We're going to want to move data, and move data off of scientific instruments, into computing, into memory, into a lot of different places. And I'm really excited to see how it plays out, and what you can do and where the limits are of what you can do with the new systems. >> Arti I could talk to you all day. I love the experience and the perspective, cause you've been doing this for a long time. So I'm going to give you the final word before we sign out and really bring it back, to a more human thing which is ethics. So one of the conversations we hear all the time, is that if you are going to do something, if you're going to put together a project and you justify that project, and then you go and you collect the data and you run that algorithm and you do that project. 
That's great but there's like an inherent problem with, kind of data collection that may be used for something else down the road that maybe you don't even anticipate. So I just wonder if you can share, kind of top level kind of ethical take on how data scientists specifically, and then ultimately more business practitioners and other people that don't carry that title. Need to be thinking about ethics and not just kind of forget about it. That these are I had a great interview with Paul Doherty. Everybody's data is not just their data, it's it represents a person, It's a representation of what they do and how they lives. So when you think about kind of entering into a project and getting started, what do you think about in terms of the ethical considerations and how should people be cautious that they don't go places that they probably shouldn't go? >> I think that's a great question out a short answer. But I think that I honestly don't know that we have a great solutions right now, but I think that the best we can do is take a very multifaceted, and also vigilant approach to it. So when you're collecting data, and often we should remember a lot of the data that gets used isn't necessarily collected for the purpose it's being used, because we might be looking at old medical records, or old any kind of transactional records whether it be from a government or a business. And so as you start to collect data or build solutions, try to think through who are all the people who might use it? And what are the possible ways in which it could be misused? And also I encourage people to think backwards. What were the biases in place that when the data were collected, you see this a lot in the criminal justice space is the historical records reflect, historical biases in our systems. And so is I there are limits to how much you can correct for previous biases, but there are some ways to do it, but you can't do it if you're not thinking about it. So I think, sort of at the outset of developing solutions, that's important but I think equally important is putting in the systems to maintain the vigilance around it. So one don't move to autonomy before you know, what potential new errors you might or new biases you might introduce into the world. And also have systems in place to constantly ask these questions. Am I perpetuating things I don't want to perpetuate? Or how can I correct for them? And be willing to scrap your system and start from scratch if you need to. >> Well Arti thank you. Thank you so much for your time. Like I said I could talk to you for days and days and days. I love the perspective and the insight and the thoughtfulness. So thank you for sharing your thoughts, as we celebrate Exascale day. >> Thank you for having me. >> My pleasure thank you. All right she's Arti I'm Jeff it's Exascale day. We're covering on the queue thanks for watching. We'll see you next time. (bright upbeat music)

Published Date : Oct 16 2020


Computer Science & Space Exploration | Exascale Day


 

>> From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise. >> We're back at the celebration of Exascale Day. This is Dave Vellante, and I'm pleased to welcome two great guests. Brian Dansberry is here; he's with the ISS Program Science Office at the Johnson Space Center. And Dr. Mark Fernandez is back; he's the Americas HPC technology officer at Hewlett Packard Enterprise. Gentlemen, welcome. >> Thank you. Yeah. >> Well, thanks for coming on. And Mark, good to see you again. And Brian, I wonder if we could start with you and talk a little bit about your role at the ISS Program Science Office as a scientist. What's happening these days? What are you working on? >> Well, it's been my privilege the last few years to be working in the research integration area of the space station office. That's where we're looking at all of the different sponsors, NASA, the other international partners, all the sponsors within NASA, and prioritizing what research gets to go up to station and what research gets conducted. And to give you a feel for the magnitude of the task, we're coming up now, on November 2nd, on the 20th anniversary of continuous human presence on station. So we've been a spacefaring society now for coming up on 20 years, and I would like to point out, because as an old guy myself it impresses me, that that's 25% of the US population: everybody under the age of 20 has never had a moment when they were alive and we didn't have people living and working in space. Okay, I got off on a tangent there, we'll move on. In those 20 years we've done 3000 experiments on station, and the station has really made a miraculous sort of evolution from a basic platform to what is now a really fully functioning national lab up there, with commercially run research facilities operating all the time. I think you can think of it as the world's largest satellite bus. We have, you know, four or five instruments looking down, measuring all kinds of things in the atmosphere and gathering Earth observation data, and looking out, doing astrophysics research, measuring cosmic rays, an X-ray observatory, all kinds of things. Plus, inside the station you've got racks and racks of experiments going on, typically scores, if not more than 50 experiments, going on at any one time. So, you know, the topic of this event is really important for NASA: data transmission up and down, all of the cameras going on station, the experiments. You know, one of those astrophysics observatories has collected data on over 15 billion cosmic ray impacts. And so the massive amounts of data that need to be collected and transferred for all of these experiments to go on really hits at the core, and I'm glad I'm able to be here and speak with you today on this topic. >> Well, thank you for that, Brian. As a baby boomer, I grew up with the national pride of the moon landing. And of course we saw the space shuttle, we've seen international collaboration, and it's just always been something that's part of our lives. So thank you for the great work that you guys are doing there. Mark, you and I had a great discussion about exascale and kind of what it means for society and some of the innovations that we could maybe expect over the coming years.
Now I wonder if you could talk about some of the collaboration between what you guys are doing and Brian's team. >> Yeah, so, yes, indeed. Thank you for having me, really appreciate it. And that was a great introduction, Brian. I'm the principal investigator on Spaceborne Computer-2, and as the 2 implies, there was one before it. And so we've worked with Brian and his team extensively over the past few years, again on high performance computing on board the International Space Station. Brian mentioned the thousands of experiments that have been done to date and that there are currently 50 or more going on at any one time. Those experiments collect data, and up until recently you've had to transmit that data down to Earth for processing, and that's a significant amount of bandwidth. So with Spaceborne Computer-2 we're inviting developers and others to take advantage of that onboard computational capability. You mentioned exascale; we plan to get to exascale next year. We're currently in the era that's called petascale; we've been in the petascale era since 2007, so it's taking us a while to make that next leap. Well, ten years after Earth had a petascale system, in 2017 we were able to put a teraflop system on the International Space Station to prove that we could do a trillion calculations a second in space. That's where the data is originating, and that's where it might be best to process it. So we want to be able to take those capabilities with us, and with HPE acting as a wonderful partner with Brian and NASA and the space station, we think we're able to do that for many of these experiments.
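To put the jump Mark describes in perspective, here is a small back-of-the-envelope comparison between the one-teraflop class of system proven on the station and an exascale machine. The figures are just the round numbers used in this conversation, 10 to the 12th versus 10 to the 18th operations per second, not benchmarks of any particular system.

    teraflop_system = 1e12   # operations per second, the class of system proven on the ISS
    exascale_system = 1e18   # operations per second at exascale

    speedup = exascale_system / teraflop_system
    days_for_one_exascale_second = speedup / 86_400   # seconds in a day

    print(f"speedup: {speedup:,.0f}x")   # 1,000,000x
    print(f"one second of exascale work would keep the teraflop system busy "
          f"for about {days_for_one_exascale_second:.1f} days")   # ~11.6 days

That million-fold gap is why even modest on-board processing changes which experiments are practical in orbit, and why the exascale era matters for the ones that stay on the ground.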
>> It's mind-boggling, what you were talking about. I was talking about the moon landing earlier and the limited computing power back then, and now we've got, you know, water-cooled supercomputers in space. I'd love to explore this notion of private industry developing space-capable computers. I think it's an interesting model, where computer companies can repurpose technology that they're selling, obviously at greater scale, for space exploration and apply that supercomputing technology, instead of having governments fund proprietary, purpose-built systems that are essentially one use case, if you will. So, Brian, what are the benefits of that model that perhaps you wouldn't achieve with governments, or maybe contractors, kind of building these proprietary systems? >> Well, first of all, you know, any tool you're using, any new technology that has multiple users, is going to mature quicker. You're going to have greater features, greater capabilities, and that's not even just talking about computers; it's anything you're doing. So moving from, you know, the government as a single user to off-the-shelf type products gives you that opportunity to have things that have been proven, where the technology is fully matured. Now, what had to happen is we had to mature the space station so that we had a platform where we could test these things and make sure they're going to work in the high-radiation environment and that they're going to be reliable, because first you've got to make sure that safety and reliability are taken care of. That's why in the space program you're going to be behind the times in terms of the computing power of the equipment up there: first and foremost, you needed to make sure that it was reliable and safe. Now, my undergraduate degree was in aerospace engineering, and what we care about as aerospace engineers is how heavy is it, how big and bulky is it, because, you know, it's expensive. Every pound matters. I once visited Gulfstream Aerospace, and they would pay their employees $1000 if they could come up with a way of saving one pound in building that aircraft, because that means you have more capacity for flying. It's orders of magnitude more important to do that when you're taking payloads to space. So, particularly with Spaceborne Computer, the opportunity to use software to check the reliability that way, without having to make the computer radiation-resistant, if you will, with heavy, bulky packaging to protect it from that radiation, is a really important thing, and it's going to be a huge advantage moving forward as we go to the Moon and on to Mars. >> Yeah, that's interesting. I mean, your point about COTS, commercial off-the-shelf technology: that's something that obviously governments have wanted to leverage for a long, long time, for many, many decades. But, Mark, the issue was always, as Brian was just saying, the very stringent and difficult requirements of space. Obviously with Spaceborne-1 you got to the point where you had visibility that the economics made sense; it made commercial sense for companies like Hewlett Packard Enterprise. And now we've sort of closed that gap to the point where you're now on that innovation curve. I wonder if you could talk about that a little bit. >> Yeah, absolutely. Brian made some excellent points. He said anything we do today requires computers, and that's absolutely correct. So I tell people that when you go to the Moon, and when you go to Mars, you probably want to go with the iPhone 10 or 11 and not a flip phone. Before Spaceborne was sent up, you went with early-2000s computing technology up there, and like you said, many of the people born today weren't even around when the space station began to be occupied, so they don't even know how to program or use that type of computing power. With Spaceborne-1 we sent the exact same products that we were shipping to customers today, so they are current state of the art, and we had a mandate: don't touch the hardware; have all the protection that you can via software. So that's what we've done. We've got several philosophical ways to do that, we've implemented those in software, and they've been successful and proven in Spaceborne-1. And now, with Spaceborne-2, we're going to begin the experiments so that the rest of the community can figure out that it is economically viable and that it will accelerate their research and progress in space. I'm most excited about that.
Every venture into space, as Brian mentioned, will require some computational capability, and HPE has figured out that the economics are there. We need to bring the customers through Spaceborne-2 in order for them to learn that we are reliable but current state of the art, and that we can benefit them and all of humanity. >> Guys, I want to ask you kind of a two-part question, and Brian, I'll start with you; it's somewhat philosophical. My understanding was, and I want to say this was probably around the time of the Bush administration, W, and maybe certainly before that, that as technology progressed there was a debate about, all right, should we put our resources on the Moon because of its proximity to Earth, or should we go where no man, or woman, has gone before and get to Mars? What's the thinking today, Brian, on that balance between the Moon and Mars? >> Well, you know, our plans today are to get back to the Moon by 2024. That's the Artemis program, and it's exciting. It makes sense from an engineering standpoint: you take baby steps as you continue to move forward, and so you have that opportunity to learn while you're still relatively close to home. You can get there in days, not months. If you're going to Mars, for example, to have everything line up properly you're looking at a multi-year mission. It may take you nine months to get there, then you have to wait for the Earth and Mars to get back in the right position to come back on that same kind of trajectory, so you have to be there for more than a year before you can turn around and come back. So, you know, Mark was talking about the computing power. Right now, the beautiful thing about the space station is it's right there, it's orbiting above us, it's only 250 miles away, so you can test out all of these technologies. You can rely on the ground to keep track of systems; there's not that much of a delay in terms of telemetry coming back. But as you get to the Moon, and then definitely as you get out to Mars, there are enough minutes of delay out there that you've got to take the computing power with you. You've got to take everything you need to be able to make the decisions you need to make, because there's not time to get that information back on the ground, get it back to Earth, have people analyze the situation and then tell you what the next step is to do. That may be too late. So you've got to take the computing power with you. >> So exascale brings some new possibilities, both for the Moon and for Mars. I know Spaceborne-1 did some simulations relative to Mars; we'll talk about that. But, Brian, what are the things that you hope to get out of exascale computing that maybe you couldn't do with previous generations? >> Well, you know, Mark hit on a key point. Bandwidth up and down is, of course, always a limitation, and the more computing and data analysis you can do on-site, the more efficient you can be with parsing out that bandwidth. To give you a feel for that, think about those Earth-observing and astronomical observatories I was talking about collecting data. Think about the hours of video that are being recorded daily as the astronauts work on various things to document what they're doing. For many of the biological experiments, one of the key pieces of data that's coming back
is the video of the microbes growing, or the plants growing, or whatever fluid physics experiment is going on. We do a lot of colloids research, which is suspended particles inside a liquid, and of course high-speed video is key to doing that kind of research. Right now we've got something called the ISS Experience going on up there, which is basically recording, and will eventually put out, a series, basically a movie, in virtual reality. That kind of data is so huge, when you have a 360-degree camera up there recording all of that for really good virtual reality, that a lot of times we're still bringing it back on hard drives when the SpaceX vehicles come back to Earth. That's a lot of data. We record video all the time; it's a tremendous amount of bandwidth. And as you get to the Moon, and as you get further out, you can imagine how much more limiting that bandwidth is. >> Yeah. We used to joke in the old mainframe days that the fastest way to get data from point A to point B was called CTAM, the Chevy Truck Access Method: just load up a truck with whatever it was, tapes or hard drives. So, Mark, of course Spaceborne-2 is coming on. Spaceborne-1 really was a pilot, but it proved that commercial computers could actually work for long durations in space and that the economics were feasible. Thinking about future missions and Spaceborne-2, what are you hoping to accomplish? >> I'm hoping to bring that success from Spaceborne-1 to the rest of the community with Spaceborne-2, so that they can realize they can do their processing at the edge. The purpose of exploration is insight, not data collection. All of these experiments begin with data collection, whether that's videos or samples or mold growing, etcetera, and after collecting that data we must process it to turn it into information and insight. The faster we can do that, the faster we get our results, and the better things are. I often talk to college and high school and sometimes grammar school students about this need to process at the edge and how the communication issues can prevent you from doing that. For example, many of us remember the communications with the Moon. The Moon is about 250,000 miles away, if I remember correctly, and the speed of light is 186,000 miles a second. So even at the speed of light, it takes more than a second for the communications to get to the Moon and back. I can remember being stressed out when Houston would make a statement and we were wondering if the astronauts could answer. Well, they answered as soon as possible, but that one-to-two-second delay, which was natural, was what drove us crazy and made us nervous; we were worried about them and the success of the mission. Mars is millions of miles away. So flip it around: if you're a Mars explorer and you look out the window and there's a big red cloud coming at you that looks like a tornado, you might want to do some Mars dust storm modeling right then and there to figure out what's the safest thing to do. You literally don't have the time to get that data back to Earth, have it processed, and get the answer back. You've got to take those computational capabilities with you. And we're hoping that the 50 or more experiments that are on board the ISS can show that, in order to better accomplish their missions on the Moon and on Mars.
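A quick check of the delays Mark walks through above, using his round numbers for the Moon and an assumed Earth-Mars distance of roughly 35 million to 250 million miles depending on where the two planets are in their orbits; those Mars figures are an approximation added here, not from the interview.

    SPEED_OF_LIGHT_MILES_PER_SEC = 186_000

    def one_way_delay_seconds(distance_miles: float) -> float:
        return distance_miles / SPEED_OF_LIGHT_MILES_PER_SEC

    print(f"Moon:        {one_way_delay_seconds(250_000):.1f} seconds one way")           # ~1.3 s
    print(f"Mars (near): {one_way_delay_seconds(35_000_000) / 60:.1f} minutes one way")   # ~3.1 min
    print(f"Mars (far):  {one_way_delay_seconds(250_000_000) / 60:.1f} minutes one way")  # ~22.4 min

At the far end of that range a single question-and-answer round trip takes the better part of an hour, which is the whole argument for taking the computing along rather than phoning home.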
And Omar, >>I'm so glad you brought that up because I was gonna ask you guys in the commercial world everybody talks about real time. Of course, we talk about the real time edge and AI influencing and and the time value of data I was gonna ask, you know, the real time, Nous, How do you handle that? I think Mark, you just answered that. But at the same time, people will say, you know, the commercial would like, for instance, in advertising. You know, the joke the best. It's not kind of a joke, but the best minds of our generation tryingto get people to click on ads. And it's somewhat true, unfortunately, but at any rate, the value of data diminishes over time. I would imagine in space exploration where where you're dealing and things like light years, that actually there's quite a bit of value in the historical data. But, Mark, you just You just gave a great example of where you need real time, compute capabilities on the ground. But but But, Brian, I wonder if I could ask you the value of this historic historical data, as you just described collecting so much data. Are you? Do you see that the value of that data actually persists over time, you could go back with better modeling and better a i and computing and actually learn from all that data. What are your thoughts on that, Brian? >>Definitely. I think the answer is yes to that. And, you know, as part of the evolution from from basically a platform to a station, we're also learning to make use of the experiments in the data that we have there. NASA has set up. Um, you know, unopened data access sites for some of our physical science experiments that taking place there and and gene lab for looking at some of the biological genomic experiments that have gone on. And I've seen papers already beginning to be generated not from the original experimenters and principal investigators, but from that data set that has been collected. And, you know, when you're sending something up to space and it to the space station and volume for cargo is so limited, you want to get the most you can out of that. So you you want to be is efficient as possible. And one of the ways you do that is you collect. You take these earth observing, uh, instruments. Then you take that data. And, sure, the principal investigators air using it for the key thing that they designed it for. But if that data is available, others will come along and make use of it in different ways. >>Yeah, So I wanna remind the audience and these these these air supercomputers, the space born computers, they're they're solar powered, obviously, and and they're mounted overhead, right? Is that is that correct? >>Yeah. Yes. Space borne computer was mounted in the overhead. I jokingly say that as soon as someone could figure out how to get a data center in orbit, they will have a 50 per cent denser data station that we could have down here instead of two robes side by side. You can also have one overhead on. The power is free. If you can drive it off a solar, and the cooling is free because it's pretty cold out there in space, so it's gonna be very efficient. Uh, space borne computer is the most energy efficient computer in existence. Uh, free electricity and free cooling. And now we're offering free cycles through all the experimenters on goal >>Eso Space born one exceeded its mission timeframe. You were able to run as it was mentioned before some simulations for future Mars missions. And, um and you talked a little bit about what you want to get out of, uh, space born to. 
I mean, are there other, like, wish list items, bucket bucket list items that people are talking about? >>Yeah, two of them. And these air kind of hypothetical. And Brian kind of alluded to them. Uh, one is having the data on board. So an example that halo developers talk to us about is Hey, I'm on Mars and I see this mold growing on my potatoes. That's not good. So let me let me sample that mold, do a gene sequencing, and then I've got stored all the historical data on space borne computer of all the bad molds out there and let me do a comparison right then and there before I have dinner with my fried potato. So that's that's one. That's very interesting. A second one closely related to it is we have offered up the storage on space borne computer to for all of your raw data that we process. So, Mr Scientist, if if you need the raw data and you need it now, of course, you can have it sent down. But if you don't let us just hold it there as long as they have space. And when we returned to Earth like you mentioned, Patrick will ship that solid state disk back to them so they could have a new person, but again, reserving that network bandwidth, uh, keeping all that raw data available for the entire duration of the mission so that it may have value later on. >>Great. Thank you for that. I want to end on just sort of talking about come back to the collaboration between I S s National Labs and Hewlett Packard Enterprise, and you've got your inviting project ideas using space Bourne to during the upcoming mission. Maybe you could talk about what that's about, and we have A We have a graphic we're gonna put up on DSM information that you can you can access. But please, mark share with us what you're planning there. >>So again, the collaboration has been outstanding. There. There's been a mention off How much savings is, uh, if you can reduce the weight by a pound. Well, our partners ice s national lab and NASA have taken on that cost of delivering baseball in computer to the international space station as part of their collaboration and powering and cooling us and giving us the technical support in return on our side, we're offering up space borne computer to for all the onboard experiments and all those that think they might be wanting doing experiments on space born on the S s in the future to take advantage of that. So we're very, very excited about that. >>Yeah, and you could go toe just email space born at hp dot com on just float some ideas. I'm sure at some point there'll be a website so you can email them or you can email me david dot volonte at at silicon angle dot com and I'll shoot you that that email one or that website once we get it. But, Brian, I wanna end with you. You've been so gracious with your time. Uh, yeah. Give us your final thoughts on on exa scale. Maybe how you're celebrating exa scale day? I was joking with Mark. Maybe we got a special exa scale drink for 10. 18 but, uh, what's your final thoughts, Brian? >>Uh, I'm going to digress just a little bit. I think I think I have a unique perspective to celebrate eggs a scale day because as an undergraduate student, I was interning at Langley Research Center in the wind tunnels and the wind tunnel. I was then, um, they they were very excited that they had a new state of the art giant room size computer to take that data we way worked on unsteady, um, aerodynamic forces. So you need a lot of computation, and you need to be ableto take data at a high bandwidth. 
To be able to do that, they'd always run their wind tunnel for four or five hours, almost the whole shift, take that data, and maybe a week later be able to look at it to decide if they got what they were looking for. Well, at the time, in the early eighties, definitely the before times, they had that computer in place when I got there. Yes, it was a punch-card computer; it was the one time in my life I got to put my hands on punch cards, and I was told not to drop them or there would be trouble. But I was able, immediately after, actually during their run, to take that data, reduce it down, grab my colored pencils and graph paper, and graph out coefficient of lift, coefficient of drag, and the other things they were measuring, and take it back to them. They were so excited to have data two hours after they had taken it, analyzed and looked at; it just tickled them to think that they could make decisions right away on what they wanted to do for their next run. Well, we've come a long way since then, and Exascale Day really emphasizes that point, so it really brings it home to me. >> Please, no, please carry on. >> Well, I was just going to say, you talked about the opportunities that Spaceborne Computer provides, and Mark mentioned our colleagues at the ISS National Lab. The space station has been declared a national laboratory, and so about half of the capabilities we have for doing research are apportioned to the national lab, so that commercial entities like HPE can do these sorts of projects, and universities and other government agencies can access the station, and NASA can then focus on the things we want to do purely to push our exploration programs. So the opportunities to take advantage of that are there; Mark is opening the door for a lot of them, but others can just Google the ISS National Laboratory and find information on how to get involved, the way Mark did originally, using the ISS National Lab to get a good experiment up there. >> Well, it's just astounding to see the progress this industry has made. When you go back and look at the early days of supercomputing, to imagine that computers can actually be spaceborne is just tremendous, not only for the impact it can have on space exploration, but on society in general; Mark, you talked about that. Guys, thanks so much for coming on theCUBE, celebrating Exascale Day, and helping expand the community. Great work, and thank you very much for all that you do. >> Thank you very much for having me on, and everybody out there, let's get to exascale as quickly as we can. Appreciate everything you all are doing. >> Let's do it. >> I've got a similar story. Humanity saw the first trillion calculations per second, like I said, in 1997, and it was over 100 racks of computer equipment. Well, Spaceborne Computer is less than a fourth of a rack only 20 years later. So I'm going to be celebrating Exascale Day in anticipation of exascale computers on Earth, soon afterward within the national lab that exists in orbit, and, in 20-plus years, on Mars. >> That's awesome. Thank you for that, Mark. And thank you for watching, everybody. We're celebrating Exascale Day with the community, the supercomputing community, on theCUBE. Right back.

Published Date : Oct 16 2020



The Impact of Exascale on Business | Exascale Day


 

>> From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise. Welcome, everyone, to theCUBE's celebration of Exascale Day. Shaheen Khan is here. He's the founding partner and an analyst at OrionX and, among other things, he is the co-host of Radio Free HPC. Shaheen, welcome. Thanks for coming on. >> Thanks for being here, Dave. Great to be here. How are you doing? >> Well, thanks. Crazy, doing these COVID remote interviews. I wish we were face to face at a supercomputing show, but hey, this thing is working; we can still have great conversations. And I love talking to analysts like you because you bring an independent perspective and a very wide observation space. So let me start here: like many analysts, you probably have a mental model or a market model that you look at. Maybe talk about your work and how you look at the market, and we can get into some of the megatrends that you see. >> Very well. Let me just quickly set the scene. We fundamentally track the megatrends of the information age, and of course, because we're in the information age, digital transformation falls out of that. The megatrends that drive it, in our mind, are IoT, because that's the fountain of data; 5G, because that's how it's going to get communicated; AI and HPC, because that's how we're going to make sense of it; blockchain and cryptocurrencies, because that's how it's going to get transacted and how value is going to get transferred from place to place; and then finally quantum computing, because that exemplifies how things are going to get accelerated. >> So let me ask you: I spent a lot of time at IDC, and I had the pleasure of having the high-performance computing group report in to me. I wasn't an HPC analyst, but over time you listen to those guys and you learn. And as I recall, HPC was everywhere, and it sounds like we're still seeing that trend, whether it was the Internet itself, certainly big data coming into play, and defense, obviously. Is your background more HPC, so that with these other technologies you're talking about, you're essentially a high-performance-computing expert and market watcher who sees HPC permeating into all these trends? Is that a fair statement? >> That's a fair statement. I did grow up in HPC. My first job out of school was working for an IBM fellow doing payroll processing in the old days, and it went from there: I worked for Cray Research, I worked for Floating Point Systems, so I grew up in HPC. But then, over time, we had experiences outside of HPC. For a number of years I had to go do commercial enterprise computing and learn about transaction processing, business intelligence, data warehousing and things like that, then e-commerce, then web technology. So over time it sort of expanded. But HPC is like a bug: you get it and you can't get rid of it, because it's just so inspiring. So supercomputing has always been my home, so to say. >> Well, the reason I ask is that I wanted to touch on a little history of the industry. There was kind of a renaissance many, many years ago, and you had all these startups: you had Kendall Square Research, Danny Hillis' Thinking Machines, you had Convex trying to make mini-supercomputers.
And there was just tons of money flowing in, and then things kind of consolidated a little bit and got very, very specialized. And then with the big data craze, we've seen HPC really at the heart of all that. So what's your take on the ebb and flow of the HPC business and how it's evolved? >> Well, HPC was always trying to make sense of the world, trying to make sense of nature. And of course, as much as we do know about nature, there's a lot we don't know, and problems in nature can be classified basically into linear and nonlinear problems. The linear ones are easy; they've already been solved. Of the nonlinear ones, some are easy, but many are hard, and the nonlinear, hard, chaotic problems are the ones you really need to solve, the closer you get. So HPC was marching along trying to solve these things. There was a whole process, with the scientific method going back to Galileo: experimentation was part of it, and between theory and experiment you look at the data, you theorize, and then you experiment to prove the theories. Eventually simulation, using computers to validate things, became a third pillar of science alongside theory and experiment. All of that was going on until the rest of the world, thanks to digitization, started needing some of those same techniques. Why? Because you've got too much data. There's simply too much data to ship to the cloud, and too much data to make sense of without math and science. So now enterprise computing problems are starting to look like scientific problems, and enterprise data centers are starting to look like national lab data centers; there's a convergence that has been taking place gradually over the past three or four decades, and it's really becoming visible now. >> Interesting. I want to ask you about competition; I like to talk to analysts about the competitive landscape. Is the competition in HPC between vendors or between countries? >> Well, that's a very interesting question, because our other thesis is that we are moving a little bit beyond geopolitics to techno-politics. There are now imperatives at the political level that are driving some of these decisions. Obviously 5G is very visible as a piece of technology that is now in the middle of political discussions. COVID-19, as you mentioned, is itself a global challenge that needs to be solved at that level. And AI: who has access to how much data, and what sort of algorithms? It turns out, as we all know, that for AI you need a lot more data than you thought you did, so suddenly data superiority is perhaps even more important; it can lead to information superiority. So yes, that's all really happening. But the actors, of course, continue to be the vendors, which are the embodiment of the algorithms, the data, and the systems and infrastructure that feed the applications, so to say. >> So let's get into some of these megatrends, and maybe I'll ask you some Columbo questions and we can geek out a little bit. Let's start with AI. Again, when I started in the industry, it was all AI and expert systems; it was all the rage. And then we had this long AI winter, even though the technology never went away.
But there were at least two things that happened: you had all this data, and the cost of computing came down so rapidly over the years. So now AI is back, and we're seeing all kinds of applications getting infused into virtually every part of our lives, people trying to advertise to us, et cetera. So talk about the intersection of AI and HPC. What are you seeing there? >> Yeah, definitely. Like you said, AI has a long history. It came out of the MIT Media Lab and the AI Lab they had back then, and it was really, as you mentioned, all focused on expert systems. It was about logical processing; it was a lot of if-then-else. Then it morphed into search: how do I search for the right answer, the needle in the haystack? But at some point it became computational. Neural nets are not a new idea; I remember we had a researcher in our lab doing neural networks years ago, and he was just saying how he was running out of computational power, and we were wondering what was taking all this time. It turns out that it is computational. So when deep neural nets showed up about a decade ago, it finally started working, and it was a confluence of a few things: the algorithms were there, the data sets were there, and the technology was there, in the form of GPUs and accelerators, that finally made it tractable. So you really could say, as I do, that AI was languishing for decades before HPC technologies reignited it. And when you look at deep learning, which is really the only part of AI that has been prominent and has made all this stuff work, it's all HPC: it's all matrix algebra, it's all signal-processing algorithms, they are computational, the infrastructure is similar to HPC, and the skill set you need is the skill set of HPC. I see a lot of interest in HPC talent right now, in part motivated by AI. >> Awesome, thank you. Then I want to talk about blockchain, and I can't talk about blockchain without talking about crypto; you've written about that. Obviously supercomputers play a role; I think you had written that 50 of the top crypto supercomputers actually reside in China. A lot of times the vendor community doesn't like to talk about crypto because of the fraud and everything else, but it's one of the more interesting use cases; it's actually the primary use case for blockchain, even though blockchain has so much other potential. What do you see in blockchain, the potential of that technology? And maybe we can work in a little crypto talk as well. >> Yeah, I think one simple way to think of blockchain is in terms of so-called permissioned and permissionless. Permissioned blockchains are when everybody kind of knows everybody, and you don't really get to participate without people knowing who you are and, as a result, having some basis to trust your behavior and your transactions. So things are a lot calmer, it's a lot easier, and you don't really need all the supercomputing activity. Whereas for AI the assertion was that intelligence is computable, and with some of these exascale technologies we're getting to that point; for permissionless blockchain, the assertion is that trust is computable.
And for trust to be computable, it's really computationally intensive, because you want to provide an incentive structure such that good actors are rewarded and bad actors are punished, and it is worth their while to actually put all their effort toward good behavior. That's really what you see embodied in a system like Bitcoin, where the chain has been safe over many years: no attacks, no breaches. Now, people have lost money because they forgot a password or the custody of their accounts wasn't trustworthy, but the chain itself has managed to deliver that. So that's an example of computational intensity yielding trust, and that suddenly becomes really interesting: intelligence, trust, what else is computable that we could do if we had enough power? >> Well, that's really interesting, the way you described it: essentially the confluence of cryptography, software engineering and game theory, where the bad actors are incentivized to mine Bitcoin rather than rip people off, because their lives are better that way. So make the connection; I mean, you sort of did, but I want to better understand the connection between supercomputing, HPC and blockchain. We get it for crypto, like mining Bitcoin, which gets harder and harder. And you mentioned there are other things we can potentially compute, like trust. What else are you thinking of there? >> Well, I think the next big thing we are really seeing is in communication. It turns out, as I was saying earlier, that these highly computationally intensive algorithms and models show up in all sorts of places. In 5G communication, for example, there's something called MIMO, multiple-in, multiple-out, and optimally managing that traffic, so that you know exactly which beam it's going to and which antenna it's coming from, turns out to be a non-trivial partial differential equation. So the next thing you know, you've got HPC in there, where you didn't expect it. And because there's so much data to be sent, you really have to do some data reduction and data processing almost at the point of inception, if not at the point of aggregation. That has led to edge computing and edge data centers, and there, too, people now want some level of computational capability in a place where traditionally you would just have a small, low-power, low-cost microcontroller. People want vector instructions there; people want matrix algebra there, because it makes sense to process the data before you have to ship it. So HPC is cropping up really everywhere. And then finally, when you're trying to accelerate things, GPUs have obviously been a great example of that, mixed-signal technologies are coming to do analog and digital at the same time, and quantum technologies are coming, so you can do the usual analyst two-by-two of analog, digital, classical and quantum and see what lies where. All of that is coming, and all of it is essentially resting on HPC. >> That's interesting; I didn't realize that HPC had that position in 5G with MIMO. That's a great example. And then IoT: I want to ask you about that, because there's a lot of discussion about real-time AI inferencing at the edge, and you're seeing new computing architectures potentially emerging, Nvidia for instance.
The acquisition of Arm, perhaps, points to a more efficient way, maybe a lower-cost way, of doing specialized computing at the edge. But it sounds like you're envisioning, actually, supercomputing at the edge. Of course, we've talked to Dr. Mark Fernandez about spaceborne computers; that's the ultimate edge, with supercomputers hanging on the ceiling of the International Space Station. But how far away are we from that sort of edge? Maybe space is an extreme example, but do you think factories and windmills and all kinds of edge examples are where supercomputing will play a local role? >> Well, I think initially you're going to see it in base stations and antenna towers, where you're aggregating data from a large number of endpoints and sensors that are gathering the data, maybe doing some level of local processing, and then shipping it to the local antenna, because it's no more than a hundred meters away or so. But there is enough there that that thing can now do the processing, do some level of learning, and decide what data to ship back to the cloud, what data to get rid of, and what data to just hold. Those edge data centers sitting on top of an antenna could have half a dozen GPUs in them; they're pretty powerful things. They could have one, they could have two, depending on what you do. A good case study is surveillance cameras. You don't really need to ship every image back to the cloud, and if you ever need it, the person who needs it is going to be on the scene, not back at the cloud. So there is really no sense in sending it, certainly not every frame. Maybe you can do some processing and send an image every five seconds or every ten seconds; that way you still have a record of it, but you've reduced your bandwidth by orders of magnitude. Things like that are happening, and to make sense of all of that you have to recognize when things change: did somebody come into the scene, or did it just become night? That sort of decision is being automated, and fundamentally what is making it happen may not be exascale-class supercomputing, but it's definitely HPC, definitely numerically oriented technologies. >> Shaheen, what do you see happening in chip architectures? You see the classical Intel approach of trying to put as much function on the real estate as possible, but we've seen the emergence of alternative processors, particularly GPUs, but even FPGAs, and I mentioned the Arm acquisition. So you're seeing these alternative processors really gain momentum, you're seeing data processing units emerge, and there are interesting trends going on there. What do you see, and what's the relationship to HPC? >> Well, I think a few things are going on there. One, of course, is essentially the end of Moore's law: you cannot make the cycle time any faster, so you have to make architectural adjustments. And then, if you have a killer app that lends itself to large volume, you can build silicon that is especially good for that. Graphics and gaming were an example of that, and people said, my God, I've got all these cores in there, why can't I use them for computation? So everybody got busy making it 64-bit capable and adding capabilities along those lines, and then people said, oh, I know, I can use that for AI. And once you move it to AI, you say, well, I don't really need 64-bit; maybe I can do it in 32 or 16.
So now you do it for that, and then tensor cores, of course, come about. So there's that progression of architecture trumping, basically, cycle time. That's one thing. The second thing is scale-out, decentralization and distributed computing, and that means the inter-node and intra-node communication among all these nodes now becomes an issue, a big enough issue that maybe it makes sense to go to a DPU, and maybe it makes sense to do some level of those edge data centers we were talking about. The third thing, really, is that in many of these cases you have data streaming. What is coming from IoT, especially at the edge, is streaming data, and when data is streaming, suddenly new architectures like FPGAs become really interesting and hold promise. So I do see FPGAs becoming more prominent just for that reason. But then, finally, you've got to program all of these things, and that's really a difficulty, because what happens now is that you need to bring three different ecosystems together: mobile programming, embedded programming and cloud programming. Those are really three different developer types, and you can't hire somebody who's good at all three; maybe you can, but not many. So all of that is a challenge that is driving this industry. >> You kind of referred to this distributed network, and a lot of people refer to the next-generation cloud as this hyper-distributed system, when you include the edge and multiple clouds and so on. Maybe that's too extreme, but to your point, at least as I inferred it, there's an issue of latency; there's the speed of light. So what is the implication for HPC? Does that mean I have to have all the data in one place? Can I move the compute to the data? Architecturally, what are you seeing there? >> Well, you fundamentally want to optimize when to move data and when to move compute. Is it better to move data to compute, or to bring compute to data, and under what conditions? The answer is going to be different for different use cases. It's like asking, is it worth my while to make the trip, get my processing done and then come back, or should I just develop processing capability right here? Moving data is really expensive, and relatively speaking it has become even more expensive: while the price of everything has dropped, its price has dropped less than that of processing. So it is now starting to make sense to do a lot of local processing, because processing is cheap and moving data is expensive. DPUs are an example of that; we call this in-situ processing: let's not move data if we don't have to. But accept that we live in the age of big data, so data is huge and wants to be moved, and that optimization, I think, is part of what you're referring to. >> Yeah. So a couple of examples might be autonomous vehicles, where you have to make decisions in real time and can't send data back to the cloud; the flip side of that is what we talked about with spaceborne computers: you're collecting all this data, and at some point, maybe a year or two after it has lived out its purpose, you ship that data back in a bunch of disk drives or flash drives, load it up into some kind of HPC system, have at it, and then you do more modeling and learn from that data corpus, right? I mean, those are... >> Right, exactly, exactly.
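To make that move-the-data-or-move-the-compute tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. It only illustrates the reasoning Shaheen describes, not anything he or HPE ships, and the data volume, uplink bandwidth and edge throughput figures are invented assumptions.

    # Back-of-the-envelope: is it cheaper to ship the raw data to a remote
    # data center, or to process it where it was produced? All numbers are
    # illustrative assumptions, not measurements.

    def transfer_seconds(data_bytes: float, link_bytes_per_sec: float) -> float:
        # Time to move the raw data over the network link.
        return data_bytes / link_bytes_per_sec

    def local_process_seconds(data_bytes: float, local_bytes_per_sec: float) -> float:
        # Time to reduce the data in place at the edge.
        return data_bytes / local_bytes_per_sec

    if __name__ == "__main__":
        day_of_sensor_data = 2e12    # 2 TB captured at the edge (assumed)
        uplink = 25e6                # 25 MB/s effective uplink (assumed)
        edge_throughput = 400e6      # 400 MB/s local reduction rate (assumed)

        ship = transfer_seconds(day_of_sensor_data, uplink)
        crunch = local_process_seconds(day_of_sensor_data, edge_throughput)

        print(f"ship raw data  : {ship / 3600:6.1f} hours")
        print(f"process at edge: {crunch / 3600:6.1f} hours")
        # With these assumptions shipping takes roughly 22 hours versus about
        # 1.4 hours locally, which is why only reduced results go back.

The same comparison flips the other way when the computation per byte is heavy and the data set is small, which is Shaheen's point that the answer differs by use case.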
I mean, driverless vehicles are a great example, because they're obviously coming fast and furious, no pun intended, and they also dovetail nicely with the smart city, which dovetails nicely with IoT, because it's mostly in urban areas, where you can afford to have a lot of antennas, so you can get the 5G density you want, and it requires those latencies. There's also the notion of a fleet communicating with itself: what if the car in front of me could let me know what it sees, that sort of thing. So vehicle fleets are going to be another opportunity, and all of that can bring everything we've talked about into one place. >> Well, that's interesting. Okay, so the fleets talking to each other, kind of a Byzantine fault tolerance problem; that's kind of cool. I want to close on quantum. It's hard to get your head around sometimes: you see the demonstrations of quantum, and it's not a one or a zero, it can be both, and you go, what? How can that be? And of course it's not stable, and it looks like it's quite a ways off, but the potential is enormous. It's also scary, because we think all of our passwords are already not secure and every password we know is going to get broken. But give us the quantum 101, and let's talk about the implications. >> All right, very well. So first off, we don't need to worry about our passwords quite yet; that's still a ways off. It is true that an algorithm came along that showed how quantum computers can factorize numbers relatively fast, and prime factorization is at the core of a lot of cryptography algorithms. So if you can factorize, say, the number 21, you say, well, that's three times seven, and three and seven are prime numbers; that's an example of a problem that has been solved with quantum computing. But if you have an actual number with, say, 2,000 digits in it, that's really much harder to do. It's impossible for existing computers, and even for quantum computers it's a ways off. However, as you mentioned, qubits can be somewhere between zero and one, and you're trying to create qubits. There are many different ways of building qubits: trapped ions, trapped atoms, photons, sometimes super-cooled, sometimes not, but fundamentally you're trying to get these quantum-level elements or particles into a superposed, entangled state, and there are different ways of doing that, which is why the quantum computers out there are pursuing a lot of different approaches. Somebody said that quantum computing is simultaneously overhyped and underestimated, and that is true, because a lot of the effort is still a ways off; on the other hand, it is so exciting that you don't want to miss out if it's going to get somewhere. So it is rapidly progressing, and it has now morphed into three different segments: quantum computing, quantum communication and quantum sensing. Quantum sensing is when you can measure really minute things precisely, because when you perturb them the quantum effects allow you to measure them. Quantum communication is working its way in, especially in financial services, initially with quantum key distribution, where the key to your cryptography is sent in a quantum way
and the data is sent in a traditional way. Then there are efforts to do a quantum Internet, where you actually have quantum photons going down the fiber-optic lines; Brookhaven National Lab just demonstrated that a couple of weeks ago, going pretty much across Long Island, something like 87 miles. So it's really coming, and fundamentally it's going to be brand-new algorithms. >> So these examples you're giving, are they all in the lab right now? Are they lab projects? >> Some of them are lab projects. Some of them are out there; even traditional WiFi has benefited from quantum analysis and algorithms. But some of them are really real, like quantum key distribution: if you're a bank in New York City, you could very well go to a company today and buy quantum key distribution services and ship keys across the water to New Jersey; that is happening right now. Some researchers in China and Austria showed a quantum connection from somewhere in China to Vienna, even as far away as that. And when you then add the satellites and nano-satellites and the bent-pipe networks that are being talked about, that brings another flavor to it. So yes, some of it is real, and some of it is still in the lab. >> I said I would end with quantum, but I just want to ask: you mentioned earlier the geopolitical battles that are going on. Who are the ones to watch, the horses on the track? Obviously the United States and China, and Japan is still pretty prominent. How is that shaping up in your view? >> Well, without a doubt it's the US's to lose, because it's got the density and the breadth and depth of all the technologies across the board. On the other hand, the information age is new, and an information revolution is not trivial; when revolutions happen, unpredictable things happen, so you've got to get it right. One of the things these revolutions enforce is not just technological, social and governance change, but also cultural change. The example I give is that if you're a farmer, it takes you maybe a couple of seasons before you realize that you'd better get up at the crack of dawn and do things in a particular season, or you're going to starve six months later. You do that two or three years in a row, and a culture has been enforced on you, because that's what it takes. Then, when you go to industrialization, you realize that you need these factories, and then you need workers, and the next thing you know you've got nine-to-five jobs, which you didn't have before; you had command-and-control systems in the military, but not in business. Some of those cultural shifts take place and change things. So I think the winner is going to be whoever shows the most agility in terms of cultural norms, governance and the pursuit of actual knowledge, not being distracted by what you think is happening rather than what actually happens. And gosh, I think these exascale technologies can make the difference. >> Shaheen Khan, great conversation. Thank you so much for joining us to celebrate Exascale Day, which is on 10/18. I really appreciate your insights. >> Likewise, thank you so much. >> All right, thank you for watching. Keep it right there; we'll be back with our next guest right here on theCUBE. We're celebrating Exascale Day. Right back.

Published Date : Oct 16 2020



Drug Discovery and How AI Makes a Difference Panel | Exascale Day


 

>> Hello everyone. On today's panel, the theme is drug discovery and how artificial intelligence can make a difference. On the panel today we are honored to have Dr. Ryan Yates, principal scientist at the National Center for Natural Products Research, with a focus on botanicals, specifically their pharmacokinetics, which is essentially how a drug changes over time in our body, and pharmacodynamics, which is essentially how a drug affects our body. Of particular interest to him is the use of AI in preclinical screening models to identify chemical combinations that can target chronic inflammatory processes such as fatty liver disease, cognitive impairment and aging. Welcome, Ryan. Thank you for coming. >> Good morning. Thank you for having me. >> The other distinguished panelist is Dr. Rangan Sukumar, our very own, a distinguished technologist in the CTO office for High Performance Computing and Artificial Intelligence, with a PhD in AI and 70 publications that can be applied to drug discovery, autonomous vehicles and social network analysis. Hey Rangan, welcome, and thank you for sparing the time. We also have our distinguished Chris Davidson. He is leader of our HPC and AI Application and Performance Engineering team. His job is to tune and benchmark applications, particularly in weather, energy, financial services and life sciences. Of particular interest to him is life sciences; he spent 10 years in biotech and medical diagnostics. Hi Chris, welcome. Thank you for coming. >> Nice to see you. >> Well, let's start with you, Chris. You regularly interface with pharmaceutical companies and also worked on the COVID-19 White House Consortium. To kick this off, tell us a little bit about your engagement in the drug discovery process. >> Right, and that's a good question. What really sets the framework for this discussion is understanding what the drug discovery process is. It can be broken down into, I would say, four different areas: the research and development space, the preclinical studies space, clinical trials and regulatory review, and, if you're lucky, hopefully approval. Traditionally this is a slow, arduous process; it costs a lot of money and there's a high amount of error. However, this process by its very nature is highly iterative and involves huge amounts of data; it's very data intensive, and it's these characteristics that make it a great target for new approaches and different ways of doing things. >> Oh yes, you mentioned data intensive, which brings to mind artificial intelligence; so artificial intelligence is making the difference here in this process, is that so? >> Right, and some of those novel approaches are actually based on artificial intelligence, whether it's deep learning, machine learning, et cetera. As a prime example, let's just say for the sake of discussion that there's a brand-new virus that causes flu-like symptoms, which shall not be named. If we focus on the R&D phase, our goal is really to identify a target for treatment and then screen compounds against it to see which ones we take forward. To this end, technologies like cryogenic electron microscopy, cryo-EM, a form of microscopy, can provide us a near-atomic biomolecular map of the samples we're studying, whether that's a virus, a microbe, the cell it's attaching to, and so on. AI, for instance, has been used in the particle-picking aspect of this process: when you take all these images, there are only certain particles that we want to take and study, depending on whether they have good resolution and whether they're in the field of the frame, and image recognition is a huge part of this. It's massive amounts of data, and AI can very easily be applied to it. Then, with docking, you can take the biomolecular maps obtained from cryo-electron microscopy, input them into the docking application, and run multiple iterations to figure out which compound gives you the best fit. AI again: this is an iterative process, it's extremely data intensive, and it's an easy way to apply AI and get that best fit, doing something that would take humans, or traditional computing, a very long time to do in an analog manner. >> Oh, Ryan, you work at the NCNPR; very exciting. After all, at some point in history just about all drugs were from natural products, so it's great to have you here today. Please tell us a little bit about your work with the pharmaceutical companies, especially since it is often drug cocktails, or what they call polypharmacology, that is the answer to a complete drug therapy. Please tell us a bit more about your work there. >> Yeah, thank you again for having me here this morning, Dr. Goh; it's a pleasure to be here. As you said, I'm from the National Center for Natural Products Research, and you'll hear me refer to it as the NCNPR, here in Oxford, Mississippi, on the Ole Miss campus, a beautiful setting here in the South. Historically, the drug discovery process, which is really not just a drug discovery process but a therapy process, and traditional medicine, have looked at natural products from medicinal plants, at these extracts. So where I'd like to begin is with the assets that we have here at the NCNPR. One of those prime, unique assets is our medicinal plant repository, which comprises approximately 15,000 different medicinal plants. What that allows us to do is screen and mine that repository for activities: whether you have a disease of interest or a target of interest, you can use this medicinal plant repository to look for actives, in this case active plants.
It's really important in today's environment of drug discovery to really understand what the actives are in these different medicinal plants, which leads me to the second unique asset here at the NCNPR, and that is what I'll call a plant deconstruction laboratory. Without going into great detail, what that allows us to do, through a high-throughput workstation, is facilitate rapid isolation and identification of the phytochemicals in these different medicinal plants. Things that have historically taken us weeks and sometimes months, think acetylsalicylic acid from salicylic acid as a pain reliever in willow bark, or Taxol as an anti-cancer drug, we can now do with this system in a matter of days or weeks. So now we can go from activity in a plant extract down to phytochemical characterization on a timescale that starts to make sense in modern drug discovery. Now, if you look at these phytochemicals and ask who is interested in them and why, traditional pharmaceutical companies, which I've been working with for over 25 years now, have historically used these natural products as starting points for new drugs: in other words, take this phytochemical and make synthetic chemical modifications in order to achieve a potential drug. But in the context of natural products, unlike the pharmaceutical realm, there is often a big knowledge gap between a disease and a plant; in other words, I have a plant that has activity, but connecting those dots has been laborious and time consuming. It took us probably 50 years to go from salicylic acid in willow bark to synthesizing acetylsalicylic acid, or aspirin; that just doesn't work in today's environment. So, casting about trying to figure out how to expedite that process, about four years ago I read a really fascinating article in the Los Angeles Times about my colleague and business partner, Dr. Rangan Sukumar, describing all the interesting things he was doing in the area of artificial intelligence. One of my favorite parts of this story is that, basically unannounced, I arrived at his doorstep in Oak Ridge, he was working at Oak Ridge National Lab at the time, and I introduced myself. He didn't know what was coming, didn't know who I was, and I said, hey, you don't know me and you don't know why I'm here, but let me tell you what I want to do with your system. That kicked off a very fruitful collaboration and friendship over the last four years using artificial intelligence, and it has culminated most recently in our COVID-19 project, collaborative research between the NCNPR and HPE in this case. >> From what I can understand, as Chris also mentioned, it's highly iterative, especially with these combinations, mixtures of chemicals in plants that can affect a disease. Effort has to be put in to figure out which components in the mixture are the active ones, in a layman's way of understanding it, and it is therefore iterative and highly data intensive. I can see why Rangan can play a hugely significant role here. Rangan, thank you for joining us; this is a nice segue to bring you in, given your work with Ryan over so many years now. I'm quite interested in how it developed, from the first time you met, and the things you worked on together that culminated in the progress at the advanced level today. Please tell us a little bit about that history and also the current work, Rangan. >> So, Ryan, like he mentioned, walked into my office about four years ago, and he was like, hey, I'm working on this Omega-3 fatty acid, what can your system tell me about this Omega-3 fatty acid? And I didn't even know how to spell Omega-3 fatty acids. That's the disconnect between the technologist and the pharmacologist; they have terms of their own. Since then we've come a long way: I think I understand his terminologies now, and he understands that I throw out words like knowledge graphs and PageRank and all kinds of weird stuff he'd probably never heard in his life before. So it's been an onboarding into each other's domains and terminologies, and accepting each other's expertise, while trying to work together on a collaborative project. I think the core of what Ryan's work and our collaboration has led me to understand is what happens in the drug discovery process. When we think about the discovery itself, we're looking at companies that are trying to accelerate the process to market; an average drug takes 12 years to get to market through the process Chris just described. So companies are trying to adopt what are called in-silico simulation and modeling techniques into what was predominantly an in-vitro, in-vivo environment. The in-silico techniques can include molecular docking, artificial intelligence, other data-driven discovery methods and so forth, and the essential component of all these discovery workflows is the ability to augment human experts, assisting them with what computers do really, really well. In terms of what we've done as examples: Ryan walks in and asks me a bunch of questions, and a few come to mind immediately. The first is, hey, you're an artificial intelligence expert, can you sift through a database of molecules, the 15,000 compounds he described, to prioritize a few for the next lab experiments? That's question number one. Then he came back into my office and asked, hey, there are 30 million publications in PubMed and I don't have the time to read everything; can you create an artificial intelligence system that, once I've picked these few molecules, will tell me everything about the molecule, or everything about the virus, the unknown virus that shows up? Just trying to understand the ways in which he can augment his expertise. And then the third question, which I think he described better than I'm going to, was: how can technology connect these dots? Typically the answer to a drug discovery problem doesn't sit in one database. He probably has to think about UniProt for proteins, he has to think about phytochemical and cheminformatics properties and data and so forth, and then the phytochemical interactions, which are probably in another database.
So when he is trying to answer the other question, specifically in the context of an unknown virus that showed up late last year, the question was: hey, do we know what happened in this particular virus compared to all the previous viruses? Do we know of any substructure that was studied, or a different disease, that is part of this unknown virus, and can I use that information to mine these databases and find out whether those interactions can actually be used for repurposing? Say, does this drug interact with a subsequence of a known virus that also seems to be part of this new virus? To be able to connect that dot: I think the abstraction we are learning from working with pharma companies is that this drug discovery process is complex, it's iterative, and it's a sequence of needle-in-the-haystack search problems. One day Ryan would say, hey, I need to match genomes, I need to match protein sequences between two different viruses. Another day it would be, I need to sift through a database of potential compounds and identify side effects and whatnot. Another day it could be, hey, I need to design a new molecule that never existed in the world before; I'll figure out how to synthesize it later on, but I need a completely new molecule for patentability reasons. So it goes through the entire spectrum. And I think where HPE has differentiated, multiple times even in recent weeks, is that the technology infusion into drug discovery leads to several aha moments, and those aha moments typically happen in a few seconds, not the hours, days or months that Ryan would otherwise have to laboriously work through. What we've learned is that pharma researchers love their aha moments, and it leads to a sound, valid, well-founded hypothesis. Isn't that true, Ryan? >> Absolutely, absolutely. >> Yeah, at some point I would like to have a peek at the list of your aha moments; perhaps there's something quite interesting in there for other industries too, but we'll do that another time. Chris, with your regular work with pharmaceutical companies, especially the big pharmas, do you see botanicals being talked about more and more there? >> Yeah, we do. Looking at biosimilars and drugs that already exist is an important point, and Dr. Yates and Rangan, with your work with databases, this is something important to bring up: much of the drug discovery in today's world isn't about going out and finding a brand-new molecule per se; it's really looking at all the different databases and all the compounds that already exist and sifting through those. Of course, data is gold, essentially, so a lot of companies don't want to share their data. But many of those botanical data sets are actually open to the public, and people want more collaborative efforts around those databases, so it's really interesting to see that being picked up more and more. >> Mm, and Ryan, that's where the NCNPR hosts many of those data sets, right? And it's interesting to me: you were describing the traditional way of drug discovery, where you have a target and a compound that can affect that target, very, very specific.
But from a botanical point of view, you really say, for example, I have an extract from a plant that has a combination of chemicals, and somehow it affects this disease, but then you have to reverse-engineer what those chemicals are and which ones are active. Is that very much the issue, the work that has to be put in for botanicals in this area? >> Yes, Dr. Goh, you hit it exactly. >> Now I can understand why it's highly iterative and data intensive, and perhaps that's why, Rangan, you're highly valuable here. So tell us about the challenge, the many-to-many intersection involved in finding what the targets are, given these botanicals that seem to affect the disease. What methods do you use in AI to help with this? >> Fantastic question. I'm going to go a little bit deeper and speak like Ryan in terminology, but here we go. Going back to the start of our conversation, let's say we have a database of molecules on one side, and on the other the database of potential targets for a particular disease, which could be a virus, could be bacteria, could be whatever disease target you've identified. >> In this process, for example on a virus, you can have a number of targets on the virus itself: some are the spike protein, some are the other proteins on the surface, so there are several different targets on a virus. A lot of people focus on the spike protein, but there are other targets too on that virus, correct? >> That is exactly right. For example, in the work we did with Ryan, we realized that the COVID-19 protein sequence has a significant overlap with the earlier SARS-CoV-1 virus; not only that, but it overlaps with MERS and with other bad coronaviruses that were studied before. Knowing that, it's actually broken down into multiple parts, and Ryan, I'm going to steal your words, non-structural proteins, envelope proteins, S proteins; there's a whole substructure you can associate an amino-acid sequence with. So on the one hand you have different targets, and since we did the work it has grown to 160 different targets on the COVID-19 side alone, and on the other you have around 36 or 37 million molecules that are potentially synthesizable, and you try to figure out which one of those, or which few of those, actually maps to which one of these targets and has the mechanism of action Ryan is looking for, one that will inhibit the symptoms in a human body. That's the challenge. The techniques we can apply come back to how much we know about the target and how much we know about the molecule. If you start a problem knowing nothing about the molecule and nothing about the target, you go with the traditional approaches of docking and molecular dynamics simulations and whatnot. But once you've done a lot of docking on the same database for different targets, you learn new things about the ligands, the molecules Ryan's talking about, that can predict potential targets. So can you use that information from previous protein interactions, or previous binding to known existing targets with some of those structures, to build a model that captures the essence of what was learned from the earlier docking? That's the second level of how we infuse artificial intelligence.
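A minimal sketch of that "second level" follows: re-using scores from earlier docking runs to train a quick surrogate model that prioritizes which not-yet-docked molecules to try next. The fingerprint function and all of the data here are invented placeholders rather than the NCNPR or HPE pipeline, and scikit-learn is assumed to be available; in practice the features would come from a cheminformatics toolkit and the labels from a real docking engine.

    # Toy "second level" screen: learn from previous docking scores to rank
    # un-docked molecules for the next experiments. All data are placeholders.
    import random
    from sklearn.ensemble import RandomForestRegressor

    def fingerprint(molecule_id: int, n_bits: int = 64) -> list:
        # Stand-in for a real chemical fingerprint (e.g. a Morgan/ECFP bit vector).
        rng = random.Random(molecule_id)
        return [rng.randint(0, 1) for _ in range(n_bits)]

    # Pretend 500 molecules were already docked against one viral target;
    # lower scores mean better predicted binding in many docking tools.
    random.seed(0)
    docked_ids = list(range(500))
    X_train = [fingerprint(i) for i in docked_ids]
    y_train = [random.uniform(-12.0, -2.0) for _ in docked_ids]

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Rank a larger pool of not-yet-docked molecules by predicted score
    # and send only the most promising ones to the expensive docking step.
    candidate_ids = list(range(500, 5000))
    predicted = model.predict([fingerprint(i) for i in candidate_ids])
    ranked = sorted(zip(predicted, candidate_ids))
    print("next molecules to dock:", [mol_id for _, mol_id in ranked[:10]])

Because the training labels here are random, the ranking itself is meaningless; the point is only the shape of the workflow: dock a subset, learn from the scores, and let the model pick what to dock next.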
The third level, is to say okay, I can do this for a database of molecules, but then what if the protein-protein interactions are all over the literature study for millions of other viruses? How do I connect the dots across different mechanisms of actions too? Right and so this is where the knowledge graph component that Ryan was talking about comes in. So we've put together a database of about 150 billion medical facts from literature that Ryan is able to connect the dots and say okay, I'm starting with this molecule, what interactions do I know about the molecule? Is there a pretty intruding interaction that affects the mechanism of pathway for the symptoms that a disease is causing? And then he can go and figure out which protein and protein in the virus could potentially be working with this drug so that inhibiting certain activities would stop that progression of the disease from happening, right so like I said, your method of options, the options you've got is going to be, how much do you know about the target? How much do you know the drug database that you have and how much information can you leverage from previous research as you go down this pipeline, right so in that sense, I think we mix and match different methods and we've actually found that, you know mixing and matching different methods produces better synergies for people like Ryan. So. >> Well, the synergies I think is really important concept, Rangan, in additivities, synergistic, however you want to catch that. Right. But it goes back to your initial question Dr. Goh, which is this idea of polypharmacology and historically what we've done with traditional medicines there's more than one active, more than one network that's impacted, okay. You remember how I sort of put you on both ends of the spectrum which is the traditional sort of approach where we really don't know much about target ligand interaction to the completely interpretal side of it, right where now we are all, we're focused on is, in a single molecule interacting with a target. And so where I'm going with this is interesting enough, pharma has sort of migrate, started to migrate back toward the middle and what I mean by that, right, is we had these in a concept of polypharmacology, we had this idea, a regulatory pathway of so-called, fixed drug combinations. Okay, so now you start to see over the last 20 years pharmaceutical companies taking known, approved drugs and putting them in different combinations to impact different diseases. Okay. And so I think there's a really unique opportunity here for Artificial Intelligence or as Rangan has taught me, Augmented Intelligence, right to give you insight into how to combine those approved drugs to come up with unique indications. So is that patentability right, getting back to right how is it that it becomes commercially viable for entities like pharmaceutical companies but I think at the end of the day what's most interesting to me is sort of that, almost movement back toward that complex mixture of fixed drug combination as opposed to single drug entity, single target approach. I think that opens up some really neat avenues for us. As far as the expansion, the applicability of Artificial Intelligence is I'd like to talk to, briefly about one other aspect, right so what Rang and I have talked about is how do we take this concept of an active phytochemical and work backwards. 
In other words, let's say you identify a phytochemical from an in silico screening process, right, which was done for COVID-19 one of the first publications out of a group, Dr. Jeremy Smith's group at Oak Ridge National Lab, right, identified a natural product as one of the interesting actives, right and so it raises the question to our botanical guy, says, okay, where in nature do we find that phytochemical? What plants do I go after to try and source botanical drugs to achieve that particular end point right? And so, what Rangan's system allows us to do is to say, okay, let's take this phytochemical in this case, a phytochemical flavanone called eriodictyol and say, where else in nature is this found, right that's a trivial question for an Artificial Intelligence system. But for a guy like me left to my own devices without AI, I spend weeks combing the literature. >> Wow. So, this is brilliant I've learned something here today, right, If you find a chemical that actually, you know, affects and addresses a disease, right you can actually try and go the reverse way to figure out what botanicals can give you those chemicals as opposed to trying to synthesize them. >> Well, there's that and there's the other, I'm going to steal Rangan's thunder here, right he always teach me, Ryan, don't forget everything we talk about has properties, plants have properties, chemicals have properties, et cetera it's really understanding those properties and using those properties to make those connections, those edges, those sort of interfaces, right. And so, yes, we can take something like an eriodictyol right, that example I gave before and say, okay, now, based upon the properties of eriodictyol, tell me other phytochemicals, other flavonoid in this case, such as that phytochemical class of eriodictyols part right, now tell me how, what other phytochemicals match that profile, have the same properties. It might be more economically viable, right in other words, this particular phytochemical is found in a unique Himalayan plant that I've never been able to source, but can we find something similar or same thing growing in, you know a bush found all throughout the Southeast for example, like. >> Wow. So, Chris, on the pharmaceutical companies, right are they looking at this approach of getting, building drugs yeah, developing drugs? >> Yeah, absolutely Dr. Goh, really what Dr. Yates is talking about, right it doesn't help us if we find a plant and that plant lives on one mountain only on the North side in the Himalayas, we're never going to be able to create enough of a drug to manufacture and to provide to the masses, right assuming that the disease is widespread or affects a large enough portion of the population, right so understanding, you know, not only where is that botanical or that compound but understanding the chemical nature of the chemical interaction and the physics of it as well where which aspect affects the binding site, which aspect of the compound actually does the work, if you will and then being able to make that at scale, right. If you go to these pharmaceutical companies today, many of them look like breweries to be honest with you, it's large scale, it's large back everybody's clean room and it's, they're making the microbes do the work for them or they have these, you know, unique processes, right. So. >> So they're not brewing beer okay, but drugs instead. 
(Christopher laughs) >> Not quite, although there are pharmaceutical companies out there that have had a foray into the brewery business and vice versa, so. >> We should, we should visit one of those, yeah (chuckles) Right, so what's next, right? So you've described to us the process and how you develop your relationship with Dr. Yates Ryan over the years right, five years, was it? And culminating in today's, the many to many fast screening methods, yeah what would you think would be the next exciting things you would do other than letting me peek at your aha moments, right what would you say are the next exciting steps you're hoping to take? >> Thinking long term, again this is where Ryan and I are working on this long-term project about, we don't know enough about botanicals as much as we know about the synthetic molecules, right and so this is a story that's inspired from Simon Sinek's "Infinite Game" book, trying to figure it out if human population has to survive for a long time which we've done so far with natural products we are going to need natural products, right. So what can we do to help organizations like NCNPR to stage genomes of natural products to stage and understand the evolution as we go to understand the evolution to map the drugs and so forth. So the vision is huge, right so it's not something that we want to do on a one off project and go away but in the process, just like you are learning today, Dr. Goh I'm going to be learning quite a bit, having fun with life. So, Ryan what do you think? >> Ryan, we're learning from you. >> So my paternal grandfather lived to be 104 years of age. I've got a few years to get there, but back to "The Infinite Game" concept that Rang had mentioned he and I discussed that quite frequently, I'd like to throw out a vision for you that's well beyond that sort of time horizon that we have as humans, right and that's this right, is our current strategy and it's understandable is really treatment centric. In other words, we have a disease we develop a treatment for that disease. But we all recognize, whether you're a healthcare practitioner, whether you're a scientist, whether you're a business person, right or whatever occupation you realize that prevention, right the old ounce, prevention worth a pound of cure, right is how can we use something like Artificial Intelligence to develop preventive sorts of strategies that we are able to predict with time, right that's why we don't have preventive treatment approach right, we can't do a traditional clinical trial and say, did we prevent type two diabetes in an 18 year old? Well, we can't do that on a timescale that is reasonable, okay. And then the other part of that is why focus on botanicals? Is because, for the most part and there are exceptions I want to be very clear, I don't want to paint the picture that botanicals are all safe, you should just take botanicals dietary supplements and you'll be safe, right there are exceptions, but for the most part botanicals, natural products are in fact safe and have undergone testing, human testing for thousands of years, right. So how do we connect those dots? A preventive strategy with existing extent botanicals to really develop a healthcare system that becomes preventive centric as opposed to treatment centric. If I could wave a magic wand, that's the vision that I would figure out how we could achieve, right and I do think with guys like Rangan and Chris and folks like yourself, Eng Lim, that that's possible. 
Maybe it's in my lifetime I got 50 years to go to get to my grandfather's age, but you never know, right? >> You bring really, up two really good points there Ryan, it's really a systems approach, right understanding that things aren't just linear, right? And as you go through it, there's no impact to anything else, right taking that systems approach to understand every aspect of how things are being impacted. And then number two was really kind of the downstream, really we've been discussing the drug discovery process a lot and kind of the kind of preclinical in vitro studies and in vivo models, but once you get to the clinical trial there are many drugs that just fail, just fail miserably and the botanicals, right known to be safe, right, in many instances you can have a much higher success rate and that would be really interesting to see, you know, more of at least growing in the market. >> Well, these are very visionary statements from each of you, especially Dr. Yates, right, prevention better than cure, right, being proactive better than being reactive. Reactive is important, but we also need to focus on being proactive. Yes. Well, thank you very much, right this has been a brilliant panel with brilliant panelists, Dr. Ryan Yates, Dr. Rangan Sukumar and Chris Davidson. Thank you very much for joining us on this panel and highly illuminating conversation. Yeah. All for the future of drug discovery, that includes botanicals. Thank you very much. >> Thank you. >> Thank you.
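As a rough sketch of the knowledge-graph idea Rangan describes above, connecting a molecule to known protein interactions and on to a disease pathway so a researcher can "connect the dots", the toy example below builds a tiny graph and walks every path from a compound to a symptom. All of the entities, relations, and facts here are invented for illustration; the production system he refers to holds on the order of 150 billion medical facts mined from the literature and is far more sophisticated.

```python
import networkx as nx

# Toy biomedical knowledge graph: nodes are molecules, proteins, and
# symptoms; edges carry a relation label (every entry is illustrative).
kg = nx.DiGraph()
facts = [
    ("eriodictyol", "inhibits", "3CL protease"),
    ("3CL protease", "required_for", "viral replication"),
    ("viral replication", "drives", "respiratory symptoms"),
    ("drug_X", "binds", "spike protein"),
    ("spike protein", "mediates", "cell entry"),
]
for subj, rel, obj in facts:
    kg.add_edge(subj, obj, relation=rel)

def explain_paths(graph, start, end):
    """Yield every directed path from a molecule to an outcome,
    annotated with the relation used on each hop."""
    for path in nx.all_simple_paths(graph, start, end):
        hops = [
            f"{a} -[{graph[a][b]['relation']}]-> {b}"
            for a, b in zip(path, path[1:])
        ]
        yield " ; ".join(hops)

if __name__ == "__main__":
    for explanation in explain_paths(kg, "eriodictyol", "respiratory symptoms"):
        print(explanation)
```

The point of the sketch is only the shape of the query: start from a compound, follow labeled edges, and return the chain of facts as an explanation, which is the kind of "aha in seconds" traversal the panel describes, scaled down to a handful of made-up facts.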

Published Date : Oct 16 2020


Tech for Good | Exascale Day


 

(plane engine roars) (upbeat music) >> They call me Dr. Goh. I'm Senior Vice President and Chief Technology Officer of AI at Hewlett Packard Enterprise. And today I'm in Munich, Germany. Home to one and a half million people. Munich is famous for everything from BMW, to beer, to breathtaking architecture and festive markets. The Bavarian capital is the beating heart of Germany's automobile industry. Over 50,000 of its residents work in automotive engineering, and to date, Munich allocated around 30 million euros to boost electric vehicles and infrastructure for them. (upbeat music) >> Hello, everyone, my name is Dr. Jerome Baudry. I am a professor at the University of Alabama in Huntsville. Our mission is to use a computational resources to accelerate the discovery of drugs that will be useful and efficient against the COVID-19 virus. On the one hand, there is this terrible crisis. And on the other hand, there is this absolutely unique and rare global effort to fight it. And that I think is a is a very positive thing. I am working with the Cray HPE machine called Sentinel. This machine is so amazing that it can actually mimic the screening of hundreds of thousands, almost millions of chemicals a day. What we take weeks, if not months, or years, we can do in a matter of a few days. And it's really the key to accelerating the discovery of new drugs, new pharmaceuticals. We are all in this together, thank you. (upbeat music) >> Hello, everyone. I'm so pleased to be here to interview Dr. Jerome Baudry, of the University of Alabama in Huntsville. >> Hello, Dr. Goh, I'm very happy to be meeting with you here, today. I have a lot of questions for you as well. And I'm looking forward to this conversation between us. >> Yes, yes, and I've got lots of COVID-19 and computational science questions lined up for you too Jerome. Yeah, so let's interview each other, then. >> Absolutely, let's do that, let's interview each other. I've got many questions for you. And , we have a lot in common and yet a lot of things we are addressing from a different point of view. So I'm very much looking forward to your ideas and insights. >> Yeah, especially now, with COVID-19, many of us will have to pivot a lot of our research and development work, to address the most current issues. I watch your video and I've seen that you're very much focused on drug discovery using super computing. The central notebook you did, I'm very excited about that. Can you tell us a bit more about how that works, yeah? >> Yes, I'd be happy to in fact, I watch your video as well manufacturing, and it's actually quite surprisingly close, what we do with drugs, and with what other people do with planes or cars or assembly lanes. we are calculating forces, on molecules, on drug candidates, when they hit parts of the viruses. And we essentially try to identify what small molecules will hit the viruses or its components, the hardest to mess with its function in a way. And that's not very different from what you're doing. What you are describing people in the industry or in the transportation industry are doing. So that's our problem, so to speak, is to deal with a lot of small molecules. Guy creating a lot of forces. That's not a main problem, our main problem is to make intelligent choices about what calculates, what kind of data should we incorporate in our calculations? And what kind of data should we give to the people who are going to do the testing? And that's really something I would like you to do to help us understand better. 
How do you see artificial intelligence, helping us, putting our hands on the right data to start with, in order to produce the right data and accuracy. >> Yeah, that's that's a great question. And it is a question that we've been pondering in our strategy as a company a lot recently. Because more and more now we realize that the data is being generated at the far out edge. By edge. I mean, something that's outside of the cloud and data center, right? Like, for example, a more recent COVID-19 work, doing a lot of cryo electron microscope work, right? To try and get high resolution pictures of the virus and at different angles, so creating lots of movies under electron microscope to try and create a 3D model of the virus. And we realize that's the edge, right, because that's where the microscope is, away from the data center. And massive amounts of data is generated, terabytes and terabytes of data per day generated. And we had to develop means, a workflow means to get that data off the microscope and provide pre-processing and processing, so that they can achieve results without delay. So we learned quite a few lessons there, right, especially trying to get the edge to be more intelligent, to deal with the onslaught of data coming in, from these devices. >> That's fantastic that you're saying that and that you're using this very example of cryo-EM, because that's the kind of data that feeds our computations. And indeed, we have found that it is very, very difficult to get the right cryo-EM data to us. Now we've been working with HPE supercomputer Sentinel, as you may know, for our COVID-19 work. So we have a lot of computational power. But we will be even faster and better, frankly, if we knew what kind of cryo-EM data to focus on. In fact, most of our discussions are based on not so much how to compute the forces of the molecules, which we do quite well on an HP supercomputer. But again, what cryo-EM 3D dimensional space to look at. And it's becoming almost a bottleneck. >> Have access to that. >> And we spend a lot of time, do you envision a point where AI will be able to help us, to make this kind of code almost live or at least as close to live as possible, as that that comes from the edge? How to pack it and not triage it, but prioritize it for the best possible computations on supercomputers? >> What a visionary question and desire, right? Like exactly the vision we have, right? Of course, the ultimate vision, you aim for the best, and that will be a real time stream of processed data coming off the microscope straight, providing your need, right? We are not there. Before this, we are far from there, right? But that's the aim, the ability to push more and more intelligence forward, so that by the time the data reaches you, it is what you need, right, without any further processing. And a lot of AI is applied there, particularly in cryo-EM where they do particle picking, right, they do a lot of active pictures and movies of the virus. And then what they do is, they rotate the virus a little bit, right? And then to try and figure out in all the different images in the movies, to try and pick the particles in there. And this is very much image processing that AI is very good at. So many different stages, application is made. The key thing, is to deal with the data that is flowing at this at this speed, and to get the data to you in the right form, that in time. So yes, that's the desire, right? >> It will be a game changer, really. 
You'll be able to get things in a matter of weeks, instead of a matter of years to the colleague who will be doing the best day. If the AI can help me learn from a calculation that didn't exactly turn out the way we want it to be, that will be very, very helpful. I can see, I can envision AI being able to, live AI to be able to really revolutionize all the process, not only from the discovery, but all the way to the clinical, to the patient, to the hospital. >> Well, that's a great point. In fact, I caught on to your term live AI. That's actually what we are trying to achieve. Although I have not used that term before. Perhaps I'll borrow it for next time. >> Oh please, by all means. >> You see, yes, we have done, I've been doing also recent work on gene expression data. So a vaccine, clinical trial, they have the blood, they get the blood from the volunteers after the first day. And then to run very, very fast AI analytics on the gene expression data that the one, the transcription data, before translation to emit amino acid. The transcription data is enormous. We're talking 30,000, 60,000 different items, transcripts, and how to use that high dimensional data to predict on day one, whether this volunteer will get an adverse event or will have a good antibody outcome, right? For efficacy. So yes, how to do it so quickly, right? To get the blood, go through an SA, right, get the transcript, and then run the analytics and AI to produce an outcome. So that's exactly what we're trying to achieve, yeah. Yes, I always emphasize that, ultimately, the doctor makes that decision. Yeah, AI only suggests based on the data, this is the likely outcome based on all the previous data that the machine has learned from, yeah. >> Oh, I agree, we wouldn't want the machine to decide the fate of the patient, but to assist the doctor or nurse making the decision that will be invaluable? And are you aware of any kind of industry that already is using this kind of live AI? And then, is there anything in, I don't know in sport or crowd control? Or is there any kind of industry? I will be curious to see who is ahead of us in terms of making this kind of a minute based decisions using AI? Yes, in fact, this is very pertinent question. We as In fact, COVID-19, lots of effort working on it, right? But now, industries and different countries are starting to work on returning to work, right, returning to their offices, returning to the factories, returning to the manufacturing plants, but yet, the employers need to reassure the employees that things, appropriate measures are taken for safety, but yet maintain privacy, right? So our Aruba organization actually developed a solution called contact location tracing inside buildings, inside factories, right? Why they built this, and needed a lot of machine learning methods in there to do very, very well, as you say, live AI right? To offer a solution? Well, let me describe the problem. The problem is, in certain countries, and certain states, certain cities where regulations require that, if someone is ill, right, you actually have to go in and disinfect the area person has been to, is a requirement. But if you don't know precisely where the ill person has been to, you actually disinfect the whole factory. And if you have that, if you do that, it becomes impractical and cost prohibitive for the company to keep operating profitably. So what they are doing today with Aruba is, that they carry this Bluetooth Low Energy tag, which is a quarter size, right? 
The reason they do that is so that they abstract the tag away from the person, and then the system tracks everybody, all the employees. We have one company with 10,000 employees, right? It tracks everybody with the tag. And if there is a person who is ill, immediately a floor plan is brought up with hotspots, and then you just target the cleaning services there. The same way, contact tracing is also produced automatically: you could say, anybody that has come in contact with this person within two meters, and for more than 15 minutes, right? It comes up as a list. And privacy is our focus here. There's a separation between the tag and the person, and only restricted people are allowed to see the association. And then things like washrooms and all that are not tracked here. So yes, live AI, trying to make very, very quick decisions, right, because this affects people. >> Another question I have for you, if you have a minute, is actually related to the same thing, though it's more a question about hardware, about computer hardware, if I may. We're spending a lot of time computing on number-crunching giant machines, like Sentinel, for instance, which is a dream to use, and it's very good at that, but we also spend a lot of time moving data back and forth, data from clouds, from storage, from AI processing, to the computing cycles, back and forth, back and forth. Do you envision an architecture that will, kind of, combine the hardware needed for the massively parallel calculations of the kind we are doing, and also very large storage and fast IO, to be more AI friendly, so to speak? Do you see on the horizon some kind of, I would say, new kind of machine, maybe it's to be determined, maybe it's ambitious at times, but something where the AI can plan ahead in terms of passing the vectors to the massively parallel side? Yeah, does that make sense? >> Makes a lot of sense. And you ask it, I know, because it is a tough problem to solve. As we always say, computation, right, is growing capability enormously, but bandwidth you have to pay for, latency you sweat for, right? >> That's a very good... >> So moving data is ultimately going to be the problem.
So that data is not seen as in different places, different pieces, it is a unified view of all the data, the minute that it does, Just start from the edge. >> I think it's important that we communicate that AI is purposed for good, A lot of sci-fi movies, unfortunately, showcase some psychotic computers or teams of evil scientists who want to take over the world. But how can we communicate better that it's a tool for a change, a tool for good? >> So key differences are I always point out is that, at least we have still judgment relative to the machine. And part of the reason we still have judgment is because our brain, logical center is automatically connected to our emotional center. So whatever our logic say is tempered by emotion, and whatever our emotion wants to act, wants to do, right, is tempered by our logic, right? But then AI machine is, many call them, artificial specific intelligence. They are just focused on that decision making and are not connected to other more culturally sensitive or emotionally sensitive type networks. They are focus networks. Although there are people trying to build them, right. That's this power, reason why with judgment, I always use the phrase, right, what's correct, is not always the right thing to do. There is a difference, right? We need to be there to be the last Judge of what's right, right? >> Yeah. >> So that says one of the the big thing, the other one, I bring up is that humans are different from machines, generally, in a sense that, we are highly subtractive. We, filter, right? Well, machine is highly accumulative today. So an AI machine they accumulate to bring in lots of data and tune the network, but our brains a few people realize, we've been working with brain researchers in our work, right? Between three and 30 years old, our brain actually goes through a pruning process of our connections. So for those of us like me after 30 it's done right. (laughs) >> Wait till you reach my age. >> Keep the brain active, because it prunes away connections you don't use, to try and conserve energy, right? I always say, remind our engineers about this point, about prunings because of energy efficiency, right? A slice of pizza drives our brain for three hours. (laughs) That's why, sometimes when I get need to get my engineers to work longer, I just offer them pizza, three more hours, >> Pizza is universal solution to our problems, absolutely. Food Indeed, indeed. There is always a need for a human consciousness. It's not just a logic, it's not like Mr. Spock in "Star Trek," who always speaks about logic but forgets the humanity aspect of it. >> Yes, yes, The connection between the the logic centers and emotional centers, >> You said it very well. Yeah, yeah and the thing is, sleep researchers are saying that when you don't get enough REM sleep, this connection is weakened. Therefore, therefore your decision making gets affected if you don't get enough sleep. So I was thinking, people do alcohol test breathalyzer test before they are allowed to operate sensitive or make sensitive decisions. Perhaps in the future, you have to check whether you have enough REM sleep before, >> It is. This COVID-19 crisis obviously problematic, and I wish it never happened, but there is something that I never experienced before is, how people are talking to each other, people like you and me, we have a lot in common. But I hear more about the industry outside of my field. 
And I talk a lot to people, like cryo-EM people or gene expression people; before, I would have just gotten the data and processed it. Now we have a dialogue across the board, in all aspects of industry, science, and society. And I think that could be something wonderful that we should keep after we finally fix this bug. >> Yes, yes, yes. >> Right? >> Yes, that's a great point. In fact, it's something I've been thinking about, right: for employees, things have changed because of COVID-19, but very likely the change will continue, yeah? >> Right. Yes, yes, because there are a few positive outcomes. COVID-19 is a tough outcome, but there is a positive side of things, like communicating in this way, effectively. So we were part of the consortium that developed a natural language processing system, an AI system, that allows scientists, and I can share the link to that website, to do a query. So say, tell me the latest on the binding energy between the SARS-CoV-2 virus spike protein and the ACE2 receptor. And then it will give you a list of 10 answers, yeah? And give you a link to the papers that give those answers. If you key that in today to the NLP system, you see answers around -13.7 kcal per mole, which is, I think, the general consensus answer, and you see a few that are highly out of range, right? And then when you go further, you realize those are the earlier papers. So I think this NLP system will be useful. (both chattering) I'm sorry, I didn't mean to interrupt, but I mentioned it yesterday, because I have used that, and it's a game changer indeed, it is amazing, indeed. Many times, by using this kind of intelligent conceptual analysis, very direct to use, that you guys are developing, I have found connections between facts, between clinical or pharmaceutical aspects of COVID-19, that I wasn't really aware of. So it's a tool for creativity as well, I find; it builds something. It doesn't just analyze what has been done, it creates connections, it creates a network of knowledge and intelligence. >> That's the three-to-30-years-old thing, before the pruning stops. >> I know, I know. (laughs) But our children are amazing in that respect: they see things that we don't see anymore, they make connections that we don't necessarily think of, because we're used to seeing things a certain way. And the eyes of a child always bring something new, which I think is what AI could potentially bring here. So look, this is fascinating, really. >> Yes, yes, the difference between us filtering, being subtractive, and the machine being accumulative. That's why I believe the two working together can have a stronger outcome, if used properly. >> Absolutely. And I think that's how AI will be a force for good indeed. Obviously it sees things that we would have missed that end up being very important. Well, in our quest for drug discovery against COVID-19, we have been quite successful so far. We have accelerated the process by an order of magnitude, so we have molecules that are being tested against the virus; otherwise, it would have taken maybe three or four years to get to that point. So, first thing, we have been very fast. But we are very interested in natural products, that is, chemicals that come from plants, essentially.
We found a way to mine, I don't want to say explore it, but leverage, that knowledge of hundreds of years of people documenting in a very historical way of what plants do against what diseases in different parts of the world. So that really has been a, not only very useful in our work, but a fantastic bridge to our common human history, basically. And second, yes, plants have chemicals. And of course we love chemicals. Every living cell has chemicals. The chemicals that are in plants, have been fine tuned by evolution to actually have some biological function. They are not there just to look good. They have a role in the cell. And if we're trying to come up with a new growth from scratch, which is also something we want to do, of course, then we have to engineer a function that evolution hasn't already found a solution to, for in plants, so in a way, it's also artificial intelligence. We have natural solutions to our problems, why don't we try to find them and see their work in ourselves, we're going to, and this is certainly have to reinvent the wheel each time. >> Hundreds of millions of years of evolution, >> Hundreds of millions of years. >> Many iterations, >> Yes, ending millions of different plants with all kinds of chemical diversity. So we have a lot of that, at our disposal here. If only we find the right way to analyze them, and bring them to our supercomputers, then we will, we will really leverage this humongus amount of knowledge. Instead of having to reinvent the wheel each time we want to take a car, we'll find that there are cars whose wheels already that we should be borrowing instead of, building one each time. Most of the keys are out there, if we can find them, They' re at our disposal. >> Yeah, nature has done the work after hundreds of millions of years. >> Yes. (chattering) Is to figure out, which is it, yeah? Exactly, exactly hence the importance of biodiversity. >> Yeah, I think this is related to the Knowledge Graph, right? Where, yes, to objects and the linking parameter, right? And then you have hundreds of millions of these right? A chemical to an outcome and the link to it, right? >> Yes, that's exactly what it is, absolutely the kind of things we're pursuing very much, so absolutely. >> Not only only building the graph, but building the dynamics of the graph, In the future, if you eat too much Creme Brulee, or if you don't run enough, or if you sleep, well, then your cells, will have different connections on this graph of the ages, will interact with that molecule in a different way than if you had more sleep or didn't eat that much Creme Brulee or exercise a bit more, >> So insightful, Dr. Baudry. Your, span of knowledge, right, impressed me. And it's such fascinating talking to you. (chattering) Hopefully next time, when we get together, we'll have a bit of Creme Brulee together. >> Yes, let's find out scientifically what it does, we have to do double blind and try three times to make sure we get the right statistics. >> Three phases, three clinical trial phases, right? >> It's been a pleasure talking to you. I like we agreed, you knows this, for all that COVID-19 problems, the way that people talk to each other is, I think the things that I want to keep in this in our post COVID-19 world. I appreciate very much your insight and it's very encouraging the way you see things. So let's make it happen. >> We will work together Dr.Baudry, hope to see you soon, in person. >> Indeed in person, yes. Thank you. >> Thank you, good talking to you.
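Dr. Goh's contact-location tracing example earlier in this segment, flag anyone who has spent more than 15 minutes within two meters of an ill employee, reduces to a simple rule over the tag position logs. The sketch below is a hypothetical simplification, not Aruba's actual implementation: the record layout, the one-position-sample-per-minute assumption, and the thresholds are all assumptions made for illustration.

```python
from collections import defaultdict
from math import dist

# Each record: (timestamp_minutes, tag_id, x_meters, y_meters), assumed to be
# sampled once per minute from the BLE tag location system.
def close_contacts(records, ill_tag, radius_m=2.0, min_minutes=15):
    """Return tag IDs that spent more than `min_minutes` within
    `radius_m` of the ill person's tag."""
    by_time = defaultdict(dict)          # timestamp -> {tag_id: (x, y)}
    for t, tag, x, y in records:
        by_time[t][tag] = (x, y)

    minutes_near = defaultdict(int)      # tag_id -> minutes spent nearby
    for t, positions in by_time.items():
        if ill_tag not in positions:
            continue
        ill_pos = positions[ill_tag]
        for tag, pos in positions.items():
            if tag != ill_tag and dist(pos, ill_pos) <= radius_m:
                minutes_near[tag] += 1   # one sample counts as one minute (assumption)

    return [tag for tag, mins in minutes_near.items() if mins > min_minutes]
```

In a real deployment the positions would come from the building's location system, and the tag-to-person mapping would live in a separate, access-controlled store, which is the privacy separation between tag and person described above.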

Published Date : Oct 16 2020


The University of Edinburgh and Rolls Royce Drive in Exascale Style | Exascale Day


 

>>welcome. My name is Ben Bennett. I am the director of HPC Strategic programs here at Hewlett Packard Enterprise. It is my great pleasure and honor to be talking to Professor Mark Parsons from the Edinburgh Parallel Computing Center. And we're gonna talk a little about exa scale. What? It means we're gonna talk less about the technology on Maura about the science, the requirements on the need for exa scale. Uh, rather than a deep dive into the enabling technologies. Mark. Welcome. >>I then thanks very much for inviting me to tell me >>complete pleasure. Um, so I'd like to kick off with, I suppose. Quite an interesting look back. You and I are both of a certain age 25 plus, Onda. We've seen these milestones. Uh, I suppose that the S I milestones of high performance computing's come and go, you know, from a gig a flop back in 1987 teraflop in 97 a petaflop in 2000 and eight. But we seem to be taking longer in getting to an ex a flop. Um, so I'd like your thoughts. Why is why is an extra flop taking so long? >>So I think that's a very interesting question because I started my career in parallel computing in 1989. I'm gonna join in. IPCC was set up then. You know, we're 30 years old this year in 1990 on Do you know the fastest computer we have them is 800 mega flops just under a getting flogged. So in my career, we've gone already. When we reached the better scale, we'd already gone pretty much a million times faster on, you know, the step from a tariff block to a block scale system really didn't feel particularly difficult. Um, on yet the step from A from a petaflop PETA scale system. To an extent, block is a really, really big challenge. And I think it's really actually related to what's happened with computer processes over the last decade, where, individually, you know, approached the core, Like on your laptop. Whoever hasn't got much faster, we've just got more often So the perception of more speed, but actually just being delivered by more course. And as you go down that approach, you know what happens in the supercomputing world as well. We've gone, uh, in 2010 I think we had systems that were, you know, a few 1000 cores. Our main national service in the UK for the last eight years has had 118,000 cores. But looking at the X scale we're looking at, you know, four or five million cores on taming that level of parallelism is the real challenge. And that's why it's taking an enormous and time to, uh, deliver these systems. That is not just on the hardware front. You know, vendors like HP have to deliver world beating technology and it's hard, hard. But then there's also the challenge to the users. How do they get the codes to work in the face of that much parallelism? >>If you look at what the the complexity is delivering an annex a flop. Andi, you could have bought an extra flop three or four years ago. You couldn't have housed it. You couldn't have powered it. You couldn't have afforded it on, do you? Couldn't program it. But you still you could have You could have bought one. We should have been so lucky to be unable to supply it. Um, the software, um I think from our standpoint, is is looking like where we're doing mawr enabling with our customers. You sell them a machine on, then the the need then to do collaboration specifically seems mawr and Maura around the software. Um, so it's It's gonna be relatively easy to get one x a flop using limb pack, but but that's not extra scale. So what do you think? On exa scale machine versus an X? 
A flop machine means to the people like yourself to your users, the scientists and industry. What is an ex? A flop versus >>an exa scale? So I think, you know, supercomputing moves forward by setting itself challenges. And when you when you look at all of the excess scale programs worldwide that are trying to deliver systems that can do an X a lot form or it's actually very arbitrary challenge. You know, we set ourselves a PETA scale challenge delivering a petaflop somebody manage that, Andi. But you know, the world moves forward by setting itself challenges e think you know, we use quite arbitrary definition of what we mean is well by an exit block. So, you know, in your in my world, um, we either way, first of all, see ah flop is a computation, so multiply or it's an ad or whatever on we tend. Thio, look at that is using very high precision numbers or 64 bit numbers on Do you know, we then say, Well, you've got to do the next block. You've got to do a billion billion of those calculations every second. No, a some of the last arbitrary target Now you know today from HPD Aiken by my assistant and will do a billion billion calculations per second. And they will either do that as a theoretical peak, which would be almost unattainable, or using benchmarks that stressed the system on demonstrate a relaxing law. But again, those benchmarks themselves attuned Thio. Just do those calculations and deliver and explore been a steady I'll way if you like. So, you know, way kind of set ourselves this this this big challenge You know, the big fence on the race course, which were clambering over. But the challenge in itself actually should be. I'm much more interesting. The water we're going to use these devices for having built um, eso. Getting into the extra scale era is not so much about doing an extra block. It's a new generation off capability that allows us to do better scientific and industrial research. And that's the interesting bit in this whole story. >>I would tend to agree with you. I think the the focus around exa scale is to look at, you know, new technologies, new ways of doing things, new ways of looking at data and to get new results. So eventually you will get yourself a nexus scale machine. Um, one hopes, sooner rather >>than later. Well, I'm sure you don't tell me one, Ben. >>It's got nothing to do with may. I can't sell you anything, Mark. But there are people outside the door over there who would love to sell you one. Yes. However, if we if you look at your you know your your exa scale machine, Um, how do you believe the workloads are going to be different on an extra scale machine versus your current PETA scale machine? >>So I think there's always a slight conceit when you buy a new national supercomputer. On that conceit is that you're buying a capability that you know on. But many people will run on the whole system. Known truth. We do have people that run on the whole of our archer system. Today's A 118,000 cores, but I would say, and I'm looking at the system. People that run over say, half of that can be counted on Europe on a single hand in a year, and they're doing very specific things. It's very costly simulation they're running on. So, you know, if you look at these systems today, two things show no one is. It's very difficult to get time on them. The Baroque application procedures All of the requirements have to be assessed by your peers and your given quite limited amount of time that you have to eke out to do science. 
Andi people tend to run their applications in the sweet spot where their application delivers the best performance on You know, we try to push our users over time. Thio use reasonably sized jobs. I think our average job says about 20,000 course, she's not bad, but that does mean that as we move to the exits, kill two things have to happen. One is actually I think we've got to be more relaxed about giving people access to the system, So let's give more people access, let people play, let people try out ideas they've never tried out before. And I think that will lead to a lot more innovation and computational science. But at the same time, I think we also need to be less precious. You know, we to accept these systems will have a variety of sizes of job on them. You know, we're still gonna have people that want to run four million cores or two million cores. That's absolutely fine. Absolutely. Salute those people for trying really, really difficult. But then we're gonna have a huge spectrum of views all the way down to people that want to run on 500 cores or whatever. So I think we need Thio broaden the user base in Alexa Skill system. And I know this is what's happening, for example, in Japan with the new Japanese system. >>So, Mark, if you cast your mind back to almost exactly a year ago after the HPC user forum, you were interviewed for Premier Magazine on Do you alluded in that article to the needs off scientific industrial users requiring, you know, uh on X a flop or an exa scale machine it's clear in your in your previous answer regarding, you know, the workloads. Some would say that the majority of people would be happier with, say, 10 100 petaflop machines. You know, democratization. More people access. But can you provide us examples at the type of science? The needs of industrial users that actually do require those resources to be put >>together as an exa scale machine? So I think you know, it's a very interesting area. At the end of the day, these systems air bought because they are capability systems on. I absolutely take the argument. Why shouldn't we buy 10 100 pattern block systems? But there are a number of scientific areas even today that would benefit from a nexus school system and on these the sort of scientific areas that will use as much access onto a system as much time and as much scale of the system as they can, as you can give them eso on immediate example. People doing chroma dynamics calculations in particle physics, theoretical calculations, they would just use whatever you give them. But you know, I think one of the areas that is very interesting is actually the engineering space where, you know, many people worry the engineering applications over the last decade haven't really kept up with this sort of supercomputers that we have. I'm leading a project called Asimov, funded by M. P S O. C in the UK, which is jointly with Rolls Royce, jointly funded by Rolls Royce and also working with the University of Cambridge, Oxford, Bristol, Warrick. We're trying to do the whole engine gas turbine simulation for the first time. So that's looking at the structure of the gas turbine, the airplane engine, the structure of it, how it's all built it together, looking at the fluid dynamics off the air and the hot gasses, the flu threat, looking at the combustion of the engine looking how fuel is spread into the combustion chamber. Looking at the electrics around, looking at the way the engine two forms is, it heats up and cools down all of that. 
Now Rolls Royce wants to do that for 20 years. Andi, Uh, whenever they certify, a new engine has to go through a number of physical tests, and every time they do on those tests, it could cost them as much as 25 to $30 million. These are very expensive tests, particularly when they do what's called a blade off test, which would be, you know, blade failure. They could prove that the engine contains the fragments of the blade. Sort of think, continue face really important test and all engines and pass it. What we want to do is do is use an exa scale computer to properly model a blade off test for the first time, so that in future, some simulations can become virtual rather than having thio expend all of the money that Rolls Royce would normally spend on. You know, it's a fascinating project is a really hard project to do. One of the things that I do is I am deaf to share this year. Gordon Bell Price on bond I've really enjoyed to do. That's one of the major prizes in our area, you know, gets announced supercomputing every year. So I have the pleasure of reading all the submissions each year. I what's been really interesting thing? This is my third year doing being on the committee on what's really interesting is the way that big systems like Summit, for example, in the US have pushed the user communities to try and do simulations Nowhere. Nobody's done before, you know. And we've seen this as well, with papers coming after the first use of the for Goku system in Japan, for example, people you know, these are very, very broad. So, you know, earthquake simulation, a large Eddie simulations of boats. You know, a number of things around Genome Wide Association studies, for example. So the use of these computers spans of last area off computational science. I think the really really important thing about these systems is their challenging people that do calculations they've never done before. That's what's important. >>Okay, Thank you. You talked about challenges when I nearly said when you and I had lots of hair, but that's probably much more true of May. Um, we used to talk about grand challenges we talked about, especially around the teraflop era, the ski red program driving, you know, the grand challenges of science, possibly to hide the fact that it was a bomb designing computer eso they talked about the grand challenges. Um, we don't seem to talk about that much. We talk about excess girl. We talk about data. Um Where are the grand challenges that you see that an exa scale computer can you know it can help us. Okay, >>so I think grand challenges didn't go away. Just the phrase went out of fashion. Um, that's like my hair. I think it's interesting. The I do feel the science moves forward by setting itself grand challenges and always had has done, you know, my original backgrounds in particle physics. I was very lucky to spend four years at CERN working in the early stage of the left accelerator when it first came online on. Do you know the scientists there? I think they worked on left 15 years before I came in and did my little ph d on it. Andi, I think that way of organizing science hasn't changed. We just talked less about grand challenges. I think you know what I've seen over the last few years is a renaissance in computational science, looking at things that have previously, you know, people have said have been impossible. So a couple of years ago, for example, one of the key Gordon Bell price papers was on Genome Wide Association studies on some of it. 
If I may be one of the winner of its, if I remember right on. But that was really, really interesting because first of all, you know, the sort of the Genome Wide Association Studies had gone out of favor in the bioinformatics by a scientist community because people thought they weren't possible to compute. But that particular paper should Yes, you could do these really, really big Continental little problems in a reasonable amount of time if you had a big enough computer. And one thing I felt all the way through my career actually is we've probably discarded Mawr simulations because they were impossible at the time that we've actually decided to do. And I sometimes think we to challenge ourselves by looking at the things we've discovered in the past and say, Oh, look, you know, we could actually do that now, Andi, I think part of the the challenge of bringing an extra service toe life is to get people to think about what they would use it for. That's a key thing. Otherwise, I always say, a computer that is unused to just be turned off. There's no point in having underutilized supercomputer. Everybody loses from that. >>So Let's let's bring ourselves slightly more up to date. We're in the middle of a global pandemic. Uh, on board one of the things in our industry has bean that I've been particularly proud about is I've seen the vendors, all the vendors, you know, offering up machine's onboard, uh, making resources available for people to fight things current disease. Um, how do you see supercomputers now and in the future? Speeding up things like vaccine discovery on help when helping doctors generally. >>So I think you're quite right that, you know, the supercomputer community around the world actually did a really good job of responding to over 19. Inasmuch as you know, speaking for the UK, we put in place a rapid access program. So anybody wanted to do covert research on the various national services we have done to the to two services Could get really quick access. Um, on that, that has worked really well in the UK You know, we didn't have an archer is an old system, Aziz. You know, we didn't have the world's largest supercomputer, but it is happily bean running lots off covert 19 simulations largely for the biomedical community. Looking at Druk modeling and molecular modeling. Largely that's just been going the US They've been doing really large uh, combinatorial parameter search problems on on Summit, for example, looking to see whether or not old drugs could be reused to solve a new problem on DSO, I think, I think actually, in some respects Kobe, 19 is being the sounds wrong. But it's actually been good for supercomputing. Inasmuch is pointed out to governments that supercomputers are important parts off any scientific, the active countries research infrastructure. >>So, um, I'll finish up and tap into your inner geek. Um, there's a lot of technologies that are being banded around to currently enable, you know, the first exa scale machine, wherever that's going to be from whomever, what are the current technologies or emerging technologies that you are interested in excited about looking forward to getting your hands on. >>So in the business case I've written for the U. K's exa scale computer, I actually characterized this is a choice between the American model in the Japanese model. Okay, both of frozen, both of condoms. Eso in America, they're very much gone down the chorus plus GPU or GPU fruit. 
So you might have, you know, an Intel Xeon or an AMD processor at the center, or an Arm processor for that matter, and you might have two or four GPUs. I think the most interesting thing that I've seen is definitely this move to a single address space, so the data that you have will be accessible by both the GPU and the CPU. I think, you know, that's really been one of the key things that's stopped the uptake of GPUs to date, and that one single change is going to, I think, make things very, very interesting. But I'm not entirely convinced by the CPU plus GPU model, because I think it's very difficult to get all of the performance out of the GPU. You know, it will do well in HPL, for example, the High Performance Linpack benchmark we were discussing at the beginning of this interview. But in real scientific workloads, you know, you still find it difficult to get all the performance that's promised. So the Japanese approach, which is the CPU-only approach, I think is very attractive, inasmuch as, you know, they're using very high bandwidth memory and a very interesting processor, which they've developed together over a 10-year period. And this is one thing that people don't realize: the Japanese program and the American exascale program have both been working for 10 years on these systems. I think the Japanese processor is really interesting because, when you look at the performance, it really does work for their scientific workloads, and that does interest me a lot. This combination of a processor designed to do good science, high bandwidth memory, and a real understanding of how data flows around the supercomputer, I think those are the things that are exciting me at the moment. Obviously, you know, there are new networking technologies. I think, in the fullness of time, not necessarily for the first systems, but over the next decade, we're going to see much, much more activity on silicon photonics. I think that's really, really fascinating. All of these things, I think in some respects the last decade has just been quite incremental improvements, but where supercomputing is going at the moment, we're at a very, very disruptive moment again. That goes back to the start of this discussion: why has exascale been difficult to get to? Actually, because it's a disruptive moment in technology. >> Professor Parsons, thank you very much for your time and your insights. >> Thank you. A pleasure. >> And folks, thank you for watching. I hope you've learned something, or at least enjoyed it. With that, I would ask you to stay safe, and goodbye.
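The gap Professor Parsons describes between benchmark numbers and real scientific workloads comes down to simple arithmetic. The sketch below is purely illustrative; the rates are invented for the example, not measurements of any real system.

```python
# Illustrative only: the rates below are invented, not measurements of any real system.
# The point is that sustained application performance can sit far below peak or HPL numbers.

def sustained_fraction(achieved_flops: float, peak_flops: float) -> float:
    """Fraction of theoretical peak actually delivered to the workload."""
    return achieved_flops / peak_flops

peak = 1.0e18        # nominal exascale peak: 10^18 floating point operations per second
hpl = 0.70e18        # dense linear algebra (HPL-style) often runs close to peak
real_app = 0.05e18   # many real scientific codes sustain a much smaller share

for name, rate in [("HPL-style benchmark", hpl), ("typical scientific code", real_app)]:
    print(f"{name}: {sustained_fraction(rate, peak):.0%} of peak")
```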

Published Date : Oct 16 2020


Intro | Exascale Day


 

>> Hi everyone, this is Dave Vellante, and I want to welcome you to our celebration of Exascale Day, a community event with support from Hewlett Packard Enterprise. Now, Exascale Day is October 18th, that's 10/18, as in 10 to the power of 18. And on that day we celebrate the scientists and researchers who make breakthrough discoveries with the assistance of some of the most sophisticated supercomputers in the world, ones that can run at exascale. Now in this program, we're going to kick off the weekend and discuss the significance of exascale computing, how we got here, why it's so challenging to get to the point where we're at now, where we can perform almost 10 to the 18th floating point operations per second, or an exaFLOP. We should be there by 2021. And importantly, what innovations and possibilities exascale computing will unlock. So today, we've got a great program for you. We're not only going to dig into a bit of the history of supercomputing, we're going to talk with experts, folks like Dr. Ben Bennett, who's doing some work with the UK government. And he's going to talk about some of the breakthroughs that we can expect with exascale. You'll also hear from experts like Professor Mark Parsons of the University of Edinburgh, who cut his teeth at CERN in Geneva, and Dr. Brian Pigeon Nuskey of Purdue University, who's studying biodiversity. We're also going to hear about supercomputers in space, as there's great action going on with supercomputers up at the International Space Station. Think about that: powerful, high performance, water-cooled supercomputers, running on solar, and mounted overhead. That's right. Even though at the altitude of the International Space Station there's 90% of the Earth's gravity, objects, including humans, are essentially in a state of free fall. At 400 kilometers above Earth, there's no air. You're in a vacuum. Like, have you ever been on the Tower of Terror at Disney, on that free fall ride, or in a nosedive in an airplane? I have. And if you had binoculars around your neck, they would float. So the supercomputers can actually go into the ceiling, crazy right? And that's not all. We're going to hear from experts on what the exascale era will usher in for not only space exploration, but things like weather forecasting, life sciences, complex modeling, and all types of scientific endeavors. So stay right there for all the great content. You can use the hashtag #ExascaleDay on Twitter, and enjoy the program. Thanks everybody for watching.

Published Date : Oct 15 2020


Sam Grocott, Dell Technologies | Exascale Day


 

>> Narrator: From around the globe. It's theCUBE. With digital coverage of Dell Technologies World-Digital Experience. Brought to you by Dell Technologies. >> Hello everyone, and welcome back to theCUBE's continuing coverage of Dell Tech World 2020. This is Dave Vellante, and I'm here with Sam Grocott, who's the Senior Vice President of Product Marketing at Dell Technologies. Sam, great to see you. Welcome. >> Great to be here, Dave. >> All right, we're going to talk generally about cloud in the coming decade, and really how the cloud model is evolving. But I want to specifically ask Sam about the as a service news that Dell's making at DTW, what those solutions look like, and how they're going to evolve. Maybe Sam, we can hit on some of the customer uptake and the feedback as well. Does that sound good? >> Yeah, sounds great. Let's dive right in. >> All right, let's do that. So look, you've come from the world of the disruptor. When you joined Isilon, they got acquired by EMC and then Dell. So, you've been on both sides of the competitive table. And cloud is obviously a major force, actually I'd say the major disruptive force, in our industry. Let's talk about how Dell is responding to the cloud trend generally. Then we'll get into the announcements. >> Yeah, certainly. And you're right, I've been on both sides of this. There is no doubt, if you look at just the last decade or so, customers and partners are really evaluating how they can take advantage of the value of moving workloads to the cloud. We've seen it happen over the last decade or so, and it's happening at a more frequent pace. There's no doubt that is really what planted the seed of this new operating experience, kind of a new lifestyle so to speak, around as a service. Because when you go to the cloud, that's the only way they roll: you get an as a service experience. So, that really has started to come into the data center, as organizations are moving specific workloads or applications to the cloud. It's, hey, how do I get that in an on-premise experience? I think throwing gasoline on that is certainly the pandemic and COVID-19, which has really made organizations evaluate how to move much quicker and more agilely by moving some applications to the cloud, because frankly on-prem just wasn't able to move as fast as they'd like to see. We're seeing that macrotrend accelerate. I think we're in good shape to take advantage of that as we go forward. >> Well, that brings us to the hard news of what you're calling Project Apex, i.e. your as a service initiative. What specifically are you announcing this week? >> Yeah. So, Project Apex is one of our big announcements, and that's really where we're targeting how we're bringing together and unifying our product development, our sales go-to-market, our marketing go-to-market, everything coming together underneath Project Apex, which is our as a service and cloud-like experience. Look, we know we're in a world where customers are constantly evaluating which applications stay on-prem and which applications and workloads should go to the cloud. I think the market has certainly voted clearly that it's going to be both. It's going to be a hybrid multicloud world. But what they're absolutely clear that they want is a simple, easy to use as a service experience, regardless of if they're on-prem or off-prem. And that's where really the traditional on-prem solutions fall down, because it's just too darn complex still.
They've got many different tools, managing many different applications that oversee their cloud operations, their various infrastructure, whether it's server or compute or networking. They all run different tools. So, it gets very, very complex. It also very rigid to scale. You can't move as fast as the cloud. It can't deploy as fast. It requires manual intervention to buy more. You typically got to get a sales rep in-house to come in and extend your environment and grow your environment. And then of course, the traditional method is very CapEx heavy. In a world where organizations are really trying to preserve cash. Cash is king. It doesn't really give them the flexibility traditionally or going forward that they'd like to see on that front. So, what they want to see is a consistent operating experience for their on and off-prem environments. They want to see a single tool that can manage, report and grow and do commerce across that environment. Regardless of if it's on or off-prem. They want something that can scale quickly. Now look, when you're moving equipment on-prem, it's not going to be a click of a button. But you should be able to buy and procure that with a click of a button. And then very quickly, within less than a handful of days. That equipment should be stood up deployed and running in their environment. And then finally, it's got to deliver this more flexible finance model. Whether it's leveraging a flexible subscription models or OPEX friendly models. Customers are really looking for that more OPEX friendly approach. Which we're going to be providing with Project Apex. So very, very excited about kind of the goals and the aspirations of Project Apex. We're going to see a lot of it come to market early next year. I think we're well situated, as I said, to take advantage of this opportunity. >> So, when I was looking through the announcement and sort of squinting through it. The three things jumped out and you've definitely hit on those. One is choice. But sometimes you don't want to give customers too much choice. So, it's got to be simple and it's got to be consistent. So, it feels like you're putting this abstraction layer over your entire portfolio and trying to hit on those three items. Which is somewhat of a balancing act. Is that right? >> Yeah. No, you're exactly right. The kind of the pillars of the Project Apex value proposition so to speak, is simplicity choice and consistency. So, we've got to deliver that simple kind of end to end journey view of their entire cloud and as a service experience. It needs to span our entire portfolio. So, whether it's servers, storage or networking or PCs or cloud. All of that needs to be integrated into essentially a large, single web interface that gives you visibility across all of that. And of course, the ease of scale up and frankly scaled down. Should be able to do that in real time through the system. Choice is a big, big factor for us. We've got the broadest portfolio in the industry. We want to provide customers the ability to consume infrastructure any way they want. Clearly they can consume it the traditional way. But this more as a service flexible consumption approach is fundamental to making sure customers only pay for what they use. So, highly metered environment. Pay as they go. You leverage subscriptions. Essentially give them that OPEX flexibility that they've been looking for. And then finally, I think the real key differentiator is that consistent operating experience. 
So, whether you move workloads on or off-prem. It's got to be in a single environment that doesn't require you to jump around between different application and management experiences. >> Alright, so I've got to ask you the tough question. I want to hear your answer to it. I mean, we've seen the cloud model. Everybody knows it very well. But why now? People are going to say, okay, you're just responding to HPE. What's different between what you're doing and what some of your competitors are doing? >> Yeah. So, I think it really comes down to the choice and breadth of what we're bringing to the table. So, we're not going to force our customers to go down one of these routes. We're going to provide that ultimate flexibility. And I think what will really define ourselves against them and shine ourselves against them is, that consistent operating experience. We've got that opportunity to provide both an on-prem, Edge and cloud experience. That doesn't require them to move out of that operating experience to jump between different tools. So, whether you're running a Storage as a service environment. Which we'll have in the first half of next year. Looking through our new cloud console that is coming out early next year as well. You're going to be able to have that single view of everything that's going on across your environment. And also be able to move workloads from on-prem and off-prem without breaking that consistent experience. I think that is probably the biggest differentiator we're going to have. When you ladder that onto just the general Dell Technologies value of being able to meet and deliver our solutions anywhere in the world at any point of the data center, at the Edge, or even cloud-native. We've got the broadest portfolio to meet our customer needs wherever we need to go. >> So, my understanding is the offerings, it's designed to encompass the entire Dell Technologies portfolio. >> That's right. >> From client solutions, ISG, et cetera. Not VMware specifically. It's really that whole Dell Technologies portfolio. Correct? >> Yeah and look, over time we totally expect to be able to transact to VMware through this. We do expect that to be part of the solution eventually. So yeah, it is across, PC as a service, Storage as a service, Infrastructure as a service. Our cloud offers all of our services, traditional services that are helping to deliver this as a service experience. And even our traditional financial flexible consumption models will be included in this. Because again, we want to offer ultimate choice and flexibility. We're not going to force our customers to go down any of these paths. But what we want to do is present these paths and go wherever they want to go. We've got the breadth of the portfolio and the offers to get them there. >> Oh, okay. So, it's really a journey. You mentioned Storage as a service coming out first and then as well. If I understand it, the idea is to, I'm going to have visibility and control over my entire state on-prem, cloud, Edge, kind of the whole enchilada. Maybe not right out of the shoot, but that's the vision. >> Absolutely. You've got to be able to see all of that and we'll continue to iterate over time and bring more environments, more applications, more cloud environments into this. But that is absolutely the vision of Project Apex is to deliver that fully integrated core, Edge, cloud partner experience. To all of the environments our customers could be running in. >> I want to put my customer hat on my CFO, CIO hat. 
Okay, what's the fine print. What are the minimum bars to get in? What's the minimum commitment I need to make? What are some of those nuances? >> Yeah. So, both the Storage as a service, which will be our first offer of many in our portfolio. And the cloud console, which will give you that single web interface to kind of manage, report and kind of thrive in this as a service experience. All that will be released in the first half of the next year. So, we're still frankly defining what that will look like. But we want to make sure that we deliver a solution that can span all segments. From small business to medium business, to the biggest enterprises out there. Globally goal expansion through our channel partners. We're going to have Geos and channel partners fully integrated as well. Service providers as well. As a fundamental important piece of our delivery model and delivering this experience to our customers. So, the fine print Dave will be out early next year. As we GA these releases and bring into market. But ultimate flexibility and choice, up and down the stack and geographically wide is the goal and the intent we plan to deliver that. >> Can you add any color to the sort of product journey, if you will? I even hesitate Sam, to use the word product. Because you're really sort of transferring your mindset into a platform mindset and a services mindset. As opposed to bolting services on top of a price. You sell a product and say okay, service guys you take it from here. You have to sort of rethink, how you deliver. And so you're saying, you start with storage. And then so what can we expect over the next midterm-longterm? >> Yeah. I'll give you an example. Look, we sell a ton of as a service and flexible consumption today. We've been at it for 10 years. In fact in Q2, we sold our annual recurring revenue rate is 1.3 billion growing at 30% very, very pleased. So, this is not new to us. But how you described it Dave is right. We adopt products, customers then pick their product. They pick their service that they want to bolt on. Then they pick their financial payment model they bolted on. So, it's a very good, customized way to build it. That's great. And customers are going to continue to want that and will continue to deliver that. But there is an emerging segment that wants more just kind of think of it as the big easy button. They want to focus on an outcome. Storage as a service is a great example where they're less concerned about what individual product element is part of that. They want it fully managed by Dell Technologies or one of our partners. They don't want to manage it themselves. And of course they want it to be pay-for-use on an OPEX plan that works for their business and gives them that flexibility. So, when customers going forward want to go down this as a service outcome driven path. They're simply going to say, hey, what data service do I want? I want file or block unified object. They pick their data service based on their workload. They pick their performance and capacity tier. There is a term limit, right now we're planning one to five years. Depending on the amount of terms you want to do. And then that's it. It's managed by Dell Technologies. It's on our books from Dell Technologies and it's of course leveraging our great technology portfolio to bring that service and that experience to our customers. So, the service is the product now. It really is making that shift. 
We are moving into a services driven, services outcome driven set of portfolio and solutions for our customers. >> So, you actually have a lot of data on this. I mean, you talk about a billion dollar business. Maybe talk a little bit about customer uptake. I don't know what you can share in terms of numbers and a number of subscription customers. But I'm really interested in the learnings and the feedback and how that's informed your strategy? >> Yeah. I mean, you're right. Again, we've been at this for many, many years. We have over 2000 customers today that have chosen to take advantage of our flexible consumption and as a service offers that we have today. Nevermind kind of as we move into these kind of turn-key, easy button as a service offers that are to come that early next year. So, we've leveraged all of that learnings and we've heard all of that feedback. It's why it's really important that choice and flexibility is fundamental to the Project Apex strategy. There are some of those customers that they want to build their own. They want to make sure they're running the latest PowerMax or the latest PowerStore. They want to choose their network. They want to choose how they protect it. They want to choose what type of service. They want to cover some of the services. They may want very little from us or vice versa. And then they want to maybe leverage additional, more traditional means to acquire that based on their business goals. That feedback has been loud and clear. But there is that segment that is like, no, no, no. I need to focus more on my business and not my infrastructure. And that's where you're going to see these more turn-key as a service solutions fit that need. Where they want to just define SLAs, outcomes. They want us to take on the burden of managing it for them. So, they can really focus on their applications and their business, not their infrastructure. So, things like metering. Tons of feedback on how we'll want to meter this. Tons of feedback on the types of configurations and scale they're looking for. The applications and workloads that they're targeting for this world. Is very different than the more traditional world. So, we're leveraging all of that information to make sure we deliver our Infrastructure as a service and then eventually Solutions as a service. You think about SAP as a service, VDI as a service. AI machine learning as a service. We'll be moving up the stack as well to meet more of a application integrated as a service experience as well. >> So, I want to ask you. You've given us a couple of data points there, billion dollar plus business. A couple thousand customers. You've got decent average contract values if I do my math right. So, it's not just the little guys. I'm sorry, it's not just the big guys, but there's some fat middle as well that are taking this up. Is that fair to say? >> Totally. I mean, I would say frankly in the enterprise space. It's the mid to larger sides historically and we expect they'll continue to want to kind of choose their best of breed apart. Best of breed of products, Best of breed services. Best of breed financial consumption. Great. And we're in great shape there. We're very confident or competitive and competing in that space today. I think going into the turn-key as a service space that will play up-market. But it will really play down-market, mid-market, smaller businesses. It gives us the opportunity to really drive a solution there. 
Where they don't have the resources to maybe manage a large storage infrastructure or a backup infrastructure or compute infrastructure. They're going to frankly look to us to provide that experience for them. I think our as a service offers will really play stronger in that mid and kind of lower end of the market. >> So, tell us again. The sort of availability of like the console, for example. When can I actually get-- >> Yeah. >> I can do as a service today. I can buy subscriptions from you. >> Absolutely. >> This is where it all comes together. What's the availability and rollout details? >> Sure. As we look to move to our integrated kind of turn-key as a service offers. The console we're announcing at Dell Technologies World as it's in public preview now. So, for organizations, customers that want to start using it. They can start using it now. The Storage as a service offer is going to be available in the first half of next year. So, we're rapidly kind of working on that now. Looking to early next year to bring that to market. So, you'll see the console and the first as a service offer with storage as a service available in the first half of next year. Readily available to any and everyone that wants to deploy it. We're not that far off right now. But we felt it was really, really important to make sure our customers. Our partners and the industry really understands how important this transformation to as a service and cloud is for Dell Technologies. That's why frankly, externally and internally Project Apex will be that north star to bring our end to end value together across the business. Across our customers, across our teams. And that's why we're really making sure that everybody understands Project Apex and as a services is the future for Dell. And we're very much focused on that. >> As the head of product marketing. This is really a mindset, a cultural change really. You're really becoming the head of service marketing in a way. How are you guys thinking about that mindset shift? >> Well really, it's how am I thinking about it? How is the broader marketing organization thinking about it? How is engineering clearly thinking about it? How is finance thinking about it? How is sale? Like this is transformative across every single function within Dell technologies has a role to play, to do things very differently. Now it's going to take time. It's not going to happen overnight. Various estimates have this as a fairly small percentage of business today in our segments. But we do expect that to start to, and it has started to accelerate ramp. We're preparing for a large percentage of our business to be consumed this way very, very soon. That requires changes in how we sell. Changes in how we market clearly. Changes in how we build products and so forth. And then ultimately, how we account for this has to change. So, we're approaching it I think the right way Dave. Where we're looking at this truly end to end. This isn't a tweak in how we do things or an evolution. This is a revolution. For us to kind of move faster to this model. Again, building on the learnings that we have today with our strong customer base and experience we've built up over the years. But this is a big shift. This isn't an incremental turn of the crank. We know that. I think you expect that. Our customers expect that. And that's the mission we're on with Project Apex. >> Well, I mean, with 30% growth. I mean, that's a clear indicator and people like growth. No doubt. 
That's a clear indicator that customers are glomming onto this. I think many folks want to buy this way, and I think increasingly that's how they buy SaaS. That's how they buy cloud. Why not buy infrastructure the same way? Give us your closing thoughts Sam. What are the big takeaways? >> Yeah. The big takeaways is from a Dell Technologies perspective. Project Apex is that strategic vision of bringing together our as a service and cloud capabilities into a easy to consume, simple, flexible offer. That provides ultimate choice to our customers. Look, the market has spoken. We're going to be living in a hybrid multicloud world. I think the market is also starting to speak. That they want that to be an as a service experience, regardless if it's on or off ground. It's our job. It's our responsibility to bring that ease. That simplicity and elegance to the on-prem world. It's not certainly not going anywhere. So, that's the mission that we're on with Project Apex. I like the hand we've been dealt. I like the infrastructure and the solutions that we have across our portfolio. And we're going to be after this, for the next couple of years. To refine this and build this out for our customers. This is just the beginning. >> Wow, it's awesome. Thank you so much for coming to theCUBE. We're seeing the cloud model. It's extending on-prem, cloud, multicloud it's going to the Edge. And the way in which customers want to transact business is moving at the same direction. So, Sam good luck with this and thanks so much. Appreciate your time. >> Yeah, thanks Dave. Thanks everyone. Take care. >> All right and thank you for watching. This is Dave Vellante for theCUBE and our continuing coverage of Dell Tech World 2020. The virtual CUBE. We'll be right back right after this short break. (gentle music)

Published Date : Oct 9 2020


Exascale Day V2


 

Hi everyone, this is Dave Vellante of theCUBE, and I want to share with you an exciting development. With some financial support from HPE, theCUBE is hosting Exascale Day on Friday, October 16th. High performance technical and business communities are coming together to celebrate Exascale Day. Now, Exascale Day itself is happening on October 18th, that's 10/18, as in 10 to the power of 18. On that day we celebrate the scientists and researchers who make breakthrough discoveries with the assistance of some of the largest supercomputers in the world. 10 to the power of 18 is a 1 with 18 zeros after it. That's six commas, or seis comas, for you Russ Hanneman fans of Silicon Valley fame. Remember, he could only get to tres comas, and he became suicidal when his net worth dropped below a billion, aka dos comas. Now, an exascale supercomputer can do math at the rate of 10 to the power of 18 calculations per second. Those calculations are called FLOPS, or floating point operations per second. That's a billion billion calculations per second, or an exaFLOP. Now, we haven't hit that level yet, that exascale level, but dollars to donuts we will by next year. Today we can do petascale computing, that's 10 to the power of 15 calculations per second, and we entered the petascale era in 2007. Before that was the terascale era, which is kind of like the dinosaurs; it began in the middle of the dot-com boom in 1997. That's 10 to the 12th calculations per second, or a trillion per second, so we can almost get our heads around that. And all the way back in 1972 we had the first gigascale computer, which was one times 10 to the ninth. Yeah, that's more Russ Hanneman's speed. Sorry Russ, you're not invited to the Exascale Day party. But you are, so go to events.cube365.net/10-18, Exascale Day. It's right there on the screen, so check it out and mark your calendar. We'll be sending out notices, so don't worry if you're driving right now. We have some of the smartest people in the world joining us, and they're going to share how innovations with supercomputing are changing the world in healthcare, space exploration, artificial intelligence, and these other mind-melting projects. We're super excited to be participating in this program, and we look forward to some great conversations October 16th, right before Exascale Day. Put it on your calendar. See you there.
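As a rough back-of-the-envelope companion to the giga, tera, peta, and exa rates mentioned above, the short sketch below shows how long one fixed batch of work takes at each rate. The workload size is arbitrary and chosen only to make the comparison concrete.

```python
# Back-of-the-envelope comparison of the computing eras mentioned above.
# The workload size is arbitrary; only the relative timings matter.

RATES = {
    "gigascale (10^9 FLOPS, 1972)": 1e9,
    "terascale (10^12 FLOPS, 1997)": 1e12,
    "petascale (10^15 FLOPS, 2007)": 1e15,
    "exascale  (10^18 FLOPS)": 1e18,
}

work = 1e18  # 10^18 floating point operations: one second of work for an exascale machine

for era, flops_per_second in RATES.items():
    seconds = work / flops_per_second
    print(f"{era}: about {seconds:,.0f} seconds")
# The exascale machine finishes in one second; the 1997-era terascale machine
# would need about a million seconds, roughly eleven and a half days.
```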

Published Date : Oct 3 2020


Mike Beltrano, AMD & Phil Soper, HPE | HPE Discover 2022


 

(soft upbeat music) >> Narrator: theCUBE presents HPE Discover 2022 brought to you by HPE. >> Hey everyone. Welcome back to Las Vegas. theCUBE is live. We love saying that. theCUBE is live at HPE Discover '22. It's about 8,000 HP folks here, customers, partners, leadership. It's been an awesome day one. We're looking forward to a great conversation next. Lisa Martin, Dave Vellante, two guests join us. We're going to be talking about the power of the channel. Mike Beltrano joins us, Worldwide Channel Sales Leader at AMD, and Phil Soper is here, the North America Head of Channel Sales at HPE. Guys, great to have you. >> Thanks for having us. >> Great to be here. >> So we're talking a lot today about the ecosystem. It's evolved tremendously. Talk to us about the partnership. Mike, we'll start with you. Phil, we'll go to you. What's new with HPE and AMD Better Together? >> It's more than a partnership. It's actually a relationship. We are really tied at the hip, not just in X86 servers but we're really starting to get more diverse in HP's portfolio. We're in their hyper-converged solutions, we're in their storage solutions, we're in GreenLake. It's pretty hard to get away from AMD within the HP portfolio so the relationship is really good. It's gone beyond just a partnership so starting to transition now down into the channel, and we're really excited about it. >> Phil, talk about that more. Talk about the evolution of the partnership and that kind of really that pull-down. >> I think there's an impression sometimes that AMD is kind of the processor that's in our computers and it's so much more, the relationship is so much more than the inclusion of the technology. We co-develop solutions. Interesting news today at Antonio's presentation of the first Exascale supercomputer. We're solving health problems with the supercomputer that was co-developed between AMD and HPE. The other thing I would add is from a channel perspective, it's way more than just what's in the technology. It's how we engage and how we go to market together. And we're very active in working together to offer our solutions to customers and to be competitive and to win. >> Describe that go-to-market model that you guys have, specifically in the channel. >> So, there is a, his organization and mine, we develop joint go-to-market channel programs. We work through the same channel ecosystem of partners. We engage on specific opportunities. We work together to make sure we have the right creative solution pricing to be aggressive in the marketplace and to compete. >> It's a great question because we're in a supply chain crisis right now, right? And you look at the different ways that HP can go to market through the channel. There's probably about four or five ways that channel partners can provide solutions, but it's also route to purchase for the customers. So, we're in a supply chain crisis right now, but we have HP AMD servers in stock in distribution right now. That's a real big competitive advantage, okay? And if those aren't exactly what you need, HP can do custom solutions with AMD platforms all day, across the board. And if you want to go ahead and do it through the cloud, you've got AMD technology in GreenLake. 
So, it's pretty much have it your way for the customers through the channel, and it's really great for the customers too because there are multiple ways for them to procure the equipment through the channel. So we really love the way that HP allows us to kind of integrate into their products, but then integrate into their procurement model down through the channel for the end user to make the right choice. So, it's fantastic. >> You mentioned that AMD's in HCI, in storage, in GreenLake and in the channel. What are the different requirements within those areas? How does the channel influence those requirements and what you guys actually go to market with? >> Well, it comes down to awareness. Awareness is our biggest enemy, and the channel's just huge for us because AMD's competitive advantage in our technology is much different. And when you think about price and performance and security and sustainability, that's what we're delivering. And really the channel kind of plugs that in and educates their customers through their marketing and demand gen, and influences the route to purchase based on their situation, whether they hear from their customers or are proactively touching them: if they want to pay for it as a service, if they want to finance it, or if it does happen to be in stock and speed of delivery is important to them, the channel partner influences that through the relationships and distribution, or they can go ahead and place it as a custom to order. So, it's just really based on where they're at in their purchasing cycle. And also, it's not about the hardware as much as it's about the software and the applications and the high-value workloads that they're running, and that kind of just dictates the platform. >> Does hardware matter? >> Yes, it sure does. It does, man. It's kind of like the vessel at this point, and our processors and our GPUs are in the HP vessel, but it is about the application. >> I love that analogy. I would say it absolutely does, workloads matter more, and then what's the hardware to run those workloads is really critical. >> And to your point though, it's not just about the CPU anymore. It's about, you guys have made some acquisitions to sort of diversify. It's about all the other supporting sort of actors, if you will, that support those new workloads. >> Let me give you an example that's being showcased at this show, okay? Our extreme search solution being driven by Splunk, okay? It's a cybersecurity solution that the industry is going to have to be able to handle in regards to response to any sort of breach, and when you think about it, they have to search through the data, and how they get through it and do it in a timely fashion matters. What we've done is developed a DL385 solution where we have an EPYC processor from AMD, we have a Xilinx FPGA, Xilinx who we own now, and Samsung SSDs which are four terabytes per drive, packed in a DL385. Now you add the Splunk solution on top of that, and if there ever is a breach, it would normally take days to go ahead and assess that breach. Now it can be done in 25 minutes, and we have that solution here right now, so it's not like we acquired Xilinx and we're waiting to integrate it.
We hit the ground running and it's fantastic 'cause the solution's being driven by one of our top partners, WWT, and it's live in their booth here today. So we're kind of showing that integration of what AMD is doing with our acquisitions in HP servers, and being able to show that today with a workload on top of it is the real deal. >> Purpose-built to scan through all those log files and actually surface the insight. >> Exactly what it is, and it's in the public sector right now, that's a requirement to be able to do that, and to not have it take weeks and be able to do it in 25 minutes is pretty impressive. >> Those are the outcomes customers are demanding? >> That's it. If you're purchasing an outcome, HP can deliver it with AMD, and if you're looking to build your own, we can give it to you that way too, so it's flexibility. >> Absolutely critical. Mike, from your perspective on the partnership, we've seen obviously a lot of transformation at HPE over the last couple of years. Antonio stood on this stage three years ago and said, "By 2022, we're going to deliver the entire portfolio as a service." How influential has AMD been from a relationship perspective on what he said three years ago and where they are today? >> Oh my gosh! We've been with them all the way through. I mean, HP is just such a great partner, and right now, we're the VDI solution on GreenLake, so it's HP GreenLake VDI solutions powered by AMD. We love that brand recognition as a service, okay? Same with high-performance computing powered by AMD, offered on HP GreenLake. So it's really changed it a lot, because as a service, it's just a different way for a customer to procure it, and they don't have to worry about that hardware and the stack and anything like that. It's more about them going into that GreenLake portal and being able to understand that they're paying for it just like they pay their phone bill or anything else. So Antonio's been spot-on with that, because that's a reality today and it's being delivered through the channel, and AMD's proud to be a part of it. And it's much different, 'cause we don't need to be as involved as we have to be from a hardware sale perspective when it's going through GreenLake, and it makes it much easier for us. >> Phil, you talked about workloads, really kind of what matters. How are they evolving? How is that affecting things? What are customers grabbing you and saying, "We need this"? What do you see from a workload standpoint, and how are you delivering that? >> Well, the edge to cloud platform, or GreenLake, is very much an as a service offering aimed at workloads. And so HPE is building and focusing its solutions on addressing specific workload needs. It's not necessarily about the performance you mentioned, or the question about hardware. It's, what is the workload, should the workload be, or could the workload be, in public cloud, or is it a workload that needs to be on premise? Customers are making those choices, and we're working with those customers to help them drive those strategies, and then we adapt depending on where the customer wants the workload. >> Well, it's interesting, because Antonio in his keynote today said, "That's the wrong question," and my reaction was, that's the question everybody's asking. It may be the wrong question, but that's what everybody asks, so your challenge is, I guess, to get them to stop asking that question and just run the right tool for the right job kind of thing.
>> That's exactly what it's about, because you take high-value workloads, okay? And that can mean a lot of different things, and if you just pick one of them, let's say like VDI or hyper-converged: HP's the only game in town where they can kind of go into a battle with four different guns. They give you a lot of choices, and they offer them on an AMD platform, and they're not locking you in. They give you a lot of flexibility and choice. So, if you were doing hyper-converged through HPE and you were looking to do it on an AMD platform, they can offer it to you with VMware vSAN ReadyNodes. They can offer it to you with SimpliVity. They can offer it to you with Nutanix. They can offer it to you with Microsoft, all on an AMD stack. And if you want to bring your own VMware and go bare metal, HP will just give you the nodes. If you want to go factory integrated, or if you want to purchase it via OEM through HP and have them support it, they just deliver it any way you want to get it. It's just a fantastic story. >> I'll just say, look, others could do that, but they don't want to, okay? That's the fact. Sometimes it happens, sometimes the channel cobbles it together in the field, but it's like they do it grinding their teeth. So I mean, I think that is a differentiator of HPE. You're agnostic to that. In fact, by design. >> You can bring your own, you can bring your own software. I mean, it's like, you just bring your own. If you have it, why would we make a customer buy it again? HP gives them that flexibility, and if it's multiple hypervisors and it's brand agnostic, it's more about, let's deliver you the nodes, purpose-built for the application that you're going to run in that workload, and then HP goes ahead and does that across their portfolio on a custom to order basis. It's just beautiful for us to fit the need for the customer. >> Well, you're meeting customers where they are. >> Yes. >> Which in today's world is critical. There's really no other option for companies. Customers are demanding. Demands are not going to go away. We're not going to see a decrease in demand after the pandemic's over, right? And the expectations on businesses keep rising. So meeting the customers where they are, giving them that choice, that flexibility, is table stakes. >> You've mentioned supply chain constraints, and it sounds like you guys are managing that pretty well. I think it's a lot of these hard to get supporting components, maybe not the most expensive component, but they just don't have it, so you can't ship the car or you can't ship the server, whatever it is. How is that affecting the channel? How are they dealing with that? Maybe you could give us an update. >> Oh, the channel's just, we love them. They're the front line, that's who the customers call in to, who've been waiting to get their technology, and we're wading through it. Thank goodness that we have GreenLake, because if you wanted to buy it traditionally, HP is supplying product to purchase through distribution in stock, but it's very limited. And then if you go custom order, that's where the long lead times come into place, because it's not just the hard drives and memory and the traditional things that are constrained now. Now it's the clips and the intangibles and things like that, and when you get to that point, you've got to just do the best you can, and HP supply chain has just been fantastic, super informative. AMD, we're not the problem.
We got HP, plenty of processors and plenty of accelerators and GPUs and we're standing with them because that back to the relationship, we're facing the customer with them and managing their expectations to the best we can and trying to give them options to keep their business floating. >> So is that going to be, is this a supply chain constraints could be an accelerant for GreenLake because that capacity is in place for you to service your customers with GreenLake presumably. You're planning for that. There's headroom there in terms of being able to deliver that. If you can't deliver GreenLake, all this promise. >> I would say I would be careful not to position GreenLake as an answer to supply chain challenges, right? I think there's a greater value proposition to a client, and keep in mind, you still have technology at the heart of it, right? And so, and to your question though about our partners, honestly in a lot of ways, it's heartbreaking given the challenges that they face, not just with HPE, but other vendors that they sell and support and without our partners and managing those, we'd be in a world of hurt, frankly and we're working on options. We work with our partners really closely. We work with AMD where we have constraints to move to other potential configurations. >> Does GreenLake make it harder or easier for you to forecast? Because on the one hand, it's as a service and on the other hand, I can dial it down as a customer or dial it up and spike it up if I need to. Do you have enough experience to know at this point, whether it's easier or harder to forecast? >> I think intuitively it's probably harder because you have that variable component that you can't forecast, right? It's with GreenLake, you have your baseline so you know what that baseline is going to be, the baseline commitment and you build in that variable component which is as a service, you pay for what you consume. So that variable component is the one thing that is we can estimate but we don't know exactly what the customer is going to use. >> When you do a GreenLake deal, how does it work? Let's say it's a two-year deal or a three-year deal, whatever and you negotiate a price with a customer for price per X. Do you know like what that contract value is going to be over the life or do you only know that that baseline and then everything else is upside for you and extra additional cost? So how does that work? >> It's a good question. So you know both, you know the baseline and you know what the variable capacity is, what the limits are. So at the beginning of the contract, that's what you know, whether or not a customer determines that they have to expand or do a change order to add another workload into the configuration is the one thing that we hope happens. You don't know. >> But you know with certainty that over the life of that contract, the amount of that contract that's booked, you're going to recognize at some point that. You just don't know when. >> Yes, and so that, and that's to your question, you know that element, the fluctuation in terms of usage is depending on what's happening in the world, right? The pandemic, as an example, with GreenLake customers, probably initially at the beginning of the pandemic, their usage went down for obvious reasons and then it fluctuates up. >> I think a lot of people don't understand that. That's an interesting nuance. Cool, thank you. 
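To make the baseline-plus-variable idea in the exchange above concrete, here is a minimal, hypothetical sketch of how a consumption-style monthly charge could be computed. The function name, rates, and capacities are invented for illustration and are not GreenLake's actual billing logic.

```python
# Hypothetical sketch of a baseline-plus-variable consumption charge.
# All names and numbers are invented; this is not GreenLake's actual billing logic.

def monthly_charge(used_units: float,
                   baseline_units: float,
                   installed_units: float,
                   unit_rate: float) -> float:
    """The customer pays at least the committed baseline, plus metered usage
    above it, capped at the capacity physically installed on site."""
    billable = max(used_units, baseline_units)   # baseline is committed either way
    billable = min(billable, installed_units)    # cannot meter beyond installed capacity
    return billable * unit_rate

# Example: 100 units committed, 160 units installed, 50.0 per unit per month.
for used in (80, 120, 200):
    print(f"used {used:>3} units -> charge {monthly_charge(used, 100, 160, 50.0):,.2f}")
```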
>> Guys, thanks so much for joining us on the program, talking about the relationship that AMD and HPE have together, the benefits for customers on the outcomes that it's achieving. We appreciate your insights and your time. >> Thanks for having us, guys. >> Appreciate it. >> Our pleasure. >> Phil: Thank you. >> For our guests and Dave Vellante. I'm Lisa Martin live in Las Vegas at HPE Discover '22. Stick around. Our keynote analysis is up next. (soft upbeat music)

Published Date : Jun 29 2022


Keith White, HPE | HPE Discover 2022


 

>> Announcer: theCube presents HPE Discover 2022, brought to you by HPE. >> Hey, everyone. Welcome back to Las Vegas. This is Lisa Martin with Dave Vellante live at HPE Discover '22. Dave, it's great to be here. This is the first Discover in three years and we're here with about 7,000 of our closest friends. >> Yeah. You know, I tweeted out this, I think I've been to 14 Discovers between the U.S. and Europe, and I've never seen a Discover with so much energy. People are not only psyched to get back together, that's for sure, but I think HPE's got a little spring in its step and it's feeling more confident than maybe some of the past Discovers that I've been to. >> I think so, too. I think there's definitely a spring in the step and we're going to be unpacking some of that spring next with one of our alumni who joins us, Keith White's here, the executive vice president and general manager of GreenLake Cloud Services. Welcome back. >> Great. You all thanks for having me. It's fantastic that you're here and you're right, the energy is crazy at this show. It's been a lot of pent up demand, but I think what you heard from Antonio today is our strategy's changing dramatically and it's really embracing our customers and our partners. So it's great. >> Embracing the customers and the partners, the ecosystem expansion is so critical, especially the last couple of years with the acceleration of digital transformation. So much challenge in every industry, but lots of momentum on the GreenLake side, I was looking at the Q2 numbers, triple digit growth in orders, 65,000 customers over 70 services, eight new services announced just this morning. Talk to us about the momentum of GreenLake. >> The momentum's been fantastic. I mean, I'll tell you, the fact that customers are really now reaccelerating their digital transformation, you probably heard a lot, but there was a delay as we went through the pandemic. So now it's reaccelerating, but everyone's going to a hybrid, multi-cloud environment. Data is the new currency. And obviously, everyone's trying to push out to the Edge and GreenLake is that edge to cloud platform. So we're just seeing tons of momentum, not just from the customers, but partners, we've enabled the platform so partners can plug into it and offer their solutions to our customers as well. So it's exciting and it's been fun to see the momentum from an order standpoint, but one of the big numbers that you may not be aware of is we have over a 96% retention rate. So once a customer's on GreenLake, they stay on it because they're seeing the value, which has been fantastic. >> The value is absolutely critically important. We saw three great big name customers. The Home Depot was on stage this morning, Oak Ridge National Laboratory was as well, Evil Geniuses. So the momentum in the enterprise is clearly present. >> Yeah. It is. And we're hearing it from a lot of customers. And I think you guys talk a lot about, hey, there's the cloud, data and Edge, these big mega trends that are happening out there. And you look at a company like Barclays, they're actually reinventing their entire private cloud infrastructure, running over a hundred thousand workloads on HPE GreenLake. Or you look at a company like Zenseact, who's basically they do autonomous driving software. So they're doing massive parallel computing capabilities. They're pulling in hundreds of petabytes of data to then make driving safer and so you're seeing it on the data front. 
And then on the Edge, you look at anyone like a Patrick Terminal, for example. They run a whole terminal shipyard. They're getting data in from exporters, importers, regulators, the works and they have to real-time, analyze that data and say, where should this thing go? Especially with today's supply chain challenges, they have to be so efficient, that it's just fantastic. >> It was interesting to hear Fidelma, Keith, this morning on stage. It was the first time I'd really seen real clarity on the platform itself and that it's obviously her job is, okay, here's the platform, now, you guys got to go build on top of it. Both inside of HPE, but also externally, so your ecosystem partners. So, you mentioned the financial services companies like Barclays. We see those companies moving into the digital world by offering some of their services in building their own clouds. >> Keith: That's right. >> What's your vision for GreenLake in terms of being that platform, to assist them in doing that and the data component there? >> I think that was one of the most exciting things about not just showcasing the platform, but also the announcement of our private cloud enterprise, Cloud Service. Because in essence, what you're doing is you're creating that framework for what most companies are doing, which is they're becoming cloud service providers for their internal business units. And they're having to do showback type scenarios, chargeback type scenarios, deliver cloud services and solutions inside the organization so that open platform, you're spot on. For our ecosystem, it's fantastic, but for our customers, they get to leverage it as well for their own internal IT work that's happening. >> So you talk about hybrid cloud, you talk about private cloud, what's your vision? You know, we use this term Supercloud. This in a layer that goes across clouds. What's your thought about that? Because you have an advantage at the Edge with Aruba. Everybody talks about the Edge, but they talk about it more in the context of near Edge. >> That's right. >> We talked to Verizon and they're going far Edge, you guys are participating in that, as well as some of your partners in Red Hat and others. What's your vision for that? What I call Supercloud, is that part of the strategy? Is that more longer term or you think that's pipe dream by Dave? >> No, I think it's really thoughtful, Dave, 'cause it has to be part of the strategy. What I hear, so for example, Ford's a great example. They run Azure, AWS, and then they made a big deal with Google cloud for their internal cars and they run HPE GreenLake. So they're saying, hey, we got four clouds. How do we sort of disaggregate the usage of that? And Chris Lund, who is the VP of information technology at Liberty Mutual Insurance, he talked about it today, where he said, hey, I can deliver these services to my business unit. And they don't know, am I running on the public cloud? Am I running on our HPE GreenLake cloud? Like it doesn't matter to the end user, we've simplified that so much. So I think your Supercloud idea is super thoughtful, not to use the super term too much, that I'm super excited about because it's really clear of what our customers are trying to accomplish, which it's not about the cloud, it's about the solution and the business outcome that gets to work. >> Well, and I think it is different. 
I mean, it's not like the last 10 years where it was like, hey, I got my stuff to work on the different clouds and I'm replicating as much as I can, the cloud experience on-prem. I think you guys are there now and then to us, the next layer is that ecosystem enablement. So how do you see the ecosystem evolving and what role does Green Lake play there? >> Yeah. This has been really exciting. We had Tarkan Maner who runs Nutanix and Karl Strohmeyer from Equinix on stage with us as well. And what's happening with the ecosystem is, I used to say, one plus one has to equal three for our customers. So when you bring these together, it has to be that scenario, but we are joking that one plus one plus one equals five now because everything has a partner component to it. It's not about the platform, it's not about the specific cloud service, it's actually about the solution that gets delivered. And that's done with an ISV, it's done with a Colo, it's done even with the Hyperscalers. We have Azure Stack HCI as a fully integrated solution. It happens with managed service providers, delivering managed services out to their folks as well. So that platform being fully partner enabled and that ecosystem being able to take advantage of that, and so we have to jointly go to market to our customers for their business needs, their business outcomes. >> Some of the expansion of the ecosystem. we just had Red Hat on in the last hour talking about- >> We're so excited to partner with them. >> Right, what's going on there with OpenShift and Ansible and Rel, but talk about the customer influence in terms of the expansion of the ecosystem. We know we've got to meet customers where they are, they're driving it, but we know that HPE has a big presence in the enterprise and some pretty big customer names. How are they from a demand perspective? >> Well, this is where I think the uniqueness of GreenLake has really changed HPE's approach with our customers. Like in all fairness, we used to be a vendor that provided hardware components for, and we talked a lot about hardware costs and blah, blah, blah. Now, we're actually a partner with those customers. What's the business outcome you're requiring? What's the SLA that we offer you for what you're trying to accomplish? And to do that, we have to have it done with partners. And so even on the storage front, Qumulo or Cohesity. On the backup and recovery disaster recovery, yes, we have our own products, but we also partner with great companies like Veeam because it's customer choice, it's an open platform. And the Red Hat announcement is just fantastic. Because, hey, from a container platform standpoint, OpenShift provides 5,000 plus customers, 90% of the fortune 500 that they engage with, with that opportunity to take GreenLake with OpenShift and implement that container capabilities on-prem. So it's fantastic. >> We were talking after the keynote, Keith Townsend came on, myself and Lisa. And he was like, okay, what about startups? 'Cause that's kind of a hallmark of cloud. And we felt like, okay, startups are not the ideal customer profile necessarily for HPE. Although we saw Evil Geniuses up on stage, but I threw out and I'd love to get your thoughts on this that within companies, incumbents, you have entrepreneurs, they're trying to build their own clouds or Superclouds as I use the term, is that really the target for the developer audience? 
We've talked a lot about OpenShift with their other platforms, who says as a partner- >> We just announced another extension with Rancher and- >> Yeah. I saw that. And you have to have optionality for developers. Is that the way we should think about the target audience from a developer standpoint? >> I think it will be as we go forward. And so what Fidelma presented on stage was the new developer platform, because we have come to realize, we have to engage with the developers. They're the ones building the apps. They're the ones that are delivering the solutions for the most part. So yeah, I think at the enterprise space, we have a really strong capability. I think when you get into the sort of mid-market SMB standpoint, what we're doing is we're going directly to the managed service and cloud service providers and directly to our Disty and VARS to have them build solutions on top of GreenLake, powered by GreenLake, to then deliver to their customers because that's what the customer wants. I think on the developer side of the house, we have to speak their language, we have to provide their capabilities because they're going to start articulating apps that are going to use both the public cloud and our on-prem capabilities with GreenLake. And so that's got to work very well. And so you've heard us talk about API based and all of that sort of scenario. So it's an exciting time for us, again, moving HPE strategy into something very different than where we were before. >> Well, Keith, that speaks to ecosystem. So I don't know if you were at Microsoft, when the sweaty Steve Ballmer was working with the developers, developers. That's about ecosystem, ecosystem, ecosystem. I don't expect we're going to see Antonio replicating that. But that really is the sort of what you just described is the ecosystem developing on top of GreenLake. That's critical. >> Yeah. And this is one of the things I learned. So, being at Microsoft for as long as I was and leading the Azure business from a commercial standpoint, it was all about the partner and I mean, in all fairness, almost every solution that gets delivered has some sort of partner component to it. Might be an ISV app, might be a managed service, might be in a Colo, might be with our hybrid cloud, with our Hyperscalers, but everything has a partner component to it. And so one of the things I learned with Azure is, you have to sell through and with your ecosystem and go to that customer with a joint solution. And that's where it becomes so impactful and so powerful for what our customers are trying to accomplish. >> When we think about the data gravity and the value of data that put massive potential that it has, even Antonio talked about it this morning, being data rich but insights poor for a long time. >> Yeah. >> Every company in today's day and age has to be a data company to be competitive, there's no more option for that. How does GreenLake empower companies? GreenLake and its ecosystem empower companies to really live being data companies so that they can meet their customers where they are. >> I think it's a really great point because like we said, data's the new currency. Data's the new gold that's out there and people have to get their arms around their data estate. So then they can make these business decisions, these business insights and garner that. And Dave, you mentioned earlier, the Edge is bringing a ton of new data in, and my Zenseact example is a good one. 
But with GreenLake, you now have a platform that can do data and data management and really sort of establish and secure the data for you. There's no data latency, there's no data egress charges. And which is what we typically run into with the public cloud. But we also support a wide range of databases, open source, as well as the commercial ones, the sequels and those types of scenarios. But what really comes to life is when you have to do analytics on that and you're doing AI and machine learning. And this is one of the benefits I think that people don't realize with HPE is, the investments we've made with Cray, for example, we have and you saw on stage today, the largest supercomputer in the world. That depth that we have as a company, that then comes down into AI and analytics for what we can do with high performance compute, data simulations, data modeling, analytics, like that is something that we, as a company, have really deep, deep capabilities on. So it's exciting to see what we can bring to customers all for that spectrum of data. >> I was excited to see Frontier, they actually achieve, we hosted an event, co-produced event with HPE during the pandemic, Exascale day. >> Yeah. >> But we weren't quite at Exascale, we were like right on the cusp. So to see it actually break through was awesome. So HPC is clearly a differentiator for Hewlett Packard Enterprise. And you talk about the egress. What are some of the other differentiators? Why should people choose GreenLake? >> Well, I think the biggest thing is, that it's truly is a edge to cloud platform. And so you talk about Aruba and our capabilities with a network attached and network as a service capabilities, like that's fairly unique. You don't see that with the other companies. You mentioned earlier to me that compute capabilities that we've had as a company and the storage capabilities. But what's interesting now is that we're sort of taking all of that expertise and we're actually starting to deliver these cloud services that you saw on stage, private cloud, AI and machine learning, high performance computing, VDI, SAP. And now we're actually getting into these industry solutions. So we talked last year about electronic medical records, this year, we've talked about 5g. Now, we're talking about customer loyalty applications. So we're really trying to move from these sort of baseline capabilities and yes, containers and VMs and bare metal, all that stuff is important, but what's really important is the services that you run on top of that, 'cause that's the outcomes that our customers are looking at. >> Should we expect you to be accelerating? I mean, look at what you did with Azure. You look at what AWS does in terms of the feature acceleration. Should we expect HPE to replicate? Maybe not to that scale, but in a similar cadence, we're starting to see that. Should we expect that actually to go faster? >> I think you couched it really well because it's not as much about the quantity, but the quality and the uses. And so what we've been trying to do is say, hey, what is our swim lane? What is our sweet spot? Where do we have a superpower? And where are the areas that we have that superpower and how can we bring those solutions to our customers? 'Cause I think, sometimes, you get over your skis a bit, trying to do too much, or people get caught up in the big numbers, versus the, hey, what's the real meat behind it. What's the tangible outcome that we can deliver to customers? And we see just a massive TAM. 
I want to say my last analysis was around $42 billion in the next three years, TAM and the Azure service on-prem space. And so we think that there's nothing but upside with the core set of workloads, the core set of solutions and the cloud services that we bring. So yeah, we'll continue to innovate, absolutely, amen, but we're not in a, hey we got to get to 250 this and 300 that, we want to keep it as focused as we can. >> Well, the vast majority of the revenue in the public cloud is still compute. I mean, not withstanding, Microsoft obviously does a lot in SaaS, but I'm talking about the infrastructure and service. Still, well, I would say over 50%. And so there's a lot of the services that don't make any revenue and there's that long tail, if I hear your strategy, you're not necessarily going after that. You're focusing on the quality of those high value services and let the ecosystem sort of bring in the rest. >> This is where I think the, I mean, I love that you guys are asking me about the ecosystem because this is where their sweet spot is. They're the experts on hyper-converged or databases, a service or VDI, or even with SAP, like they're the experts on that piece of it. So we're enabling that together to our customers. And so I don't want to give you the impression that we're not going to innovate. Amen. We absolutely are, but we want to keep it within that, that again, our swim lane, where we can really add true value based on our expertise and our capabilities so that we can confidently go to customers and say, hey, this is a solution that's going to deliver this business value or this capability for you. >> The partners might be more comfortable with that than, we only have one eye sleep with one eye open in the public cloud, like, okay, what are they going to, which value of mine are they grab next? >> You're spot on. And again, this is where I think, the power of what an Edge to cloud platform like HPE GreenLake can do for our customers, because it is that sort of, I mentioned it, one plus one equals three kind of scenario for our customers so. >> So we can leave your customers, last question, Keith. I know we're only on day one of the main summit, the partner growth summit was yesterday. What's the feedback been from the customers and the ecosystem in terms of validating the direction that HPE is going? >> Well, I think the fantastic thing has been to hear from our customers. So I mentioned in my keynote recently, we had Liberty Mutual and we had Texas Children's Hospital, and they're implementing HPE GreenLake in a variety of different ways, from a private cloud standpoint to a data center consolidation. They're seeing sustainability goals happen on top of that. They're seeing us take on management for them so they can take their limited resources and go focus them on innovation and value added scenarios. So the flexibility and cost that we're providing, and it's just fantastic to hear this come to life in a real customer scenario because what Texas Children is trying to do is improve patient care for women and children like who can argue with that. >> Nobody. >> So, yeah. It's great. >> Awesome. Keith, thank you so much for joining Dave and me on the program, talking about all of the momentum with HPE Greenlake. >> Always. >> You can't walk in here without feeling the momentum. We appreciate your insights and your time. >> Always. Thank you you for the time. Yeah. Great to see you as well. >> Likewise. >> Thanks. >> For Keith White and Dave Vellante, I'm Lisa Martin. 
You're watching theCUBE live, day one coverage from the show floor at HPE Discover '22. We'll be right back with our next guest. (gentle music)
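The Liberty Mutual and Ford examples in this conversation describe an internal platform that hides whether a given service lands on a public cloud or on GreenLake capacity on-prem. Below is a minimal Python sketch of what such a placement layer could look like; the policy rules (data residency, data gravity, burstiness) and all names are assumptions for illustration, not an HPE or GreenLake API.

from dataclasses import dataclass
from enum import Enum

class Target(Enum):
    GREENLAKE_ONPREM = "greenlake-onprem"   # private cloud / colo capacity consumed as a service
    PUBLIC_CLOUD = "public-cloud"           # whichever hyperscaler the organization already uses

@dataclass
class WorkloadRequest:
    name: str
    data_resident: bool   # regulated data that must stay on-prem
    dataset_tb: float     # working-set size; large sets make repeated egress costly
    bursty: bool          # short-lived spikes suit pay-per-use public capacity

def place(request: WorkloadRequest) -> Target:
    # Toy placement policy for an internal "cloud service provider" team.
    # The business unit calling this never sees which infrastructure was chosen.
    if request.data_resident:
        return Target.GREENLAKE_ONPREM      # residency rules pin the workload on-prem
    if request.dataset_tb > 50:
        return Target.GREENLAKE_ONPREM      # data gravity: avoid repeated egress charges
    if request.bursty:
        return Target.PUBLIC_CLOUD          # elastic burst capacity
    return Target.GREENLAKE_ONPREM          # default to the committed baseline capacity

# Usage: the business unit only asks for a service; placement is a platform-team concern.
print(place(WorkloadRequest("claims-analytics", data_resident=True, dataset_tb=120, bursty=False)))
print(place(WorkloadRequest("seasonal-load-test", data_resident=False, dataset_tb=0.2, bursty=True)))

In practice the same interface would also emit the metering events that feed showback and chargeback reporting for each business unit, so consumers see a cost and an SLA rather than an infrastructure choice.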

Published Date : Jun 28 2022


HPE Accelerating Next | HPE Accelerating Next 2021


 

>> Narrator: Momentum is gathering. (music) Business is evolving more and more quickly, moving through one transformation to the next, because change never stops, it only accelerates. This is a world that demands a new kind of compute, deployed from edge to core to cloud. Compute that can outpace the rapidly changing needs of businesses large and small, unlocking new insights, turning data into outcomes, empowering new experiences. Compute that can scale up or scale down with minimum investment and effort, guided by years of expertise, protected by 360-degree security, served up as a service to let IT control, own, and manage massive workloads that weren't there yesterday and might not be there tomorrow. This is the compute power that will drive progress, giving your business what you need to be ready for what's next. This is the compute power of HPE, delivering your foundation for digital transformation. >> Welcome to Accelerating Next. Thank you so much for joining us today. We have a great program. We're going to talk tech with experts, we'll be diving into the changing economics of our industry and how to think about the next phase of your digital transformation. Now, very importantly, we're also going to talk about how to optimize workloads from edge to exascale, with full security and automation, all coming to you as a service. And with me to kick things off is Neil McDonald, who's the GM of compute at HPE. Neil, always a pleasure. Great to have you on. >> It's great to see you, Dave. >> Now, of course, when we spoke a year ago, we had hoped by this time we'd be face to face, but here we are again. This pandemic has obviously affected businesses and people in so many ways that we could never have imagined, but the reality is tech companies have literally saved the day. Let's start off: how is HPE contributing to helping your customers navigate through things that are so rapidly shifting in the marketplace? >> Well, Dave, it's nice to be speaking to you again and I look forward to being able to do this in person at some point. The pandemic has really accelerated the need for transformation in businesses of all sizes. More than three-quarters of CIOs report that the crisis has forced them to accelerate their strategic agendas. Organizations that were already transforming are having to transform faster, and organizations that weren't on that journey yet are having to rapidly develop and execute a plan to adapt to this new reality. Our customers are on this journey and they need a partner for not just the compute technology, but also the expertise and economics that they need for that digital transformation. And for us, this is all about unmatched optimization for workloads from the edge to the enterprise to exascale, with 360-degree security and intelligent automation, all available in that as-a-service experience. >> Well, as you well know, it's a challenge to manage through any transformation, let alone having to set up remote workers overnight, securing them, resetting budget priorities. What are some of the barriers that you see customers working hard to overcome? >> Simply put, the organizations that we talk with are challenged in three areas. They need the financial capacity to actually execute a transformation, they need access to the resources and the expertise needed to successfully deliver on a transformation, and they have to find a way to match their investments with the revenues for the new services that they're putting in place to service their customers in this environment. >> You know, we have a data partner
called etr enterprise technology research and the spending data that we see from them is it's quite dramatic i mean last year we saw a contraction of roughly five percent of in terms of i.t spending budgets etc and this year we're seeing a pretty significant rebound maybe a six to seven percent growth range is the prediction the challenge we see is organizations have to they've got to iterate on that i call it the forced march to digital transformation and yet they also have to balance their investments for example at the corporate headquarters which have kind of been neglected is there any help in sight for the customers that are trying to reduce their spend and also take advantage of their investment capacity i think you're right many businesses are understandably reluctant to loosen the purse strings right now given all of the uncertainty and often a digital transformation is viewed as a massive upfront investment that will pay off in the long term and that can be a real challenge in an environment like this but it doesn't need to be we work through hpe financial services to help our customers create the investment capacity to accelerate the transformation often by leveraging assets they already have and helping them monetize them in order to free up the capacity to accelerate what's next for their infrastructure and for their business so can we drill into that i wonder if we could add some specifics i mean how do you ensure a successful outcome what are you really paying attention to as those sort of markers for success well when you think about the journey that an organization is going through it's tough to be able to run the business and transform at the same time and one of the constraints is having the people with enough bandwidth and enough expertise to be able to do both so we're addressing that in two ways for our customers one is by helping them confidently deploy new solutions which we have engineered leveraging decades of expertise and experience in engineering to deliver those workload optimized portfolios that take the risk and the complexity out of assembling some of these solutions and give them a pre-packaged validated supported solution intact that simplifies that work for them but in other cases we can enhance our customers bandwidth by bringing them hpe point next experts with all of the capabilities we have to help them plan deliver and support these i.t projects and transformations organizations can get on a faster track of modernization getting greater insight and control as they do it we're a trusted partner to get the most for a business that's on this journey in making these critical compute investments to underpin the transformations and whether that's planning to optimizing to safe retirement at the end of life we can bring that expertise to bayer to help amplify what our customers already have in-house and help them accelerate and succeed in executing these transformations thank you for that neil so let's talk about some of the other changes that customers are seeing and the cloud has obviously forced customers and their suppliers to really rethink how technology is packaged how it's consumed how it's priced i mean there's no doubt in that to take green lake it's obviously a leading example of a pay as pay-as-you-scale infrastructure model and it could be applied on-prem or hybrid can you maybe give us a sense as to where you are today with green lake well it's really exciting you know from our first pay-as-you-go offering back in 2006 15 years ago to the 
introduction of green lake hpe has really been paving the way on consumption-based services through innovation and partnership to help meet the exact needs of our customers hpe green lake provides an experience that's the best of both worlds a simple pay-per-use technology model with the risk management of data that's under our customers direct control and it lets customers shift to everything as a service in order to free up capital and avoid that upfront expense that we talked about they can do this anywhere at any scale or any size and really hpe green lake is the cloud that comes to you like that so we've touched a little bit on how customers can maybe overcome some of the barriers to transformation what about the nature of transformations themselves i mean historically there was a lot of lip service paid to digital and and there's a lot of complacency frankly but you know that covered wrecking ball meme that so well describes that if you're not a digital business essentially you're going to be out of business so neil as things have evolved how is hpe addressed the new requirements well the new requirements are really about what customers are trying to achieve and four very common themes that we see are enabling the productivity of a remote workforce that was never really part of the plan for many organizations being able to develop and deliver new apps and services in order to service customers in a different way or drive new revenue streams being able to get insights from data so that in these tough times they can optimize their business more thoroughly and then finally think about the efficiency of an agile hybrid private cloud infrastructure especially one that now has to integrate the edge and we're really thrilled to be helping our customers accelerate all of these and more with hpe compute i want to double click on that remote workforce productivity i mean again the surveys that we see 46 percent of the cios say that productivity improved with the whole work from home remote work trend and on average those improvements were in the four percent range which is absolutely enormous i mean when you think about that how does hpe specifically you know help here what do you guys do well every organization in the world has had to adapt to a different style of working and with more remote workers than they had before and for many organizations that's going to become the new normal even post pandemic many it shops are not well equipped for the infrastructure to provide that experience because if all your workers are remote the resiliency of that infrastructure the latencies of that infrastructure the reliability of are all incredibly important so we provide comprehensive solutions expertise and as a service options that support that remote work through virtual desktop infrastructure or vdi so that our customers can support that new normal of virtual engagements online everything across industries wherever they are and that's just one example of many of the workload optimized solutions that we're providing for our customers is about taking out the guesswork and the uncertainty in delivering on these changes that they have to deploy as part of their transformation and we can deliver that range of workload optimized solutions across all of these different use cases because of our broad range of innovation in compute platforms that span from the ruggedized edge to the data center all the way up to exascale and hpc i mean that's key if you're trying to affect the digital transformation and you 
don't have to fine-tune you know be basically build your own optimized solutions if i can buy that rather than having to build it and rely on your r d you know that's key what else is hpe doing you know to deliver things new apps new services you know your microservices containers the whole developer trend what's going on there well that's really key because organizations are all seeking to evolve their mix of business and bring new services and new capabilities new ways to reach their customers new way to reach their employees new ways to interact in their ecosystem all digitally and that means app development and many organizations of course are embracing container technology to do that today so with the hpe container platform our customers can realize that agility and efficiency that comes with containerization and use it to provide insights to their data more and more that data of course is being machine generated or generated at the edge or the near edge and it can be a real challenge to manage that data holistically and not have silos and islands an hpe esmerald data fabric speeds the agility and access to data with a unified platform that can span across the data centers multiple clouds and even the edge and that enables data analytics that can create insights powering a data-driven production-oriented cloud-enabled analytics and ai available anytime anywhere in any scale and it's really exciting to see the kind of impact that that can have in helping businesses optimize their operations in these challenging times you got to go where the data is and the data is distributed it's decentralized so i i i like the esmerel of vision and execution there so that all sounds good but with digital transformation you get you're going to see more compute in in hybrid's deployments you mentioned edge so the surface area it's like the universe it's it's ever-expanding you mentioned you know remote work and work from home before so i'm curious where are you investing your resources from a cyber security perspective what can we count on from hpe there well you can count on continued leadership from hpe as the world's most secure industry standard server portfolio we provide an enhanced and holistic 360 degree view to security that begins in the manufacturing supply chain and concludes with a safeguarded end-of-life decommissioning and of course we've long set the bar for security with our work on silicon root of trust and we're extending that to the application tier but in addition to the security customers that are building this modern hybrid are private cloud including the integration of the edge need other elements too they need an intelligent software-defined control plane so that they can automate their compute fleets from all the way at the edge to the core and while scale and automation enable efficiency all private cloud infrastructures are competing with web scale economics and that's why we're democratizing web scale technologies like pinsando to bring web scale economics and web scale architecture to the private cloud our partners are so important in helping us serve our customers needs yeah i mean hp has really upped its ecosystem game since the the middle of last decade when when you guys reorganized it you became like even more partner friendly so maybe give us a preview of what's coming next in that regard from today's event well dave we're really excited to have hp's ceo antonio neri speaking with pat gelsinger from intel and later lisa sue from amd and later i'll have the chance to 
catch up with john chambers the founder and ceo of jc2 ventures to discuss the state of the market today yeah i'm jealous you guys had some good interviews coming up neil thanks so much for joining us today on the virtual cube you've really shared a lot of great insight how hpe is partnering with customers it's it's always great to catch up with you hopefully we can do so face to face you know sooner rather than later well i look forward to that and uh you know no doubt our world has changed and we're here to help our customers and partners with the technology the expertise and the economics they need for these digital transformations and we're going to bring them unmatched workload optimization from the edge to exascale with that 360 degree security with the intelligent automation and we're going to deliver it all as an as a service experience we're really excited to be helping our customers accelerate what's next for their businesses and it's been really great talking with you today about that dave thanks for having me you're very welcome it's been super neal and i actually you know i had the opportunity to speak with some of your customers about their digital transformation and the role of that hpe plays there so let's dive right in we're here on the cube covering hpe accelerating next and with me is rule siestermans who is the head of it at the netherlands cancer institute also known as nki welcome rule thank you very much great to be here hey what can you tell us about the netherlands cancer institute maybe you could talk about your core principles and and also if you could weave in your specific areas of expertise yeah maybe first introduction to the netherlands institute um we are one of the top 10 comprehensive cancers in the world and what we do is we combine a hospital for treating patients with cancer and a recent institute under one roof so discoveries we do we find within the research we can easily bring them back to the clinic and vis-a-versa so we have about 750 researchers and about 3 000 other employees doctors nurses and and my role is to uh to facilitate them at their best with it got it so i mean everybody talks about digital digital transformation to us it all comes down to data so i'm curious how you collect and take advantage of medical data specifically to support nki's goals maybe some of the challenges that your organization faces with the amount of data the speed of data coming in just you know the the complexities of data how do you handle that yeah it's uh it's it's it's challenge and uh yeah what we we have we have a really a large amount of data so we produce uh terabytes a day and we we have stored now more than one petabyte on data at this moment and yeah it's uh the challenge is to to reuse the data optimal for research and to share it with other institutions so that needs to have a flexible infrastructure for that so a fast really fast network uh big data storage environment but the real challenge is not not so much the i.t bus is more the quality of the data so we have a lot of medical systems all producing those data and how do we combine them and and yeah get the data fair so findable accessible interoperable and reusable uh for research uh purposes so i think that's the main challenge the quality of the data yeah very common themes that we hear from from other customers i wonder if you could paint a picture of your environment and maybe you can share where hpe solutions fit in what what value they bring to your organization's mission yeah i think it 
brings a lot of flexibility so what we did with hpe is that we we developed a software-defined data center and then a virtual workplace for our researchers and doctors and that's based on the hpe infrastructure and what we wanted to build is something that expect the needs of doctors and nurses but also the researchers and the two kind of different blood groups blood groups and with different needs so uh but we wanted to create one infrastructure because we wanted to make the connection between the hospital and the research that's that's more important so um hpe helped helped us not only with the the infrastructure itself but also designing the whole architecture of it and for example what we did is we we bought a lot of hardware and and and the hardware is really uh doing his his job between nine till five uh dennis everything is working within everyone is working within the institution but all the other time in evening and and nights hours and also the redundant environment we have for the for our healthcare uh that doesn't do nothing of much more or less uh in in those uh dark hours so what we created together with nvidia and hpe and vmware is that we we call it video by day compute by night so we reuse those those servers and those gpu capacity for computational research jobs within the research that's you mentioned flexibility for this genius and and so we're talking you said you know a lot of hard ways they're probably proliant i think synergy aruba networking is in there how are you using this environment actually the question really is when you think about nki's digital transformation i mean is this sort of the fundamental platform that you're using is it a maybe you could describe that yeah it's it's the fundamental platform to to to work on and and and what we see is that we have we have now everything in place for it but the real challenge is is the next steps we are in so we have a a software defined data center we are cloud ready so the next steps is to to make the connection to the cloud to to give more automation to our researchers so they don't have to wait a couple of weeks for it to do it but they can do it themselves with a couple of clicks so i think the basic is we are really flexible and we have a lot of opportunities for automation for example but the next step is uh to create that business value uh really for for our uh employees that's a great story and a very important mission really fascinating stuff thanks for sharing this with our audience today really appreciate your time thank you very much okay this is dave vellante with thecube stay right there for more great content you're watching accelerating next from hpe i'm really glad to have you with us today john i know you stepped out of vacation so thanks very much for joining us neil it's great to be joining you from hawaii and i love the partnership with hpe and the way you're reinventing an industry well you've always excelled john at catching market transitions and there are so many transitions and paradigm shifts happening in the market and tech specifically right now as you see companies rush to accelerate their transformation what do you see as the keys to success well i i think you're seeing actually an acceleration following the covet challenges that all of us faced and i wasn't sure that would happen it's probably at three times the paces before there was a discussion point about how quickly the companies need to go digital uh that's no longer a discussion point almost all companies are moving with 
tremendous feed on digital and it's the ability as the cloud moves to the edge with compute and security uh at the edge and how you deliver these services to where the majority of applications uh reside are going to determine i think the future of the next generation company leadership and it's the area that neil we're working together on in many many ways so i think it's about innovation it's about the cloud moving to the edge and an architectural play with silicon to speed up that innovation yes we certainly see our customers of all sizes trying to accelerate what's next and get that digital transformation moving even faster as a result of the environment that we're all living in and we're finding that workload focus is really key uh customers in all kinds of different scales are having to adapt and support the remote workforces with vdi and as you say john they're having to deal with the deployment of workloads at the edge with so much data getting generated at the edge and being acted upon at the edge the analytics and the infrastructure to manage that as these processes get digitized and automated is is so important for so many workflows we really believe that the choice of infrastructure partner that underpins those transformations really matters a partner that can help create the financial capacity that can help optimize your environments and enable our customers to focus on supporting their business are all super key to success and you mentioned that in the last year there's been a lot of rapid course correction for all of us a demand for velocity and the ability to deploy resources at scale is more and more needed maybe more than ever what are you hearing customers looking for as they're rolling out their digital transformation efforts well i think they're being realistic that they're going to have to move a lot faster than before and they're also realistic on core versus context they're they're their core capability is not the technology of themselves it's how to deploy it and they're we're looking for partners that can help bring them there together but that can also innovate and very often the leaders who might have been a leader in a prior generation may not be on this next move hence the opportunity for hpe and startups like vinsano to work together as the cloud moves the edge and perhaps really balance or even challenge some of the big big incumbents in this category as well as partners uniquely with our joint customers on how do we achieve their business goals tell me a little bit more about how you move from this being a technology positioning for hpe to literally helping your customers achieve their outcomes they want and and how are you changing hpe in that way well i think when you consider these transformations the infrastructure that you choose to underpin it is incredibly critical our customers need a software-defined management plan that enables them to automate so much of their infrastructure they need to be able to take faster action where the data is and to do all of this in a cloud-like experience where they can deliver their infrastructure as code anywhere from exascale through the enterprise data center to the edge and really critically they have to be able to do this securely which becomes an ever increasing challenge and doing it at the right economics relative to their alternatives and part of the right economics of course includes adopting the best practices from web scale architectures and bringing them to the heart of the enterprise and in our 
partnership with pensando we're working to enable these new ideas of web scale architecture and fleet management for the enterprise at scale you know what is fun is hpe has an unusual talent from the very beginning in silicon valley of working together with others and creating a win-win innovation approach if you watch what your team has been able to do and i want to say this for everybody listening you work with startups better than any other company i've seen in terms of how you do win win together and pinsando is just the example of that uh this startup which by the way is the ninth time i have done with this team a new generation of products and we're designing that together with hpe in terms of as the cloud moves to the edge how do we get the leverage out of that and produce the results for your customers on this to give the audience appeal for it you're talking with pensano alone in terms of the efficiency versus an amazon amazon web services of an order of magnitude i'm not talking 100 greater i'm talking 10x greater and things from throughput number of connections you do the jitter capability etc and it talks how two companies uniquely who believe in innovation and trust each other and have very similar cultures can work uniquely together on it how do you bring that to life with an hpe how do you get your company to really say let's harvest the advantages of your ecosystem in your advantages of startups well as you say more and more companies are faced with these challenges of hitting the right economics for the infrastructure and we see many enterprises of various sizes trying to come to terms with infrastructures that look a lot more like a service provider that require that software-defined management plane and the automation to deploy at scale and with the work we're doing with pinsando the benefits that we bring in terms of the observability and the telemetry and the encryption and the distributed network functions but also a security architecture that enables that efficiency on the individual nodes is just so key to building a competitive architecture moving forwards for an on-prem private cloud or internal service provider operation and we're really excited about the work we've done to bring that technology across our portfolio and bring that to our customers so that they can achieve those kind of economics and capabilities and go focus on their own transformations rather than building and running the infrastructure themselves artisanally and having to deal with integrating all of that great technology themselves makes tremendous sense you know neil you and i work on a board together et cetera i've watched your summarization skills and i always like to ask the question after you do a quick summary like this what are the three or four takeaways we would like for the audience to get out of our conversation well that's a great question thanks john we believe that customers need a trusted partner to work through these digital transformations that are facing them and confront the challenge of the time that the covet crisis has taken away as you said up front every organization is having to transform and transform more quickly and more digitally and working with a trusted partner with the expertise that only comes from decades of experience is a key enabler for that a partner with the ability to create the financial capacity to transform the workload expertise to get more from the infrastructure and optimize the environment so that you can focus on your own business a partner that 
can deliver the systems and the security and the automation that makes it easily deployable and manageable anywhere you need them at any scale whether the edge the enterprise data center or all the way up to exascale in high performance computing and can do that all as a service as we can at hpe through hpe green lake enabling our customers most critical workloads it's critical that all of that is underpinned by an ai powered digitally enabled service experience so that our customers can get on with their transformation and running their business instead of dealing with their infrastructure and really only hpe can provide this combination of capabilities and we're excited and committed to helping our customers accelerate what's next for their businesses neil it's fun i i love being your partner and your wingman our values and cultures are so similar thanks for letting me be a part of this discussion today thanks for being with us john it was great having you here oh it's friends for life okay now we're going to dig into the world of video which accounts for most of the data that we store and requires a lot of intense processing capabilities to stream here with me is jim brickmeyer who's the chief marketing and product officer at vlasics jim good to see you good to see you as well so tell us a little bit more about velocity what's your role in this tv streaming world and maybe maybe talk about your ideal customer sure sure so um we're leading provider of carrier great video solutions video streaming solutions and advertising uh technology to service providers around the globe so we primarily sell software-based solutions to uh cable telco wireless providers and broadcasters that are interested in launching their own um video streaming services to consumers yeah so this is this big time you know we're not talking about mom and pop you know a little video outfit but but maybe you can help us understand that and just the sheer scale of of the tv streaming that you're doing maybe relate it to you know the overall internet usage how much traffic are we talking about here yeah sure so uh yeah so our our customers tend to be some of the largest um network service providers around the globe uh and if you look at the uh the video traffic um with respect to the total amount of traffic that that goes through the internet video traffic accounts for about 90 of the total amount of data that uh that traverses the internet so video is uh is a pretty big component of um of how people when they look at internet technologies they look at video streaming technologies uh you know this is where we we focus our energy is in carrying that traffic as efficiently as possible and trying to make sure that from a consumer standpoint we're all consumers of video and uh make sure that the consumer experience is a high quality experience that you don't experience any glitches and that that ultimately if people are paying for that content that they're getting the value that they pay for their for their money uh in their entertainment experience i think people sometimes take it for granted it's like it's like we we all forget about dial up right those days are long gone but the early days of video was so jittery and restarting and and the thing too is that you know when you think about the pandemic and the boom in streaming that that hit you know we all sort of experienced that but the service levels were pretty good i mean how much how much did the pandemic affect traffic what kind of increases did you see and how did 
that that impact your business yeah sure so uh you know obviously while it was uh tragic to have a pandemic and have people locked down what we found was that when people returned to their homes what they did was they turned on their their television they watched on on their mobile devices and we saw a substantial increase in the amount of video streaming traffic um over service provider networks so what we saw was on the order of 30 to 50 percent increase in the amount of data that was traversing those networks so from a uh you know from an operator's standpoint a lot more traffic a lot more challenging to to go ahead and carry that traffic a lot of work also on our behalf and trying to help operators prepare because we could actually see geographically as the lockdowns happened [Music] certain areas locked down first and we saw that increase so we were able to help operators as as all the lockdowns happened around the world we could help them prepare for that increase in traffic i mean i was joking about dial-up performance again in the early days of the internet if your website got fifty percent more traffic you know suddenly you were you your site was coming down so so that says to me jim that architecturally you guys were prepared for that type of scale so maybe you could paint a picture tell us a little bit about the solutions you're using and how you differentiate yourself in your market to handle that type of scale sure yeah so we so we uh we really are focused on what we call carrier grade solutions which are designed for that massive amount of scale um so we really look at it you know at a very granular level when you look um at the software and and performance capabilities of the software what we're trying to do is get as many streams as possible out of each individual piece of hardware infrastructure so that we can um we can optimize first of all maximize the uh the efficiency of that device make sure that the costs are very low but one of the other challenges is as you get to millions and millions of streams and that's what we're delivering on a daily basis is millions and millions of video streams that you have to be able to scale those platforms out um in an effective in a cost effective way and to make sure that it's highly resilient as well so we don't we don't ever want a consumer to have a circumstance where a network glitch or a server issue or something along those lines causes some sort of uh glitch in their video and so there's a lot of work that we do in the software to make sure that it's a very very seamless uh stream and that we're always delivering at the very highest uh possible bit rate for consumers so that if you've got that giant 4k tv that we're able to present a very high resolution picture uh to those devices and what's the infrastructure look like underneath you you're using hpe solutions where do they fit in yeah that's right yeah so we uh we've had a long-standing partnership with hpe um and we work very closely with them to try to identify the specific types of hardware that are ideal for the the type of applications that we run so we run video streaming applications and video advertising applications targeted kinds of video advertising technologies and when you look at some of these applications they have different types of requirements in some cases it's uh throughput where we're taking a lot of data in and streaming a lot of data out in other cases it's storage where we have to have very high density high performance storage systems in other cases 
it's i gotta have really high capacity storage but the performance does not need to be quite as uh as high from an io perspective and so we work very closely with hpe on trying to find exactly the right box for the right application and then beyond that also talking with our customers to understand there are different maintenance considerations associated with different types of hardware so we tend to focus on as much as possible if we're going to place servers deep at the edge of the network we will make everything um maintenance free or as maintenance free as we can make it by putting very high performance solid state storage into those servers so that uh we we don't have to physically send people to those sites to uh to do any kind of maintenance so it's a it's a very cooperative relationship that we have with hpe to try to define those boxes great thank you for that so last question um maybe what the future looks like i love watching on my mobile device headphones in no distractions i'm getting better recommendations how do you see the future of tv streaming yeah so i i think the future of tv streaming is going to be a lot more personal right so uh this is what you're starting to see through all of the services that are out there is that most of the video service providers whether they're online providers or they're your traditional kinds of paid tv operators is that they're really focused on the consumer and trying to figure out what is of value to you personally in the past it used to be that services were one size fits all and um and so everybody watched the same program right at the same time and now that's uh that's we have this technology that allows us to deliver different types of content to people on different screens at different times and to advertise to those individuals and to cater to their individual preferences and so using that information that we have about how people watch and and what people's interests are we can create a much more engaging and compelling uh entertainment experience on all of those screens and um and ultimately provide more value to consumers awesome story jim thanks so much for keeping us helping us just keep entertained during the pandemic i really appreciate your time sure thanks all right keep it right there everybody you're watching hpes accelerating next first of all pat congratulations on your new role as intel ceo how are you approaching your new role and what are your top priorities over your first few months thanks antonio for having me it's great to be here with you all today to celebrate the launch of your gen 10 plus portfolio and the long history that our two companies share in deep collaboration to deliver amazing technology to our customers together you know what an exciting time it is to be in this industry technology has never been more important for humanity than it is today everything is becoming digital and driven by what i call the four key superpowers the cloud connectivity artificial intelligence and the intelligent edge they are super powers because each expands the impact of the others and together they are reshaping every aspect of our lives and work in this landscape of rapid digital disruption intel's technology and leadership products are more critical than ever and we are laser focused on bringing to bear the depth and breadth of software silicon and platforms packaging and process with at scale manufacturing to help you and our customers capitalize on these opportunities and fuel their next generation innovations i 
am incredibly excited about continuing the next chapter of a long partnership between our two companies the acceleration of the edge has been significant over the past year with this next wave of digital transformation we expect growth in the distributed edge and age build out what are you seeing on this front like you said antonio the growth of edge computing and build out is the next key transition in the market telecommunications service providers want to harness the potential of 5g to deliver new services across multiple locations in real time as we start building solutions that will be prevalent in a 5g digital environment we will need a scalable flexible and programmable network some use cases are the massive scale iot solutions more robust consumer devices and solutions ar vr remote health care autonomous robotics and manufacturing environments and ubiquitous smart city solutions intel and hp are partnering to meet this new wave head on for 5g build out and the rise of the distributed enterprise this build out will enable even more growth as businesses can explore how to deliver new experiences and unlock new insights from the new data creation beyond the four walls of traditional data centers and public cloud providers network operators need to significantly increase capacity and throughput without dramatically growing their capital footprint their ability to achieve this is built upon a virtualization foundation an area of intel expertise for example we've collaborated with verizon for many years and they are leading the industry and virtualizing their entire network from the core the edge a massive redesign effort this requires advancements in silicon and power management they expect intel to deliver the new capabilities in our roadmap so ecosystem partners can continue to provide innovative and efficient products with this optimization for hybrid we can jointly provide a strong foundation to take on the growth of data-centric workloads for data analytics and ai to build and deploy models faster to accelerate insights that will deliver additional transformation for organizations of all types the network transformation journey isn't easy we are continuing to unleash the capabilities of 5g and the power of the intelligent edge yeah the combination of the 5g built out and the massive new growth of data at the edge are the key drivers for the age of insight these new market drivers offer incredible new opportunities for our customers i am excited about recent launch of our new gen 10 plus portfolio with intel together we are laser focused on delivering joint innovation for customers that stretches from the edge to x scale how do you see new solutions that this helping our customers solve the toughest challenges today i talked earlier about the superpowers that are driving the rapid acceleration of digital transformation first the proliferation of the hybrid cloud is delivering new levels of efficiency and scale and the growth of the cloud is democratizing high-performance computing opening new frontiers of knowledge and discovery next we see ai and machine learning increasingly infused into every application from the edge to the network to the cloud to create dramatically better insights and the rapid adoption of 5g as i talked about already is fueling new use cases that demand lower latencies and higher bandwidth this in turn is pushing computing to the edge closer to where the data is created and consumed the confluence of these trends is leading to the biggest and fastest build 
out of computing in human history to keep pace with this rapid digital transformation we recognize that infrastructure has to be built with the flexibility to support a broad set of workloads and that's why over the last several years intel has built an unmatched portfolio to deliver every component of intelligent silicon our customers need to move store and process data from the cpus to fpgas from memory to ssds from ethernet to switch silicon to silicon photonics and software our 3rd gen intel xeon scalable processors and our data centric portfolio deliver new core performance and higher bandwidth providing our customers the capabilities they need to power these critical workloads and we love seeing all the unique ways customers like hpe leverage our technology and solution offerings to create opportunities and solve their most pressing challenges from cloud gaming to blood flow to brain scans to financial market security the opportunities are endless with flexible performance i am proud of the amazing innovation we are bringing to support our customers especially as they respond to new data-centric workloads like ai and analytics that are critical to digital transformation these new requirements create a need for compute that's warlord optimized for performance security ease of use and the economics of business now more than ever compute matters it is the foundation for this next wave of digital transformation by pairing our compute with our software and capabilities from hp green lake we can support our customers as they modernize their apps and data quickly they seamlessly and securely scale them anywhere at any size from edge to x scale but thank you for joining us for accelerating next today i know our audience appreciated hearing your perspective on the market and how we're partnering together to support their digital transformation journey i am incredibly excited about what lies ahead for hp and intel thank you thank you antonio great to be with you today we just compressed about a decade of online commerce progress into about 13 or 14 months so now we're going to look at how one retailer navigated through the pandemic and what the future of their business looks like and with me is alan jensen who's the chief information officer and senior vice president of the sawing group hello alan how are you fine thank you good to see you hey look you know when i look at the 100 year history plus of your company i mean it's marked by transformations and some of them are quite dramatic so you're denmark's largest retailer i wonder if you could share a little bit more about the company its history and and how it continues to improve the customer experience well at the same time keeping costs under control so vital in your business yeah yeah the company founded uh approximately 100 years ago with a department store in in oahu's in in denmark and i think in the 60s we founded the first supermarket in in denmark with the self-service and combined textile and food in in the same store and in beginning 70s we founded the first hyper market in in denmark and then the this calendar came from germany early in in 1980 and we started a discount chain and so we are actually building department store in hyber market info in in supermarket and in in the discount sector and today we are more than 1 500 stores in in three different countries in in denmark poland and germany and especially for the danish market we have a approximately 38 markets here and and is the the leader we have over the last 10 years 
developed further into online first in non-food and now uh in in food with home delivery with click and collect and we have done some magnetism acquisition in in the convenience with mailbox solutions to our customers and we have today also some restaurant burger chain and and we are running the starbuck in denmark so i can you can see a full plate of different opportunities for our customer in especially denmark it's an awesome story and of course the founder's name is still on the masthead what a great legacy now of course the pandemic is is it's forced many changes quite dramatic including the the behaviors of retail customers maybe you could talk a little bit about how your digital transformation at the sawing group prepared you for this shift in in consumption patterns and any other challenges that that you faced yeah i think uh luckily as for some of the you can say the core it solution in in 19 we just roll out using our computers via direct access so you can work from anywhere whether you are traveling from home and so on we introduced a new agile scrum delivery model and and we just finalized the rolling out teams in in in january february 20 and that was some very strong thing for suddenly moving all our employees from from office to to home and and more or less overnight we succeed uh continuing our work and and for it we have not missed any deadline or task for the business in in 2020 so i think that was pretty awesome to to see and for the business of course the pandemic changed a lot as the change in customer behavior more or less overnight with plus 50 80 on the online solution forced us to do some different priorities so we were looking at the food home delivery uh and and originally expected to start rolling out in in 2022 uh but took a fast decision in april last year to to launch immediately and and we have been developing that uh over the last eight months and has been live for the last three months now in the market so so you can say the pandemic really front loaded some of our strategic actions for for two to three years uh yeah that was very exciting what's that uh saying luck is the byproduct of great planning and preparation so let's talk about when you're in a company with some strong financial situation that you can move immediately with investment when you take such decision then then it's really thrilling yeah right awesome um two-part question talk about how you leverage data to support the solid groups mission and you know drive value for customers and maybe you could talk about some of the challenges you face with just the amount of data the speed of data et cetera yeah i said data is everything when you are in retail as a retailer's detail as you need to monitor your operation down to each store eats department and and if you can say we have challenge that that is that data is just growing rapidly as a year by year it's growing more and more because you are able to be more detailed you're able to capture more data and for a company like ours we need to be updated every morning as a our fully updated sales for all unit department single sku selling in in the stores is updated 3 o'clock in the night and send out to all top management and and our managers all over the company it's actually 8 000 reports going out before six o'clock every day in the morning we have introduced a loyalty program and and you are capturing a lot of data on on customer behavior what is their preferred offers what is their preferred time in the week for buying different things and 
all these data is now used to to personalize our offers to our cost of value customers so we can be exactly hitting the best time and and convert it to sales data is also now used for what we call intelligent price reductions as a so instead of just reducing prices with 50 if it's uh close to running out of date now the system automatically calculate whether a store has just enough to to finish with full price before end of day or actually have much too much and and need to maybe reduce by 80 before as being able to sell so so these automated [Music] solutions built on data is bringing efficiency into our operation wow you make it sound easy these are non-trivial items so congratulations on that i wonder if we could close hpe was kind enough to introduce us tell us a little bit about the infrastructure the solutions you're using how they differentiate you in the market and i'm interested in you know why hpe what distinguishes them why the choice there yeah as a when when you look out a lot is looking at moving data to the cloud but we we still believe that uh due to performance due to the availability uh more or less on demand we we still don't see the cloud uh strong enough for for for selling group uh capturing all our data we have been quite successfully having one data truth across the whole con company and and having one just one single bi solution and having that huge amount of data i think we have uh one of the 10 largest sub business warehouses in global and but on the other hand we also want to be agile and want to to scale when needed so getting close to a cloud solution we saw it be a green lake as a solution getting close to the cloud but still being on-prem and could deliver uh what we need to to have a fast performance on on data but still in a high quality and and still very secure for us to run great thank you for that and thank alan thanks so much for your for your time really appreciate your your insights and your congratulations on the progress and best of luck in the future thank you all right keep it right there we have tons more content coming you're watching accelerating next from hpe [Music] welcome lisa and thank you for being here with us today antonio it's wonderful to be here with you as always and congratulations on your launch very very exciting for you well thank you lisa and we love this partnership and especially our friendship which has been very special for me for the many many years that we have worked together but i wanted to have a conversation with you today and obviously digital transformation is a key topic so we know the next wave of digital transformation is here being driven by massive amounts of data an increasingly distributed world and a new set of data intensive workloads so how do you see world optimization playing a role in addressing these new requirements yeah no absolutely antonio and i think you know if you look at the depth of our partnership over the last you know four or five years it's really about bringing the best to our customers and you know the truth is we're in this compute mega cycle right now so it's amazing you know when i know when you talk to customers when we talk to customers they all need to do more and and frankly compute is becoming quite specialized so whether you're talking about large enterprises or you're talking about research institutions trying to get to the next phase of uh compute so that workload optimization that we're able to do with our processors your system design and then you know working closely with 
our software partners is really the next wave of this this compute cycle so thanks lisa you talk about mega cycle so i want to make sure we take a moment to celebrate the launch of our new generation 10 plus compute products with the latest announcement hp now has the broadest amd server portfolio in the industry spanning from the edge to exascale how important is this partnership and the portfolio for our customers well um antonio i'm so excited first of all congratulations on your 19 world records uh with uh milan and gen 10 plus it really is building on you know sort of our you know this is our third generation of partnership with epic and you know you are with me right at the very beginning actually uh if you recall you joined us in austin for our first launch of epic you know four years ago and i think what we've created now is just an incredible portfolio that really does go across um you know all of the uh you know the verticals that are required we've always talked about how do we customize and make things easier for our customers to use together and so i'm very excited about your portfolio very excited about our partnership and more importantly what we can do for our joint customers it's amazing to see 19 world records i think i'm really proud of the work our joint team do every generation raising the bar and that's where you know we we think we have a shared goal of ensuring that customers get the solution the services they need any way they want it and one way we are addressing that need is by offering what we call as a service delivered to hp green lake so let me ask a question what feedback are you hearing from your customers with respect to choice meaning consuming as a service these new solutions yeah now great point i think first of all you know hpe green lake is very very impressive so you know congratulations um to uh to really having that solution and i think we're hearing the same thing from customers and you know the truth is the compute infrastructure is getting more complex and everyone wants to be able to deploy sort of the right compute at the right price point um you know in in terms of also accelerating time to deployment with the right security with the right quality and i think these as a service offerings are going to become more and more important um as we go forward in the compute uh you know capabilities and you know green lake is a leadership product offering and we're very very you know pleased and and honored to be part of it yeah we feel uh lisa we are ahead of the competition and um you know you think about some of our competitors now coming with their own offerings but i think the ability to drive joint innovation is what really differentiate us and that's why we we value the partnership and what we have been doing together on giving the customers choice finally you know i know you and i are both incredibly excited about the joint work we're doing with the us department of energy the oak ridge national laboratory we think about large data sets and you know and the complexity of the analytics we're running but we both are going to deliver the world's first exascale system which is remarkable to me so what this milestone means to you and what type of impact do you think it will make yes antonio i think our work with oak ridge national labs and hpe is just really pushing the envelope on what can be done with computing and if you think about the science that we're going to be able to enable with the first exascale machine i would say there's a tremendous 
amount of innovation that has already gone in to the machine and we're so excited about delivering it together with hpe and you know we also think uh that the super computing technology that we're developing you know at this broad scale will end up being very very important for um you know enterprise compute as well and so it's really an opportunity to kind of take that bleeding edge and really deploy it over the next few years so super excited about it i think you know you and i have a lot to do over the uh the next few months here but it's an example of the great partnership and and how much we're able to do when we put our teams together um to really create that innovation i couldn't agree more i mean this is uh an incredible milestone for for us for our industry and honestly for the country in many ways and we have many many people working 24x7 to deliver against this mission and it's going to change the future of compute no question about it and then honestly put it to work where we need it the most to advance life science to find cures to improve the way people live and work but lisa thank you again for joining us today and thank you more most importantly for the incredible partnership and and the friendship i really enjoy working with you and your team and together i think we can change this industry once again so thanks for your time today thank you so much antonio and congratulations again to you and the entire hpe team for just a fantastic portfolio launch thank you okay well some pretty big hitters in those keynotes right actually i have to say those are some of my favorite cube alums and i'll add these are some of the execs that are stepping up to change not only our industry but also society and that's pretty cool and of course it's always good to hear from the practitioners the customer discussions have been great so far today now the accelerating next event continues as we move to a round table discussion with krista satrathwaite who's the vice president and gm of hpe core compute and krista is going to share more details on how hpe plans to help customers move ahead with adopting modern workloads as part of their digital transformations krista will be joined by hpe subject matter experts chris idler who's the vp and gm of the element and mark nickerson director of solutions product management as they share customer stories and advice on how to turn strategy into action and realize results within your business thank you for joining us for accelerate next event i hope you're enjoying it so far i know you've heard about the industry challenges the i.t trends hpe strategy from leaders in the industry and so today what we want to do is focus on going deep on workload solutions so in the most important workload solutions the ones we always get asked about and so today we want to share with you some best practices some examples of how we've helped other customers and how we can help you all right with that i'd like to start our panel now and introduce chris idler who's the vice president and general manager of the element chris has extensive uh solution expertise he's led hpe solution engineering programs in the past welcome chris and mark nickerson who is the director of product management and his team is responsible for solution offerings making sure we have the right solutions for our customers welcome guys thanks for joining me thanks for having us krista yeah so i'd like to start off with one of the big ones the ones that we get asked about all the time what we've been all 
been experienced in the last year remote work remote education and all the challenges that go along with that so let's talk a little bit about the challenges that customers have had in transitioning to this remote work and remote education environment uh so i i really think that there's a couple of things that have stood out for me when we're talking with customers about vdi first obviously there was a an unexpected and unprecedented level of interest in that area about a year ago and we all know the reasons why but what it really uncovered was how little planning had gone into this space around a couple of key dynamics one is scale it's one thing to say i'm going to enable vdi for a part of my workforce in a pre-pandemic environment where the office was still the the central hub of activity for work uh it's a completely different scale when you think about okay i'm going to have 50 60 80 maybe 100 of my workforce now distributed around the globe um whether that's in an educational environment where now you're trying to accommodate staff and students in virtual learning uh whether that's uh in the area of things like uh formula one racing where we had uh the desire to still have events going on but the need for a lot more social distancing not as many people able to be trackside but still needing to have that real-time experience this really manifested in a lot of ways and scale was something that i think a lot of customers hadn't put as much thought into initially the other area is around planning for experience a lot of times the vdi experience was planned out with very specific workloads or very specific applications in mind and when you take it to a more broad-based environment if we're going to support multiple functions multiple lines of business there hasn't been as much planning or investigation that's gone into the application side and so thinking about how graphically intense some applications are one customer that comes to mind would be tyler isd who did a fairly large roll out pre-pandemic and as part of their big modernization effort what they uncovered was even just changes in standard windows applications had become so much more graphically intense with windows 10 with the latest updates with programs like adobe that they were really needing to have an accelerated experience for a much larger percentage of their install base than than they had counted on so in addition to planning for scale you also need to have that visibility into what are the actual applications that are going to be used by these remote users how graphically intense those might be what's the login experience going to be as well as the operating experience and so really planning through that experience side as well as the scale and the number of users uh is is kind of really two of the biggest most important things that i've seen yeah mark i'll i'll just jump in real quick i think you you covered that pretty comprehensively there and and it was well done the couple of observations i've made one is just that um vdi suddenly become like mission critical for sales it's the front line you know for schools it's the classroom you know that this isn't a cost cutting measure or a optimization nit measure anymore this is about running the business in a way it's a digital transformation one aspect of about a thousand aspects of what does it mean to completely change how your business does and i think what that translates to is that there's no margin for error right you really need to deploy this in a way that that 
performs that understands what you're trying to use it for that gives that end user the experience that they expect on their screen or on their handheld device or wherever they might be whether it's a racetrack classroom or on the other end of a conference call or a boardroom right so what we do in in the engineering side of things when it comes to vdi or really understand what's a tech worker what's a knowledge worker what's a power worker what's a gp really going to look like what's time of day look like you know who's using it in the morning who's using it in the evening when do you power up when do you power down does the system behave does it just have the it works function and what our clients can can get from hpe is um you know a worldwide set of experiences that we can apply to making sure that the solution delivers on its promises so we're seeing the same thing you are krista you know we see it all the time on vdi and on the way businesses are changing the way they do business yeah and it's funny because when i talk to customers you know one of the things i heard that was a good tip is to roll it out to small groups first so you could really get a good sense of what the experience is before you roll it out to a lot of other people and then the expertise it's not like every other workload that people have done before so if you're new at it make sure you're getting the right advice expertise so that you're doing it the right way okay one of the other things we've been talking a lot about today is digital transformation and moving to the edge so now i'd like to shift gears and talk a little bit about how we've helped customers make that shift and this time i'll start with chris all right hey thanks okay so you know it's funny when it comes to edge because um the edge is different for for every customer in every client and every single client that i've ever spoken to of hp's has an edge somewhere you know whether just like we were talking about the classroom might be the edge but but i think the industry when we're talking about edge is talking about you know the internet of things if you remember that term from not to not too long ago you know and and the fact that everything's getting connected and how do we turn that into um into telemetry and and i think mark's going to be able to talk through a couple of examples of clients that we have in things like racing and automotive but what we're learning about edge is it's not just how do you make the edge work it's how do you integrate the edge into what you're already doing and nobody's just the edge right and and so if it's if it's um ai mldl there's that's one way you want to use the edge if it's a customer experience point of service it's another you know there's yet another way to use the edge so it turns out that having a broad set of expertise like hpe does to be able to understand the different workloads that you're trying to tie together including the ones that are running at the at the edge often it involves really making sure you understand the data pipeline you know what information is at the edge how does it flow to the data center how does it flow and then which data center uh which private cloud which public cloud are you using i think those are the areas where where we really sort of shine is that we we understand the interconnectedness of these things and so for example red bull and i know you're going to talk about that in a minute mark um uh the racing company you know for them the the edge is the racetrack and and 
you know milliseconds or partial seconds winning and losing races but then there's also an edge of um workers that are doing the design for for the cars and how do they get quick access so um we have a broad variety of infrastructure form factors and compute form factors to help with the edge and this is another real advantage we have is that we we know how to put the right piece of equipment with the right software we also have great containerized software with our esmeral container platform so we're really becoming um a perfect platform for hosting edge-centric workloads and applications and data processing yeah it's uh all the way down to things like our superdome flex in the background if you have some really really really big data that needs to be processed and of course our workhorse proliance that can be configured to support almost every um combination of workload you have so i know you started with edge krista but but and we're and we nail the edge with those different form factors but let's make sure you know if you're listening to this this show right now um make sure you you don't isolate the edge and make sure they integrate it with um with the rest of your operation mark you know what did i miss yeah to that point chris i mean and this kind of actually ties the two things together that we've been talking about here but the edge uh has become more critical as we have seen more work moving to the edge as where we do work changes and evolves and the edge has also become that much more closer because it has to be that much more connected um to your point uh talking about where that edge exists that edge can be a lot of different places but the one commonality really is that the edge is is an area where work still needs to get accomplished it can't just be a collection point and then everything gets shipped back to a data center or back to some some other area for the work it's where the work actually needs to get done whether that's edge work in a use case like vdi or whether that's edge work in the case of doing real-time analytics you mentioned red bull racing i'll i'll bring that up i mean you talk about uh an area where time is of the essence everything about that sport comes down to time you're talking about wins and losses that are measured as you said in milliseconds and that applies not just to how performance is happening on the track but how you're able to adapt and modify the needs of the car uh adapt to the evolving conditions on the track itself and so when you talk about putting together a solution for an edge like that you're right it can't just be here's a product that's going to allow us to collect data ship it back someplace else and and wait for it to be processed in a couple of days you have to have the ability to analyze that in real time when we pull together a solution involving our compute products our storage products our networking products when we're able to deliver that full package solution at the edge what you see are results like a 50 decrease in processing time to make real-time analytic decisions about configurations for the car and adapting to to real-time uh test and track conditions yeah really great point there um and i really love the example of edge and racing because i mean that is where it all every millisecond counts um and so important to process that at the edge now switching gears just a little bit let's talk a little bit about some examples of how we've helped customers when it comes to business agility and optimizing their workload 
for maximum outcome. For business agility, let's talk about some things that we've done to help customers with that, Mark. Yeah, I'll give it a shot. So when we think about business agility, what you're really talking about is the ability to implement on the fly, to be able to scale up, to scale down, the ability to adapt to real-time changing situations, and I think the last year has been an excellent example of exactly how so many businesses have been forced to do that. I think one of the areas where we've probably seen the most ability to help customers in that agility area is around the space of private and hybrid clouds. If you take a look at the need that customers have to be able to migrate workloads and migrate data between public cloud environments, app development environments that may be hosted on-site or maybe in the cloud, the ability to move out of development and into production, and having the agility to then scale those application rollouts up, having some of that private cloud flexibility in addition to a public cloud environment is something that is becoming increasingly crucial for a lot of our customers. All right, well, we could keep going on and on, but I'll stop it there. Thank you so much, Chris and Mark. This has been a great discussion. Thanks for sharing how we've helped other customers and some tips and advice for approaching these workloads. I thank you all for joining us, and remind you to look at the on-demand sessions if you want to double-click a little bit more into what we've been covering all day today. You can learn a lot more in those sessions, and I thank you for your time. Thanks for tuning in today. Many thanks to Krista, Chris, and Mark. We really appreciate you joining today to share how HPE is partnering to facilitate new workload adoption, of course, with your customers on their path to digital transformation. Now, to round out our Accelerating Next event today, we have a series of on-demand sessions available so you can explore more details around every step of that digital transformation, from building a solid infrastructure strategy, to identifying the right compute and software, to rounding out your solutions with management and financial support. So please navigate to the agenda at the top of the page to take a look at what's available. I just want to close by saying that despite the rush to digital during the pandemic, most businesses haven't completed their digital transformations, far from it. 2020 was more like a forced march than a planful strategy. But now you have some time. You've adjusted to this new abnormal, and we hope the resources that you find at Accelerating Next will help you on your journey. Best of luck to you, and be well.
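One concrete pattern worth pulling out of the retail segment earlier in this transcript is Alan Jensen's "intelligent price reduction": rather than a flat 50 percent markdown on items nearing their sell-by date, the system estimates whether a store can sell its remaining stock at full price before closing and discounts only the overhang, up to the 80 percent cuts he mentions. The sketch below is a minimal illustration of that idea only; the data model, thresholds, and discount tiers are hypothetical placeholders, not Salling Group's actual rules.

```python
# Illustrative sketch only, not Salling Group's actual system. It mirrors the
# markdown logic described in the transcript: keep full price when remaining
# stock will sell out before close, and deepen the discount as surplus grows.

from dataclasses import dataclass

@dataclass
class SkuSnapshot:
    sku: str
    units_on_hand: int           # units that expire at end of today (assumed)
    sales_rate_per_hour: float   # recent full-price sell-through for this store/SKU
    hours_until_close: float

def suggested_markdown(snap: SkuSnapshot) -> float:
    """Return a markdown fraction (0.0 = full price, 0.8 = 80% off)."""
    expected_full_price_sales = snap.sales_rate_per_hour * snap.hours_until_close

    if snap.units_on_hand <= expected_full_price_sales:
        return 0.0   # the store will sell out anyway; keep full price

    # Overhang: how far stock on hand exceeds what full-price demand can absorb.
    overhang = snap.units_on_hand / max(expected_full_price_sales, 1e-6)
    if overhang < 1.5:
        return 0.3   # mild surplus: modest discount (placeholder tier)
    elif overhang < 3.0:
        return 0.5   # the traditional flat cut mentioned in the conversation
    else:
        return 0.8   # heavy surplus: deep discount to avoid waste

if __name__ == "__main__":
    demo = SkuSnapshot(sku="dairy-123", units_on_hand=40,
                       sales_rate_per_hour=3.5, hours_until_close=6)
    print(f"{demo.sku}: {suggested_markdown(demo):.0%} off")
```

In a real deployment the sell-through rate would presumably come from the same nightly, per-store, per-SKU sales feed described above; the interesting part is the decision structure, where discount depth follows the gap between stock on hand and expected full-price demand.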

Published Date : Apr 19 2021


Pete Ungaro & Addison Snell


 

>> Announcer: From around the globe it's theCUBE with digital coverage of HPE GreenLake Day made possible by Hewlett Packard Enterprise. >> Welcome everybody to this spotlight session here at GreenLake Day and we're going to dig into high-performance computing. Let me first bring in Pete Ungaro who's the GM for HPC and Mission Critical Solutions at Hewlett Packard Enterprise. And then we're going to pivot to Addison Snell, who's the CEO of research firm Intersect360. So Pete started with you welcome and really a pleasure to have you here. I want to first start off by asking you what are the key trends that you see in the HPC and super computing space. And I really appreciate if you could talk about how customer consumption patterns are changing. >> Yeah, appreciate that Dave and thanks for having me. I think the biggest thing that we're seeing is just the massive growth of data. And as we get larger and larger data sets larger and larger models happen and we're having more and more new ways to compute on that data. So new algorithms like AI would be a great example of that. And as people are starting to see this, especially as they're going through digital transformations, more and more people I believe can take advantage of HPC but maybe don't know how and don't know how to get started. And so they're looking for how to get going into this environment. And many customers that are long-time HPC customers just consume it on their own data centers, they have that capability but many don't. And so they're looking at how can I do this? Do I need to build up that capability myself? Do I go to the Cloud? What about my data and where that resides? So there's a lot of things that are going into thinking through how do I start to take advantage of this new infrastructure? >> Excellent, I mean, we all know HPC workloads. You're talking about fording research and discovery for some of the toughest and most complex problems particularly those that are affecting society. So I'm interested in your thoughts on how you see GreenLake helping in these endeavors specifically. >> Yeah, one of the most exciting things about HPC is just the impact that it has. Everywhere from building safer cars and airplanes to looking at climate change to finding new vaccines for things like COVID that we're all dealing with right now. So one of the biggest things is how do we take advantage of that and use that to benefit society overall. And as we think about implementing HPC, how do we get started and then how do we grow and scale as we get more and more capabilities. So that's the biggest things that we're seeing on that front. >> Yeah, okay, so just about a year ago you guys launched the GreenLake initiative and the whole complete focus on as a service. So I'm curious as to how the new GreenLake services the HPC services specifically as it relates to GreenLake, how do they fit into HP's overall high-performance computing portfolio and the strategy? >> Yeah, great question. GreenLake is a new consumption model for us. So it's a very exciting. We keep our entire HPC portfolio that we have today but extend it with GreenLake and offer customers expanded consumption choices. 
So customers that potentially are dealing with the growth of their data or they're moving to digital transformation applications, they can use GreenLake just easily scale up from workstations to manage their system costs or operational costs or if they don't have staff to expand their environment, GreenLake provides all of that in a managed infrastructure for them. So if they're going from like a pilot environment, I've been to a production environment over time, GreenLake enables them to do that very simply and easily without having to have all that internal infrastructure people, computer data centers, et cetera, GreenLake provides all that for them. So they can have a turnkey solution for HPC. >> So a lot easier entry strategy is a key word that you use there was choice though. So basically you're providing optionality, you're not necessarily forcing them into a particular model, is that correct? >> Yeah, 100% Dave. What we want to do is just expand the choices so customers can buy and acquire and use that technology to their advantages. Whether they're large or small, whether they're a startup or a fortune 500 company, whether they have their own data centers or they want to use a colo facility, whether they have their own staff or not. We want to just provide them the opportunity to take advantage of this leading edge resource. >> Very interesting, Pete, I really appreciate the perspectives that you guys are bringing to the market. I mean, it seems to me it's going to really accelerate broader adoption of high-performance computing to the masses, really giving them an easier entry point. I want to bring in now Addison Snell to the discussion. Addison, he's a CEO, as I said of Intersect360 which in my view is the world's leading market research company focused on HPC. Addison you've been following this space for a while. You're an expert, you've seen a lot of changes over the years. What do you see as the critical aspects in the market specifically as it relates toward this as a service delivery that we were just discussing with Pete? And I wonder if you could sort of work in there the benefits in terms of in your view how it's going to affect HPC usage broadly. >> Yeah, good morning Dave, and thanks very much for having me. Pete it's great to see you again. So we've been tracking a lot of these utility computing models in high-performance computing for years. Particularly as most of the usage by revenue is actually by commercial endeavors using high-performance computing for their R and D and engineering projects and the like. And cloud computing has been a major portion of that and has the highest growth rate in the market right now where we're seeing this double digit growth that accounted for about $1.4 billion of the high-performance computing industry last year. But the bigger trend and which makes GreenLake really interesting is that we saw an additional about a billion dollars worth of spending outside what was directly measured in the cloud portion of the market in areas that we deemed to be cloud-like which were as a service types of contracts that were still utility computing, but they might be under a software as a service portion of a budget under software or some other managed services type of contract that the user wasn't reporting directly as cloud but was certainly influenced by utility computing. And I think that's going to be a really dominant portion of the market going forward when we look at a growth rate and where the market's been evolving. 
>> So that's interesting. I mean, basically you're saying this utility model is not brand new, we've seen that for years. Cloud was obviously a catalyst that gave that a boost. What is new you're saying is, and I'll say it this way. I'd love to get your independent perspective on this is sort of the definition of cloud is expanding where we people always say, it's not a place, it's an experience and I couldn't agree more. But I wonder if you could give us your independent perspective on that, both on the thoughts of what I just said but also how would you rate HPE position in this market? >> Well, you're right absolutely that the definition of cloud is expanding. And that's a challenge when we run our surveys that we try to be pedantic in a sense and define exactly what we're talking about. And that's how we're able to measure both the direct usage of a typical public cloud but also a more flexible notion of as a service. Now you asked about HPE in particular and that's extremely relevant, not only with GreenLake, but with their broader presence in high-performance computing. HPE is the number one provider of systems for high-performance computing worldwide. And that's largely based on the breadth of HPE's offerings in addition to their performance at various segments. So picking up a lot of the commercial market with our HPE Apollo Gen10 plus, they hit a lot of big memory configurations with the Superdome Flex and scale up to some of the most powerful supercomputers in the world with the HPE Cray EX platforms that go into some of the leading national labs. Now GreenLake gives them an opportunity to offer this kind of flexibility to customers rather than committing all at once to a particular purchase price. But if you want to do position those on a utility computing basis, pay for them as a service without committing to a particular public cloud, I think that's an interesting role for GreenLake to play in the market. >> Yeah, yeah it's interesting. I mean, earlier this year we celebrated Exascale Day with the support from HPE and it really is all about a community and an ecosystem. Is a lot of comradery going on in the space that you guys are deep into. Addison, it says we can wrap what should observe as expect in this HPC market, in this space over the next few years? >> Yeah, that's a great question what to expect because if 2020 has taught us anything it's the hazards of forecasting where we think the market is going. Like when we put out a market forecast, we tend not to look at huge things like unexpected pandemics or wars but it's relevant to the topic here. Because as I said, we were already forecasting cloud and as a service models growing. Anytime you get into uncertainty where it becomes less easy to plan for where you want to be in two years, three years, five years, that model speaks well to things that are cloud or as a service to do very well flexibly. And therefore, when we look at the market and plan out where we think it is in 2020, 2021, anything that accelerates uncertainty actually is going to increase the need for something like GreenLake or an as a service or cloud type of environment. So we're expecting those sorts of deployments to come in over and above where we were already previously expected them in 2020, 2021. Because as a service deals well with uncertainty and that's just the world we've been in recently. >> I think those are great comments and a really good framework. 
And we've seen this with the pandemic, the pace at which the technology industry in particular and of course HPE specifically have responded to support that. Your point about agility and flexibility being crucial. And I'll go back to something earlier that Pete said around the data, the sooner we can get to the data to analyze things, whether it's compressing the time to a vaccine or pivoting our businesses, the better off we are. So I want to thank Pete and Addison for your perspectives today. Really great stuff, guys, thank you. >> Yeah, thank you. >> Thank you. >> All right, keep it right there for more great insights and content. You're watching GreenLake Day. (ambient music)

Published Date : Nov 23 2020


Lenovo Transform 2017 Keynote


 

(upbeat techno music) >> Announcer: Good morning ladies and gentlemen. This is Lenovo Transform. Please welcome to the stage Lenovo's Rod Lappin. (upbeat instrumental) >> Alright, ladies and gentlemen. Here we go. I was out the back having a chat. A bit faster than I expected. How are you all doing this morning? (crowd cheers) >> Good? How fantastic is it to be in New York City? (crowd applauds) Excellent. So my name's Rod Lappin. I'm with the Data Center Group, obviously. I do basically anything that touches customers from our sales people, our pre-sales engineers, our architects, et cetera, all the way through to our channel partner sales engagement globally. So that's my job, but enough of that, okay? So the weather this morning, absolutely fantastic. Not a cloud in the sky, perfect. A little bit different to how it was yesterday, right? I want to thank all of you because I know a lot of you had a lot of commuting issues getting into New York yesterday with all the storms. We have a lot of people from international and domestic travel caught up in obviously the network, which blows my mind, actually, but we have a lot of people here from Europe, obviously, a lot of analysts and media people here as well as customers who were caught up in circling around the airport apparently for hours. So a big round of applause for our team from Europe. (audience applauds) Thank you for coming. We have some people who commuted a very short distance. For example, our own server general manager, Cameron (mumbles), he's out the back there. Cameron, how long did it take you to get from Raleigh to New York? An hour-and-a-half flight? >> Cameron: 17 hours. >> 17 hours, ladies and gentleman. That's a fantastic distance. I think that's amazing. But I know a lot of us, obviously, in the United States have come a long way with the storms, obviously very tough, but I'm going to call out one individual. Shaneil from Spotless. Where are you Shaneil, you're here somewhere? There he is from Australia. Shaneil how long did it take you to come in from Australia? 25 hour, ladies and gentleman. A big round of applause. That's a pretty big effort. Shaneil actually I want you to stand up, if you don't mind. I've got a seat here right next to my CEO. You've gone the longest distance. How about a big round of applause for Shaneil. We'll put him in my seat, next to YY. Honestly, Shaneil, you're doing me a favor. Okay ladies and gentlemen, we've got a big day today. Obviously, my seat now taken there, fantastic. Obviously New York City, the absolute pinnacle of globalization. I first came to New York in 1996, which was before a lot of people in the room were born, unfortunately for me these days. Was completely in awe. I obviously went to a Yankees game, had no clue what was going on, didn't understand anything to do with baseball. Then I went and saw Patrick Ewing. Some of you would remember Patrick Ewing. Saw the Knicks play basketball. Had no idea what was going on. Obviously, from Australia, and somewhat slightly height challenged, basketball was not my thing but loved it. I really left that game... That was the first game of basketball I'd ever seen. Left that game realizing that effectively the guy throws the ball up at the beginning, someone taps it, that team gets it, they run it, they put it in the basket, then the other team gets it, they put it in the basket, the other team gets it, and that's basically the entire game. 
So I haven't really progressed from that sort of learning or understanding of basketball since then, but for me, personally, being here in New York, and obviously presenting with all of you guys today, it's really humbling from obviously some of you would have picked my accent, I'm also from Australia. From the north shore of Sydney. To be here is just a fantastic, fantastic event. So welcome ladies and gentlemen to Transform, part of our tech world series globally in our event series and our event season here at Lenovo. So once again, big round of applause. Thank you for coming (audience applauds). Today, basically, is the culmination of what I would classify as a very large journey. Many of you have been with us on that. Customers, partners, media, analysts obviously. We've got quite a lot of our industry analysts in the room. I know Matt Eastwood yesterday was on a train because he sent a Tweet out saying there's 170 people on the WIFI network. He was obviously a bit concerned he was going to get-- Pat Moorhead, he got in at 3:30 this morning, obviously from traveling here as well with some of the challenges with the transportation, so we've got a lot of people in the room that have been giving us advice over the last two years. I think all of our employees are joining us live. All of our partners and customers through the stream. As well as everybody in this packed-out room. We're very very excited about what we're going to be talking to you all today. I want to have a special thanks obviously to our R&D team in Raleigh and around the world. They've also been very very focused on what they've delivered for us today, and it's really important for them to also see the culmination of this great event. And like I mentioned, this is really the feedback. It's not just a Lenovo launch. This is a launch based on the feedback from our partners, our customers, our employees, the analysts. We've been talking to all of you about what we want to be when we grow up from a Data Center Group, and I think you're going to hear some really exciting stuff from some of the speakers today and in the demo and breakout sessions that we have after the event. These last two years, we've really transformed the organization, and that's one of the reasons why that theme is part of our Tech World Series today. We're very very confident in our future, obviously, and where the company's going. It's really important for all of you to understand today and take every single snippet that YY, Kirk, and Christian talk about today in the main session, and then our presenters in the demo sections on what Lenovo's actually doing for its future and how we're positioning the company, obviously, for that future and how the transformation, the digital transformation, is going ahead globally. So, all right, we are now going to step into our Transform event. And I've got a quick agenda statement for you. The very first thing is we're going to hear from YY, our chairman and CEO. He's going to discuss artificial intelligence, the evolution of our society and how Lenovo is clearly positioning itself in the industry. Then, obviously, you're going to hear from Kirk Skaugen, our president of the Data Center Group, our new boss. He's going to talk about how long he's been with the company and the transformation, once again, we're making, very specifically to the Data Center Group and how much of a difference we're making to society and some of our investments. 
Christian Teismann, our SVP and general manager of our client business, is going to talk about the 25 years of ThinkPad. This year is the 25-year anniversary of our ThinkPad product. It is easily the most successful brand in our client business globally of any vendor, the most successful brand we've ever launched, and this afternoon's breakout sessions, obviously, with our keynotes, are fantastic sessions. Make sure you actually attend all of those after this main arena here. Now, once again, listen, ask questions, and make sure you're giving us feedback. One of the things about Lenovo that we say all the time... There is no room for arrogance in our company. Every single person in this room is a customer, partner, analyst, or an employee. We love your feedback. It's only through your feedback that we continue to improve. And it's really important that through all of the sessions where the Q&As happen, breakouts afterwards, you're giving us feedback on what you want to see from us as an organization as we go forward. All right, so what were you doing 25 years ago? I spoke about ThinkPad being 25 years old, but let me ask you this. I bet you any money that no one here knew that our x86 business is also 25 years old. So, this year, we have both our ThinkPad and our x86 anniversaries for 25 years. Let me tell you. What were you guys doing 25 years ago? There's me, 25 years ago. It's a bit scary, isn't it? Very svelte and athletic and a lot lighter than I am today. It makes me feel a little bit self-conscious. And you can see the black and white shot. It shows you that even if you're really really short and you come from the wrong side of the tracks to make some extra cash, you can still do some modeling, as long as no one else is in the photo to give anyone any perspective, so very important. I think I might have got one photo shoot out of that, I don't know. I had to do it, I needed the money. Let me show you another couple of photos. Very interesting, how's this guy? How cool does he look? Very svelte and athletic. I think there's no doubt. He looks much much cooler than I do. Okay, so ladies and gentlemen, without further ado, it gives me great honor to obviously introduce our very very first guest to the stage. Ladies and gentlemen, our chairman and CEO, Yuanqing Yang, or as we like to call him, YY. A big round of applause, thank you. (upbeat techno instrumental) >> Good morning everyone. Thank you, Rod, for your introduction. Actually, I didn't think I was younger than you (mumbles). I can't think of another city more fitting to host the Transform event than New York. A city that has transformed from a humble trading post 400 years ago to one of the most vibrant cities in the world today. It is a perfect symbol of transformation of our world. The rapid and the deep transformations that have propelled us from the steam engine to the Internet era in just 200 years. Looking back at 200 years ago, there were only a few companies that operated on a global scale. The total value of the world's economy was around $188 billion U.S. dollars. That was only about $180 for each person on earth. Today, there are thousands of independent global companies that compete to sell everything, from corn and crude oil to servers and software. They drive a robust global economy worth over $75 trillion, or about $10,000 per person. Think about it. The global economy has multiplied roughly 400 times in just two centuries. What is even more remarkable is that the economy has almost doubled every 15 years since 1950.
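As a rough check of those per-person figures, here is a minimal sketch under assumed population numbers, roughly one billion people two centuries ago and about 7.5 billion today; the populations are assumptions, not values quoted in the talk.

```python
# Rough sanity check of the per-person economy figures quoted above.
# Population estimates are assumed, not from the talk.
world_gdp_1817 = 188e9       # ~$188 billion, as quoted
world_gdp_2017 = 75e12       # ~$75 trillion, as quoted
population_1817 = 1.0e9      # assumed ~1 billion people two centuries ago
population_2017 = 7.5e9      # assumed ~7.5 billion people today

print(world_gdp_1817 / population_1817)   # ~188   -> roughly $180 per person then
print(world_gdp_2017 / population_2017)   # ~10000 -> roughly $10,000 per person today
print(world_gdp_2017 / world_gdp_1817)    # ~399   -> roughly a 400x multiple over two centuries
```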
These are significant transformations for businesses, for the world, and for our tiny slice of the pie. This transformation is the result of the greatest advancement in technology in human history. Not one but three industrial revolutions have happened over the last 200 years. Even though those revolutions created remarkable change, they were just the beginning. Today, we are standing at the beginning of the fourth revolution. This revolution will transform how we work (mumbles) in ways that no one could imagine in the 18th century or even just 18 months ago. You are the people who will lead this revolution. Along with Lenovo, we will redefine IT. IT is no longer just information technology. It's intelligent technology, intelligent transformation. A transformation that is driven by big data, cloud computing, and artificial intelligence. Even the transition from the PC Internet to the mobile Internet was a big leap. Today, we are facing yet another big leap, from the mobile Internet to the Smart Internet, or intelligent Internet. In this Smart Internet era, the Cloud enables devices such as PCs, Smart phones, Smart speakers, and Smart TVs (mumbles) to provide the content and the services. But the evolution does not stop there. Ultimately, almost everything around us will become Smart, with built-in computing, storage, and networking capabilities. That's what we call the device plus Cloud transformation. These Smart devices, incorporated with various sensors, will continuously sense our environment and send data about our world to the Cloud. To process this ever-increasing big data and to support the delivery of Cloud content and services, the data center infrastructure is also transforming to be more agile, flexible, and intelligent. That's what we call the infrastructure plus Cloud transformation. But most importantly, it is human wisdom, the deep learning algorithms rigorously improved by engineers, that enables artificial intelligence to learn from big data and make everything around us smarter. With big data collected from Smart devices, the computing power of the new infrastructure, and the trained artificial intelligence, we can understand the world around us more accurately and make smarter decisions. We can make life better, work easier, and society safer and healthier. Think about what is already possible as we start this transformation. Smart Assistants can help you place orders online with a voice command. Driverless cars can run on the same road as traditional cars. (mumbles) can help troubleshoot customers' problems, and virtual doctors can already diagnose basic symptoms. This list goes on and on. Like every revolution before it, the intelligent transformation will fundamentally change the nature of business. Understanding and preparing for that will be the key for the growth and the success of your business. The first industrial revolution made it possible to maximize production. Water and steam power let us go from making things by hand to making them by machine. This transformed how fast things could be produced. It drove the quantity of merchandise made and led to a massive increase in trade. With this revolution, business scale expanded, and the number of customers exploded. Fifty years later, the second industrial revolution made it necessary to organize business like the modern enterprise. Electric power and telegraph communication made business faster and more complex, challenging businesses to become more efficient and to meet entirely new customer demands.
In our own lifetimes, we have witnessed the third industrial revolution, which made it possible to digitize the enterprise. The development of computers and the Internet accelerated business beyond human speed. Now, global businesses have to deal with customers at the end of a cable, not always a handshake. While we are still dealing with the effects of a digitizing business, the fourth revolution is already here. In just the past two or three years, the growth of data and advancement in visual intelligence has been astonishing. The computing power can now process the massive amount of data about your customers, suppliers, partners, competitors, and give you insights you simply could not imagine before. Artificial intelligence can not only tell you what your customers want today but also anticipate what they will need tomorrow. This is not just about making better business decisions or creating better customer relationships. It's about making the world a better place. Ultimately, can we build a new world without diseases, war, and poverty? The power of big data and artificial intelligence may be the revolutionary technology to make that possible. Revolutions don't happen on their own. Every industrial revolution has its leaders, its visionaries, and its heroes. The master transformers of their age. The first industrial revolution was led by mechanics who designed and built power systems, machines, and factories. The heroes of the second industrial revolution were the business managers who designed and built modern organizations. The heroes of the third revolution were the engineers who designed and built the circuits and the source code that digitized our world. The master transformers of the next revolution are actually you. You are the designers and the builders of the networks and the systems. You will bring the benefits of intelligence to every corner of your enterprise and make intelligence the central asset of your business. At Lenovo, data intelligence is embedded into everything we do. How we understand our customer's true needs and develop more desirable products. How we profile our customers and market to them precisely. How we use internal and external data to balance our supply and the demand. And how we train virtual agents to provide more effective sales services. So the decisions you make today about your IT investment will determine the quality of the decisions your enterprise will make tomorrow. So I challenge each of you to seize this opportunity to become a master transformer, to join Lenovo as we work together at the forefront of the fourth industrial revolution, as leaders of the intelligent transformation. (triumphant instrumental) Today, we are launching the largest portfolio in our data center history at Lenovo. We are fully committed to the (mumbles) transformation. Thank you. (audience applauds) >> Thanks YY. All right, ladies and gentlemen. Fantastic, so how about a big round of applause for YY. (audience applauds) Obviously a great speech on the transformation that we at Lenovo are taking as well as obviously wanting to journey with our partners and customers obviously on that same journey. What I heard from him was obviously artificial intelligence, how we're leveraging that integrally as well as externally and for our customers, and the investments we're making in the transformation around IoT machine learning, obviously big data, et cetera, and obviously the Data Center Group, which is one of the key things we've got to be talking about today. 
So we're on the cusp of that fourth revolution, as YY just mentioned, and Lenovo is definitely leading the way and investing in those parts of the industry and our portfolio to ensure we're complementing all of our customers and partners in what they want to be, obviously, as part of this new transformation we're seeing globally. Obviously now, ladies and gentlemen, without further ado once again, to tell us more about what's going on today, our announcements, obviously, that all of you will be reading about and seeing in the breakout and the demo sessions with our segment general managers this afternoon, is our president of the data center, Mr. Kirk Skaugen. (upbeat instrumental) >> Good morning, and let me add my welcome to Transform. I just crossed my six months here at Lenovo after over 24 years at Intel Corporation, and I can tell you, we've been really busy over the last six months, and I'm more excited and enthusiastic than ever and hope to share some of that with you today. Today's event is called "Transform", and today we're announcing major new transformations in Lenovo, in the data center, but more importantly, we're celebrating the business results that these platforms are going to have on society and, with international supercomputing going on in parallel in Frankfurt, some of the amazing scientific discoveries that are going to happen on some of these platforms. Lenovo has gone through some significant transformations in the last two years, since we acquired the IBM x86 business, and that's really positioning us for this next phase of growth, and we'll talk more about that later. Today, we're announcing the largest end-to-end data center portfolio in Lenovo's history, as you heard from YY, and we're really taking the best of the x86 heritage from our IBM acquisition of the x86 server business and combining that with the cost economics that we've delivered from kind of our China heritage. As we've talked to some of the analysts in the room, it's really the best of the east and the best of the west combining together in this announcement today. We're going to be announcing two new brands, building on our position as the number one x86 server vendor in both customer satisfaction and in reliability, and we're also celebrating, next month in July, a very significant milestone, which is that we'll be shipping our 20 millionth x86 server into the industry. For us, it's an amazing time, and it's an inflection point to kind of look back, pause, but also share the next phase of Lenovo and the exciting vision for the future. We're also making some declarations on our vision for the future today. Again, international supercomputing's going on, and, as it turns out, we're the fastest growing supercomputer company on earth. We'll talk about that. Our goal today that we're announcing is that we plan in the next several years to become number one in supercomputing, and we're going to put the investments behind that. We're also committing to our customers that we're going to disrupt the status quo and accelerate the pace of innovation, not just in our legacy server solutions, but also in Software-Defined, because what we've heard from you is that lack of legacy matters; we don't have a huge router business or a huge SAN business to protect. It's that lack of legacy that's enabling us to invest and get ahead of the curve on this next transition to Software-Defined.
So you're going to see us doing that through building our internal IP, through some significant joint ventures, and also through some mergers and acquisitions over the next several quarters. Altogether, we're driving to be the most trusted data center provider in the industry, between us and our customers and our suppliers. So a quick summary of what we're going to dive into today, both in my keynote as well as in the breakout sessions. We're in this transformation to the next phase of Lenovo's data center growth. We're closing out our previous transformation. We actually, believe it or not, in the last six months or so, have renegotiated 18,000 contracts in 160 countries. We built out an entire end-to-end organization from development and architecture all the way through sales and support. This next transformation, I think, is really going to excite Lenovo shareholders. We're building the largest data center portfolio in our history. I think when IBM was up here a couple of years ago, we might have had two or three servers to announce in time to market with the next Intel platform. Today, we're announcing 14 new servers, seven new storage systems, and an expanded set of networking portfolios based on our legacy with Blade Network Technologies and other companies we've acquired. Two new brands that we'll talk about for both data center infrastructure and Software-Defined, a new set of premiere services, as well as a set of engineered solutions that are going to help our customers get to market faster. We're going to be celebrating our 20 millionth x86 server and, as Rod said, 25 years in x86 server compute, and Christian will be up here talking about 25 years of ThinkPad as well. And then a new end-to-end segmentation model, because all of these strategies without execution are kind of meaningless. I hope to give you some confidence in the transformation that Lenovo has gone through as well. So, having observed Lenovo from one of its largest partners, Intel, for more than a couple decades, I thought I'd just start with why we have confidence in the foundation that we're building off of as we move from a PC company into a data center provider in a much more significant way. So Lenovo today is a company of $43 billion in sales. Absolutely astonishing, it puts us at about Fortune 202 as a company, with 52,000 employees around the world. We have a little over 10,000 service personnel who service our servers and data center technologies in over 160 countries, providing onsite service and support. We have seven data center research centers. One of the reasons I came from Intel to Lenovo was that I saw that Lenovo became number one in PCs, not through cost cutting but through innovation. It was Lenovo that was partnering on the next-generation Ultrabooks and two-in-ones and tablets and the Moto Mods that you saw, but fundamentally, our path to number one in data center is going to be built on innovation. Lastly, we're one of the last companies that's actually building our own motherboards at our own motherboard factories, and we also have five global data center manufacturing facilities.
Today, we build about four devices a second, but we also build over 100 servers per hour, and the cost economics we get, and I just visited our Shenzhen factory, of having everything from screws to microprocessors come up through the elevator on the first floor, go left to build PCs and ThinkPads and go right to build server technology, means we have some of the world's most cost effective solutions so we can compete in things like hyperscale computing. So it's with that that I think we're excited about the foundation that we can build off of on the Data Center Group. Today, as we stated, this event is about transformation, and today, I want to talk about three things we're going to transform. Number one is the customer experience. Number two is the data center and our customer base with Software-Defined infrastructure, and then the third is talk about how we plan to execute flawlessly with a new transformation that we've had internally at Lenovo. So let's dive into it. On customer experience, really, what does it mean to transform customer experience? Industry pundits say that if you're not constantly innovating, you can fall behind. Certainly the technology industry that we're in is transforming at record speed. 42% of business leaders or CIOs say that digital first is their top priority, but less than 50% actually admit that they have a strategy to get there. So people are looking for a partner to keep pace with that innovation and change, and that's really what we're driving to at Lenovo. So today we're announcing a set of plans to take another step function in customer experience, and building off of our number one position. Just recently, Gartner shows Lenovo as the number 24 supply chains of companies over $12 billion. We're up there with Amazon, Coca-Cola, and we've now completely re-architected our supply chain in the Data Center Group from end to end. Today, we can deliver 90% of our SKUs, order to ship in less than seven days. The artificial intelligence that YY mentioned is optimizing our performance even further. In services, as we talked about, we're now in 160 countries, supporting on-site support, 50 different call centers around the world for local language support, and we're today announcing a whole set of new premiere support services that I'll get into in a second. But we're building on what's already better than 90% customer satisfaction in this space. And then in development, for all the engineers out there, we started foundationally for this new set of products, talking about being number one in reliability and the lowest downtime of any x86 server vendor on the planet, and these systems today are architected to basically extend that leadership position. So let me tell you the realities of reliability. This is ITIC, it's a reliability report. 750 CIOs and IT managers from more than 20 countries, so North America, Europe, Asia, Australia, South America, Africa. This isn't anything that's paid for with sponsorship dollars. Lenovo has been number one for four years running on x86 reliability. This is the amount of downtime, four hours or more, in mission-critical environments from the leading x86 providers. You can see relative to our top two competitors that are ahead of us, HP and Dell, you can see from ITIC why we are building foundationally off of this, and why it's foundational to how we're developing these new platforms. In customer satisfaction, we are also rated number one in x86 server customer satisfaction. 
This year, we're now incentivizing every single Lenovo employee on customer satisfaction and customer experience. It's been a huge mandate from myself and most importantly YY as our CEO. So you may say well what is the basis of this number one in customer satisfaction, and it's not just being number one in one category, it's actually being number one in 21 of the 22 categories that TBR talks about. So whether it's performance, support systems, online product information, parts and availability replacement, Lenovo is number one in 21 of the 22 categories and number one for six consecutive studies going back to Q1 of 2015. So this, again, as we talk about the new product introductions, it's something that we absolutely want to build on, and we're humbled by it, and we want to continue to do better. So let's start now on the new products and talk about how we're going to transform the data center. So today, we are announcing two new product offerings. Think Agile and ThinkSystem. If you think about the 25 years of ThinkPad that Christian's going to talk about, Lenovo has a continuous learning culture. We're fearless innovators, we're risk takers, we continuously learn, but, most importantly, I think we're humble and we have some humility. That when we fail, we can fail fast, we learn, and we improve. That's really what drove ThinkPad to number one. It took about eight years from the acquisition of IBM's x86 PC business before Lenovo became number one, but it was that innovation, that listening and learning, and then improving. As you look at the 25 years of ThinkPad, there were some amazing successes, but there were also some amazing failures along the way, but each and every time we learned and made things better. So this year, as Rod said, we're not just celebrating 25 years of ThinkPad, but we're celebrating 25 years of x86 server development since the original IBM PC servers in 1992. It's a significant day for Lenovo. Today, we're excited to announce two new brands. ThinkSystem and ThinkAgile. It's an important new announcement that we started almost three years ago when we acquired the x86 server business. Why don't we run a video, and we'll show you a little bit about ThinkSystem and ThinkAgile. >> Narrator: The status quo is comfortable. It gets you by, but if you think that's good enough for your data center, think again. If adoption is becoming more complicated when it should be simpler, think again. If others are selling you technology that's best for them, not for you, think again. It's time for answers that win today and tomorrow. Agile, innovative, different. Because different is better. Different embraces change and makes adoption simple. Different designs itself around you. Using 25 years of innovation and design and R&D. Different transforms, it gives you ThinkSystem. World-record performance, most reliable, easy to integrate, scales faster. Different empowers you with ThinkAgile. It redefines the experience, giving you the speed of Cloud and the control of on-premise IT. Responding faster to what your business really needs. Different defines the future. Introducing Lenovo ThinkSystem and ThinkAgile. (exciting and slightly aggressive digital instrumental) >> All right, good stuff, huh? 
(audience applauds) So it's built off of this 25-year history of us being in the x86 server business, the commitment we established three years ago after acquiring the x86 server business to be and have the most reliable, the most agile, and the highest-performing data center solutions on the planet. So today we're announcing two brands. ThinkSystem is for the traditional data center infrastructure, and ThinkAgile is our brand for Software-Defined infrastructure. Again, the teams challenged themselves from the start: how do we build off this rich heritage, expanding our position as number one in customer satisfaction, reliability, and one of the world's best supply chains. So let's start and look at the next set of solutions. We have always prided ourselves that little things don't mean a lot. Little things mean everything. So today, as we said on the legacy solutions, we have over 30 world-record performance benchmarks on Intel architecture, and actually more than 150 since we started tracking this back in 2001. So it's the little pieces of innovation. It's the fine tuning that we do with our partners like an Intel or a Microsoft, an SAP, VMware, and Nutanix that's enabling us to get these world-record performance benchmarks, and with this next generation of solutions we think we'll certainly continue to do that. So today we're announcing the most comprehensive portfolio ever in our data center history. There are 14 servers, seven storage devices, and five network switches. We're also announcing, which is super important to our customer base, a set of new premiere service options. That's giving you fast access directly to a level two support person. No automated response system involved. You get to pick up the phone and directly talk to a level two support person who's going to have end-to-end ownership of the customer experience for ThinkSystem. With ThinkAgile, that's going to be completely bundled with every ThinkAgile you purchase. In addition, we're offering white glove service on site that will actually unbox the product for you and get it up and running. It's an entirely new set of solutions for hybrid Cloud, for big data analytics and database applications around these engineered solutions. These are like 40- to 50-page guides where we fine-tuned the most important applications, around virtual desktop infrastructure and those kinds of applications, working side by side with all of our ISV partners. So we're significantly expanding not just the hardware but the software solutions that, obviously, you, as our customers, are running. So if you look at ThinkSystem innovation, again, it was designed for the ultimate in flexibility, performance, and reliability. It's a single, now-unified brand that combines what used to be the Lenovo ThinkServer and the IBM System x products into a single brand that spans server, storage, and networking. We're basically future-proofing it for the next-generation data center. It's a significantly simplified portfolio. One of the big pieces of feedback we've heard is that the complexity of our competitors has really been overwhelming to customers. We're building a more flexible, more agile solution set that requires less work, less qualification, and more future proofing. There's a bunch of things in this that you'll see in the demos. Faster time-to-service in terms of the modularity of the systems. 12% faster service equating to almost $50 thousand per hour of reduced downtime.
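Read as a rate times a cost, that last downtime claim is easy to reconstruct; here is a purely hypothetical illustration, assuming mission-critical downtime costs on the order of $400,000 per hour, an assumed figure that does not come from the presentation.

```python
# Hypothetical reading of "12% faster service equating to almost $50K per hour".
# The hourly downtime cost below is an assumption, not a Lenovo figure.
downtime_cost_per_hour = 400_000    # assumed cost of a mission-critical outage, $/hour
service_speedup = 0.12              # 12% faster service, as quoted

savings_per_hour = downtime_cost_per_hour * service_speedup
print(savings_per_hour)             # 48000.0 -> "almost $50 thousand per hour"
```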
Some new high-density options where we have four nodes in a 2U, twice the density, to improve efficiency and reduce outages in mission-critical workloads. And then in high-performance computing and supercomputing, we're going to spend some time on that here shortly. We're announcing new water-cooled solutions. We have some of the most premiere water-cooled solutions in the world, with more than 25 patents pending now just in the water-cooled solutions for supercomputing. The performance that we think we're going to see out of these systems is significant. We're building off of the legacy that we have today on the existing Intel solutions. Today, we believe we have more than 50% of SAP HANA installations in the world. In fact, SAP just went public that they're running their internal SAP HANA on Lenovo hardware now. We're seeing a 59% increase in performance on SAP HANA generation on generation. We're seeing 31% lower total cost of ownership. We believe this will continue our position of having the highest level of five-nines availability in the x86 server industry. And all of these servers will start being available later this summer when the Intel announcements come out. We're also announcing the largest storage portfolio in our history, significantly larger than anything we've done in the past. These are all available today, including some new value-class storage offerings. Our network portfolio is expanding significantly now. It was a big surprise when I came to Lenovo, seeing the hundreds of engineers we had from the acquisition of Blade Network Technologies and others, with our teams in Romania and Santa Clara, really building out both the embedded portfolio but also the top-of-rack switches, which are around 10 gig, 25 gig, and 100 gig. Significantly better economics, but all the performance you'd expect from the largest networking companies in the world. Those are also available today. ThinkAgile and Software-Defined: I think the one thing that has kind of overwhelmed me since coming in to Lenovo is that we are being embraced by our customers because of our lack of legacy. We're not trying to sell you one more legacy SAN at 65% margins. ThinkAgile really was founded, kind of born, free from the shackles of legacy thinking and legacy infrastructure. This is just the beginning of what's going to be an amazing new brand in the transformation to Software-Defined. So, for Lenovo, we're going to invest in our own internal organic IP. I'll foreshadow: there are some significant joint ventures and some mergers and acquisitions that are going to be coming in this space. And so this will be the foundation for our Software-Defined networking and storage, for IoT, and ultimately for the 5G build-out as well. This is all built for data centers of tomorrow that require fluid resources, tightly integrated software and hardware in kind of an appliance, selling at the rack level, and so we'll show you how that is going to take place here in a second. ThinkAgile, we have a few different offerings. One is around hyperconverged storage, Hybrid Cloud, and also Software-Defined storage. So we're really trying to redefine the customer experience. There are two different solutions we're offering today. It's a Microsoft Azure solution and a Nutanix solution. These are going to be available both in the appliance space as well as in a full rack solution. We're really simplifying and trying to transform the entire customer experience, from how you order it.
We've got new capacity planning tools; it used to take literally days for us to get the capacity planning done, and it's now going down to literally minutes. We've got new order, delivery, deployment, and administration services, something we're calling ThinkAgile Advantage, which is the white glove unboxing of the actual solutions on prem. So you'll hear the whole thing in the breakout sessions about transforming the entire customer experience with both an HX solution and an SX solution. So again, available at the rack level for both Nutanix and for Microsoft solutions, available in just a few months. Many of you in the audience, since the Microsoft Airlift event in Seattle, have started using these things, and the feedback to date has been fantastic. We appreciate the early customer adoption that we've seen from people in the audience here. So next I want to bring up one of our most important partners, and certainly if you look at all of these solutions, they're based on the next-generation Intel Xeon scalable processor that's going to be announced very very soon. I want to bring on stage Rupal Shah, who's the corporate vice president and general manager of Global Data Center Sales with Intel, so Rupal, please join me. (upbeat instrumental) So certainly I have long roots at Intel, but why don't you talk about, from Intel's perspective, why Lenovo is an important partner for Intel. >> Great, well first of all, thank you very much. I've had the distinct pleasure of not only working with Kirk for many many years, but also working with Lenovo for many years, so it's great to be here. Lenovo is not only a fantastic supplier and leader in the industry for Intel-based servers but also a very active partner in the Intel ecosystem. In the Intel ecosystem, specifically, in our partner programs and in our builder programs around Cloud, around the network, and around storage, I personally have had a long history in working with Lenovo, and I've seen personally that PC transformation that you talked about, Kirk, and I believe, and I know that Intel believes, in Lenovo's ability to not only succeed in the data center but to actually lead in the data center. And so today, the ThinkSystem and ThinkAgile announcement is just so incredibly important. It's such a great testament to our two companies working together, and the innovation that we're able to bring to the market, and all of it based on the Intel Xeon scalable processor. >> Excellent, so tell me a little bit about why we've been collaborating, tell me a little bit about why you're excited about ThinkSystem and ThinkAgile, specifically. >> Well, there are a lot of reasons that I'm excited about the innovation, but let me talk about a few. First, both of our companies really stand behind the fact that it's increasingly a hybrid world. Our two companies offer a range of solutions now to customers to be able to address their different workload needs. ThinkSystem really brings the best, right? It brings incredible performance, flexibility in data center deployment, and the industry-leading reliability that you've talked about. And, as always, Xeon has a history of being built for the data center specifically. The Intel Xeon scalable processor is really re-architected from the ground up in order to enhance compute, network, and storage data flows so that we can deliver workload-optimized performance for both a wide range of traditional workloads and traditional needs but also some emerging new needs in areas like artificial intelligence.
Second, when it comes to the next generation of Cloud infrastructure, the new Lenovo ThinkAgile line offers a truly integrated offering to address data center pain points, and so not only are you able to get these pretested solutions, but these pretested solutions are going to get deployed in your infrastructure faster, and they're going to be deployed in a way that's going to meet your specific needs. This is something that is new for both of us, and it's an incredible innovation in the marketplace. I think that it's a great addition to what is already a fantastic portfolio for Lenovo. >> Excellent. >> Finally, there's high-performance computing. In high-performance computing, first of all, congratulations. It's a big week, I think, for both of us. Fantastic work that we've been doing together in high-performance computing, actually bringing the best of the best to our customers, and you're going to hear a whole lot more about that. We obviously have a number of joint innovation centers together between Intel and Lenovo. Tell us about some of the key innovations that you guys are excited about. >> Well, Intel and Lenovo, we do have joint innovation labs around the world, and we have a long and strong history of very tight collaboration. This has brought a big wave of innovation to the marketplace in areas like software-defined infrastructure. Yet another area is working closely on a joint vision that I think our two companies have in artificial intelligence. Intel is very committed to the world of AI, and we're committed to making the investments required in technology development, in training, and also in R&D to be able to deliver end-to-end solutions. So with Intel's comprehensive technology portfolio and Lenovo's development and innovation expertise, it's a great combination in this space. I've already talked a little bit about HPC and so has Kirk, and we're going to hear a little bit more to come, but we're really building the fastest compute solutions for customers that are solving big problems. Finally, we often talk about processors from Intel, but it's not just about the processors. It's way beyond that. It's about engaging at the solution level for our customers, and I'm so excited about the work that we've done together with Lenovo to bring to market products like Intel Omni-Path Architecture, which is really the fabric for high-performance data centers. We've got a great showing this week with Intel Omni-Path Architecture, and I'm so grateful for all the work that we've done to be able to bring true solutions to the marketplace. I am really looking forward to our future collaboration with Lenovo, just as we have collaborated in the past. I want to thank you again for inviting me here today, and congratulations on a fantastic launch. >> Thank you, Rupal, very much, for the long partnership. >> Thank you. (audience applauds) >> Okay, well now let's transition and talk a little bit about how Lenovo is transforming. The first thing we did when I came on board about six months ago was transform to a truly end-to-end organization. We're looking at the market segments, I think, as our customers define them, and we've organized into having vice presidents and senior vice presidents in charge of each of these major groups, thinking really end to end, from architecture all the way to end of life and customer support. So the first is hyperscale infrastructure. It's about 20% of the market by 2020. We've hired a new vice president there to run that business.
Given that we can make money in high-volume desktop PCs, it's really the manufacturing prowess and deep engineering collaboration that's enabling us to sell into Baidu, Alibaba, and Tencent, as well as the largest Cloud vendors on the West Coast here in the United States. We believe we can make money here by having basically a deep, deep engineering engagement with our key customers and building on the PC volume economics that we have within Lenovo. On software-defined infrastructure, again, it's that lack of legacy that I think is propelling us into this space. We're not encumbered by trying to sell one more legacy SAN or router, and that's really what's exciting us here, as we transform from a hardware to a software-based company. On HPC and AI, as we said, we'll talk about this in a second. We're the fastest-growing supercomputing company on earth. We have aspirations to be the largest supercomputing company on earth. With China and the U.S. vying for number one in that position, it puts us in a good position there. We're going to bridge that into artificial intelligence in our upcoming Shanghai Tech World. The entire day is around AI. In fact, YY has committed $1.2 billion to artificial intelligence R&D over the next few years to help us bridge that. And then data center infrastructure is really about moving to a solutions-based infrastructure, like our position with SAP HANA, where we've gone deep with engineers on site at SAP, SAP running their own infrastructure on Lenovo, and building that out beyond just SAP to other solutions in the marketplace. Overall, we're significantly expanding our services portfolio to maintain our number one customer satisfaction rating. So given ISC, or International Supercomputing, this week in Frankfurt, and a lot of my team are actually over there, I wanted to just show you the transformation we've had at Lenovo for delivering some of the technology to solve some of the most challenging humanitarian problems on earth. Today, we are the fastest-growing supercomputer company on the planet in terms of number of systems on the Top 500 list. We've gone from zero to 92 positions in just a few short years, but IDC also positions Lenovo as the fastest-growing supercomputer and HPC company overall, at about 17% year-on-year growth, including all of the broad channel, the regional universities, and this kind of thing, so this is an exciting place for us. I'm excited today that Sergi has come all the way from Spain to be with us today. It's an exciting time because this week we announced the fastest next-generation Intel supercomputer on the planet at the Barcelona Supercomputing Center. Before I bring Sergi on stage, let's run a video and I'll show you why we're excited about the capabilities of these next-generation supercomputers. Run the video please. >> Narrator: Different creates one of the most powerful supercomputers for the Barcelona Supercomputing Center. A high-performance, high-capacity design to help shape tomorrow's world. Different designs what's best for you, with 25 years of end-to-end expertise delivering large-scale solutions. It integrates easily with technology from industry partners, through deep collaboration with the client, to manufacture, test, configure, and install at global scale. Different achieves the impossible. The first of a new series. A more energy-efficient supercomputer, yet 10 times more powerful than its predecessor.
With over 3,400 Lenovo ThinkSystem servers, each performing over two trillion calculations per second, giving us 11.1 petaflops of capacity. Different powers MareNostrum, a supercomputer that will help us better understand cancer, help discover disease-fighting therapies, and predict the impact of climate change. MareNostrum 4.0 promises to uncover answers that will help solve humanity's greatest challenges. (audience applauds) >> So please help me in welcoming the operations director of the Barcelona Supercomputing Center, Sergi Girona. So welcome, and again, congratulations. It's been a big week for both of us. But I think for a long time, if you haven't been to Barcelona, this has been called the world's most beautiful computer, because it's in one of the most gorgeous chapels in the world, as you can see here. Congratulations, we are now number 13 on the Top500 list and the fastest next-generation Intel computer. >> Thank you very much, and congratulations to you as well. >> So maybe we can just talk a little bit about what you've done over the last few months with us. >> Sure, thank you very much. It is a pleasure to be invited here to present to you what we've been doing with Lenovo so far and what we are planning to do in the near future. I am representing here the Barcelona Supercomputing Center. We provide high-performance computing services to science and industry. We have seen these services change the paradigm of science. We are coming from observation, from observation on the telescopes and the microscopes and the building of infrastructures, but this is not affordable anymore. This is very expensive, so it's not possible, so we need to move to simulations. So we need to understand what's happening in our environment. We need to predict behaviors only by going through simulation. So, at BSC, we are devoted to providing services to industry and to science, but we are also doing our own research because we want to understand. At the same time, we are helping to develop the new engineers of the future in IT, in HPC. So we have four departments based on different topics. The main and big one is willing to understand how we are doing the next supercomputers, from the programming level to the performance to the EIA, so all these things, but we also have an interest in the climate change, in the air quality that we are having in our cities, in the precision medicine we need to have, in how we can see that different drugs are better for different individuals, for different humans, and of course we have an energy department, taking care of understanding what is the better optimization for a code, how we can save energy running simulations on different topics. But, of course, the topic of today is not my research, but the systems we are building in Barcelona. So this is what we have been building in Barcelona so far. From left to right, you have the preparation of the facility, because this is 160 square meters with 1.4 megawatts, so that means we need new piping, we need new electricity. At the same time, in the center, we have to install the core services of the system, so the management practices, and then on the right-hand side you have the installation of the networking, the Omni-Path by Intel, because all of the new racks have to be fully integrated and they need to come into operation rapidly. So we started deployment of the system May 15, and we are now finishing and coming into production July first.
All the systems, all the (mumbles) systems from Lenovo, are coming before being open and available. What we've been installing here in Barcelona is a general-purpose system for our general workload, with 3,456 nodes. Every one of those having 48 cores and 96 gigabytes of main memory, for a total capacity of about 400 terabytes of memory. The objective of this is that we want all the system, all the processors, to work together for a single execution, running all together, so this is an example of the Platinum processors from Intel, having 24 cores each. Of course, for doing this together, with all the cores in the same application, we need a high-speed network, so this is Omni-Path, and of course all these cables are connecting all the nodes. Non-contention, working together, cooperating. Of course, this is a bunch of cables. They need to be properly aligned in switches. So here you have the complete presentation. Of course, this is general purpose, but we wanted to invest with our partners. We want to understand what supercomputers we want to install in 2020, (mumbles) Exascale. To find out, we are installing as well systems with different capacities, with KNH, with POWER, with ARM processors. We want to leverage our applications for the future. We want to make sure that in 2020 we are ready to move our users rapidly to the new technologies. Of course, this in total is giving us a capacity of 13.7 petaflops, which is 12 times the capacity of the former MareNostrum from four years ago. We need to provide the services to our scientists because they are helping to solve problems for humanity. That's the place we are going to go. Lastly, I invite you to come to Barcelona to see our place and our chapel. Thank you very much (audience applauds). >> Thank you. So now you can all go home to your spouses and significant others and say you have a formal invitation to Barcelona, Spain. So last, I want to talk about what we've done to transform Lenovo. I think we all know the history is nice, but without execution, none of this is going to be possible going forward, so we have been very very busy over the last six months to a year transforming Lenovo's data center organization. First, we moved to a dedicated end-to-end sales and marketing organization. In the past, we had people that were shared between PC and data center; now thousands of sales people around the world are 100% dedicated end to end to our data center clients. We've moved to a fully integrated and dedicated supply chain and procurement organization. A fully dedicated quality organization, 100% dedicated to expanding our data center success. We've moved to customer-centric segments, again, bringing in significant new leaders from outside the company to look end to end at each of these segments, supercomputing being very very different than small business, being very very different than taking care of, for example, a large retailer or bank. So around hyperscale, software-defined infrastructure, HPC, AI, and supercomputing, and data center solutions-led infrastructure. We've built out a whole new set of global channel programs. Last year, or a year ago, we had five different channel programs around the world. We've now got one simplified channel program for deal registration. I think our channel is very very energized to go out to market with Lenovo technology across the board, and a whole new set of system integrator relationships.
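The figures Sergi gives for the general-purpose block multiply out consistently with the 11.1 petaflop and roughly 400 terabyte numbers quoted earlier; here is a minimal back-of-envelope sketch, assuming two Xeon Platinum 8160-class sockets per node at a 2.1 GHz base clock with 32 double-precision FLOPs per core per cycle. Those clock and per-core throughput details are assumptions, not numbers stated in the keynote.

```python
# Back-of-envelope check of the MareNostrum 4 general-purpose block described above.
# Clock and per-core FLOP rate are assumptions (typical Xeon Platinum 8160 figures),
# not values given in the talk.
nodes = 3456
sockets_per_node = 2                 # assumed dual-socket nodes (48 cores / 24 per socket)
cores_per_socket = 24                # "having 24 cores each", as stated
mem_per_node_gb = 96                 # as stated

base_clock_ghz = 2.1                 # assumed base clock
flops_per_core_per_cycle = 32        # assumed AVX-512: 2 FMA units x 8 doubles x 2 ops

total_cores = nodes * sockets_per_node * cores_per_socket
total_mem_tb = nodes * mem_per_node_gb / 1024
node_tflops = sockets_per_node * cores_per_socket * base_clock_ghz * flops_per_core_per_cycle / 1000
system_pflops = nodes * node_tflops / 1000

print(total_cores)                   # 165888 cores
print(round(total_mem_tb))           # ~324 TB from these nodes alone; the quoted ~400 TB
                                     # presumably also counts larger-memory nodes (assumption)
print(round(node_tflops, 1))         # ~3.2 TFLOPS per node ("over two trillion calculations per second")
print(round(system_pflops, 1))       # ~11.1 PFLOPS for the general-purpose block
```

Under those assumptions, the remaining gap up to the 13.7 petaflop total would come from the emerging-technology clusters Sergi mentions, which is also consistent with his "12 times the former MareNostrum" comparison.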
You're going to hear from one of them in Christian's discussion, but a whole new set of partnerships to build solutions together with our system integrator partners. And, again, as I mentioned, a brand new leadership team. So I look forward to talking about the details of this. There's been a significant amount of transformation internal to Lenovo that's led to the success of this new product introduction today. So in conclusion, I want to talk about the news of the day. We are transforming Lenovo to the next phase of our data center growth. Again, in over 160 countries, closing on that first phase of transformation and moving forward with some unique declarations. We're launching the largest portfolio in our history, not just in servers but in storage and networking, as everything becomes kind of a software personality on top of x86 compute. We think we're very well positioned with our scale in PCs as well as data center. Two new brands for both data center infrastructure and Software-Defined, without the legacy shackles of our competitors, enabling us to move very very quickly into Software-Defined, and, again, foreshadowing some joint ventures and M&A that are going to be coming up that will further accelerate us there. New premiere support offerings, enabling you to get direct access to level two engineers and white glove unboxing services, which are going to be bundled along with ThinkAgile. And then celebrating the milestone of 25 years in x86 server compute, not just the ThinkPads that you'll hear about shortly, but also our 20 millionth server shipping next month. So we're celebrating that legacy and looking forward to the next phase. And then making sure we have the execution engine to maintain our position and grow it, being number one in customer satisfaction and number one in quality. So, with that, thank you very much. I look forward to seeing you in the breakouts today and talking with many of you, and I'll bring Rod back up to transition us to the next section. Thank you. (audience applauds) >> All right, Kirk, thank you, sir. All right, ladies and gentlemen, what did you think of that? How about a big round of applause for the new ThinkAgile and ThinkSystem brands? (audience applauds) And, obviously, with that comes a big round of applause for Kirk Skaugen, my boss, so we've got to give him a big round of applause, please. I need to stay employed, it's very important. All right, now you just heard from Kirk about some of the new systems, the brands. How about we have a quick look at the video, which shows us the brand new DCG images. >> Narrator: Legacy thinking is dead, stuck in the past, selling the same old stuff, over and over. So then why does it seem like a data center, you know, that thing powering all our little devices and more or less every interaction today, is still stuck in legacy thinking? Because it's rigid, inflexible, slow, but that's not us. We don't do legacy. We do different. Because different is fearless. Different reduces Cloud deployment from days to hours. Different creates agile technology that others follow. Different is fluid. It uses water-cooling technology to save energy. It co-innovates with some of the best minds in the industry today. Different is better, smarter. Maybe that's why different already holds so many world-record benchmarks in everything. From virtualization to database and application performance, or why it's number one in reliability and customer satisfaction. Legacy sells you what they want.
Different builds the data center you need without locking you in. Introducing the Data Center Group at Lenovo. Different... Is better. >> All right, ladies and gentlemen, a big round of applause, once again (mumbles) DCG, fantastic. And I'm sure all of you would agree, and Kirk mentioned it a couple of times there. No legacy means a real consultative approach to our customers, and that's something that we really feel is differentiated for ourselves. We are effectively now one of the largest startups in the DCG space, and we are very much ready to disrupt. Now, here in New York City, obviously, the heart of the fashion industry, and much like fashion, as I mentioned earlier, we're different, we're disruptive, we're agile, smarter, and faster. I'd like to say that about myself, but, unfortunately, I can't. But those of you who have observed, you may have noticed that I, too, have transformed. I don't know if anyone saw that. I've transformed from the pinstripe blue, white shirt, red tie look of the, shall we say, our predecessors who owned the x86 business to now a very Lenovo look. No tie and consequently a little bit more chic New York sort of fashion look, shall I say. Nothing more than that. So anyway, a bit of a transformation. It takes a lot to get to this look, by the way. It's a lot of effort. Our next speaker, Christian Teismann, is going to talk a lot about the core business of Lenovo, which really has been, as we've mentioned today, our ThinkPad, 25-year anniversary this year. It's going to be a great celebration inside Lenovo, and as we get through the year and we get closer and closer to the day, you'll see a lot more social and digital work that engages our customers, partners, analysts, et cetera, when we get close to that birthday. Customers just generally are a lot tougher on computers. We know they are. Whether you hang onto it between meetings from the corner of the Notebook, and that's why we have magnesium chassis inside the box or whether you're just dropping it or hypothetically doing anything else like that. We do a lot of robust testing on these products, and that's why it's the number one branded Notebook in the world. So Christian talks a lot about this, but I thought instead of having him talk, I might just do a little impromptu jump back stage and I'll show you exactly what I'm talking about. So follow me for a second. I'm going to jaunt this way. I know a lot of you would have seen, obviously, the front of house here, what we call the front of house. Lots of videos, et cetera, but I don't think many of you would have seen the back of house here, so I'm going to jump through the back here. Hang on one second. You'll see us when we get here. Okay, let's see what's going on back stage right now. You can see one of the team here in the back stage is obviously working on their keyboard. Fantastic, let me tell you, this is one of the key value props of this product, obviously still working, lots of coffee all over it, spill-proof keyboard, one of the key value propositions and why this is the number one laptop brand in the world. Congratulations there, well done for that. Obviously, we test these things. Height, distances, Mil-SPEC approved, once again, fantastic product, pick that up, lovely. Absolutely resistant to any height or drops, once again, in line with our Mil-SPEC. This is Charles, our producer and director back stage for the absolute event. 
You can see, once again, sand, coincidentally, in Manhattan, who would have thought a sand storm was occurring here, but you can throw sand. We test these things for all of the elements. I've obviously been pretty keen on our development solutions, having lived in Japan for 12 years. We had this originally designed in 1992 by (mumbles), and he's still our chief development officer today, fantastic, congratulations, a sand-enhanced notebook, he'd love that. All right, let's get back out front and on with the show. Watch the coffee. All right, how was that? Not too bad (laughs). It wasn't very impromptu at all, was it? Not at all a set up (giggles). How many people have events and have a bag of sand sitting on the floor right next to a Notebook? I don't know. All right, now it's time, obviously, to introduce our next speaker, ladies and gentlemen, and I hope I didn't steal his thunder, obviously, in my conversations just now that you saw back stage. He's one of my best friends in Lenovo and is easily a great representative of our legendary PC products and solutions that we're putting together for all of our customers right now. Having been an expat with Lenovo in New York, he really calls this his second home and is continually fighting with me over the fact that he believes New York has better sushi than Tokyo. Let's welcome, please, Christian Teismann, our SVP, Commercial Business Segment and PC Smart Office. Christian Teismann, come on up, mate. (audience applauds) >> So Rod, thank you very much for this wonderful introduction. I'm not sure how much there is to add to what you have seen already back stage, but I think there are 25 years of history I will touch on a little bit, but also a very big transformation. But first of all, welcome to New York. As Rod said, it's my second home, but it's also a very important place for the ThinkPad, and I will come back to this later. The ThinkPad is the industry standard of business computing. It's an industry icon. We are celebrating 25 years this year like no other PC brand has done before. But the story today is not only looking back. It's a story looking forward, about the future of the PC, and we see a transformation from PCs to personalized computing. I am privileged to lead the commercial PC and Smart device business for Lenovo, but, much more important, beyond product I am also responsible for customer experience. And this is what really matters on an ongoing basis. But allow me to stay a little bit longer with our iconic ThinkPad and the history of the last 25 years. ThinkPad has always stood for two things, and it always will: the highest quality in the industry and technology innovation leadership that matters. That matters for you and that matters for your end users. So, now let me step back a little bit in time. As Rod was showing you, as only Rod can do, reliability is a very important part of the ThinkPad story. ThinkPads have been used everywhere and done everything. They have survived fires and extreme weather, and they keep surviving your end users. For 25 years, they have been built for real business. ThinkPad also has a legacy of innovation firsts. There are so many firsts over the last 25 years, we could spend an hour talking about them. But I just want to cover a couple of the most important milestones. First of all, the ThinkPad was developed and invented in Japan in 1992, based on the design of a Bento box. It was designed by the famous industrial designer, Richard Sapper.
Did you also know that the ThinkPad was the first commercial Notebook to fly into space? In '93, we traveled on the space shuttle for the first time. For two decades, ThinkPads were on every single mission. Did you know that the ThinkPad Butterfly, the iconic ThinkPad that expands its keyboard to full size as it opens, is the first and only computer showcased in the permanent collection of the Museum of Modern Art, right here in New York City? Ten years later, in 2005, IBM passed the torch to Lenovo, and the story got even better. Over the last 12 years, we sold over 100 million ThinkPads, four times the amount IBM sold in the same time. Many customers were concerned at that time, but since then, the ThinkPad has remained the best business Notebook in the industry, with even better quality, but most important, we kept innovating. In 2012, we unveiled the X1 Carbon. It was the thinnest, lightest, and still most robust business PC in the world. Using advanced composite materials, like a Formula One car, for super strength, the X1 Carbon has become our ThinkPad flagship since then. We've added an X1 Yoga, a 360-degree convertible; an X1 Tablet, a detachable; and many new products to come in the future. Over the last few years, many new firsts have been focused on providing the best end-user experience. The first dual-screen mobile workstation. The first Windows business tablet, and the first business PC with OLED screen technology. History is important, but a massive transformation is on the way. Future success requires us to think beyond the box. Think beyond hardware, think beyond notebooks and desktops, and think about the future of personalized computing. Now, why is this happening? Well, because the business world is rapidly changing. Looking back on the history that YY gave, the acceleration of innovation and how it changes our everyday life, in business and in our personal lives, is driving a massive change in our industry as well. Most important, because you are changing faster than ever before. Human capital is your most important asset. Today's generation wants to have freedom of choice. They want to have a product that is tailored to their specific needs, every single day, every single minute, when they use it. But IT is also changing. The Cloud, constant connectivity, 5G will change everything. Artificial intelligence is adding capabilities to the infrastructure that we are just starting to imagine. Let me talk about the workforce first because it's the most important part of what drives this. The millennials will comprise more than half of the world's workforce in 2020, three years from now. Already, one out of three millennials is prioritizing a mobile work environment over salary, and for nearly 60% of all new hires in the United States, technology is a very important factor in their job search in terms of the way they work and the way they are empowered. This new generation of employees has grown up with PCs, with Smart phones, with tablets, with touch, for their personal use and for their professional use. They want freedom. Second, the workplace is transforming. The video you see here in the background is our North America headquarters in Raleigh, where we have a brand new Smart workspace. We have transformed this to attract the new generation of workers. It has fewer traditional workspaces and many more meeting and collaborative spaces, and Lenovo, like many companies, is seeing workspaces getting smaller.
The average workspace per employee has decreased by 30% over the last five years. Employees are increasingly mobile, but, if they come to the office, they want to collaborate with their colleagues. The way we collaborate and communicate is changing. Investment in new collaboration technology is exploding. The market for collaboration technology already exceeds the market for personal computing today, and it will keep growing. Conference rooms are being re-imagined from a ratio of 50 employees to one large conference room. Today, we are moving into scenarios of four employees to one conference room, and these are huddle rooms, pioneering spaces. Technology is everywhere. Video, mega-screens, audio, electronic whiteboards. Adaptive technologies are popping up and changing the way we work. As YY said earlier, the pace of the revolution is astonishing. So personalized computing will transform the PC we all know. There are a couple of key factors that we are integrating into our next generations of PCs as we go forward. These are the most important trends that we see. First of all, choose your own device. We talked about this new generation of the workforce, employees who are used to choosing their own device. We have to respond and offer devices that are tailored to each end user's needs without adding complexity to how we operate them. PC as a service. Corporations increasingly are looking for on-demand computing in the data center as well as in personal computing. Customers want flexibility: a tailored management solution and a services portfolio that covers the complete lifecycle of the device. Agile IT. Even more important, corporations want to run an infrastructure that is agile, that responds instantly to their end customers' needs, that is self-provisioning, self-diagnosing, and capable of remote software repair. Artificial intelligence. Think about artificial intelligence for you personally as your personal assistant. A personal assistant that understands you, your schedule, your travel, your next task, an extension of yourself. We believe the PC will be the center of this mobile device universe. Mobile device synergy. Each of you has two or more devices with you. They need to work together across different operating systems, across different platforms. We believe Lenovo is uniquely positioned as the only company that has a Smart phone business, a PC business, and an infrastructure business to really seamlessly integrate all of these devices for simplicity and for efficiency. Augmented reality. We believe augmented reality will drive significant productivity improvements in commercial business. The core will be understanding industry-specific solutions: new processes, new business challenges, improving things like customer service and sales. Security will remain the foundation for personalized computing. Without security, without trust in device integrity, this will not happen. One of the most important trends, I believe, is that the PC will transform to be always connected and always on, like a Smart phone. Regardless of whether it's open, closed, being carried, or being worked with, it is always capable of responding to you and working with you. 5G is becoming a reality, and the data capacity that will be out there far exceeds anything we can imagine with today's traffic. Finally, Smart Office: delivering flexible and collaborative work environments regardless of where the worker sits, fully integrated and leveraging all the technologies we just talked about.
These are the main challenges you and all of your CIO and CTO colleagues have to face today. A changing workforce and a new set of technologies that are transforming the PC into personalized computing. Let me give you a real example of a challenge. DXC was just formed by merging CSC and HPE's Enterprise Services business, creating the largest independent IT services company in the world. DXC is now a $25 billion IT services leader with more than 170,000 employees, its most important capital, 6,000 clients, and eight million managed devices. I'd like to welcome their CIO, who has one of the most challenging workforce transformations in front of him. Erich Windmuller, please give him a round of applause. (audience applauds) >> Thank you, Christian. >> Thank you. >> It's my pleasure to be here, thank you. >> So first of all, let me congratulate you at this very special time, on forming a new multi-billion-dollar enterprise, this new venture. I think it has so far been fantastically received by analysts, by the press, and by customers, and we are delighted to be one of your strategic partners, and clearly we are collaborating on workforce transformation between our two companies. But let me ask you a couple of more personal questions. So by bringing these two companies together, with nearly 200,000 employees, what are the first actions you are taking to make this a success, and what are your biggest challenges? >> Well, first, again, let me thank you for inviting me and DXC Technology to be a part of this very very special event with Lenovo, so thank you. As many of you might expect, it's been a bit of a challenge over the past several months. My goal was really very simple. It was to make sure that we brought two companies together and they could operate as one. We needed to make sure that we could continue to support our clients. We certainly needed to make sure we could continue to sell, that our sellers could sell. That we could pay our employees, that we could hire people, that we could do all the basic foundational things that you might expect a company would want to do, but we really focused on three simple areas. I called it the three Cs: connectivity, communication, and collaboration. So we wanted to make sure that we connected our legacy data centers so we could transfer information and communicate back and forth. We certainly wanted to be sure that our employees could communicate via WiFi, whatever locations they may or may not go to. When we talk about communication, we needed to be sure that every one of our employees could send and receive email as a DXC employee, and that we had a single enterprise directory so people could communicate and gain access to calendars across each of the two legacy companies. And then collaboration was also key. So we wanted to be sure, again, that people could communicate with each other, that our legacy employees on either side could get access to many of their legacy systems, and, again, that we could collaborate together as a single corporation. So it was challenging, but a very very great opportunity for all of us. And, certainly, as you might expect, cyber security was a very very important topic. My chairman challenged me that we had to be at least as good as we were before from a cyber perspective, and when you bring two large companies together like that, there's clearly an opportunity in this disruptive world, so we wanted to be sure that we had a very very strong cyber security posture, and Lenovo has been very very helpful in our achieving that.
>> Thank you, Erich. So what does DXC consider its critical solutions and technologies for workplace transformation, both internally as well as out on the market? >> So workplace transformation, and, again, I've heard a lot of the same kinds of words that I would espouse... It's all about making our employees productive. It's giving them the right tools to do their jobs. I, personally, have been focused, and you know this because Lenovo has been a very very big part of this, on working with our My Style Workplace offering team, as we call it, in developing a solution and driving as much functionality as possible down to the workstation. We want to be able to avoid and eliminate other ancillary costs: audio-video costs, telecommunication costs. The platform that we have, the digitized workstation that Lenovo has provided us, has just got a tremendous amount of capability. We want to streamline those solutions, as well, on top of the modern server. The modern platform, as we call it internally. I'd like to congratulate Kirk and your team that you guys have successfully... Your hardware has been certified on our modern platform, which is a significant accomplishment between our two companies and our partnership. It was really really foundational. Lenovo is a big part of our digital workstation transformation, and you'll continue to be, so it's very very important, and I want you to know that your tools and your products have done a significant job in helping us bring two large corporations together as one. >> Thank you, Erich. Last question: what is your view on device as a service and the hardware utility model? >> This is the easy question, right? So who in the room doesn't like PC or device as a service? This is a tremendous opportunity, I think, for all of us. Our corporation, like many of you in the room, is driven by the concept of buying devices in an Opex versus Capex type of world and being able to pay as you go. I think this is something all of us would like: to procure services and products, personal products, if you will, in this type of a mode. So I am very very eager to work with Lenovo to be sure that we bring forth a very dynamic and constructive device-as-a-service approach. So very eager to do that with Lenovo and bring that forward for DXC Technology. >> Erich, thank you very much. It's a great pleasure to work with you, today and going forward, on all sides. I think with your new company and our lineup, we have great things to come. Thank you very much. >> My pleasure, great pleasure, thank you very much. >> So, what's next for Lenovo PC? We already have the most comprehensive commercial portfolio in the industry. We have put the end user at the core of our portfolio, today and going forward. Ultra-mobile users, like consultants, analysts, sales and service. Heavy compute users, like engineers and designers. And industry users, where we are increasingly understanding industry-specific use cases like education, healthcare, or banking. So, there are a few exciting things we have to announce today. Obviously, we don't have as broad an announcement as our colleagues from the data center side, but there is one thing that I have that actually... Thank you Rod... Looks like a Bento box, but it's not a ThinkPad. It's a first of its kind. It's the world's smallest professional workstation. It has the power of a tower in a Bento box. It has the newest Intel Core architecture, and it's designed for a wide range of heavy-duty workloads.
Innovation continues, not only in the ThinkPad but also in the desktops and workstations. Second, you've heard much about Smart Office and workspace transformation today. I'm excited to announce that we have made a strategic decision to expand our Think portfolio into Smart Office, and we will soon have solutions on the table in conference rooms, working with strategic partners like Intel and like Microsoft. We are focused on a set of devices and a software architecture that, as an IoT architecture, unifies the management of Smart Office. We want to move fast, so our target is that we will have our first product already later this year. More to come. And finally, what gets me most excited is the upcoming 25th anniversary in October. Actually, if you go to Japan, there are many ThinkPad lovers. Actually, beyond lovers, enthusiasts who are collectors. We've been consistently asked in blogs and forums about a special anniversary edition, so let me offer you a first glimpse of what we will announce in October, something we are bringing to market later this year. For the anniversary, we will introduce a limited edition product. This will include throwback features from ThinkPad's history as well as the best and most powerful features of the ThinkPad today. But we are not just making incremental adjustments to the Think product line. We are rethinking the ThinkPad of the future. Well, here is what I would call a concept card. Maybe a ThinkPad without a hinge. Maybe one you can fold. What do you think? (audience applauds) But this is more than just design or look and feel. It's a new set of advanced materials and new screen technologies. It's how you can speak to it or write on it, or how it speaks to you. Always connected, always on, and able to communicate across multiple inputs and outputs. It will anticipate your next meeting, your next travel, your next task. And when you put it all together, it's just another part of the story, which we call personalized computing. Thank you very much. (audience applauds) Thank you, sir. >> Good on ya, mate. All right, ladies and gentlemen. We are now at the conclusion of the day, for this session anyway. I'm going to talk a little bit more about our breakouts and our demo rooms next door. But how about the power with no tower, from Christian, huh? Big round of applause. (audience applauds) And what about the concept card, the ThinkPad? Pretty good, huh? I love that as well. I tell you, it was almost like Leonardo DiCaprio was up on stage at one stage. He put that big ThinkPad concept up, and everyone's phones went straight up and took a photo, the whole audience, so let's be very selective on how we distribute that. I'm sure it's already on Twitter. I'll check it out in a second. So once again, the ThinkPad brand is a core part of the organization, and together both DCG and PCSD, what we call PCSD, which is our client side of the business and Smart device side of the business, are obviously very very linked in transforming Lenovo for the future. We want to also transform the industry, obviously, and transform the way that all of us do business. Lenovo, if you look at basically a summary of the day, is highly committed to being a top three data center provider. That is really important for us. We are the largest and fastest growing supercomputing company in the world, and, as Kirk actually mentioned earlier on, committed to being number one by 2020.
So, Madhu, who is in Frankfurt at the International Supercomputing Conference, if you're watching, congratulations, your targets have gone up. There's no doubt he's going to have a lot of work to do. We're obviously very very committed to disrupting the data center. That's obviously really important for us. As we mentioned, with both brands now, ThinkSystem and ThinkAgile, we're highly focused on disrupting and ensuring that we do things differently, because different is better. Thank you to our customers, our partners, media, analysts, and of course, once again, all of our employees who have been on this journey with us over the last two years, a journey that's really culminating today in the launch of all of our new products and our profile and our portfolio. It's really thanks to all of you, and to your feedback, that we've been able to get to this day. And now our journey truly begins, ensuring we are disrupting and ensuring that we are bringing more value to our customers without that legacy, which, as Kirk mentioned earlier on, is really an advantage for us as we really are that large startup from a company perspective. It's an exciting time to be part of Lenovo. It's an exciting time to be associated with Lenovo, and I hope very much all of you feel that way. So a big round of applause for today, thank you very much. (audience applauds) I need to remind all of you. I don't think I'm going to have too much trouble getting you out there, because I was just looking at Christian on the streaming screens out in the room out the back there, and there's quite a nice bit of lunch out there as well for those of you who are hungry, so at least there's some good food out there. But I think in reality all of you should be getting up into the demo sessions with our segment general managers, because that's really where the rubber hits the road. You've heard from YY, you've heard from Kirk, and you've heard from Christian. All of our general managers and our specialists in our product sets are going to be out there to obviously demonstrate our technology. As we said at the very beginning of this session, this is Transform, obviously the fashion change, hopefully you remember that. Transform, we've all gone through the transformation. It's part of our season of events globally, and our next event obviously is going to be Tech World in Shanghai on the 20th of July. I hope very much that those of you who are going to attend have a great, safe trip over there. We look forward to seeing you. Hope you've had a good morning, and get into the sessions next door so you get to understand the technology. Thank you very much, ladies and gentlemen. (upbeat innovative instrumental)

Published Date : Jun 20 2017
