Exascale – Why So Hard? | Exascale Day
>>From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise. Welcome, everyone, to theCUBE's celebration of Exascale Day. Ben Bennett is here. He's an HPC strategist and evangelist at Hewlett Packard Enterprise. Ben, welcome. Good to see you.
>>Good to see you too, Dave.
>>Hey, well, let's evangelize exascale a little bit. What's exciting you in regards to the coming of exascale computing?
>>Well, there are a couple of things, really. Historically, I've worked in supercomputing for many years, and I have seen the coming of several milestones. I'm actually old enough to remember gigaflops coming through, then teraflops, then petaflops. Exascale has been harder than many of us anticipated many years ago; the sheer amount of technology that has been required to deliver machines of this performance has been utterly staggering. But the exascale era brings with it real solutions. It gives us opportunities to do things that we've not been able to do before. If you look at some of the most powerful computers around today, they've really helped with the COVID pandemic, but we're still orders of magnitude away from being able to design drugs in situ, test them in memory, and release them to the public. We still have lots and lots of lab work to do, and exascale machines are going to help with that. We are going to be able to do more, which ultimately will aid humanity. These used to be called the grand challenges, and I still think of them as that: challenges for scientists that exascale-class machines will be able to help with. But I'm also a realist. In 10, 20, 30 years' time, I should be able to look back at this, hopefully, touch wood, look at much faster machines, and say: do you remember the days when we thought exascale was fast?
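A quick worked version of the arithmetic behind those milestones may help: each step is a factor of a thousand, so an exaflop machine does in one second what a gigaflop machine of the late 1980s would need roughly thirty years to finish. The sketch below (Python; the dates and rates are the round figures from the conversation) just makes that concrete.

```python
# Rough scale of the HPC milestones mentioned above (FLOPS = floating-point ops/sec).
milestones = {
    "gigaflop (late 1980s)": 1e9,
    "teraflop (1997)": 1e12,
    "petaflop (2008)": 1e15,
    "exaflop (exascale era)": 1e18,
}

exaflop = milestones["exaflop (exascale era)"]
for name, rate in milestones.items():
    # How long each class of machine needs to match one second of exascale work.
    seconds = exaflop / rate
    years = seconds / (365.25 * 24 * 3600)
    print(f"{name:24s} {rate:.0e} FLOPS -> {seconds:.0e} s ({years:,.1f} years) per exaflop-second")
```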
>>Well, you mentioned the pandemic, and the President of the United States was tweeting this morning that he was upset that the FDA in the U.S. is not allowing the vaccine to proceed as fast as he'd like; in fact, the FDA is loosening some of its restrictions. I wonder: high performance computing is in part helping with the simulations, and maybe the predicting, because a lot of this is about probabilities and concerns. Is that work going on today, or are you saying that exascale is what we'd need to accelerate it? What's the role of HPC that you see today in regards to solving for that vaccine and any other pandemic-related drugs?
>>So first, a disclaimer: I am not a geneticist, and I am not a biochemist. My son is; he tries to explain it to me, and it tends to go in one ear and out the other. I merely build the machines he uses, so we're sort of even on that front. If you had read the press, there were a lot of people offering up systems and computational resources for scientists, and a lot of the work that has been done understanding the mechanisms of COVID-19 has been uncovered by the use of very, very powerful computers. Would exascale have helped? Well, clearly, the faster the computers, the more simulations we can do. I think if you look back historically, no vaccine has ever come to fruition as fast under modern rules. Admittedly, the first vaccine was Edward Jenner sitting quietly, smearing a few people and hoping it worked; I think we're slightly beyond that. The FDA has rules and regulations for a reason, and you don't have to go back far in our history to understand the nature of drugs that work for 99% of the population. I think widely available exascale and much faster computers are going to assist with that. Imagine having a genetic map of very large numbers of people on the Earth and being able to test your drug against that breadth of people. If you know that it works fine 99% of the time, under FDA rules you could never sell it; you could never do that. But if you're confident in your testing, if you can demonstrate that you can keep the drug away from the one percent for whom it doesn't work: bingo, you now have a drug for the majority of people. So many drugs with so many benefits are not released, and drugs are expensive, because they fail at the last few moments. The more testing you can do, the more testing in memory, the better it's going to be for everybody. Personally: are we at a point where we still need human trials? Yes. Do we still need due diligence? Yes. We're not there yet. Exascale is coming, but it's not there yet.
>>Yeah, and to your point, the faster the computer, the more simulations, and the higher the chance that we're actually going to get it right, and maybe compress that time to market. But talk about some of the problems that you're working on, and the challenges, for example with the UK government, and maybe others you can share with us. Help us understand what you're hoping to accomplish.
>>So, within the United Kingdom there was a report published for UK Research and Innovation. I think it's UKRI; it might be EPSRC. In any case, it's the body of people responsible for funding science, and there was a science case done for exascale. I'm not a scientist, but a lot of the work in that documentation said that a number of things that can be done today aren't good enough, that we need to look further out, at machines that will do much more. There's a program that has been funded called ASiMoV, and this is a sort of commercial problem that the UK government is working on with Rolls-Royce. They're trying to research how you build a full engine model, and by full engine model I mean one that takes into account both the flow of gases through the engine and how that flow, and those temperatures, change the physical dynamics of the engine. Of course, as you change the physical dynamics of the engine, you change the flow, so you need a closely coupled model. As air travel comes more and more under the microscope, we need to make sure that the air travel we do is as efficient as possible, and currently there aren't supercomputers that have the performance to do it.
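The "closely coupled model" described above is, at its core, a fixed-point iteration between two solvers: the aerodynamic load deforms the structure, and the deformed structure changes the flow. Here is a minimal sketch of that coupling loop, with toy linear stand-ins for the real CFD and structural solvers; the functions and constants are invented purely for illustration.

```python
# Toy partitioned coupling loop: alternate a "flow" solve and a "structure" solve
# until the blade deflection stops changing. The linear response functions below
# are placeholders for real CFD and FEA solvers.

def flow_solver(deflection: float) -> float:
    """Pretend CFD: aerodynamic load drops as the blade deflects away from the flow."""
    return 100.0 - 25.0 * deflection

def structure_solver(load: float) -> float:
    """Pretend FEA: deflection proportional to load (linear elasticity)."""
    return 0.01 * load

deflection = 0.0
for iteration in range(50):
    load = flow_solver(deflection)           # flow depends on the current geometry
    new_deflection = structure_solver(load)  # geometry responds to the load
    if abs(new_deflection - deflection) < 1e-9:
        break
    deflection = new_deflection

print(f"converged after {iteration} iterations: deflection={deflection:.6f}, load={load:.3f}")
```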
One of the things I'm going to be doing as part of this sequence of conversations is having an in-depth, and it will be very detailed, conversation with Professor Mark Parsons from the Edinburgh Parallel Computing Centre. He's the director there, and the Dean of Research at Edinburgh University, and I'm going to be talking to him about the ASiMoV program, and about Mark's experience as the person responsible for looking at exascale within the UK: trying to determine the sort of science problems we can solve as we move into the exascale era, and what that means for humanity. What are the benefits for humans?
>>Yeah, and that's what I wanted to ask you about the Rolls-Royce example you gave. If I understood it, it wasn't so much safety as it was efficiency, and fuel consumption.
>>It's partly fuel consumption, but it is of course safety too. There is a very specific test for an extreme event, the fan-blade-off test. What happens is they build an engine, they put it in a cowling, they run the engine at full speed, and then they literally fire off a little explosive charge that blows a fan blade off, to make sure the blade doesn't go through the cowling. The reason they do that is that there has been, in the past, a failure of a fan blade that came through the cowling, came into the aircraft, and depressurized it. I think somebody was killed as a result, and the aircraft went down. I don't think it was a total loss, but one death is one too many. As a result, you now have to build a jet engine, instrument it, balance the blades, put an explosive in it, and then blow the fan blade off. Now, you only really want to do that once. It's like car crash testing: you want to build a model of the car and demonstrate with the dummy that it is safe. You don't want to have to build lots of cars and keep going back to the drawing board. So you do it in the computer's memory.
>>Right.
>>With cars we're okay; we have the computational power to resolve to the level needed to determine whether or not the accident would hurt a human being. There's still a long way to go to make them more efficient, with new materials and lighter structures, but we haven't got there with aircraft yet. We can build a simulation, and we can be pretty sure we're right, but we still need to build an engine, which costs in excess of 10 million dollars, and blow the fan blade off it.
>>Okay, so you're talking about some pretty complex simulations. What are some of the barriers, and the breakthroughs that are required, to do these things that exascale is going to enable? Presumably there are technical barriers, but maybe you can shed some light on that.
>>Well, some of them are very prosaic. For example, power: exascale machines consume a lot of power, so you have to be able to design systems that consume less, and that goes into making sure they're cooled efficiently. If you use water, can you reuse the water? If you take a laptop, sit it on your lap, and type away for four hours, you'll notice it gets quite warm. An exascale computer is going to generate a lot more heat: several megawatts, actually. It sounds prosaic, but it's very important. You've got to make sure the systems can be cooled and that we can power them. Another issue is the software, the software models. How do you take a software model and distribute the data over many tens of thousands of nodes? How do you do that efficiently? If you look at the gigaflop machines, they had hundreds of nodes, and each node had effectively one processor: a core, a thread of application. We're now looking at many tens of thousands of nodes, with cores and parallel threads running everywhere. How do you make that efficient?
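Distributing one model over tens of thousands of nodes usually means domain decomposition: each rank owns a slab of the data and exchanges only a thin "halo" with its neighbors each step, so per-step communication stays constant while compute scales out. A minimal sketch with mpi4py follows (a 1-D decomposition; real codes decompose in 2-D or 3-D and overlap communication with computation).

```python
# Run with e.g.: mpiexec -n 4 python halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(1000, float(rank))   # this rank's slab of the global array
halo_lo = np.zeros(1)                # ghost cell received from the left neighbor
halo_hi = np.zeros(1)                # ghost cell received from the right neighbor
lo = rank - 1 if rank > 0 else MPI.PROC_NULL
hi = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange boundary values with neighbors; PROC_NULL makes the edge ranks no-ops.
comm.Sendrecv(sendbuf=local[-1:], dest=hi, recvbuf=halo_lo, source=lo)
comm.Sendrecv(sendbuf=local[:1], dest=lo, recvbuf=halo_hi, source=hi)

# Each rank now updates its slab using only local data plus two ghost cells,
# so communication per step is constant while compute scales with node count.
print(f"rank {rank}: left ghost={halo_lo[0]}, right ghost={halo_hi[0]}")
```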
>>So is the software ready?
>>I think the majority of people will tell you that it's the software that's the problem, not the hardware. Of course, my friends in hardware would tell you: ah, software is easy, it's the hardware that's the problem. I think for the universities and the users, the challenge is going to be the software. It's going to have to evolve. You want to be able to look at your machine and just dump work onto it easily, and we're not there yet, not by a long stretch of the imagination. Consequently, one of the things that we do is create a lot of centers of excellence. We, well, I hate to say the word "provide": we sell supercomputers, and once the machine has gone in, we work very closely with the establishments, creating centers of excellence to get the best out of the machines and to improve the software. If a machine is expensive, you want to get the most out of it that you can. You don't just want to run a synthetic benchmark and say: look, I'm the fastest supercomputer on the planet. The users who want access to it are the people who really decide how useful it is, through the work they get out of it.
>>Yeah, the economics is definitely a factor. You could have the fastest supercomputer on the planet, but if you can't afford to use it, what good is it? You mentioned power, and the flip side of that coin is of course cooling. You can reduce the power consumption, but how challenging is it to cool these systems?
>>It's an engineering problem. We have data centers in Iceland, where it doesn't get too warm, and we have a big air-cooled data center in the United Kingdom where it never gets above 30 degrees centigrade. So if you put in water at 40 degrees centigrade and it comes out at 50 degrees centigrade, you can cool it just by pumping it around outside the building: because the ambient never gets above 30, it will easily drop the water back to 40 so you can put it back into the machine. There are other ways to do it, too. You can take the heat and use it commercially. There's a lovely story from the Nordics where they take the hot water out of the supercomputer and pump it into a brewery to keep the mash tuns warm. That's the sort of engineering I can get behind.
>>Yeah, indeed. That's a great application.
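The free-cooling arrangement described here (40 °C supply water, 50 °C return, sub-30 °C outside air) can be sanity-checked with the basic heat equation Q = ṁ · c_p · ΔT. A rough sketch, where the 4 MW load is an illustrative stand-in for the "several megawatts" mentioned above:

```python
# How much water flow does it take to carry away a machine's heat
# with a 10 degree C rise (40 C in, 50 C out)?
heat_load_w = 4e6          # assume a 4 MW machine ("several megawatts")
c_p = 4186.0               # specific heat of water, J/(kg*K)
delta_t = 50.0 - 40.0      # temperature rise across the machine, K

mass_flow = heat_load_w / (c_p * delta_t)   # kg/s, from Q = m_dot * c_p * dT
print(f"required flow: {mass_flow:.0f} kg/s (~{mass_flow:.0f} L/s of water)")
# ~96 kg/s. Because ambient air stays below 30 C, the 50 C return water can be
# cooled back to 40 C outdoors with no chillers, only pumps and dry coolers.
```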
>>Talk a little bit more about your conversation with Professor Parsons. Maybe we can double-click into that. What are some of the things you're going to probe? What are you hoping to learn?
>>I think some of the things that will be interesting to uncover are just the breadth of science that could take advantage of exascale. There are many things going on that people hear about. People are interested in, say, the Nobel Prize. They might have no idea what it means, but the Nobel Prize for Physics was awarded for research into black holes: fascinating and truly insightful physics. Could it benefit from exascale? I have no idea. I really don't. One of the most profound pieces of knowledge of the last few hundred years has been the theory of relativity: an Austrian patent clerk wrote E = mc² on the back of an envelope, and voila. I don't believe any form of exascale computing would have helped him get there any faster. That's maybe flippant, but I think the point is that there are areas, in weather prediction, climate prediction, drug discovery, materials knowledge, and engineering, that are going to be unlocked by exascale-class systems. We are going to be able to provide more tools, more insight. And that's the purpose of computing. It's not the data that comes out; it's the insight we get from it.
>>Yeah, I often say data is plentiful, insights are not. Ben, you're a bit of an industry historian, so I've got to ask you: you mentioned gigaflops before, which I think goes back to the early 1970s.
>>Actually, the 80s.
>>Is it the 80s? Okay. Well, the history of computing goes back even before that. I thought Seymour Cray was kind of the father of supercomputing, but perhaps you have another point of view as to the origination of high performance computing.
>>Oh yes, this is one for all my colleagues globally. Arguably, he says, getting ready to be attacked from all sides, arguably the parallel work and the research done during the war by Alan Turing make him the father of high performance computing. I think one of the problems we have is that so much of that work was classified, so much of that work was kept away from commercial people, that commercial computing evolved without that knowledge. In a previous life I did some work for the British Science Museum, and I have had the great pleasure of walking through its archives to look at how computing has evolved: from things like the Pascaline from Blaise Pascal, and Napier's bones, and Babbage's machines, all the way through the analog machines and what Konrad Zuse was doing on a desktop. I think what's important, no matter where you are, is that it is the problem that drives the technology. It's having problems that require solutions that kick-starts the human race into looking for them. Take the terrible problem the US has with its nuclear stockpile stewardship: now that you've invented the weapons, how do you keep them safe? That was originally done through the ASCI program, and it has driven a lot of computational advances. Ultimately, it's our quest for knowledge that drives these machines, and as long as we are interested, as long as we want to find things out, there will always be advances in computing to meet that need.
>>Yeah. It's been a great conversation, and you're a brilliant guest; I love this talk. And of course, as the saying goes, success has many fathers, so there are probably a few Polish mathematicians who would stake a claim in the original Enigma project as well.
>>I think they drove the algorithm. The thing is, it was Tommy Flowers who took the algorithms and the work that was being done and actually had to build the poor machine. He's the guy who had to sit there and go: how do I turn this into a machine that does that? People always remember Turing; very few people remember Tommy Flowers, who actually had to turn the great work into a working machine.
>>Yeah, supercomputing is a team sport. Well, Ben, it's great to have you on. Thanks so much for your perspectives, and best of luck with your conversation with Professor Parsons. We'll be looking forward to that, and thanks so much for coming on theCUBE.
>>A complete pleasure. Thank you.
>>And thank you, everybody, for watching. This is Dave Vellante. We're celebrating Exascale Day. You're watching theCUBE.
The University of Edinburgh and Rolls Royce Drive in Exascale Style | Exascale Day
>>Welcome. My name is Ben Bennett. I am the director of HPC strategic programs here at Hewlett Packard Enterprise. It is my great pleasure and honor to be talking to Professor Mark Parsons from the Edinburgh Parallel Computing Centre, and we're going to talk a little about exascale: what it means. We're going to talk less about the technology and more about the science, the requirements, and the need for exascale, rather than a deep dive into the enabling technologies. Mark, welcome.
>>Hi Ben. Thanks very much for inviting me.
>>Complete pleasure. So I'd like to kick off with quite an interesting look back. You and I are both of a certain age, 25 plus, and we've seen the milestones of high performance computing come and go: a gigaflop back in 1987, a teraflop in 1997, a petaflop in 2008. But we seem to be taking longer in getting to an exaflop. So I'd like your thoughts: why is an exaflop taking so long?
>>I think that's a very interesting question, because I started my career in parallel computing in 1989, and I joined EPCC when it was set up in 1990; you know, we're 30 years old this year. The fastest computer we had then was 800 megaflops, just under a gigaflop. So in my career, by the time we reached petascale we had already gone pretty much a million times faster, and yet the step from a teraflop to a petascale system really didn't feel particularly difficult, while the step from a petascale system to exascale is a really, really big challenge. I think it's actually related to what's happened with computer processors over the last decade. Individually, the cores, like the ones in your laptop, haven't got much faster; we've just got more of them. The perception of more speed is actually being delivered by more cores, and the same has happened in the supercomputing world. In 2010, I think, we had systems that were a few thousand cores. Our main national service in the UK for the last eight years has had 118,000 cores. But at exascale we're looking at four or five million cores, and taming that level of parallelism is the real challenge. That's why it's taking an enormously long time to deliver these systems. And it's not just on the hardware front, where vendors like HPE have to deliver world-beating technology, and it's hard, hard. There's also the challenge to the users: how do they get their codes to work in the face of that much parallelism?
>>If you look at the complexity of delivering an exaflop: you could have bought an exaflop three or four years ago. You couldn't have housed it, you couldn't have powered it, you couldn't have afforded it, and you couldn't have programmed it. But you could have bought one.
>>We should have been so lucky as to be able to supply it.
>>The software, I think from our standpoint, is where we're doing more enabling with our customers. You sell them a machine, and then the need for collaboration seems to be more and more around the software. So it's going to be relatively easy to get one exaflop using LINPACK, but that's not exascale. So what do you think an exascale machine, versus an exaflop machine, means to people like yourself, to your users, the scientists, and industry? What is an exaflop versus an exascale?
>>So I think supercomputing moves forward by setting itself challenges, and when you look at all of the exascale programs worldwide that are trying to deliver systems that can do an exaflop or more, it's actually a very arbitrary challenge. We set ourselves a petascale challenge, delivering a petaflop, and somebody managed that. The world moves forward by setting itself challenges, and we use quite an arbitrary definition of what we mean by an exaflop. In my world, we first of all say a flop is a computation, a multiply or an add or whatever, and we tend to look at that using very high precision, 64-bit numbers. We then say: you've got to do a billion billion of those calculations every second. It's a somewhat arbitrary target. Now, today, from HPE, I can buy a system that will do a billion billion calculations per second, and it will either do that as a theoretical peak, which would be almost unattainable, or using benchmarks that stress the system and demonstrate a sustained result. But those benchmarks themselves are tuned to just do those calculations and deliver an exaflop in a sustained way, if you like. So we have set ourselves this big challenge, the big fence on the racecourse which we're clambering over, but the challenge in itself should actually be much more interesting: what are we going to use these devices for, having built them? Getting into the exascale era is not so much about doing an exaflop. It's a new generation of capability that allows us to do better scientific and industrial research. And that's the interesting bit in this whole story.
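The "theoretical peak" Parsons distinguishes from benchmarked performance is just multiplication: nodes, times cores per node, times FLOPs per core per cycle, times clock rate. A sketch with illustrative numbers, not any specific machine's specification:

```python
# Theoretical peak = nodes * cores/node * FLOPs/core/cycle * cycles/second.
nodes = 100_000                # hypothetical system size
cores_per_node = 128
flops_per_core_per_cycle = 32  # e.g. wide vector FMA units on 64-bit operands
clock_hz = 2.5e9

peak = nodes * cores_per_node * flops_per_core_per_cycle * clock_hz
print(f"peak: {peak:.2e} FLOPS ({peak / 1e18:.2f} exaflops)")

# HPL (LINPACK) typically sustains only a fraction of peak, and real science
# codes often far less, which is Parsons's point about arbitrary targets.
hpl_efficiency = 0.70          # illustrative
print(f"plausible HPL result: {peak * hpl_efficiency / 1e18:.2f} exaflops")
```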
>>I would tend to agree with you. I think the focus around exascale is to look at new technologies, new ways of doing things, new ways of looking at data, to get new results. So eventually you will get yourself an exascale machine. One hopes sooner rather than later.
>>Well, I'm sure you'd love to sell me one, Ben.
>>It's got nothing to do with me; I can't sell you anything, Mark. But there are people outside the door over there who would love to sell you one. However, if we look at your exascale machine: how do you believe the workloads are going to be different on an exascale machine versus your current petascale machine?
>>So there's always a slight conceit when you buy a new national supercomputer, and that conceit is that you're buying a capability on which many people will run across the whole system. In truth, we do have people that run on the whole of our ARCHER system, which today is 118,000 cores, but I would say the people that run over, say, half of that in a year can be counted on a single hand, and they're doing very specific things; it's very costly simulation they're running. So if you look at these systems today, two things show. One is that it's very difficult to get time on them: the baroque application procedures, all of the requirements that have to be assessed by your peers, and you're given quite a limited amount of time that you have to eke out to do science. And people tend to run their applications in the sweet spot where their application delivers the best performance. We try to push our users over time to use reasonably sized jobs; I think our average job size is about 20,000 cores, which is not bad. But that does mean that as we move to exascale, two things have to happen. One is that I think we've got to be more relaxed about giving people access to the system. Let's give more people access, let people play, let people try out ideas they've never tried before; I think that will lead to a lot more innovation in computational science. At the same time, I think we also need to be less precious: we have to accept these systems will have a variety of sizes of job on them. We're still going to have people that want to run four million cores or two million cores. That's absolutely fine, and I absolutely salute those people for trying something really, really difficult. But then we're going to have a huge spectrum of users all the way down to people that want to run on 500 cores or whatever. So I think we need to broaden the user base on an exascale system, and I know this is what's happening, for example, in Japan with the new Japanese system.
>>So, Mark, if you cast your mind back to almost exactly a year ago, after the HPC User Forum, you were interviewed for Primeur Magazine, and you alluded in that article to the needs of scientific and industrial users requiring an exaflop, or an exascale machine. It's clear from your previous answer regarding the workloads that some would say the majority of people would be happier with, say, ten 100-petaflop machines: democratization, more people getting access. But can you give us examples of the type of science, the needs of industrial users, that actually do require those resources to be put together as an exascale machine?
>>So I think it's a very interesting area. At the end of the day, these systems are bought because they are capability systems, and I absolutely take the argument: why shouldn't we buy ten 100-petaflop systems? But there are a number of scientific areas, even today, that would benefit from an exascale system, and these are the sorts of scientific areas that will use as much access to a system, as much time and as much scale, as you can give them. An immediate example: people doing quantum chromodynamics calculations in particle physics, theoretical calculations, would just use whatever you give them. But I think one of the areas that is very interesting is actually the engineering space, where many people worry that the engineering applications over the last decade haven't really kept up with the sort of supercomputers that we have. I'm leading a project called ASiMoV, funded by EPSRC in the UK, which is joint with Rolls-Royce, jointly funded by Rolls-Royce, and also working with the Universities of Cambridge, Oxford, Bristol, and Warwick. We're trying to do the whole gas turbine engine simulation for the first time. That's looking at the structure of the gas turbine, the airplane engine, how it's all built together; looking at the fluid dynamics of the air and the hot gases that flow through it; looking at the combustion of the engine and how fuel is sprayed into the combustion chamber; looking at the electrics around it; and looking at the way the engine deforms as it heats up and cools down. All of that.
Now, Rolls-Royce has wanted to do that for 20 years. Whenever they certify a new engine, it has to go through a number of physical tests, and every time they do one of those tests, it can cost them as much as 25 to 30 million dollars. These are very expensive tests, particularly when they do what's called a blade-off test, which simulates blade failure: they have to prove that the engine contains the fragments of the blade. It's a really important test, and all engines have to pass it. What we want to do is use an exascale computer to properly model a blade-off test for the first time, so that in future some simulations can become virtual, rather than having to expend all of the money that Rolls-Royce would normally spend. It's a fascinating project, and a really hard project to do. One of the other things that I do is serve as deputy chair of the Gordon Bell Prize committee this year, which I've really enjoyed; that's one of the major prizes in our area, announced at Supercomputing every year. So I have the pleasure of reading all the submissions each year. This is my third year on the committee, and what's really interesting is the way that big systems, like Summit in the US, have pushed the user communities to try and do simulations nobody's done before. We've seen this as well with the papers coming out after the first use of the Fugaku system in Japan, for example. These are very, very broad: earthquake simulation, large-eddy simulations of boats, a number of things around genome-wide association studies, and so on. So the use of these computers spans a vast area of computational science. I think the really important thing about these systems is that they're challenging people to do calculations they've never done before. That's what's important.
>>Okay, thank you. You talked about challenges. When you and I had lots of hair, though that's probably much more true of me, we used to talk about grand challenges, especially around the teraflop era and the ASCI Red program: the grand challenges of science driving things, possibly to hide the fact that it was a bomb-designing computer. So they talked about the grand challenges. We don't seem to talk about that much anymore; we talk about exascale, we talk about data. Where are the grand challenges that you see that an exascale computer can help us with?
>>So I think grand challenges didn't go away; just the phrase went out of fashion. A bit like my hair. I do feel that science moves forward by setting itself grand challenges, and it always has done. My original background is in particle physics. I was very lucky to spend four years at CERN, working in the early stages of the LEP accelerator when it first came online, and the scientists there had worked on LEP for 15 years before I came in and did my little PhD on it. I think that way of organizing science hasn't changed; we just talk less about grand challenges. What I've seen over the last few years is a renaissance in computational science, looking at things that people have previously said were impossible. A couple of years ago, for example, one of the key Gordon Bell Prize papers was on genome-wide association studies.
It may have been one of the winners, if I remember right. That was really interesting because, first of all, genome-wide association studies had gone out of favor in the bioinformatics community, because people thought they weren't possible to compute. But that particular paper showed that, yes, you could do these really, really big combinatorial problems in a reasonable amount of time if you had a big enough computer. One thing I've felt all the way through my career, actually, is that we've probably discarded more simulations because they were impossible at the time than we've actually decided to do, and I sometimes think we need to challenge ourselves by looking at the things we've discarded in the past and saying: oh look, we could actually do that now. I think part of the challenge of bringing an exascale service to life is to get people to think about what they would use it for. That's a key thing. Otherwise, I always say, a computer that is unused should just be turned off. There's no point in having an underutilized supercomputer; everybody loses from that.
>>So let's bring ourselves slightly more up to date. We're in the middle of a global pandemic, and one of the things about our industry that I've been particularly proud of is that I've seen all the vendors offering up machines and making resources available for people to fight the current disease. How do you see supercomputers, now and in the future, speeding up things like vaccine discovery and helping doctors generally?
>>So I think you're quite right that the supercomputer community around the world did a really good job of responding to COVID-19. Speaking for the UK, we put in place a rapid access program, so anybody who wanted to do COVID research on the two national services we run could get really quick access, and that has worked really well in the UK. ARCHER is an old system, as you know; we didn't have the world's largest supercomputer, but it has happily been running lots of COVID-19 simulations, largely for the biomedical community, looking at drug modeling and molecular modeling. In the US, they've been doing really large combinatorial parameter-search problems on Summit, for example, looking to see whether or not old drugs could be reused to solve a new problem. So I think, actually, in some respects COVID-19 has been, and this sounds wrong, but it's actually been good for supercomputing, inasmuch as it has pointed out to governments that supercomputers are an important part of any scientifically active country's research infrastructure.
>>So, I'll finish up by tapping into your inner geek. There are a lot of technologies being bandied around to enable the first exascale machine, wherever that's going to be and from whomever. What are the current or emerging technologies that you are interested in, excited about, and looking forward to getting your hands on?
>>So in the business case I've written for the UK's exascale computer, I actually characterized this as a choice between the American model and the Japanese model. In America, they've very much gone down the CPU-plus-GPU route.
So you might have an Intel Xeon or an AMD processor at the center, or an Arm processor for that matter, and you might have two or four GPUs alongside it. I think the most interesting thing that I've seen is definitely this move to a single address space, so the data that you have will be accessible by both the GPU and the CPU. I think that's really been one of the key things that has stopped the uptake of GPUs to date, and that one single change is going to make things very, very interesting. But I'm not entirely convinced by the CPU-GPU model, because I think it's very difficult to get all of the performance out of the GPU. It will do well in HPL, for example, the high performance LINPACK benchmark we were discussing at the beginning of this interview, but in real scientific workloads you still find it difficult to get all the performance that has been promised. So the Japanese approach, which is a CPU-only approach, I think is very attractive, inasmuch as they're using very high bandwidth memory and a very interesting processor, which they developed together over a ten-year period. And this is one thing that people don't realize: the Japanese program and the American exascale program have been working for ten years on these systems. I think the Japanese processor is really interesting because, when you look at the performance, it really does work for their scientific workloads, and that does interest me a lot. This combination of a processor designed to do good science, high bandwidth memory, and a real understanding of how data flows around the supercomputer: those are the things exciting me at the moment. Obviously there are new networking technologies too; in the fullness of time, not necessarily for the first systems, over the next decade we're going to see much, much more activity in silicon photonics. All of these things are fascinating. In some respects, the last decade has just been quite incremental improvements, but where supercomputing is going at the moment, we're at a very, very disruptive moment again. That goes back to the start of this discussion: why has exascale been difficult to get to? Actually, because we're at a disruptive moment in technology.
>>Professor Parsons, thank you very much for your time and your insights.
>>Thank you. A pleasure.
>>And folks, thank you for watching. I hope you've learned something, or at least enjoyed it. With that, I would ask you to stay safe, and goodbye.
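A footnote on Parsons's point about high-bandwidth memory and data flow: whether a code can use a machine's peak FLOPS at all is governed by its arithmetic intensity against the machine's memory bandwidth (the roofline model). A sketch with placeholder figures for the two design points he contrasts; neither set of numbers is a real product specification:

```python
# Roofline check: attainable FLOPS = min(peak, bandwidth * arithmetic intensity).
def attainable(peak_flops: float, mem_bw_bytes: float, intensity: float) -> float:
    """Arithmetic intensity = FLOPs performed per byte moved from memory."""
    return min(peak_flops, mem_bw_bytes * intensity)

# Illustrative node designs (invented numbers, not real product specs):
gpu_node = {"peak": 40e12, "bw": 1.6e12}   # FLOP-heavy accelerator node
hbm_cpu_node = {"peak": 3e12, "bw": 1e12}  # CPU-only node with high-bandwidth memory

stream_like = 0.25   # FLOPs/byte, typical of a memory-bound science kernel
for name, node in [("GPU node", gpu_node), ("HBM CPU node", hbm_cpu_node)]:
    got = attainable(node["peak"], node["bw"], stream_like)
    print(f"{name}: {got / 1e12:.2f} of {node['peak'] / 1e12:.0f} TFLOPS "
          f"({100 * got / node['peak']:.1f}% of peak)")
# The balanced HBM design sustains a far larger share of its peak on
# bandwidth-bound codes, which is why it "works for their workloads."
```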
Dimitrios Stiliadis - OpenStack Summit 2013 - theCUBE
>>Okay, we're back live here at the OpenStack Summit in Portland, Oregon. I'm John Furrier, the founder of SiliconANGLE.com, with my co-host Dave Vellante from Wikibon.org. This is SiliconANGLE's theCUBE, our flagship program. We go out to the events and extract the signal from the noise, and certainly here at OpenStack there's not a lot of noise but a lot of signal: a lot of developers, a lot of use cases, really the alpha geeks, the practitioners putting new technology into place to power this modern era of computing: cloud, mobile, and social. With David Floyer, we're here with Dimitrios Stiliadis from Nuage Networks in Mountain View. Welcome to theCUBE.
>>Thank you.
>>David, I want to get your take on this before we set up this interview, because honestly we've heard from RightScale on the management side just previously, we've had Rackspace on earlier on the provider side, we had Big Switch on software-defined networking, and now Dimitrios's company. Software is eating the world: what's your take on the SDN market right now, relative to OpenStack and to open source?
>>Well, what you clearly want to do in every part of it is separate out all of the different layers. You ought to be able to separate the physical and the logical, and software is the way that's going to be done. So instead of having a switch that is a piece of hardware and software together, you want to separate the two out, so that you have the logical function and the physical function as separate pieces. That's very important: you can contribute at every layer, take new technologies along with you, and define the software element as the piece you keep constant as the technologies themselves adjust. You get durable code you can manage and build on, and you can take advantage of new technologies as they come along.
>>And Dimitrios, coming back to you: what are you contributing, and what's the white space in that area that you're going after?
>>Right. So when people started thinking about the cloud and OpenStack, they quickly realized that the network is a fundamental piece: you have to start with the network, you have to interconnect your components, and so on. The angle that we are taking is: yes, within your data center, within your cloud, you have to create these network services, interconnect applications, and so on. But much more importantly, you need to be able to dynamically connect these applications with your existing network services. There is a large amount of enterprise VPN services out there, and hybrid clouds are coming, so the moment you activate a network service in the data center, you need to be able to seamlessly interconnect it with your enterprise side, with other network services in other data centers, in other clouds, and so on. The network is always a network of networks, and we have to bring everything together; we cannot just restrict ourselves within the confinement of a single administrative domain. That's a fundamental part of what we are trying to bring here.
>>Okay. And so how are you fitting in with the network layer?
>>Right. So our view is that, first of all, we need to speak both languages; think of it as a translation thing. We need to understand the language of the cloud, the language of the application developers in the cloud. They want to use some abstract mechanism to define their network services and install them, if you want, in the hypervisors, and OpenStack Quantum seems to be the prevalent way to do that.
So that's language number one. But then we have all these thousands of networks out there where the language is BGP. What we are doing is marrying the two: we allow you to go and define services in OpenStack, and we give you the mechanisms to interconnect those services automatically with all the other networks that are out there. I sometimes say we are just translating between languages.
>>All right, a language translator.
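A caricature of that "language translator" idea: take a cloud-side abstract network definition (a Quantum/Neutron-style dictionary) and emit the BGP-side announcements that would stitch it to external VPNs. Every field name and format here is hypothetical, chosen only to show the shape of the mapping, not any vendor's API:

```python
# Hypothetical translation: cloud tenant network -> BGP/VPN route announcements.
# Field names and formats are made up for illustration.

def translate(tenant_net: dict) -> list[dict]:
    """Map an abstract cloud network definition to per-prefix VPN announcements."""
    announcements = []
    for subnet in tenant_net["subnets"]:
        announcements.append({
            "prefix": subnet,                                  # what to advertise
            "route_target": f"target:{tenant_net['asn']}:{tenant_net['vpn_id']}",
            "next_hop": tenant_net["gateway"],                 # hypervisor/gateway edge
        })
    return announcements

cloud_side = {  # the "language of the cloud": abstract, developer-facing
    "name": "tenant-web-tier",
    "subnets": ["10.1.0.0/24", "10.1.1.0/24"],
    "asn": 65001, "vpn_id": 42, "gateway": "192.0.2.10",
}

for route in translate(cloud_side):   # the "language of the networks": BGP-shaped
    print(route)
```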
>>From an application point of view, they want to consume resources, and previously compute and storage were the main things they consumed. But now it seems that networks themselves have to play a much bigger role in providing quality of service to those applications. You've got quality of service down in the nanoseconds when you get to the server level, and where you used to have milliseconds on the storage side, it's now coming down to microseconds. What are you doing to make sure that quality of service, not just the bandwidth but also the latency, is part of the picture?
>>See, with data center networks, people are quickly realizing that the same principles we used to build the Internet itself can be used inside the data center. If you think about the Internet: there are voice services, video services, all these other services running, and they actually run by assuming you have a well-engineered IP network and then running the services at the edges; you push all the intelligence to the edges. The data center network is going the same way: it becomes a very scalable IP fabric, very well managed and very well traffic-engineered, if you want, and you push the edges to the hypervisors. You push the services to the hypervisors, where traffic is differentiated. So if you see, for example, a tenant misbehaving, you are going to block them at the hypervisor layer; if you want to map different tenants to different classes of traffic, that happens at the hypervisor. The center of the network behaves like a scalable IP fabric, and all the intelligence is pushed to the edges. The reason you want to do that is that it allows you the ultimate scalability: the core of the network doesn't need to know about every flow that goes through it. You don't need to know the IP addresses of virtual machines, you don't need to know what individual virtual machines want to do; you just need to worry about aggregates. So you can engineer and scale the core and make it very cheap, and because you make it very cheap, you can increase the capacity at the core and distribute all the intelligence at the edges of the network.
>>But you said that's at the hypervisor, which is obviously on the compute side. What about the data network? Don't you need to regulate the priorities and flows of all the data through it?
>>That's a very big part of it, yes, but it is still happening at the hypervisor. The first touch of an application with the network is no longer the top-of-rack switch in the data center; it is actually the hypervisor virtual switch. That's the first time you see a packet: when a packet comes out of a virtual machine, the first place you see it is the hypervisor itself, and that is where you apply all your policies. In other words, the edge of the network is not the hardware, not the top-of-rack switch; the edge of the network is inside the server now.
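One concrete form of "blocking a misbehaving tenant at the hypervisor layer" is per-tenant rate limiting in the virtual switch, before a packet ever reaches the fabric. A toy token-bucket sketch, with invented parameters, follows:

```python
import time

class TenantPolicer:
    """Token bucket per tenant: enforce rate limits at the hypervisor edge,
    so the core fabric never has to track individual flows."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0          # refill rate in bytes/sec
        self.burst = burst_bytes            # bucket depth
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                        # drop or queue: tenant is over its rate

# One bucket per tenant, keyed where the packet first touches the network.
policers = {"tenant-a": TenantPolicer(rate_bps=1e9, burst_bytes=64_000)}
print(policers["tenant-a"].allow(1500))     # True until the tenant exceeds 1 Gb/s
```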
>>Okay, excellent. So I want to ask you, since we have a couple of minutes left, for your perspective on the state of the business around OpenStack. As chief architect you're looking at the tech, but you have to intersect the business objectives. What are you seeing as the core business drivers that are causing you to shape your technology in a certain way?
>>Right. So it's clear that what people want to do is give their end users the ability to consume services rapidly. That is what is driving this whole OpenStack development, and more importantly, the community came together to unify on the core engine and the core APIs, in order to make this consumption of services very easy and to allow application developers to move from one cloud to the other. What we are trying to do, in addition, is extend this model by making the network as consumable as the storage and compute facilities. And I'm not talking just about the network in the data center; I'm talking also about the way that a service in the data center of a cloud provider will interconnect with the enterprise. The next holy grail that everybody is talking about is the hybrid cloud, and the hybrid cloud is only possible if you can connect the network and the services in the service provider cloud with the network and services in the enterprise itself. What links the two together is the network, so we have to make this network consumable.
>>Final question for you. DevOps is a mindset; we heard from RightScale that adoption is in mainstream enterprises and service providers. But the phrase "infrastructure as code" is becoming more popular outside of the geeks, the architects, the coders. In your mind, how would you describe infrastructure as code to the folks out there? Give it a try; there's no right answer.
>>It's a moving target, that's what it is. The reality is that applications and code are a living organism: constantly changing, and you cannot assume at any point that anything is static. It's not the good old days, if you want. That's what it really means: it's a living organism, and it will constantly adapt to the new requirements out there. With switches in the old days, you knew exactly the ports and where everything was going; now there are all kinds of new things happening, and you have to accept change. There is an Isaac Asimov quote, right, the science fiction author: the only constant is change.
>>Yeah. We should do a project just on the network genome here: software-defined networking. Dimitrios Stiliadis, thanks for jumping inside theCUBE. You're here with a lot of the chief architects making things happen. Congratulations, and thanks for joining us.
>>Thank you.
>>We'll be right back with more analysis from David Floyer after the break, with a breakdown of day one and day two in more depth from the analysts here at the OpenStack Summit. This is SiliconANGLE and Wikibon's exclusive coverage of the OpenStack Summit. We'll be right back.