Randy Meyer, HPE & Paul Shellard, University of Cambridge | HPE Discover 2017 Madrid


 

>> Announcer: Live from Madrid, Spain, it's the Cube, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, Spain everybody, this is the Cube, the leader in live tech coverage. We're here covering HPE Discover 2017. I'm Dave Vellante with my cohost for the week, Peter Burris. Randy Meyer is back, he's the vice president and general manager of Synergy and Mission Critical Solutions at Hewlett Packard Enterprise, and Paul Shellard is here, the director of the Centre for Theoretical Cosmology at Cambridge University. Thank you very much for coming on the Cube. >> It's a pleasure. >> Good to see you again. >> Yeah, good to be back for the second time this week. >> Talking about computing meets the cosmos. >> Well, it's exciting. Yesterday we talked about Superdome Flex that we announced, we talked about it in the commercial space, where it's taking HANA and Oracle databases to the next level, but there's a whole different side to what you can do with in-memory compute. It's all in this high performance computing space. You think about the problems people want to solve in fluid dynamics, in forecasting, in all sorts of analytics problems; high performance compute, one of the things it does is generate massive amounts of data that people then want to do things with. They want to compare that data to what their model said, okay, can I run that against it; they want to take that data and visualize it, okay, how do I go do that. The more you can do that in memory, the faster it is to deal with, because you're not going and writing this stuff off to disk, you're not moving it to another cluster back and forth. So we're seeing this burgeoning space, the HPC guys would call it fat nodes, where you want to put in lots of memory and eliminate the IO to make those jobs easier, and Professor Shellard will talk about a lot of that in terms of what they're doing at the Cosmos Institute. But this is a trend; you don't have to be a university. We're seeing this inside of oil and gas companies, aerospace engineering companies, anybody that's solving these complex computational problems that have an analytical element, whether it's compare to the model, visualize, or do something with that data once you've generated it. >> Paul, explain more about what it is you do. >> Well, in the Cosmos Group, of which I'm the head, we're interested in two things: cosmology, which is trying to understand where the universe comes from, the whole big bang; and black holes, particularly their collisions, which produce gravitational waves. So those are the two main areas, relativity and cosmology. >> That's a big topic. I don't even know where to start, I just want to know, okay, what have you learned, and can you summarize it for a lay person? Where are you today, what can you share with us that we can understand? >> What we do is we take our mathematical models and we make predictions about the real universe, and so we try and compare those to the latest observational data. We're in a particularly exciting period of time at the moment because of a flood of new data about the universe and about black holes; in the last two years, gravitational waves were discovered, there's a Nobel prize this year, so lots of things are happening.
It's a very data driven science, so we have to try and keep up with this flood of new data, which is getting larger and larger, and also with new types of data, because suddenly gravitational waves are the latest thing to look at. >> What are the sources of data, and new sources of data, that you're tapping? >> Well, in cosmology we're mainly interested in the cosmic microwave background. >> Peter: Yeah, the sources of data are the cosmos. >> Yeah, right, so this is relic radiation left over from the big bang fireball; it's like a photograph of the universe, a blueprint. And then also the distribution of galaxies, so 3D maps of the universe. We're in a new age of exploration; we've only got a tiny fraction of the universe mapped so far, and we're trying to extract new information about the origin of the universe from that data. In relativity, we've got these gravitational waves, these ripples in space time traversing across the universe. They're essentially earthquakes in the universe, and they're sound waves or seismic waves that propagate to us from these very violent events. >> I want to take you to the gravitational waves, because in many respects it's an example of a lot of what's here in action. Here's what I mean, and correct me if I'm wrong, but it's basically, you have two lasers perpendicular to each other, shooting a signal about two or three miles in each direction, and it is the most precise experiment ever undertaken, because what you're doing is measuring the time it takes for one laser versus another laser, and that time is a function of the slight stretching that comes from the gravitational waves. That is an unbelievable example of edge computing, where you have just the tolerances to do that; that's not something you can send back to the cloud, you gotta do a lot of the compute right there, right? >> That's right, yes, so a gravitational wave comes by and you shrink one way and you stretch the other. >> Peter: It distorts the space time. >> Yeah, you become thinner, and these tiny, tiny changes are what's measured. And nobody expected gravitational waves to be discovered in 2015; we all thought, oh, another five years, another five years. They'd always been saying, we'll discover them, we'll discover them, but it happened. >> And since then, it's been used two or three times to discover new types of things, and there's now a whole, I'm sure this is very centric to what you're doing, there's now a whole concept of gravitational information; it in fact becomes an entirely new branch of cosmology. Have I got that right? >> Yeah, you have. It's called multimessenger astronomy now, because you don't just see the universe in electromagnetic waves, in light, you hear the universe. This is qualitatively different; it's sound waves coming across the universe, and so you combine these two. The latest event was where they heard the event first, then they turned their telescopes and they saw it. So much information came out of that, even information about cosmology, because these signals are traveling hundreds of millions of light years across to us; we're getting a picture of the whole universe as they propagate all that way, so we're able to measure the expansion rate of the universe from that point.
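To put rough numbers on the tolerances being described, a back-of-the-envelope using typical published LIGO figures (an illustration; these values are not quoted in the conversation): a passing wave with fractional strain $h$ changes an interferometer arm of length $L$ by

$$\Delta L = h \, L \approx 10^{-21} \times 4\,\mathrm{km} \approx 4 \times 10^{-18}\ \mathrm{m},$$

hundreds of times smaller than the diameter of a proton, which is why the timing difference between the two laser arms has to be measured on site, at the instrument.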
>> The techniques for the observational side, the technology for observation, what is that, how has that evolved? >> Well, you've got the wrong guy here. I'm from the theory group; we're doing the predictions, and these guys with their incredible technology are seeing the data, and it's amazing. The whole point is you've gotta get the predictions, and then you've gotta look in the data for a needle in a haystack, which is this signature of these black holes colliding. >> You think about that: I have a model, I'm looking for the needle in the haystack. That's a different way to describe an in-memory analytic search, a pattern recognition problem; that's really what it is. This is the world's largest pattern recognition problem. >> Most precise, and literally. >> And that's an observation that confirms your theory, right? >> Confirms the theory, maybe it was your theory. >> I'm actually a cosmologist, so in my group we have relativists who are actively working on the black hole collisions and making predictions about this stuff. >> But they're damping vibration from passing trucks and these things and correcting for it; it's unbelievable. But coming back to the technology, one of the reasons why this becomes so exciting and becomes practical is because, for the first time, the technology has gotten to the point where you can focus on the problem you're trying to solve, and you don't have to translate it into technology terms. So talk a little bit about that, because in many respects, that's where business is. Business wants to be able to focus on the problem, and how to think about the problem differently, and have the technology just respond. They don't want to have to start with the technology and then imagine what they can do with it. >> I think from our point of view, it's a very fast moving field; things are changing, new data's coming in. The data's getting bigger and bigger because instruments are getting packed tighter and tighter, there's more information, so we've got a computational problem as well, so we've got to get more computational power. But there are new types of data, like suddenly there's gravitational waves, and new types of analysis that we want to do, so we want to be able to look at this data in a very flexible way and ingest it and explore new ideas more quickly, because things are happening so fast. So that's why we've adopted this in-memory paradigm for a number of years now, and the latest incarnation of this is the HPE Superdome Flex. That's a shared memory system, so you can just pull in all your data and explore it without carefully programming how the memory is distributed around. We find this is very easy for our users, to develop data analytic pipelines, to develop their new theoretical models, and to compare the two on a single system. It's also very easy for new users to use. You don't have to be an advanced programmer to get going; you can just stay with the science, in a sense. >> You've gotta have a PhD in physics to do great physics; you don't have to have a PhD in physics and technology. >> That's right, yeah, it's a very flexible architecture with which to program, so you can more or less develop your pipeline on a laptop, take it to the Superdome, and then scale it up to these huge memory problems. >> And get it done fast, and you can iterate. >> You know, these are the most brilliant scientists in the world, bar none. I made the analogy the other day. >> Oh, thanks. >> You're supposed to say aw, shucks. >> Peter: Aw, shucks. >> Present company excepted. >> Oh yeah, that's right.
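That laptop-to-Superdome point is easy to picture. A minimal sketch, assuming a Python/NumPy pipeline and a hypothetical data file (this is not the Cosmos group's code): the same script runs unchanged on a laptop-sized file or on terabytes on a fat shared-memory node, because nothing in it says how the data is distributed.

```python
import numpy as np

# Memory-map the dataset: a few GB on a laptop, or terabytes on a
# large shared-memory node -- the same line works in both cases, with
# no MPI-style decomposition of the array across machines.
data = np.memmap("sky_map.f32", dtype=np.float32, mode="r")

# The analysis is identical in both cases: e.g., a simple
# needle-in-a-haystack scan for samples above a significance threshold.
threshold = 5.0 * data.std()
candidates = np.flatnonzero(np.abs(data) > threshold)
print(f"{candidates.size} candidate events out of {data.size} samples")
```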
I made the analogy of, imagine I.M. Pei or Frank Lloyd Wright or someone had to be their own general contractor, right? No, they're brilliant at designing architectures and imagining things that no one else could imagine, and then they had people to go do that. This allows the people to focus on the brilliance of the science without having to go become expert programmers; we see that in business too. Parallel programming techniques are difficult, spoken like an old Tandem guy; parallelism is hard, but to the extent that you can free yourself up and focus on the problem and not have to mess around with that, it makes life easier. Some problems parallelize well, but a lot of them don't need to be parallelized, and you can allow the data to shine, you can allow the science to shine. >> Is it correct that the barrier in your ability to reach a conclusion or make a discovery is the ability to find that needle in a haystack? Or maybe there are many barriers, but. >> Well, if you're talking about obstacles to progress, I would say computational power isn't the obstacle; it's developing the software pipelines, and it's the human personnel, the smart people writing the codes that can look for the needle in the haystack, who have the efficient algorithms to do that, and they're hobbled if they have to think very hard about the hardware and the architecture they're working with and how they've parallelized the problem. Our philosophy is much more that you solve the problem, you validate it, and it can be quite inefficient if you like, but as long as it's a working program that gets you to where you want, then in your second stage you worry about making it efficient, putting it on accelerators, putting it on GPUs, making it go really fast. And that's why, for many years now, we've bought these very flexible shared memory, or in-memory is the new word for it, in-memory architectures, which allow new users, graduate students, to come straight in without a Master's degree in high performance computing; they can start to tackle problems straight away. >> It's interesting, we hear the same thing. You talk about it at the outer reaches of the universe; I hear it at the inner reaches of the universe from the life sciences companies. We want to map the genome, and we want to understand the interaction of various drug combinations with that genetic structure, to say, can I tune exactly a vaccine or a drug or something else for that patient's genetic makeup to improve medical outcomes? The same kind of problem: I want to have all this data that I have to run against a complex genome sequence to find the one that gets me to the answer. From the macro to the micro, we hear this problem in all different sorts of languages. >> One of the things we have our clients, mainly in business, asking us all the time is, with each, let me step back. As analysts, not the smartest people in the world, as you'll attest I'm sure, we like to talk about change, and we always talked about the mainframe being replaced by the minicomputer being replaced by this or that. I like to talk in terms of the problems that computing's been able to take on; it's been able to take on increasingly complex, challenging, more difficult problems as a consequence of the advance of technology, very much like you're saying: the advance of technology allows us to focus increasingly on the problem. What kinds of problems do you think physicists are gonna be able to attack in the next five years or so, as we think about the combination of increasingly powerful computing and an increasingly simple approach to using it?
>> I think the simplification you're indicating here really comes from more memory. Holding your whole workload in memory is the key element: one of the biggest bottlenecks we find is ingesting the data and then writing it out, but with enough memory you can do everything at once. So one of the things we've been working on a great deal is in situ visualization, for example, so that you see the black holes coming together and you see that you've set the right parameters, that they haven't missed each other or something's gone wrong with your simulation. You do the post-processing at the same time, so you never need the intermediate data products. So, larger and larger memory, and the computational power that balances with that large memory; it's all very well to get a fat node, but not if you don't have the computational power to use all those terabytes, and that's why this in-memory architecture of the Superdome Flex is much more balanced between the two. What are the problems that we're looking forward to in terms of physics? Well, in cosmology we're looking for these hints about the origin of the universe, and we've made a lot of progress analyzing the Planck satellite data about the cosmic microwave background. We're homing in on theories of inflation, which is where all the structure in the universe comes from, from Heisenberg's uncertainty principle, in a rapid period of expansion, just like inflation in the financial markets, in the very early universe. And so we're trying to identify, can we distinguish between different types, and are they gonna tell us whether the universe comes from a higher dimensional theory, ten dimensions, reduced to three plus one, or lots of clues like that; we're looking for statistical fingerprints of these different models. In gravitational waves, of course, this whole new area: we think of the cosmic microwave background as a photograph of the early universe, but in fact gravitational waves look right back to the earliest moments, fractions of a nanosecond after the big bang, and so it may be that the answers, the clues that we're looking for, come from gravitational waves. And of course there's so much in astrophysics that we'll learn about compact objects, about neutron stars, about the most energetic events there are in the whole universe. >> I never thought about the idea, because the cosmic radiation background goes back what, about 300,000 years, if that's right? >> Yeah, that's right, you're very well informed: 400,000 years, because 300,000 is... >> Not that well informed. >> 370,000. >> I never thought about the idea of gravitational waves as being noise from the big bang, but that makes sense. >> Well, with the cosmic microwave background, we're actually looking for a primordial signal from the big bang, from inflation, so yes. Well anyway, what were you gonna say, Randy? >> No, I just, it's amazing the frontiers we're heading down; it's kind of an honor to be able to enable some of these things. I've spent 30 years in the technology business and heard customers tell me you transformed my business, or you helped me save costs, you helped me enter a new market. Never before in 30 plus years of being in this business have I had somebody tell me the things that you're providing are helping me understand the origins of the universe. It's an honor to be affiliated with you guys. >> Oh no, the honor's mine, Randy; you're producing the hardware, the tools that allow us to do this work. >> Well, now the honor's ours for coming onto the Cube.
>> That's right. How do we learn more about your work and your discoveries, your conclusions? >> In terms of looking at... >> Are there popular authors we could read, other than Stephen Hawking? >> Well, read Stephen's books, they're very good; he's got a new one called A Briefer History of Time, so it's more accessible than A Brief History of Time. >> So your website is? >> Yeah, our website is ctc.cam.ac.uk, the Centre for Theoretical Cosmology, and we've got some popular pages there, we've got some news stories about the latest things that have happened, like the HPE partnership that we're developing, and some nice videos about the work that we're doing, actually, very nice videos of that. >> Certainly, there were several videos run here this week that, if people haven't seen them, go out; they're available on Youtube, they're available at your website, they're on Stephen's Facebook page also, I think. >> Can you share that website again? >> Well, actually you can get the beautiful videos of Stephen and the rest of his group on the Discover website, is that right? >> I believe so. >> So that's at the HPE Discover website, but your website is? >> It's ctc.cam.ac.uk, and we're just about to upload those videos ourselves. >> Can I make a marketing suggestion? >> Yeah. >> Simplify that. >> Ctc.cam.ac.uk. >> Yeah, right, thank you. >> We gotta get the Cube to one of these conferences, one of these physics conferences, and talk about gravitational waves. >> Bone up a little bit; you're kind of embarrassing us here, 100,000 years off. >> He's better informed than you are. >> You didn't need to remind me, sir. Thanks very much for coming on the Cube, great pleasure having you today. >> Thank you. >> Keep it right there everybody, Mr. Universe and I will be back after this short break. (upbeat techno music)

Published Date : Nov 29 2017


Physics Successfully Implements Lagrange Multiplier Optimization


 

>> Hello everybody. My title is Physics Implements Lagrange Multiplier Optimization, and let me be very specific about what I mean by this: in physics, there are a series of principles that are optimization principles, and we are just beginning to take advantage of them. For example, most famous in physics is the principle of least action. Of equal importance is the principle of least entropy generation; that's to say, a dissipative circuit will try to adjust itself to dissipate as little as possible. There are other concepts: first-to-gain-threshold, the variational principle, the adiabatic method, and not just simulated annealing but actual physical annealing. So let's look at some of these. One that I'm sure you probably know about is the principle of least time, and this is sort of illustrated by a lifeguard who is trying to save a swimmer and runs as fast as possible along the sand and finally jumps in the water. So it's like the refraction of light: the lifeguard is trying to get to the swimmer as quickly as possible, and is trying to follow the path that takes the least amount of time. This of course occurs in optics and classical mechanics and so forth; it's the principle of least action. Let me show you another one: the principle of minimum power dissipation. Imagine you had a circuit like this, where the current was dividing unequally. Well, that would make you feel very uncomfortable. The circuit will automatically try to adjust itself so that the two branches, which are equal, actually are drawing equal amounts of current; if they are unequal, it will dissipate excess energy. So we talk about least power dissipation; a more sophisticated way of saying the same thing is least entropy production. This is actually the most common one of all. Here's one that's kind of interesting; people have made a lot of hay about this. You have lasers and you try to reach threshold, and so you have different modes on the horizontal axis, and then one mode happens to have the lowest loss, and then all the energy goes into that mode. This is first-to-gain-threshold. This is also a type of minimization principle, because physics finds the mode with the lowest gain threshold. Now, what I'll show about this is that it's not as good as it seems, because even after you reach the gain threshold, there continues to be evolution among the modes, and so it's not quite as clear cut as it might seem. Here's the famous one, the variational principle. It says you have a trial wave function, the red one; it's no good because it has too much energy. The true wave function is illustrated in green, and that one physics finds automatically: it finds the situation where the wave function has the lowest energy. Here's one, of course, that's just physical annealing, which you could also think of as simulated annealing. In simulated annealing, you add noise or you raise the temperature, or do something else to jump out of local minima. You do tend to get stuck in all of these methods; you tend to get stuck in local minima, and you have to find a strategy to jump out of those local minima, but physical annealing actually promises to give you a global optimum, so we've got to keep that one in mind. And then there's the adiabatic method. In the adiabatic method, you have modes, and I am one who believes that we could do this even classically, just with LC circuits. We have avoided crossings.
And the avoided crossings are such that you start from a solvable problem, and then you go to a very difficult to solve problem, and yet you stay in the ground state, and I'm sure you all know this. This is the adiabatic method. Some people think of it as quantum mechanical; it could be, but it's also classical, and what you're adjusting is one of the inductances in a complicated LC circuit. And this is sort of another illustration of the same thing, a little bit more complicated graph: you go from a simple Hamiltonian to a hard Hamiltonian, and you find a solution that way. So these are all minimization principles. Now, one of the preferred attributes is to have a digital answer, which we can get with bistable elements; physics is loaded with bistable elements, starting with the flip-flop, and you can imagine somehow coupling them together. I show you here just resistors, but it's very important that you don't have a pure analog machine. You want to have a machine that provides digital answers; the flip-flop is actually an analog machine, but it locks into a digital state, and so we want bistable elements that will give us binary answers. Okay, so having quickly gone through them, which of these is the best? So let's try to answer: which of these is the best for doing optimization, which physics principle might be the best? And so one of our nice problems that we like to solve is the Ising problem, and there's a way to set that up with circuits. You can have LC circuits and try to mimic the ferromagnetic case when the two circuits are in phase, and so you try to lock them into either positive or negative phase. You can do that with parametric gain: you have classical parametric gain with a two-omega modulation on a capacitor, and it's bistable. And if you have crossed couplings, then the phases tend to be opposite, and so you tend to have anti-ferromagnetic coupling. So you can mimic the Ising problem with these circuits, but there are so many ways to mimic it, so we'll see some more examples. Now, one of the main points I'm going to make today is that it's very easy to set up a physical system that not only does optimization, but also includes constraints, and the constraints we normally take into account with Lagrange multipliers. This slide is sort of an explanation of Lagrange multipliers: you're trying to go toward the absolute optimum here, but you run into the red constraint, so you get stopped right there, and the gradient of the constraint is opposite to the gradient of the merit function; they cancel each other. So this is standard stuff in college, Lagrange multiplier calculus. So if physics does this, how does it do it? Well, it does it by steepest descent; we just follow it. Physics, for example, will try to go to the state of lowest power dissipation, so it goes and minimizes the dissipation, in blue, but also tries to satisfy the constraint, and then finally we find the optimum point in some multi-dimensional configuration space. Another way of saying it is that we go from some initial state to some final state, and physics does this for you for free, because it is always trying to reduce the entropy production, the power dissipation. And so I'm going to show you now five different schemes, actually I have about eight different schemes, and they all use the principle of minimum entropy generation, but not all of them recognize it.
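Written out, the textbook picture being described is the following (standard Lagrange multiplier calculus, with the steepest-descent dynamics added here as an illustrative reading of "physics does it by steepest descent"). Minimizing a merit function $f$ subject to a constraint,

$$\min_{\mathbf{x}} f(\mathbf{x}) \quad \text{subject to} \quad g(\mathbf{x}) = 0,$$

the constrained optimum $\mathbf{x}^{*}$ satisfies

$$\nabla f(\mathbf{x}^{*}) + \lambda\,\nabla g(\mathbf{x}^{*}) = 0,$$

that is, the two gradients are antiparallel and cancel. A dissipative system that follows steepest descent on the combined function,

$$\frac{d\mathbf{x}}{dt} = -\nabla\big(f(\mathbf{x}) + \lambda\,g(\mathbf{x})\big),$$

relaxes toward exactly that point.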
So here's some work from my colleague Roychowdhury, here in my department, and he has these very amplitude-stable oscillators, but they tend to lock into a phase, and in this way it's a natural for solving the Ising problem. And if you analyze it in detail, and I'll show you the link to the arXiv paper where we've shown this, this one is trying to satisfy the principle of minimum entropy generation, and it includes constraints. The most important constraint for us is that we want a digital answer, so we want to have either a plus or a minus as the answer, and the parametric oscillator permits that. He's not using a parametric oscillator, he's using something a little different, but it's somewhat similar; he's using a sort of second-harmonic locking, which is similar to the parametric oscillator. And here's another approach, from England, Cambridge University; I have the symbol of the university here. And they got very excited: they have polaritons, exciton-polaritons, they were very excited about that, but to us they're really just coupled electromagnetic modes, created by optical excitation. And they lock into definite phases, and, no big surprise, it also follows the same principle: it tends to lock in in such a way that it minimizes the power dissipation, and it is very easy to include the digital constraint in there. So that's yet another example; of course, all the examples I'm showing you from the literature are all following the principle of minimum entropy generation, though this is not always acknowledged by the authors. This is the Yamamoto Stanford approach; thank you very much for inviting me. We've analyzed this one, and we think we know what's going on here. I think the quantum mechanical version could be very interesting, possibly, but the versions that are out there right now are dissipative: there's dissipation in the optical fiber, and it's overcome by the parametric gain. And the net conclusion is that the different optical parametric oscillator pulses are trying to organize themselves in such a way as to minimize the power dissipation. So it's based upon minimum entropy generation, which for our purposes is synonymous with minimizing the power dissipation. And of course, very beautifully done; it is a very beautiful system, because it's time-multiplexed and it locks into digital answers. So that's very nice. Here's something different, not the Ising problem, from MIT. It is an optimizer, an optimizer for artificial intelligence. It uses silicon photonics and does unitary operations. We've gone through this very carefully; I'm sure the people at MIT think they have something very unusual, but to us, this is usual. This is an example of minimizing the power dissipation: as you go around over and over again through the silicon photonics, you end up minimizing the power dissipation. It's kind of surprising. The principle of minimum entropy generation, again. Okay. And this is from my own group, where we try to mimic the coherent Ising machine, except it's just electrical. This is an anti-ferromagnetic configuration; if the resistors were connected the other way, it would be a ferromagnetic configuration, and we can arrange that. So I've just done five of my schemes; I think I could have done a few more, but we're running out of time.
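As a concrete picture of what these schemes have in common, here is a minimal numerical sketch (a toy model constructed for illustration, not any of these groups' actual systems): an Ising energy relaxed by steepest descent, with a double-well term standing in for the bistable element that forces a digital plus-or-minus answer.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T        # symmetric couplings, zero diagonal

def grad(x, lam=1.0):
    # Gradient of E(x) = -1/2 x.J.x + lam * sum((x_i^2 - 1)^2):
    # the first term is the Ising "power dissipation" being minimized,
    # the second is the double-well (bistability) constraint.
    return -J @ x + 4.0 * lam * x * (x**2 - 1.0)

x = 0.1 * rng.standard_normal(n)           # small analog fluctuations to start
for _ in range(2000):                       # steepest descent = dissipative flow
    x -= 0.01 * grad(x)

spins = np.sign(x)                          # the wells give digital +/-1 answers
print(spins, -0.5 * spins @ J @ spins)      # configuration and its Ising energy
```

Like the physical machines, this finds a local minimum; nothing in the dynamics guarantees the global optimum.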
But all of these optimization approaches are similar in that they're based upon minimum entropy generation, which is, I don't want to say a law of physics, but it's accepted by many physicists, and you have different examples, including particularly MIT's optimizer for artificial intelligence, that all seem to take advantage of this type of physics. So they're all versions of minimum entropy generation. The physics hardware implements steepest descent physically, and because of the constraint, it produces a binary output, which is digital in the same sense that a flip-flop is digital. What's the promise? The promise is that the physics-based hardware will perform the same function at far greater speed and far less power dissipation. Now, the challenge of global optimization remains unsolved. I don't think anybody has a solution to the problem of global optimization; we can try to do better, we can get a little closer. But even setting that aside, there are all these terrific applications in deep learning and in neural network back-propagation, artificial intelligence, control theory. So there are many applications: operations research, biology, et cetera. But there are a couple of action items needed to go further. I believe that the electronic implementation is perhaps a little easier to scale, and so we need to design some chips. We need a chip with an array of oscillators; if you had a thousand LC oscillators on a chip, I think that would already be very interesting. But you need to interconnect them, and this would require a resistive network with about a million resistors. I think that can also be done on a chip. So minimizing the power dissipation is the whole point, but there is an accuracy problem: the resistors have to be very precise. But there's good news: resistors can be programmed very accurately, and I'll be happy to take questions on that. A later step, though, once we have the chips, is that we need compiler software to convert a given problem into the resistance values that will fit within these oscillator chips. So let me pause then for questions, and thank you very much for your attention.
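To illustrate the kind of mapping that compiler software would have to perform (purely a sketch of the idea, with an assumed resistance scaling and no real chip behind it), here is one way Ising couplings could be turned into a table of programmed resistors:

```python
import numpy as np

R0 = 100_000.0  # ohms; an assumed full-scale resistance for |J| = 1

def compile_couplings(J, R0=R0):
    """Map a symmetric Ising coupling matrix J to resistor placements.

    Stronger coupling -> lower resistance (more current exchanged);
    the sign of J would select straight vs. cross coupling between
    oscillators in hardware.
    """
    netlist = []
    n = J.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if J[i, j] != 0.0:
                netlist.append((i, j, R0 / abs(J[i, j]), J[i, j] < 0.0))
    return netlist  # entries: (node_i, node_j, ohms, cross_coupled)

J = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.25],
              [-0.5, 0.25, 0.0]])
for entry in compile_couplings(J):
    print(entry)
```

The real problem is harder than this, of course: limited resistor precision and a fixed on-chip network mean the compiler also has to round, scale, and embed the problem graph.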

Published Date : Sep 24 2020


Day 2 Livestream | Enabling Real AI with Dell


 

>> From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube conversation. >> Hey, welcome back, everybody. Jeff Frick here with the Cube. We're doing a special presentation today, really talking about AI, and making AI real, with two companies that are right at the heart of it: Dell EMC as well as Intel. So we're excited to have a couple of Cube alumni back on the program; haven't seen them in a little while. First off, from Intel, Lisa Spelman. She is the corporate VP and GM for the Xeon and Memory Group. Great to see you, Lisa. >> Good to see you again, too. >> And we've got Ravi Pendekanti. He is the SVP of server product management, also from Dell Technologies. Ravi, great to see you as well. >> Good to see you, and Lisa, of course. >> Yes. So let's jump into it. So, yesterday, Ravi, you guys announced a bunch of new AI-based solutions; if you can, take us through that. >> Absolutely. So one of the things we did, Jeff, was we said it's not good enough for us to have a point product, but we talked about a whole portfolio of products; more importantly, everything from our workstation side to the servers, to the storage elements, and things that we're doing with VMware, for example. Beyond that, we're also obviously pleased with everything we're doing in bringing the right set of validated configurations and reference architectures and ready solutions, so that the customer really doesn't have to go ahead and do the due diligence of figuring out how the various integration points come together; for us it's about making a solution possible. Obviously, all this is based on the great partnership we have with Intel, using not just their CPUs, but FPGAs as well. >> That's great. So, Lisa, I wonder, you know, I think a lot of people, you know, obviously everybody knows Intel for your CPUs, but I don't think they recognize kind of all the other stuff that can wrap around the core CPU to add value around a particular solution set or problem. So I wonder if you could tell us a little bit more about the Xeon family and what you guys are doing in the data center with this kind of new, interesting thing called AI and machine learning. >> Yeah, um, so thanks, Jeff and Ravi. It's amazing to see the way that artificial intelligence applications are just growing in their pervasiveness, and you see it taking off across all sorts of industries, and it's actually being built into just about every application that is coming down the pipe. And so if you think about needing to have your hardware foundation able to support that, that's where we're seeing a lot of the customer interest come in, and not just on Xeon but, like Ravi said, on the whole portfolio and how the system and solution configurations come together. So we're approaching it from a total view of being able to move all that data, store all of that data, and process all of that data, and providing options along that entire pipeline, um, and within that, on Xeon specifically, we've really set that as our cornerstone foundation for AI. If it's the most deployed solution and data center CPU around the world, and every single application is going to have artificial intelligence in it, it makes sense that you would have artificial intelligence acceleration built into the actual hardware, so that customers get a better experience right out of the box, regardless of which industry they're in or which specialized function they might be focusing on.
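What "AI acceleration built into the hardware" typically comes down to at the instruction level is low-precision arithmetic. A rough illustration of the underlying math (generic INT8 quantization in NumPy; a sketch of the idea, not a description of any specific Xeon feature):

```python
import numpy as np

def quantize(x):
    # Per-tensor symmetric quantization of FP32 values to INT8.
    scale = np.abs(x).max() / 127.0
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8), scale

x = np.random.randn(4, 8).astype(np.float32)   # activations
w = np.random.randn(8, 3).astype(np.float32)   # weights
xq, sx = quantize(x)
wq, sw = quantize(w)

# INT8 matmul accumulated in INT32, then rescaled back to FP32 --
# the operation that built-in vector INT8 instructions accelerate.
y = (xq.astype(np.int32) @ wq.astype(np.int32)) * (sx * sw)
print("max quantization error:", np.abs(y - x @ w).max())
```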
>> It's really wild, right? Because in process, you always move to your next point of failure. So, you know, having all these kinds of accelerants, and the ways that you can carve off parts of the workload, parts of the intelligence that you can optimize better, is so important, as you said, Lisa. And also, Ravi, on the solution side: nobody wants general AI just for AI's sake. It's a nice word, an interesting science experiment, but it's really in the applied AI world that we're starting to see the value, in the application of this stuff. And I wonder, you have a customer you want to highlight, Epsilon; tell us a little bit about their journey and what you guys did with them. >> Great, sure. I mean, if you start looking at Epsilon, they're in the marketing business, and one of the crucial things for them is to ensure that they're able to provide the right data, based on the analysis they run, on what it is that the customer is looking for. And they can't wait a long period of time; they need to be doing that on a near real-time basis, and that's what Epsilon does. And what really blew my mind was the fact that they actually send out close to 100 billion messages; again, that's 100 billion messages a year. And so you can imagine the amount of data that they're analyzing, which is petabytes of data, and they need to do it in real time. And that's all possible because of the kind of analytics we have driven into the PowerEdge servers, you know, using the latest Intel Xeon processors coupled with some of the technologies from the FPGA side, which again allow them to go back in and analyze this data and serve it to their customers very rapidly. >> You know, it's funny, I think MarTech is kind of an underappreciated world of AI, and, you know, it's machine-to-machine execution, right? That's the amount of transactions that go through when you load a webpage on your site: it actually IDs who you are, you know, puts a marketplace together, sells time on that or a spot on that ad, and then lets people in. It's really sophisticated, as you said, and massive amounts of data going through. It's interesting stuff: if it's done right, it's magic, and if it's done not right, then people get pissed off. You gotta have the right tools. >> You got it. I mean, this is where I talked about, you know, it can be garbage in, garbage out if you don't really act on the right data, right? So that is where I think it becomes important, but also, if you don't do it in a timely fashion, and you don't serve up the right content at the right time, you miss the opportunity to go ahead and grab attention, right? >> Right. Lisa, kind of back to you. Um, you know, there's all kinds of open source stuff that's happening also in the AI and machine learning world, so we hear things about TensorFlow and all these different libraries. How are you guys, you know, kind of embracing that world as you look at AI and kind of the development? You've been at it for a while; you guys are involved in everything from autonomous vehicles to the MarTech, as we discussed. How are you making sure that these things are using all the available resources to optimize the solutions? >> Yeah, I think you and Ravi were just hitting on some of those examples of how many ways people have figured out how to apply AI now. So maybe at first it was really driven by just image recognition and image tagging.
But now you see so much work being driven in recommendation engines, and in object detection for much more industrial use cases, not just consumer enjoyment, and also those things you mentioned and hit on, where personalization is a really fine line you walk: how you make an experience feel good-personalized versus creepy-personalized is a real challenge and opportunity across so many industries. And so open source, like you mentioned, is a great place for that foundation, because it gives people the tools to build upon. And I think our strategy is really a stack strategy that starts first with delivering the best hardware for artificial intelligence, and again, Xeon is the foundation for that, but we also have, you know, Myriad-type processing for out at the edge, and then we have, all the way through to the data center, very custom, specific accelerators. Then, on top of that, the optimized software, which is going into each of those frameworks and doing the work so that the framework recognizes the specific acceleration we built into the CPU, whether that's DL Boost, or recognizes the capabilities that sit in that accelerator silicon. And then once we've done that software layer, and this is where we have the opportunity for a lot of partnership, there's the ecosystem and the solutions work that Ravi started off by talking about. So AI isn't, um, it's not easy for everyone. It has a lot of value, but it takes work to extract that value. And so partnerships within the ecosystem, to make sure that ISVs are taking those optimizations, building them in, and fundamentally can deliver to customers a reliable solution, are the last leg of that strategy, but it really is one of the most important, because without it you get a lot of really good benchmark results, but not a lot of good, happy customers. >> Right. I'm just curious, Lisa, because you kind of sit in the catbird seat. You guys are at the core, you know, kind of under all the layers, running data centers, running these workloads. How do you see kind of the evolution of machine learning and AI, from kind of the early days, when it was science projects and really smart people on mahogany row, versus now, when people are talking about trying to get it to, like, a citizen developer, but really a citizen data scientist, and, you know, exposing the power of AI to business leaders or business executioners, analysts if you will, so they can apply it to their day-to-day world, in their day-to-day life? How do you see that kind of evolving? Because you're not only in it early, but you get to see some of the stuff coming down the road in design wins and reference architectures. How should people think about this evolution? >> It really is one of those things where, if you step back from the fundamentals of AI, they've actually been around for 50 or more years. It's just that the changes in the amount of computing capability that's available, the network capacity that's available, and the fundamental efficiency that IT and infrastructure managers can get out of their cloud architectures has allowed for this pervasiveness to evolve, and I think that's been the big tipping point that pushed people past that fear.
Of course, AI went through the same thing that cloud did, where you had maybe every business leader or CEO saying, hey, get me a cloud and I'll figure out what for later; get me some AI and we'll make it work. But we're through those initial use cases and starting to see business value derived from those deployments. And I think some of the most exciting areas are in the medical services field: just the amount, especially if you think of the environment we're in right now, the amount of efficiency, and in some cases reduction in human contact, that you can bring to diagnostics, and just patient tracking, and the ability to follow an entire patient history, is really powerful, and represents the next wave in care and how we scale our limited resource of doctors, nurses, technicians. And the point we're making of what's coming next is where you start to see even more mass personalization, and recommendations in a way that feels not spooky to people but actually comforting, and they take value from them because it allows them to immediately act. Ravi referenced the speed at which you have to utilize the data; when people can act immediately and more efficiently, they're generally happier with the service. So we see so much opportunity, and we're continuing to address it across, you know, again, that hardware, software and solution stack, so we can stay a step ahead of our customers. >> Right. That's great. Ravi, I want to give you the final word, because you guys have to put the solutions together and actually deliver them to the customer. So not only, you know, the hardware and the software, but any other kind of ecosystem components that you have to bring together. So I wonder if you can talk about that approach, and how, you know, it's really the solution at the end of the day; not specs, not speeds and feeds, that's not really what people care about. It's really a good solution. >> Yeah, exactly, Jeff, because at the end of the day, I mean, it's like this: most of us probably use an ATM to retrieve money, but we really don't know what sits behind the ATM, and my point being that all you really care about at that particular point in time is to be able to put your card into the machine and get your dollar bills out, for example. Likewise, when you start looking at what the customer really needs, what Lisa hit upon is actually right; I mean, what they're looking for, and you said this, is the whole solution. Our mantra for this is very simple: we want to make sure that we use the right basic building blocks, ensuring that we bring the right solutions, using three things. First, the right products, which essentially means that we need to use the right partners to get the right processors and GPUs in. Then we get to the next level by ensuring that we can actually provide ready solutions or validated reference architectures, meaning the sausage-making process is something the customer now doesn't need to go through, right? In a way, we have done the cooking, and we provide a recipe book; you just go through the ingredients, pair things up, and then off you go to get your solution done. And finally, in the final stages, there might be help that customers still need in terms of services; that's something else Dell Technologies provides, and the whole idea is that if customers want help deploying the solutions, we can also do that with our services. So that's the way we approach it.
The way we approach it, you know, is providing the building blocks, using the right technologies from our partners, then making sure that we have the right solutions that our customers can look at, and finally, if they need deployment help, we can do that through our services. >> Well, Ravi, Lisa, thanks for taking a few minutes. That was a great tee-up, Ravi, because I think we're gonna go to a couple of customer interviews, enjoying that nice meal that you prepared with that combination of hardware, software, services and support. So thank you for your time, and it was great to catch up. All right, let's go and run the tape. >> Hi, Jeff. I wanted to talk about two examples of collaboration that we have with partners that have yielded real examples of breakthrough HPC and AI activities. So the first example that I wanted to cover is with a neuroscience team up in Canada. With that team, we collaborated with Intel on tuning algorithms and code in order to accelerate the mapping of the human brain. We have a cluster down here in Texas called Zenith, based on Xeon and Optane memory, and we were able to help that customer with the three of us working as friends: Intel, the team in Canada, and the Dell HPC and AI Innovation Lab engineering team, going and accelerating the mapping of the human brain. So imagine patients playing video games or doing all sorts of activities that help understand how the brain sends the signals that trigger a response of the nervous system. And it's not only a good way to map the human brain; think about what you can do with that type of information in order to help cure Alzheimer's or dementia down the road. So this is really something I'm passionate about: using technology to help all of us, and all of those that are suffering from those really tough diseases. >> I'm a project manager for the project, and the idea is actually to scan six participants really intensively, in both the MRI scanner and the MEG scanner, and see if we can use human brain data to get closer to something called generalized intelligence. What we have in the AI world are systems that are mathematically, computationally built; often they do one task really, really well, but they struggle with other tasks. A really good example of this is video games. Artificial neural nets can often outperform humans in video games, but they don't really play in a natural way. An artificial neural net playing Mario Brothers, the way that it beats the system is by actually kind of gliding its way through as quickly as possible, and it doesn't, like, collect pennies. If you played Mario Brothers as a child, you know that collecting those coins is part of the game. And so the idea is to get artificial neural nets to behave more like humans. Transfer of knowledge is just something that humans do really, really well and very naturally; it doesn't take 50,000 examples for a child to know the difference between a dog and a hot dog, which one you eat and which one you play with, but an artificial neural net can often take massive computational power and many examples before it understands.
>>Yeah, yeah, >>Use I was the lead posts. Talk on this collaboration with Dell and Intel. She's trying to work on a model called Graph Convolution Neural nets. >>We have being involved like two computing systems to compare it, like how the performance >>was voting for The lab relies on both servers that we have internally here, so I have a GPU server, but what we really rely on is compute Canada and Compute Canada is just not powerful enough to be able to run the models that he was trying to run so it would take her days. Weeks it would crash, would have to wait in line. Dell was visiting, and I was invited into the meeting very kindly, and they >>told us that they started working with a new >>type of hardware to train our neural nets. >>Dell's using traditional CPU use, pairing it with a new >>type off memory developed by Intel. Which thing? They also >>their new CPU architectures and really optimized to do deep learning. So all of that sounds great because we had this problem. We run out of memory, >>the innovation lab having access to experts to help answer questions immediately. That's not something to gate. >>We were able to train the attic snatch within 20 minutes. But before we do the same thing, all the GPU we need to wait almost three hours to each one simple way we >>were able to train the short original neural net. Dell has been really great cause anytime we need more memory, we send an email, Dell says. Yeah, sure, no problem. We'll extended how much memory do you need? It's been really simple from our end, and I think it's really great to be at the edge of science and technology. We're not just doing the same old. We're pushing the boundaries. Like often. We don't know where we're going to be in six months. In the big data world computing power makes a big difference. >>Yeah, yeah, yeah, yeah. The second example I'd like to cover is the one that will call the data accelerator. That's a publisher that we have with the University of Cambridge, England. There we partnered with Intel on Cambridge, and we built up at the time the number one Io 500 storage solution on. And it's pretty amazing because it was built on standard building blocks, power edge servers until Xeon processors some envy me drives from our partners and Intel. And what we did is we. Both of this system with a very, very smart and elaborate suffering code that gives an ultra fast performance for our customers, are looking for a front and fast scratch to their HPC storage solutions. We're also very mindful that this innovation is great for others to leverage, so the suffering Could will soon be available on Get Hub on. And, as I said, this was number one on the Iot 500 was initially released >>within Cambridge with always out of focus on opening up our technologies to UK industry, where we can encourage UK companies to take advantage of advanced research computing technologies way have many customers in the fields of automotive gas life sciences find our systems really help them accelerate their product development process. Manage Poor Khalidiya. I'm the director of research computing at Cambridge University. Yeah, we are a research computing cloud provider, but the emphasis is on the consulting on the processes around how to exploit that technology rather than the better results. Our value is in how we help businesses use advanced computing resources rather than the provision. 
>> Within Cambridge, we have always had a focus on opening up our technologies to UK industry, where we can encourage UK companies to take advantage of advanced research computing technologies. We have many customers in the fields of automotive, oil and gas, and life sciences who find our systems really help them accelerate their product development process. I'm Paul Calleja, the director of research computing at Cambridge University. We are a research computing cloud provider, but the emphasis is on the consulting, on the processes around how to exploit that technology, rather than the bare resource itself. Our value is in how we help businesses use advanced computing resources rather than in the provision of those resources. We see increasingly more and more data being produced across a wide range of verticals: life sciences, astronomy, manufacturing. So the Data Accelerator was created as a component within our data center compute environment, because data processing is becoming a more and more central element within research computing. We're getting very large data sets, traditional spinning-disk file systems can't keep up, and we find applications being slowed down due to a lack of data. So the Data Accelerator was born to take advantage of new solid-state storage devices. We tried to work out how we could have a staging mechanism for keeping your data on spinning disk when it's not required, pre-staging it on fast NVMe storage devices so that it can feed the applications at the rate required for maximum performance. We have the highest AI capability available anywhere in the UK, where we match AI compute performance with very high storage performance, because for AI, high-performance storage is a key element to get the performance up. Currently, the Data Accelerator is the fastest HPC storage system in the world; we are able to obtain 500 gigabytes a second read/write, with IOPS up in the 20 million range. We provide advanced computing technologies that allow some of the brightest minds in the world to really push scientific and medical research. We enable some of the greatest academics in the world to make tomorrow's discoveries. >> Alright, welcome back. Jeff Frick here, and we're excited for this next segment. We're joined by Jeremy Raider. He is the GM of Digital Transformation and Scale Solutions for Intel Corporation. Jeremy, great to see you. >> Hey, thanks for having me. >> I love the flowers in the backyard. I thought maybe you ran over to the Japanese Garden or the Rose Garden, two very beautiful places to visit in Portland. >> Yeah, you know, you only get them for a couple weeks here, so we got the timing just right. >> Excellent. All right, so let's jump into it. This conversation really is all about making AI real. You guys are working with Dell, and not only Dell, right? There's the hardware and software, but also a lot of these smaller AI solution providers. So what are some of the key attributes needed to make AI real for your customers out there? >> Yeah, so, you know, it's a complex space. So when you can bring the best of the Intel portfolio, which is expanding a lot (it's not just the CPU anymore, you're getting into memory technologies, network technologies, and, a little less known, how many resources we have focused on the software side of things, optimizing frameworks and these key ingredients and libraries), you can stitch all of that into the portfolio to really get more performance and value out of your machine learning and deep learning space. And so what we've really done here with Dell is start to bring a bunch of that portfolio together with Dell's capabilities, and then bring in that ISV partner, that software vendor, where we can really stitch together and bring the most value out of that broad portfolio, ultimately reducing the complexity of what it takes to deploy an AI capability. So a lot going on there: bringing in kind of the three-legged stool of the software vendor, the hardware vendor, and Dell into the mix, and you get a really strong outcome.
>> Right. So before we get to the solutions piece, let's dig a little bit into the Intel world. I don't know if a lot of people are aware that, obviously, you guys make CPUs, and you've been making great CPUs forever, but there's a whole lot more that you've added around the core CPU, in terms of actual libraries and ways to really optimize the Xeon processors to operate in an AI world. I wonder if you can take us a little bit below the surface on how that works. What are some examples of things you can do to get more from your Intel processors for AI-specific applications and workloads? >> Yeah, well, you know, there's a ton of software optimization that goes into this. Having the great CPU is definitely step one, but ultimately you want to get down into the libraries, like TensorFlow; we have data analytics acceleration libraries. That really allows you to get, again, under the covers a little bit and look at how we get the most out of the kinds of capabilities that are ultimately used in machine learning and deep learning, and then bring that forward and enable it with our software vendors, so that they can take advantage of those acceleration components and ultimately benefit from less training time, or it could be the cost factor. Those are the kinds of capabilities we want to expose to software vendors through these kinds of partnerships.
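One concrete, current example of the drop-in library acceleration Jeremy is describing: Intel publishes an extension for scikit-learn whose patch_sklearn() call reroutes supported estimators to optimized implementations without any other code changes. A minimal sketch follows, assuming the scikit-learn-intelex package is installed; it illustrates the idea rather than vouching for any particular speed-up figure.

```python
# Requires: pip install scikit-learn-intelex  (assumption: package is available)
from sklearnex import patch_sklearn
patch_sklearn()   # supported sklearn estimators now use Intel's optimized kernels

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic workload: cluster 200k points; the calling code is unchanged,
# only the implementation underneath is swapped by the patch above.
X, _ = make_blobs(n_samples=200_000, centers=8, n_features=32, random_state=0)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])
```

The design point is exactly the one made in the interview: the acceleration lives in the library layer, so the data scientist's code does not have to change to take advantage of it.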
>> Okay, and that's terrific. And I do think that's a big part of the story that a lot of people are probably not as aware of: there are a lot of these optimization opportunities that you guys have been leveraging for a while. So, shifting gears a little bit, AI and machine learning is all about the data, and in doing a little research for this, I found you on stage talking about some company that had 315 petabytes of data, 140,000 sources of that data, and, probably not a great quote, six months of access time to actually get at it and work with it. And the company you were referencing was Intel. So you guys know a lot about data and managing data, everything from your manufacturing to obviously supporting a global organization for IT, with a lot of complexity and secrets and good stuff. So what have you guys leveraged as Intel in the way you work with data and getting a good data pipeline that's enabling you to put that into these other solutions that you're providing to the customers? >> Right. Well, you know, it's absolutely a journey, and it doesn't happen overnight. We've seen it at Intel, and we see it with many of our customers that are on the same journey that we've been on. This idea of building that pipeline really starts with what kinds of problems you're trying to solve: what are the big issues that are holding you back as a company, where is the competitive advantage that you're trying to get to, and then, ultimately, how do you build the structure to enable the right kind of pipeline for that data? Because that's what machine learning and deep learning is: that data journey. So really, a lot of focus around how we can understand those business challenges and bring forward those kinds of capabilities along the way, through to where we structure our entire company around those assets, and then ultimately some of the partnerships that we're going to be talking about, these companies that are out there to help us really squeeze the most out of that data as quickly as possible, because otherwise it goes stale real fast, sits on the shelf, and you're not getting that value out of it. So yeah, we've been on the journey. It's a long journey, but ultimately we can take a lot of those learnings and apply them to our silicon technology and the software optimizations that we're doing, and ultimately to how we talk to our enterprise customers about how they can overcome some of the same challenges that we did. >> Well, let's talk about some of those challenges specifically, because I think part of the challenge that kind of knocked big data, if you will, and Hadoop, if you will, off the rails a little bit was that there's a whole lot that goes into it besides just doing the analysis. There's a lot of data practice, data collection, data organization, a whole bunch of things that have to happen before you can actually start to do the sexy stuff of AI. So what are some of those challenges, and how are you helping people get over these baby steps before they can really get into the deep end of the pool? >> Yeah, well, you know, one is you have to have the resources. Do you even have the resources, and if you can acquire those resources, can you keep them interested in the kind of work that you're doing? So that's a big challenge, and actually we'll talk about how that fits into some of the partnerships that we've been establishing in the ecosystem. It's also easy to get stuck in this POC do-loop, right? You finally get those resources, and they start to get access to that data we talked about, they start to play out some scenarios and theorize a little bit. Maybe they show you some really interesting value, but it never seems to make its way into full production mode. And I think that is a challenge that has faced so many enterprises that are stuck in that loop. So that's where we look at who's out there in the ecosystem that can help more readily move through that whole process of the evaluation, proving the ROI, the POC, and ultimately moving that capability into production mode as quickly as possible. That, to me, is one of the fundamental aspects: if you're stuck in the POC, nothing's happening, and this is not helping your company. We want to move things more quickly. >> Right, right. And let's just talk about some of these companies that you guys are working with, that you've got some reference architectures with: DataRobot, Grid Dynamics, H2O just down the road in Antigua, so a lot of the companies we've worked with at theCube. And I think another part that's interesting, and again we can learn from the old days of big data, is generalized AI versus solution-specific AI. I think where there's a real opportunity is not AI for AI's sake; it really has to be applied to a specific solution, a specific problem, so that you have, you know, better chatbots, a better customer service experience, better something.
So when you were working with these folks and trying to design solutions, what were some of the opportunities you saw to work with them, to end up with an applied application or solution versus just kind of AI for AI's sake? >> Yeah. I mean, that could be anything from fraud detection in financial services, or even taking a step back and looking more horizontally, like back to that data challenge. If you're stuck where you've built a fantastic data lake but haven't been able to pull anything back out of it, who are some of the companies out there that can help overcome some of those big data challenges and ultimately get you to where you don't have a data scientist spending 60% of their time on data acquisition and pre-processing? That's not where we want them, right? We want them building out that next theory, we want them looking at the next business challenge, we want them selecting the right models, but ultimately they have to do that as quickly as possible, so that they can move that capability forward into the next phase. So really, it's about that connection of looking at those problems or challenges across the whole end-to-end pipeline. And these companies, like DataRobot and H2O, are all addressing specific challenges in that end-to-end; that's why they've bubbled up as ones that we want to continue to collaborate with, because they can help enterprises overcome those issues more readily. >> Great. Well, Jeremy, thanks for taking a few minutes and giving us the Intel side of the story. It's a great company, it's been around forever; I worked there many, many moons ago, but that's a story for another time. Really appreciate it, and we'll leave it there. >> All right, super. Thanks a lot. >> So, he's Jeremy, I'm Jeff Frick. Now it's time to go ahead and jump into the crowd chat: it's crowdchat dot net slash make AI real. We'll see you in the chat, and thanks for watching.

Published Date : Jun 3 2020


Sharad Singhal, The Machine & Michael Woodacre, HPE | HPE Discover Madrid 2017


 

>> Man: Live from Madrid, Spain, it's the Cube! Covering HPE Discover Madrid, 2017. Brought to you by: Hewlett Packard Enterprise. >> Welcome back to Madrid, everybody, this is The Cube, the leader in live tech coverage. My name is Dave Vellante, I'm here with my co-host, Peter Burris, and this is our second day of coverage of HPE's Madrid conference, HPE Discover. Sharad Singhal is back, Director of Machine Software and Applications at Hewlett Packard Labs. >> Good to be back. >> And Mike Woodacre is here, a distinguished engineer from Mission Critical Solutions at Hewlett-Packard Enterprise. Gentlemen, welcome to the Cube, welcome back. Good to see you, Mike. >> Good to be here. >> Superdome Flex is all the rage here (laughs) at this show. You guys are happy about that? You were explaining off-camera that it's the first jointly-engineered product from SGI and HPE, so you hit a milestone. >> Yeah, I came into Hewlett Packard Enterprise just over a year ago with the SGI acquisition. We were already working on our next-generation in-memory computing platform. We basically hit the ground running, integrated the engineering teams immediately after we closed the acquisition so we could drive through to the finish line, and with the product announcement just recently, we're really excited to get that out into the market. It really represents the leading in-memory computing system in the industry. >> Sharad, high-performance computing has always been big data, needing big memories, lots of performance... How has, or has, the acquisition of SGI shaped your agenda in any way, or your thinking, or advanced some of the innovations that you guys are coming up with? >> Actually, it was truly like a meeting of the minds when these guys came into HPE. We had been talking about memory-driven computing, the machine prototype, for the last two years. Some of us were aware of it, but a lot of us were not aware of it. These guys had been working essentially in parallel on similar concepts. Some of the work we had done, we were thinking in terms of our road maps, and they were looking at the same things. Their road maps were looking incredibly similar to what we were talking about. As the engineering teams came together, we brought both the Superdome X technology and the UV300 technology into this new product that Mike can talk a lot more about. From my side, I was talking about the machine and the machine research project. When I first met Mike and started talking to him about what they were doing, my immediate reaction was, "Oh wow, wait a minute, this is exactly what I need!" I was talking about something where I could take the machine concepts and deliver products to customers in the 2020 time frame. With the help of Mike and his team, we are now able to do something where we can take the benefits we were describing in the machine program and make those ideas available to customers right now. I think to me that was the fun part of this journey here. >> So what are the key problems that your team is attacking with this new offering? >> The primary use case for the Superdome Flex is really high-performance in-memory database applications; typically, SAP HANA is sort of the industry-leading solution in that space right now. One of the key things with the Superdome Flex, you know, Flex is the operative word: it's the flexibility. You can start with a small four-socket, three-terabyte building block, and then you just connect these boxes together.
The memory footprint just grows linearly. The latency across our fabric stays constant as you add these modules together. We can deliver up to 32 processors and 48 terabytes of in-memory data in a single rack. So it's really the flexibility, sort of a pay-as-you-grow model. As their needs grow, customers don't have to throw out the infrastructure; they can add to it.
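The pay-as-you-grow arithmetic is simple enough to sketch. The quoted building block is a four-socket, three-terabyte chassis scaling to 32 sockets; note that the quoted 48-terabyte maximum implies denser memory configurations than three terabytes per chassis, so the per-chassis capacity below is an adjustable assumption rather than a product specification.

```python
import math

SOCKETS_PER_CHASSIS = 4      # building block quoted in the interview
TB_PER_CHASSIS = 3.0         # per-chassis memory; actual SKUs vary (assumption)
MAX_SOCKETS = 32             # quoted scaling limit for a single system

def chassis_needed(dataset_tb: float) -> int:
    """How many chassis a given in-memory dataset implies at this chassis size."""
    n = math.ceil(dataset_tb / TB_PER_CHASSIS)
    if n * SOCKETS_PER_CHASSIS > MAX_SOCKETS:
        raise ValueError("dataset exceeds a single system at this chassis size")
    return n

for tb in (2, 10, 24):
    n = chassis_needed(tb)
    print(f"{tb} TB in memory -> {n} chassis, {n * SOCKETS_PER_CHASSIS} sockets")
```

The point of the model is the one made above: capacity grows in fixed increments, so a customer sizes for today and adds chassis as the database grows instead of replacing the system.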
>> So when you take a look ultimately at the combination, we talked a little bit about some of the new types of problems that can be addressed, but let's bring it practical to the average enterprise. What can the enterprise do today, as a consequence of this machine, that they couldn't do just a few weeks ago? >> So it sort of builds on the modularity, as Lance explained. If you ask a CEO today, "what's my database requirement going to be in two or three years?", they're like, "I hope my business is successful, I hope I'm gonna grow my needs," but they really don't know where that size is going to grow. So there's the flexibility to just add modules and scale up the capacity of memory. And the whole concept of in-memory databases is basically bringing your online transaction processing and your data analytics processing together, so you can do this in real time, and instead of your data going to a data warehouse and looking at how the business was operating days or weeks or months ago, I can see how it's acting right now, with the latest updates of transactions. >> So this is important. You mentioned two different things, number one... or three things. You can start using modern technology immediately on an extremely modern platform. Number two, you can grow this and scale this as needs grow, because HANA in-memory is not gonna have the same scaling limitations that, you know, Oracle on a bunch of spinning disks had. >> Exactly. >> So you still have the flexibility to learn, and then, very importantly, you can start adding new functions, including automation, because now you can put the analytics and the transaction processing together, close that loop, so you can bring transactions and analytics, boom, into a piece of automation, and scale that in unprecedented ways. That's kind of three things that the business can now think about. Have I got that right? >> Yeah, that's exactly right. It lets people really understand how their business is operating in real time, look for trends, look for new signatures in how the business is operating. They can basically build on their success, and having this sort of technology gives them a competitive advantage over their competitors, so they can out-compute or out-compete and get ahead of the competition. >> But it also presumably leads to new kinds of efficiencies, because you can converge, that converge word that we've heard so much. You can not just converge the hardware and converge the system software management, but you can now increasingly converge tasks, bringing those tasks into the system, but also, at a business level, down onto the same platform. >> Exactly. And so moving in-memory is really about bringing real time to the problem: instead of batch-mode processing, you bring in the real-time aspect. Humans, we're interactive, we like to ask a question, get an answer, get on to the next question in real time. When processes move from batch mode to real time, you just get a step change in the innovation that can occur. We think with this foundation we're really enabling the industry to step forward. >> So let's create a practical example here. Let's apply this platform to a sizeable system that's looking at customer behavior patterns. Then let's imagine how we can take the e-commerce system that's actually handling order, bill, fulfillment and all those other things. We can bring those two things together, not just in a way that might work if we have someone online for five minutes, but right now. Is that kind of one of those examples that we're looking at? >> Absolutely. You have a history of the customers you're working with. In retail, when you go in a store, the store will know your history of transactions with them. They can decide if they want to offer you real-time discounts on particular items. They'll also be taking in other data, like weather conditions, to drive their business: suddenly there's going to be a heat wave, I want more ice cream in the store, or it's gonna be freezing next week, I'm gonna order in more coats and mittens for everyone to buy. So, taking in lots of transactional data, not just the actual business transaction but environmental data, you can accelerate your ability to provide consumers with the things they will need.
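A toy sketch of the transactional-plus-analytics convergence just described: one in-memory store serves both the write path and the "how are we doing right now" query, so there is no batch hop to a warehouse in between. This is purely illustrative Python, not how SAP HANA or any in-memory database is actually implemented.

```python
from collections import defaultdict
from datetime import datetime, timezone

sales = []                          # one in-memory store serves both workloads
revenue_by_sku = defaultdict(float)

def record_sale(sku: str, amount: float):
    """Transactional path: append the sale and update the running aggregate."""
    sales.append((datetime.now(timezone.utc), sku, amount))
    revenue_by_sku[sku] += amount

def top_skus(n: int = 3):
    """Analytical path: answered from the same in-memory data, current as of the last write."""
    return sorted(revenue_by_sku.items(), key=lambda kv: kv[1], reverse=True)[:n]

record_sale("ice-cream", 4.50)
record_sale("coat", 120.00)
record_sale("ice-cream", 9.00)
print(top_skus())   # reflects all three transactions immediately, no ETL step
```

Closing the loop for automation, in this picture, is just a rule that reads the live aggregate (say, ice-cream revenue spiking during a heat wave) and triggers a reorder, which is the third point made in the exchange above.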
>> Okay, so I remember when you guys launched Apollo. Antonio Neri was running the server division, you might have had networking under him too. He did a little reveal on the floor. Antonio's actually in the house over there. >> Mike: (laughs) Next door. >> There was an astronaut at the reveal; we covered it on the Cube. He's always been very focused on this part of the business, the high-performance computing, and obviously the machine has been a huge project. How has the leadership been? We had a lot of skeptics early on that said you were crazy. What was the conversation like with Meg and Antonio? Were they continuously supportive, were they sometimes skeptical too? What was that like? >> So if you think about the total amount of effort we've put into the machine program, truly speaking, that kind of effort would not be possible if the senior leadership was not behind us inside this company, right? A lot of us in HP Labs were working on it, but it was not just a labs project; it was a project where our business partners were working on it. We brought together engineering teams from the business groups who understood how products were put together. We had software people working with us who were working inside the business, we had researchers from labs working, we had supply chain partners working with us inside this project. A project of this scale and scope does not succeed if it's a handful of researchers doing the work. We had enormous support from the business side and from our leadership team. I give enormous thanks to our leadership team for allowing us to do this, because it's an industry thing, not just an HP Enterprise thing. At the same time, with this kind of investment there's clearly an expectation that we will make it real. It's taken us three years to go from "here is a vague idea from a group of crazy people in labs" to something which actually works and is real. Frankly, the conversation in the last six months has been, "okay, so how do we actually take it to customers?" That's where the partnership with Mike and his team has become so valuable. At this point in time, we have a shared vision of where we need to take this. We have something where we can on-board customers right now. We have something where, frankly, even I'm working on the examples we were talking about earlier today. Not everybody can afford a 16-socket giant machine. The Superdome Flex allows my customer, or anybody who is playing with an application, to start small, with something that is reasonably affordable, and try that application out. If that application is working, they have the ability to scale up. This is what makes the Superdome Flex such a nice environment to work in for the types of applications I'm worrying about, because when we started this program, people would ask us, "when will the machine product be available?" From day one, we said the machine product would be something that might become available in some form or another by the end of the decade. Well, suddenly, with Mike, I think I can make it happen right now. It's not quite the end of the decade yet, right? So I think that's what excited me about this partnership we have with the Superdome Flex team: the fact that they have the same vision and the same aspirations that we do. It's a platform that allows my current customers, with their current applications, like Mike described within the context of, say, SAP HANA, a scalable platform they can operate now. It's also something that allows them to evolve towards the future and start putting in new applications that they haven't even thought about today. Those were the kinds of applications we were talking about. It makes it possible for them to move into this journey today. >> So what is the availability of Superdome Flex? Can I buy it today? >> Mike: You can buy it today. Actually, I had the pleasure of installing the first early-access system in the UK last week. We've been delivering large-memory platforms to Stephen Hawking's team at Cambridge University for the last twenty years, because they really like the in-memory capability that allows them, as they say, to be scientists, not computer scientists, in working through their algorithms and data. Yeah, it's ready for sale today. >> What's going on with Hawking's team? I don't know if this is fake news or not, but I saw something come across that said he says the world's gonna blow up in 600 years. (laughter) I was like, uh-oh, what's Hawking got going on now? (laughs) That's gotta be fun, working with those guys. >> Yeah, I know, it's been fun working with that team. Actually, what I would say, following up on Sharad's comment, is that it's been really fun this last year, because I had sort of been following the machine from outside when the announcements were made a couple of years ago. Immediately when the acquisition closed, I was like, "tell me about the software you've been developing, tell me about the photonics and all these technologies," because, boy, I can now accelerate where I want to go with the technology we've been developing. Superdome Flex is really the first step on the path. It's a better product than either company could have delivered on their own. Now, over time, we can integrate other learnings and technologies from the machine research program. It's a really exciting time. >> Excellent. Gentlemen, I always loved the SGI acquisition. Thought it made a lot of sense. Great brand; it kind of put SGI back on the map in a lot of ways. Gentlemen, thanks very much for coming on the Cube. >> Thank you again. >> We appreciate you. >> Mike: Thank you. >> Thanks for coming on. Alright everybody, we'll be back with our next guest right after this short break. This is the Cube, live from HPE Discover Madrid. Be right back. (energetic synth)

Published Date : Nov 29 2017
