Kirk Bresniker, HPE | Supercomputing 22
>>Welcome back, everyone. We're live here at Supercomputing 22 in Dallas, Texas. I'm John Furrier, host of theCUBE, here with Paul Gillin, editor of SiliconANGLE, getting all the stories and bringing them to you live. TheCUBE is supercomputer TV right now, bringing all the action. Kirk Bresniker, chief architect of Hewlett Packard Labs and a CUBE alumnus, is here to talk about supercomputing and the road to quantum. Kirk, great to see you. Thanks for coming on. >>Thanks for having me, guys. Great to be here. >>So Paul and I were talking, and we've been covering, you know, computing as we get into large-scale cloud; now on-premises compute has been one of those things that just never stops. I never heard someone say, I wanna run my application or workload on slower hardware or a slower processor. Computing continues to grow, but we're at a step function. It feels like we're at a level where we're gonna unleash new creativity, new use cases. You've been working on this for many, many years at HP, at Hewlett Packard Labs — I remember The Machine and all the predecessor R&D. Where are we right now from your standpoint, HPE's standpoint? Where are you in computing? It's as a service, everything's changing. What's your view? >>So I think you captured it so well. You think of the capabilities that you create. You create these systems and you engineer these amazing products, and then you think, whew, it doesn't get any better than that. And then you remind yourself as an engineer: but wait, actually it has to, right? It has to, because we need to continuously provide that next generation of scientists and engineers and artists and leaders with the tools that can do more — and frankly, do more with less. Because while we don't wanna run the programs slower, we sure do wanna run them for less energy. And figuring out how we accomplish all of those things, I think, is really where it's gonna be fascinating.
And it's also — we think about that now, the exascale data center, a billion billion operations per second, and the new science, arts and engineering that we'll create. And yet it's also what's beyond that data center. How do we hook it up to those fantastic scientific instruments that are capable of generating so much information? We need to understand how we couple all of those things together. So I agree, we're at an amazing opportunity to raise the aspirations of the next generation. At the same time we have to think about what's coming next in terms of the technology. Is silicon the only answer for us to continue to advance? >>You know, one of the big conversations is refactoring, replatforming — we have a booth behind us that's doing energy; you can build it into data centers for compute. There's all kinds of new things. Is there anything in the paradigm of computing, now on the road to quantum — which I know you're involved in; I saw you have an open rec for that on LinkedIn — what paradigm elements are changing that weren't in play a few years ago, that you're looking at right now as you take that 20-mile stare into quantum? >>So I think for us it's fascinating, because we've had a tailwind at our backs my whole career, 33 years at HP. And what I could count on was that transistors, at first, got cheaper, faster, and used less energy. And then, you know, that slowed down a little bit. Now they're still cheaper and faster. As we look at that, and as Moore's law continues to flatten out, there has to be something better to do than yet another copy of the prior design — opening up that diversity of approach. And whether that is the amazing wafer-scale accelerators, the application-specific silicon we see, and then, broadening out even farther, next to the silicon: here's the analog computational accelerator, and here now is the emergence of a potential quantum accelerator.
So we're seeing that diversity of approaches, but what has to happen is we need to harness all of those efficiencies, and yet we still have to realize that there are human beings who need to create the applications. So how do we bridge that? How do we accommodate the physical reality of new kinds of accelerators? How do we imagine the cyber-physical connection to the rest of the supercomputer? And then finally, how do we bridge that productivity gap? Especially not for people like me who have been around for a long time — we wanna think about that next generation, because they're the ones that need to solve the problems and write the code that will do it. >>You mentioned what exists beyond silicon. In fact, are you looking at different kinds of materials that computers in the future will be built upon? >>Oh, absolutely. When we look at the quantum modalities — whether it's a trapped ion, or a superconducting piece of silicon, or a neutral atom — there's about half a dozen of these novel systems, because really, what we're doing when we're using a quantum mechanical computer is creating a tiny universe. We're putting a little bit of material in there and we're manipulating it at the subatomic level, harnessing the power of quantum physics. That's an incredible challenge. And it will take novel materials, novel capabilities that we aren't used to seeing. Not many people have a helium supplier in their data center today, but some of them might tomorrow. And understanding, again, how do we incorporate, industrialize and then scale all of these technologies. >>I wanna talk turkey about quantum, because we've been talking for five years. We've heard a lot of hyperbole about quantum. We've seen some of your competitors announcing quantum computers in the cloud. I don't know who's using these computers or what kind of work they're being used for. How real is quantum today?
How close are we to having workable, true quantum computers, and can you point to any examples of how that technology is being used in the field? >>So it remains nascent — we'll put it that way. I think part of the challenge is we see this low-level technology, and of course it was Professor Richard Feynman who first pointed us in this direction, you know, more than 30 years ago. And I trust his judgment. Yes, there's probably some there there, especially for what he was doing, which is: how do we understand and engineer systems at the quantum mechanical level? Well, he said, a quantum mechanical system is probably the way to go. So understanding that — but still, part of the challenge we see is that people have been working on the low-level technology and reaching up, wondering, will I eventually have a problem that I can solve? And the challenge is, you can improve something every single day, and if you don't know where the bar is, you don't ever know if you'll be good enough. I think part of the approach that we like is to ask: can we start with the problem, the thing that we actually want to solve, and then figure out what is the bespoke combination of classical supercomputing, advanced AI accelerators, and novel quantum capabilities? Can we simulate and design that? And we think there's probably nothing better to do that with than an exascale supercomputer. Can we simulate and design that bespoke environment, create the digital twin of that environment? And if we've simulated it and designed it, we can analyze it and see if it's actually advantageous — because if it's not, then we probably should go back to the drawing board. And then finally, that becomes the way in which we actually run the quantum mechanical system in this hybrid environment.
>>So it's nascent, and you guys are feeling your way through. You get some moonshots, you work backwards from use cases — more of a discovery, navigational kind of mission piece. I get that. And exascale has been a great role for you guys — congratulations. Have there been strides, though, in quantum this year? Can you point to them? Has the needle moved a little bit, a lot? I mean, it's moving, I guess, and there's been some talk, but we haven't really been able to put our finger on what's moving. Where's the needle moved, I guess, in quantum? >>And I think that's part of the conversation we need to have: how do we measure ourselves? I know at the World Economic Forum's Quantum Development Network, we had one of our global future councils on the future of quantum computing. And I brought in an IEEE fellow, Paolo Gargini, who, you know, created the International Technology Roadmap for Semiconductors. And I said, Paolo, could you come in and give us examples: how was the semiconductor community so effective, not only at developing the technology but at predicting the development of the technology, so that whether it's an individual deciding if they should change careers, or a nation-state deciding if they should spend a couple billion dollars, we have that tool to predict the rate of change and improvement? And so I think that's part of what we're hoping participation will bring — some of that roadmapping skill and technology and understanding, so we can make those better-reasoned investments. >>Well, it's also fun to see supercomputing this year look at the bigger picture. Obviously software: cloud natives running modern applications, infrastructure as code — that's happening. You're starting to see the integration of environments, almost like a global distributed operating system — that's the way I'd call it. Silicon advancements have been a big part of what we see now. Merchant silicon, but also DPUs are on the scene.
So the role of silicon is there. And also we have supply chain problems. So how do you look at that as the chief architect of Hewlett Packard Labs? Because not only do you have to invent the future and dream it up, you gotta deal with the realities. And the realities are: silicon's great, we need more of that; quantum's around the corner; but supply chain — how do you solve that? What are your thoughts, and how is HPE looking at silicon innovation and supply chain? >>And so for us, it is really understanding that partnership model, and understanding and contributing. So I do things like — I happen to be the systems and architectures chapter editor for the IEEE International Roadmap for Devices and Systems, that community that wants to come together and provide that guidance. You know, so I'm all about telling the semiconductor and post-semiconductor community: okay, this is where we need to compute. I have a partner in the applications and benchmarks chapter who says: this is what we need to compute. And when you can predict, into the future, where you need to compute and what you need to compute, you can have a much richer set of conversations, because you described it so well. And I think of our senior fellow Nick Dube — he's coined the term "internet of workflows" — where, you know, you need to harness everything from the edge device all the way through the exascale computer and beyond. And it's not just one sort of static thing. It's a very interesting fluid topology: I'll use this compute at the edge, I'll process this information in the cloud, I want to have this in my exascale data center — and I still need to provide the tools so that an individual making that decision can craft that workflow across all of those different resources. >>And those workflows, by the way, are complicated. Now you got services being turned on and off. Observability is a hot area. You got a lot more data in flow.
I mean, a lot more action. >>And I think you just hit on another key point for us, and part of our research at Labs. As part of my other assignments, I helped draft our global AI ethics policies and principles, and not only did that give us advice about how we should live our lives, it also became the basis for our AI research lab at Hewlett Packard Labs, because they saw: here's a challenge, and here's something where I can't actually maintain my ethical compliance — I need to engineer new ways of achieving artificial intelligence. And so much of that comes back to governance over that data, and how we can actually create those governance systems and do that out in the open. >>That's a can of worms. We're gonna do a whole segment on that one. >>On that piece I wanna ask you — I mean, where the rubber meets the road is where you're putting your dollars. So you've talked about a lot of areas of progress right now. Where are you putting your dollars right now at Hewlett Packard Labs? >>Yeah, so when I draw my 2030 vision slide, you know, for me the first column is about heterogeneity, right? How do we bring in all of these novel computational approaches to be able to demonstrate their effectiveness, their sustainability, and also the productivity we can drive from them? So that's my first column. My second column is that edge-to-exascale workflow: I need to be able to harness all of those computational and data resources, I need to be aware of the energy consequence of moving data and of doing computation, and I need to do all of that while still maintaining and solving for security and privacy. But the last thing — one was a how, one was a where — the last thing is a who, right? How do we take that subject matter expert? I think of a young engineer starting their career at HPE. It'll be very different than my 33 years.
And part of it, you know — they will be undaunted by any scale. They will be cloud natives, maybe metaverse natives, and they will demand to design in an open, cooperative environment. So for me it's thinking about that individual, and how do I take those capabilities — heterogeneous, edge-to-exascale workflows — and make them productive? And for me, that's where we're putting our emphasis: on those three, the how, the where and — >>Who. Yeah. And making it compatible for the next generation. We see the student cluster competition going on over there. This is the only show we cover that goes from the dorm room to the boardroom, because supercomputing now is elevating up into that workflow, into integration, multiple environments: cloud, premises, edge, metaverse. This is like a whole nother world. >>And I think, regardless of which human pursuit you're in, you know, everyone is going to demand simulation and modeling, AI, ML and massive data analytics. That's gonna be at the heart of everything. And that's what you see — that's what I love about coming here. This isn't just the way we're gonna do science. This is the way we're gonna do everything. >>We're gonna come by your booth, check it out. We've talked to some of the folks. HPE, obviously, at HPE Discover this year had GreenLake center stage; it's now consumption as a service for technology. Whole nother ballgame. Congratulations on all this. I would say the massive — I won't say pivot, but, you know, a change — >>It is. >>In how you guys operate.
And you know, it's funny — sometimes you think about the pivot to as-a-service as benefiting the customer, but as someone who has supported designs over decades, you know, that ability to operate at peak efficiency, to always keep things in perfect operating order, and to continuously change while still meeting customer expectations — that actually allows us to deliver innovation to our customers faster than when we were delivering warrantied, individually packaged products. >>Kirk, thanks for coming on. Paul, great conversation here. You know, the road to quantum's gonna be paved through computing — supercomputing, software, integrated workflows — from the dorm room to the boardroom to theCUBE, bringing all the action here at Supercomputing 22. I'm John Furrier with Paul Gillin. Thanks for watching. We'll be right back.
Kirk Bresniker, HPE | HPE Discover 2021
>>From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. Hello, and welcome to theCUBE's coverage of HPE Discover 2021 Virtual. I'm John Furrier, your host of theCUBE. We're here with a CUBE alumni — one of the original CUBE guests, 2010, 2011, back in the day — Kirk Bresniker, president and chief architect of Hewlett Packard Labs. He's also a Hewlett Packard Enterprise fellow and vice president. Great to see you. You're in Vegas, I'm in Palo Alto — we've got a little virtual hybrid going on here. Thanks for spending the time. >>Thanks, John, it's great to be back with you. >>So much going on. I love to see you guys having this event, kind of everyone in one spot. Good mojo. Great to see HPE back in the saddle again. I want to get your take — you're in the action right now on the Labs side, which is great. Disruptive innovation is the theme; it's always been, but this year, more than ever, coming out of the pandemic, people are looking for the future, looking to see the signs — they want to connect the dots. There's been some radical rethinking going on that you've been driving in the Labs. Take us through what's going on, what you're thinking — what are the big trends? >>Yeah, John, it's been interesting. You know, over the last 18 months, all of us have gone through about a decade's worth of advancement in decentralization — education, healthcare, our own work, what we're doing right now, suddenly spread apart. And it got us thinking, you know, about that distributed mesh, and as we try to begin to return to normal, and certainly think about all that we've lost, we want to move forward — we don't want to regress. And we started imagining: what does that world look like? And we think about the world of 2025 — 175 zettabytes, 150 billion connected things out there. The shape of the world has changed.
That's where the data is going to be. And so we started thinking about what it's like to thrive in that kind of world. We had a global defense research institute come to us and ask exactly that question: what's the edge? What do we need to prepare for in this age of insight? And it was kind of like those exam questions — I was one of those kids where, if you give me a really good question on the final exam, suddenly everything clicks; I understood all the material, because there was that really forcing question. When they asked us that, for me it solidified what I'd been thinking about in all the work we've done at Labs over the last 10 years. And it's really about what it takes to survive and thrive. For me it's three things. One: success is going to go to whoever can reason over more information, who can gain the deepest insights from that information in time that matters, and who can then turn that insight into action at scale. So reason, insight and action. And it was certainly clear to me that everything we've been trying to push for in Labs — all those boundaries we've been pushing, all those conventions we've been defying — is really trying to do that for our customers and our partners: to bring in more information for them to understand, to allow them to gain insight across departments and across disciplines, and then turn that insight into action at scale, where scale is no longer one cloud or one company or one country, let alone one data center.
>>So one thing we're seeing is that this is actually a ubiquitous trend, whether we're talking about transportation or energy or communications, they all are trying to understand and how will they admit more of that data to make those real time decisions? Our expectation in the middle of this decade when we have the 125 petabytes, You know, 30% of that data will need real time action out of the edge where the speed of light is now material. And also we expect that at that point in time three out of four of those 185 petabytes, they'll never make it back to the data center. So understanding how we will allow that computation, that understanding to reach out to where the data is and then bringing in that's important. And then if we look at at those, all of those different areas, whether it's energy and transportation, communications, all that real time data, they all want to understand. And so I I think that as many people come to us virtually now, hopefully in person in the future when we have those conversations that labs, it's almost immediate takes a while for them and then they realize away that's me, this is my industry too, because they see that potential and suddenly where they see data, they see opportunity and they just want to know, okay, what does it take for me to turn that raw material into insight and then turn that insight into >>action, you know, storage compute never goes away, it gets more and more, you need more of it. This whole data and edge conversations really interesting. You know, we're living in that data centric, you know, everyone's gonna be a date a couple, okay. That we know that that's obvious. But I gotta ask you as you start to see machine learning, um cloud scale cloud operations, a new Edge and the new architecture is emerging and clients start to look at things like AI and they want to have more explain ability behind I hear that all the time. Can you explain it to me? Is there any kind of, what is it doing? 
Is it good, is it bad, what are its biases? You know — is it really valuable? Explainable, experimental, experiential — these are words I'm hearing more and more. It's not so much a speeds-and-feeds game; these are outcomes. So you've got the core data, you've got a new architecture, and you're hearing things like explainable AI, experiential customer support — new things happening. Explain what this all means. >>You know, it's interesting. We have just completed creating an AI ethical framework for all of Hewlett Packard Enterprise, and whether we're talking about something internal improving a process, something that we sell as a product, or a partnership where someone wants to build an AI system on top of our services and infrastructure, we really wanted to encompass all of those. And it was challenging — it actually took us about 18 months from that very first meeting to craft the principles to guide our team members, to give them that understanding. And what was interesting is, we examined our principles — making sure systems are human-centric, that they're reliable, that they are privacy-preserving, that they are robust — we looked at that, and then you look at where people want to apply today's AI, and you start to realize there's a gap. There are actually areas where we have a great challenge, a human challenge, and as interesting and as possibly efficacious as today's AI is, we actually can't employ it with the confidence and the ethical position that we need to really pull that technology in. And what was interesting is that then became something we were driving at Labs. It gave us a viewpoint into where there are gaps — where, as you say, explainability matters. You know, as fantastic as it is to talk into your mobile phone and have it translated into another one of hundreds of languages —
I mean, that is right out of Star Trek, and it's something we can all do. And frankly, we're expecting it now. But as efficacious as that is, as we approach some other problems, it's not enough. We actually need it to be explainable; we need to be able to audit these decisions. And so that's really what's informed our trustworthy AI research and development program at Hewlett Packard Labs. We look at where we want to apply AI, we look at what keeps us from doing it, and then we close the technology gap. And it means some new things, new approaches. Sometimes we're going back to some of the very early AI — things that we sort of left behind when the computational capability let us move into machine learning and deep neural nets. Great applications, but not universally applicable. So that's where we are now: we're beginning to construct that second generation of AI systems, with that explainability, that trustworthiness and, more important, as you said, understanding that data flow and the responsibility we have to those who created that data — especially when it's representing human information, that long-term responsibility. What are the structures we need to support that ethically? >>That's great insight, Kirk, that's awesome stuff. And it reminds me that the old is new again, right? The cycles of innovation. You mentioned the AI of the eighties — it reminds me of dusting things off, and I was smiling, because the notion of reasoning in natural language has been around for a while; a lot of these AI frameworks have been around for a while, but applied differently they become interesting. The notion of meta-reasoning — I remember talking about that in 1998 around ontologies and syntax and data analysis. I mean, again, well-formed, older ways to look at data. And so I gotta ask you: you mentioned reasoning over information, getting the insights and having actions at scale.
That doesn't sound like an R&D or labs issue, right? I mean, that should be in the market today. So I know there's stuff out there — what's different about the Hewlett Packard Labs challenge? Because you guys are working on stuff that's next-gen. So what's next-gen about reasoning over information and getting insights? Because, you know, there are a zillion startups out there claiming to be insights-as-a-service, taking action on outcomes. >>And I would say a couple things. One is the technologies and capabilities that got us this far. They're actually in an interesting position: that twilight of Moore's law is getting a little darker every day. There's been such a tailwind behind us — tremendous, and we would have been foolish not to take advantage of it while it lasted — but as it now flattens out, we have to be realistic and say, you know what, that ability to expect, anticipate and plan for a doubling in performance in the next 18 to 24 months, because there's twice as many transistors in that square of silicon — we can't count on that anymore. We have to look broader now, and it's not just one of these technology inflection points; there are so many. We already mentioned AI, voraciously devouring all this data. At the same time, that data is now all at the edge, no longer in the data center. I mean, we may find ourselves chuckling at the term itself — data center. Remember when we sent it all the data, because that's where the computers were? Well, that's 2020 thinking, right? That's not even 2025 thinking. Also security — that cyber threat of nation-states and criminal enterprises. All these things coming together — that confluence of discontinuities — that's what makes it a Labs problem. And the second piece is, we don't just need to do it the way we've been doing it, because that's not necessarily sustainable.
And if something is not sustainable, it's inherently inequitable, because we can't afford to let everyone enjoy those benefits. So I think it's all those things — the confluence of technology disruptions, and this desire to move to really sustainable, really inherently equitable systems. That's what makes it a Labs problem. >>I really think that's right on the money. And one of the things I want to get your thoughts on — because I know you have a unique historic view of the trajectory arc — is cloud computing: everyone's attention, lift and shift, cloud scale, great, cloud native. Now, with hybrid and multi-cloud clearly happening, all the cloud players were saying, oh, it's never gonna happen, the data center is going to go away. Not really. There is no line anymore between what's cloud and what's not: the cloud is just the cloud, and the data center is now a big fat edge, and edges are smaller and bigger — they're nodes. You brought up the data center concept and you mentioned decentralization; it's a distributed computing architecture. Distributed computing is now the context. So this is not a new thing for Hewlett Packard Enterprise. I mean, you guys have been doing distributed computing paradigms — supplying software and hardware and solutions — since I can remember, since it was founded. What's new now? What do you say to folks asking what HPE is doing for this new architecture? Because now an operating model is the word they want — DevOps, DevSecOps, all this is happening. What's the state of the art from HPE, and how does the lab play into that vision? >>And it's so wonderful that you mentioned our heritage, because if you think about it, the first thing that Bill and Dave did was make instruments of unparalleled value and quality for engineers and scientists.
And the second thing they did was computerize that instrument control. And then they networked them together, and then they connected those networked measurement and sensing systems to business computing, right? And so that's really, that's exactly what we're talking about here. You know, yesterday it was HP-IB cables. But today it is everything from an Aruba wireless gateway, to a GreenLake cloud that comes to you, to now our Cray exascale supercomputing. And we wanted to look at that entire gamut and understand exactly what you said. How is today's modern developer, who has been steeped in agile development and DevOps and DevSecOps, how can we make them as comfortable and confident deploying to any one of those systems, or all of them in conjunction, as confident as they've been deploying to a cloud? And I think that's really part of what we need to understand. And as you move out towards the edge, things become interesting. A tiny amount of resources, and the number of threats, physical and cyber, increases dramatically. It is no longer the healthy, happy environment of that raised-floor data center. It is actually out in the world. But we have to, because that's where the data is. And so that's another piece of it that we're trying to bring with the Labs' distributed systems lab, trying to understand how do we make cloud native access every single byte everywhere, from the tiniest little edge embedded system all the way up through that exascale supercomputer. How do we admit all of that data to this entire generation, and then the subsequent generation, who will no longer understand what we were so worried about with things being in one place or another? They want to digest all the world's data regardless of where it is. >>You know, I was just having a conversation, and you brought this up, that's interesting, around the history and the heritage. Embedded systems is changing the whole hardware equation, and that changes the software-driven model.
Now, supply chain used to be constrained to hardware. Now you have a software supply chain as well. So everything is happening in these kinds of new use cases. And edge is a great example, where you want to have compute at the edge, not have it pulled back to some central location. So again, advantage HPE, right? You've got some solutions there. So all these things like memory-driven computing, something that you've worked on and been driving with The Machine product that we talked about when you guys launched a few years ago, it looks now like a good R&D project, because in all the discussions I'm hearing, whether it's stuff in space or inside hybrid edges, it's: I gotta have software running on an embedded system, I need security, I gotta have, you know, memory-driven architectures, I gotta have data-driven value in real time. This is kind of a new shift, but you still need to run it. What's the update on The Machine and memory-driven computing? And how does that connect the dots for this intelligent edge that's now super important in the hybrid equation? >>Yeah, it's fantastic you brought that up. You know, it's gratifying when you've been drawing pictures on your whiteboard for 10 or 15 years and suddenly you see them printed and on the web, and it's like, okay, yeah, you guys were there, because we always knew it had to be bigger than us. And for a while you wonder, well, is this the right direction? And then you get that gratification when you see it repeated. And I think one of the other elements that you said that was so important was talking about that supply chain, especially as we get towards these edge devices and the increasing cyber threat. It's so much more about understanding the provenance of that supply chain and how we get beyond trust to proof. And in our case that proof is rooted in the silicon.
Start with the silicon, establish a silicon root of trust, something that can't be forged, that physically unclonable function in the silicon. And then build up that chain, not of trust, but a proof of measurable confidence. And then let's link that through the hardware, through the data. And I think that's another element, understanding how that data is flowing in, and we establish that provenance, that provable provenance. And that also enables us to come back to that equitable question. How do we deal with all this data? Well, we want to make sure that everyone wants to buy in, and that's why you need to be able to reward them. So being able to trace data into an AI model, trace it back out to its effect on society, all these are things that we're trying to understand at the Labs, so that we can really establish this data economy and admit the data that we need to the problems that we have that are really crying out for that solution. Bringing in that data, you just never know where the data is, where the answer is. Now, I've worked for several years with the German Center for Neurodegenerative Diseases, and I was teasing their director, Dr. Nakata. I said, you know, in a couple of years, when you're getting that Nobel Prize for medicine because you cracked Alzheimer's, I want you to tell me: how long was the answer hiding in plain sight? Because it was segregated across disciplines, across geography, and it was there. But we just didn't have that ability to view across the breadth of the information, and in a time that matters.
And I think so much of what we're trying to do with the Labs is that: reasoning over more information, gaining insights in the time that matters, and then it's all about action, driving that insight into the world. Regardless of whether it has to land in an exascale supercomputer or a tiny little edge device, we want today's application development teams to feel that degree of freedom to range over all of that infrastructure and all of that data. >>You know, you bring up a great callout there. I want to just highlight that, because I thought that was awesome. The future breakthroughs are hiding in plain sight. It's the access to the people and the talent to solve the problems, and the data that's stuck in the silos. You bring those together, you make that seamless and frictionless, then magic happens. That's really what we're talking about in this new world, isn't it? >>Absolutely, yeah. And it's one of those things, sometimes my kids ask me, you know, why do you come in every day? And for me it is exactly that. I think so many of the challenges we have are actually solvable if the right people knew the right information at the right time, and if we all have, again, not trust, but that proof, that confidence, that measurable confidence. Back to the instruments that HP was always famous for: it was that precision, and they all had that calibration tag. So you could measure your confidence in an HP instrument, and in the same way we want people to measure their confidence when data is flowing through Hewlett Packard Enterprise infrastructure. >>It's interesting you bring up the legacy, because instrumentation, networked together, connecting to business systems, hey, that sounds like the cloud: observability, modern applications, instant action and actionable insights. I mean, that's really almost the exact same formula.
>>Yeah, for me, that's the constant through line from the garage to right now: that ability to handle and connect people to the information that they need. >>Great, great to chat. You're always an inspiration, and we could go for another hour talking about exascale, GreenLake, all the other cool things going on at HPE. I gotta ask you the final question: what are you most excited about for HPE and its future, and how can folks learn more at Discover, and what should they focus on? >>So I think for me, what I love is that I imagine that world where the data today is out there at the edge, and we have our Aruba team, we have our GreenLake team, we have our consistent, you know, our core enterprise infrastructure business, and now we also have all the way up through exascale compute. When I think of that thriving business, that ability to bring in massive data analytics, machine learning and AI, and then simulation and modeling, that's really what, whether you're a scientist, an engineer, or an artist, you want: that intersectionality. And I think we actually have this incredible, diverse set of resources to bring to bear on those problems that will span from edge to cloud, back to core, and then to exascale. So that's what I find so exciting: all of the great innovators that we get to work with and the markets we get to participate in. And then for me it's also the fact that it's all happening at Hewlett Packard Enterprise, which means we have a purpose. You know, when they asked Dave Packard, Dave, why HP? He said in 1960, we come together as a company because we can do something we could not do by ourselves, and we make a contribution to society. And I dare anyone to spend more than a couple of minutes with Antonio Neri without him reminding you.
And whether it is here at Discover or in the halls at Labs, he'll remind me that our purpose at Hewlett Packard Enterprise is to advance the way that people live and work. And for me that's that direct connection. So it's the technology and then the purpose, and that's really what I find so exciting about HPE. >>That's a great callout, Antonio deserves props. I love talking with him, he's got the true Bill Hewlett and Dave Packard spirit. And I'll say that I've talked with him, and one of the things that resonates with me is the citizenship. It'd be interesting, if Bill and Dave were alive today, to see that now it's a global citizenship. This is a huge part of the culture, and I know it's still alive there at HPE. So, great callout there, and props to Antonio and yourself and the team. Congratulations. Thanks for spending the time, appreciate it. >>Thank you, John, it's great to be with you again. >>Okay. Global labs, global opportunities, radical rethinking. This is what's happening within HPE's Hewlett Packard Labs. Great, great contribution there from Kirk. Have him on theCUBE and it's always fun, so much to digest there. It's awesome. I'm John Furrier with theCUBE. Thanks for watching.
Sharad Singhal, The Machine & Matthias Becker, University of Bonn | HPE Discover Madrid 2017
>> Announcer: Live from Madrid, Spain, it's theCUBE, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, everybody, this is theCUBE, the leader in live tech coverage, and my name is Dave Vellante, and I'm here with Peter Burris. This is day two of HPE Hewlett Packard Enterprise Discover in Madrid, this is their European version of a show that we also cover in Las Vegas, kind of a six month cadence of innovation and organizational evolution of HPE that we've been tracking now for several years. Sharad Singhal is here, he covers software architecture for The Machine at Hewlett Packard Enterprise, and Matthias Becker, who's a postdoctoral researcher at the University of Bonn. Gentlemen, thanks so much for coming on theCUBE. >> Thank you. >> No problem. >> You know, we talk a lot on theCUBE about how technology helps people make money or save money, but now we're talking about, you know, something just more important, right? We're talking about lives and the human condition and >> Peter: Hard problems to solve. >> Specifically, yeah, hard problems like Alzheimer's. So Sharad, why don't we start with you, maybe talk a little bit about what this initiative is all about, what the partnership is all about, what you guys are doing. >> So we started on a project called the Machine Project about three, three and a half years ago and frankly at that time, the response we got from a lot of my colleagues in the IT industry was "You guys are crazy", (Dave laughs) right.
We said we are looking at an enormous amount of data coming at us, we are looking at real time requirements on larger and larger processing coming up in front of us, and there is no way that the current architectures of the computing environments we create today are going to keep up with this huge flood of data, and we have to rethink how we do computing. And the real question for those of us who are in research in Hewlett Packard Labs was, if we were to design a computer today, knowing what we do today, as opposed to what we knew 50 years ago, how would we design the computer? And this computer should not be something which solves problems for the past, this should be a computer which deals with problems in the future. So we are looking for something which would take us through the next 50 years, in terms of computing architectures and what we will do there. In the last three years we have gone from ideas and paper study, paper designs, and things which were made out of plastic, to a real working system. Around Las Vegas time, we basically announced that we had the entire system working with actual applications running on it, 160 terabytes of memory all addressable from any processing core in 40 computing nodes around it. And the reason is, although we call it memory-driven computing, it's really thinking in terms of data-driven computing. The reason is that the data is now at the center of this computing architecture, as opposed to the processor, and any processor can refer to any part of the data directly, as if it was addressing local memory. This provides us with a degree of flexibility and freedom in compute that we never had before, and as a software person, I work in software, as a software person, when we started looking at this architecture, our answer was, well, we didn't know we could do this.
Now, given that I can do this, and I assume that I can do this, all of us programmers started thinking differently, writing code differently, and we suddenly had essentially a toy to play with, if you will, as programmers, where we said, you know, this algorithm I had written off decades ago because it didn't work, but now I have enough memory that if I were to think about this algorithm today, I would do it differently. And all of a sudden, a new set of algorithms, a new set of programming possibilities opened up. We worked with a number of applications, ranging from just Spark on this kind of an environment, to how do you do large scale simulations, Monte Carlo simulations. And people talk about improvements in performance on the order of, oh, I can get you a 30% improvement. We are saying in the example applications we saw anywhere from five, 10, 15 times better, to something where we are looking at financial analysis, risk management problems, which we can do 10,000 times faster. >> So many orders of magnitude. >> Many, many orders >> When you don't have to wait for the horrible storage stack. (laughs) >> That's right, right. And these kinds of results gave us the hope that as we look forward, all of us in these new computing architectures that we are thinking through right now, will take us through this data mountain, data tsunami that we are all facing, in terms of bringing all of the data back and essentially doing real-time work on those. >> Matthias, maybe you could describe the work that you're doing at the University of Bonn, specifically as it relates to Alzheimer's and how this technology gives you possible hope to solve some problems. >> So at the University of Bonn, we work very closely with the German Center for Neurodegenerative Diseases, and in their mission they are facing diseases like Alzheimer's, Parkinson's, Multiple Sclerosis, and so on.
And in particular Alzheimer's is a really serious disease, and for many diseases like cancer, for example, the mortality rates improve, but for Alzheimer's, there's no improvement in sight. So there's a large population that is affected by it. There is really not much we currently can do, so the DZNE is focusing its research efforts together with the German government in this direction, and one thing about Alzheimer's is that if you show the first symptoms, the disease has already been present for at least a decade. So if you really want to identify sources or biomarkers that will point you in this direction, once you see the first symptoms, it's already too late. So at the DZNE they have started on a cohort study. In the area around Bonn, they are now collecting the data from 30,000 volunteers. They are planning to follow them for 30 years, and in this process we generate a lot of data, so of course we do the usual surveys to learn a bit about them, we learn about their environments. But we also do much more detailed analysis, so we take blood samples and we analyze the complete genome, and also we acquire imaging data from the brain, so we do an MRI at an extremely high resolution with some very advanced machines we have. And all this data is accumulated because we do not only have to do this once, but we try to do that repeatedly for every one of the participants in the study, so that we can later analyze the time series. When in 10 years someone develops Alzheimer's, we can go back through the data and see, maybe there's something interesting in there, maybe there was one biomarker that we are looking for, so that we can predict the disease better in advance. And with this pile of data that we are collecting, basically we need something new to analyze this data and to deal with this, and when we heard about The Machine, we thought immediately this is a system that we would need.
So Dave lives in Massachusetts, I used to live there, in Framingham, Massachusetts, >> Dave: I was actually born in Framingham. >> You were born in Framingham. And one of the more famous studies is the Framingham Heart Study, which tracked people over many years and discovered things about heart disease and relationship between smoking and cancer, and other really interesting problems. But they used a paper-based study with an interview base, so for each of those kind of people, they might have collected, you know, maybe a megabyte, maybe a megabyte and a half of data. You just described a couple of gigabytes of data per person, 30,000, multiple years. So we're talking about being able to find patterns in data about individuals that would number in the petabytes over a period of time. Very rich detail that's possible, but if you don't have something that can help you do it, you've just collected a bunch of data that's just sitting there. So is that basically what you're trying to do with the machine is the ability to capture all this data, to then do something with it, so you can generate those important inferences. >> Exactly, so with all these large amounts of data we do not only compare the data sets for a single person, but once we find something interesting, we have also to compare the whole population that we have captured with each other. So there's really a lot of things we have to parse and compare. >> This brings together the idea that it's not just the volume of data. I also have to do analytics and cross all of that data together, right, so every time a scientist, one of the people who is doing biology studies or informatic studies asks a question, and they say, I have a hypothesis which this might be a reason for this particular evolution of the disease or occurrence of the disease, they then want to go through all of that data, and analyze it as as they are asking the question. 
Now if the amount of compute it takes to actually answer their questions takes me three days, I have lost my train of thought. But if I can get that answer in real time, then I get into this flow where I'm asking a question, seeing the answer, making a different hypothesis, seeing a different answer, and this is what my colleagues here were looking for. >> But if I think about, again, going back to the Framingham Heart Study, you know, I might do a query on a couple of related questions, and use a small amount of data. The technology to do that's been around, but when we start looking for patterns across brain scans with time series, we're not talking about a small problem, we're talking about an enormous sum of data that can be looked at in a lot of different ways. I got one other question for you related to this, because I gotta presume that there's the quid pro quo for getting people into the study, is that, you know, 30,000 people, is that you'll be able to help them and provide prescriptive advice about how to improve their health as you discover more about what's going on, have I got that right? >> So, we're trying to do that, but also there are limits to this, of course. >> Of course. >> For us it's basically collecting the data and people are really willing to donate everything they can from their health data to allow these large studies. >> To help future generations. >> So that's not necessarily quid pro quo. >> Okay, there isn't, okay. But still, the knowledge is enough for them. >> Yeah, their incentive is they're gonna help people who have this disease down the road. >> I mean if it is not me, if it helps society in general, people are willing to do a lot. >> Yeah of course. >> Oh sure. >> Now the machine is not a product yet that's shipping, right, so how do you get access to it, or is this sort of futures, or... >> When we started talking to one another about this, we actually did not have the prototype with us. 
But remember that when we started down this journey for The Machine three years ago, we knew back then that we would have hardware somewhere in the future, but as part of my responsibility, I had to deal with the fact that software has to be ready for this hardware. It does me no good to build hardware when there is no software to run on it. So we have actually been working on the software stack, how to think about applications on that software stack, using emulation and simulation environments, where we have essentially an instruction-level simulator for what the machine does, or what that prototype would have done, and we were running code on top of those simulators. We also had performance simulators, where we'd say, if we write the application this way, this is how much we think we would gain in terms of performance, and all of those applications and all of that code we were writing was actually run on our large-memory machines, Superdome X to be precise. So by the time we started talking to them, we had these emulation environments available, we had experience using these emulation environments on our Superdome X platform. So when they came to us and started working with us, we took the software that they brought to us, and started working within those emulation environments to see how fast we could make those problems, even within those emulation environments. So that's how we started down this track, and most of the results we have shown in the study are all measured results that we are quoting inside this forum on the Superdome X platform. So even in that emulated environment, which is emulating the machine now, of course, in the emulation on Superdome X, for example, I can only hold 24 terabytes of data in memory. I say only 24 terabytes
And for those particular workloads, the programming techniques we are developing work at that scale, right, they won't scale beyond the 24 terabytes, but they'll certainly work at that scale. So between us we then started looking for problems, and I'll let Matthias comment on the problems that they brought to us, and then we can talk about how we actually solved those problems. >> So we work a lot with genomics data, and usually what we do is we have a pipeline, so we connect multiple tools, and we thought, okay, this architecture sounds really interesting to us, but if we want to get started with this, we should pose them a challenge, so they can convince us. We went through the literature, we took a tool that was advertised as the new optimal solution. Prior work was taking up to six days for processing, and this tool was able to cut it to 22 minutes, and we thought, okay, this is a perfect challenge for our collaboration. And we went ahead and we took this tool, we put it on the Superdome X that was already running, and it dropped to five minutes instead of 22, and then we started modifying the code, and in the end we were able to shrink the time down to just 30 seconds, so that's two orders of magnitude faster. >> We took something which was... They were able to run in 22 minutes, and that had already been optimized by people in the field to say "I want this answer fast", and then when we moved it to our Superdome X platform, the platform is extremely capable. Hardware-wise it compares really well to other platforms which are out there. That time came down to five minutes, but that was just the beginning. And then as we modified the software based on the emulation results we were seeing underneath, we brought that time down to 13 seconds, which is a hundred times faster. We started this work with them in December of last year. It takes time to set up all of this environment, so the serious coding was starting in around March.
By June we had a 9X improvement, which is already about a factor of 10, and since June up to now, we have gotten another factor of 10 on that application. So I'm now at 100X faster than what the application was able to do before. >> Dave: Two orders of magnitude in a year? >> Sharad: In a year. >> Okay, we're out of time, but where do you see this going? What is the ultimate outcome that you're hoping for? >> For us, we're really aiming to analyze our data in real time. Oftentimes when we have biological questions that we address, we analyze our data set, and then in a discussion a new question comes up, and we have to say, "Sorry, we have to process the data, come back in a week", and our idea is to be able to generate these answers instantaneously from our data. >> And those answers will lead to what? Just better care for individuals with Alzheimer's, or potentially, as you said, making Alzheimer's a memory. >> So the idea is to identify Alzheimer's long before the first symptoms are shown, because then you can start an effective treatment and you can have the biggest impact. Once the first symptoms are present, it's not getting any better. >> Well thank you for your great work, gentlemen, and best of luck on behalf of society, >> Thank you very much >> really appreciate you coming on theCUBE and sharing your story. You're welcome. All right, keep it right there, buddy. Peter and I will be back with our next guest right after this short break. This is theCUBE, you're watching live from Madrid, HPE Discover 2017. We'll be right back.
Andrew Wheeler and Kirk Bresniker, HP Labs - HPE Discover 2017
>> Announcer: Live from Las Vegas, it's The Cube, covering HPE Discover 2017, brought to you by Hewlett Packard Enterprise. >> Okay, welcome back everyone. We're here live in Las Vegas for our exclusive three day coverage from The Cube, SiliconANGLE Media's flagship program. We go out to events and talk to the smartest people we can find: CEOs, entrepreneurs, R&D lab managers, and of course we're here at HPE Discover 2017 with our next two guests, Andrew Wheeler, Fellow, VP, Deputy Director, Hewlett Packard Labs, and Kirk Bresniker, Fellow and VP, Chief Architect of HP Labs, who was on yesterday. Welcome back, welcome to The Cube. Hewlett Packard Labs is well known, you guys are doing great research. Meg Whitman is really staying with a focused message, and one of the comments she mentioned at our press and analyst meeting yesterday was focusing on the labs. So I want to ask you, where is that range in the labs? In terms of what you guys do, when does something go outside the lines, if you will? >> Andrew: Yeah, good question. So, if you think about Hewlett Packard Labs and really our charter role within the company, we're really kind of tasked with looking at things that will disrupt our current business, or looking for kind of those new opportunities. So for us we have something we call an innovation horizon, and you know it's like any other portfolio that you have, where you've got maybe things that are more kind of near term, maybe you know one to three years out, things that are easily kind of transferred or the timing is right. And then we have kind of another bucket that says well, maybe it's more of a three to five year kind of thing, in that advanced development category where it needs a little more incubation, but you know it needs a little more time. And then you know we reserve probably a smaller pocket that's for more kind of pure research. Things that are further out, higher risk.
It's a bigger bet, but you know we do want to have kind of a complete portfolio of those, and you know over time throughout our history we've got real success stories in all of those. So it's always finding kind of that right blend. But you know there's clearly a focus around the advanced development piece now that we've had a lot of things come from that research point, and really one of the... >> John: You're looking for breakthroughs. I mean that's what you're... Some-- >> Andrew: Clearly. >> Internal improvement, simplify IT, all that good stuff, you guys still have your eyes on some breakthroughs. >> That's right. Breakthroughs, how do we differentiate what we're doing, so but yeah, clearly, clearly looking for those breakthrough opportunities. >> John: And one of the things that's come up really big in this show is the security and chip thing, that was pretty hot, very hot, and actually Wikibon's public, true public cloud report that they put out sizing up the on-prem cloud market. >> Dave: True private cloud. >> True private cloud, I'm sorry. And that's not including hybrid, a $265 billion TAM, but the notable thing that I want to get your thoughts on is the point they pushed: over 10 years, $150 billion is going to shift out of on-premise IT into other differentiated services. >> Andrew: Out of labor. >> Out of labor. So this, and I asked them what that means, and he said that means it's going to shift to vendor R&D, meaning the suppliers have to do more work. So that the customers don't have to do the R&D. Which we see a lot in cloud, where there's a lot of R&D going on. That's your job. So you guys are HP Labs, what's happening in that R&D area that's going to offload that labor so they can move to some other high yield tasks? >> Sure. Take first. >> John: Go ahead, take a stab at it.
>> When we've been looking at some of the concepts we had in the memory driven computing research and advanced development programs, the machine program, you know one of the things that was the kick off for me back in 2003: we looked at what we had in the Unix market, we had advanced virtualization technologies, we had great management of resources technologies, we had memory fabric technologies. But they were all kind of proprietary. And back then we were saying, how does RISC Unix compete with industry-standard servers? This new methodology, new wave, exciting changing cost structures. And for us it was a chance to explore those ideas and understand how they would affect our maintaining the kind of rich set of customer experiences, mission criticality, security, all of these elements. And it's kind of funny that we're sort of just coming back to the future again, and we're saying okay, we have this move, we want to see these things happen on the cloud, and we're seeing those same technologies, the composable infrastructure we have in Synergy, and looking forward to see the research we've done on the machine advanced development program, and how will that intersect hardware composability, converged infrastructure, so that you can actually have that shift, those technologies coming in, taking on more of that burden to allow you freedom of choice, so you can make sure that you end up with that right mix. The right part on a full public cloud, the right mix on a full private cloud, the right mix on that intelligent edge. But still having the ability to have all of those great software development methodologies, that agile methodology; the only thing the kids know how to do out of school is open source and agile now. So you want to make sure that you can embrace that, and make sure that regardless of where the right spot is for a particular application in your entire enterprise portfolio, you have this common set of experiences and tools.
And some of the research and development we're doing will enable us to drive that into that existing, conventional, enterprise market as well as this intelligent edge. Making a continuum, a continuum from the core to the intelligent edge. And something that modern computer science graduates will find completely comfortable. >> Attracting them is going to be the key. I think the edge is kind of intoxicating if you think about all the possibilities that are out there, just from a business model disruption and also technology. I mean wearables are edge, brain implants in the future will be edge, you know the singularity's here, as Ray Kurzweil would say... >> Yeah. >> I mean but, this is the truth. This is what's happened. This is real right now. >> Oh absolutely. You know we think of all that data, and right now we're just scratching the surface. I remember it was 1994 the first time I fired up a web server inside of my development team. So I could begin thinning out design information on prototype products inside of HP, and it was a novelty. People would say, "What is that thing you just sent me an email, WWW whatever?" And suddenly we went, like almost overnight, from a novelty to a business necessity, to then it transformed the way that we created the applications for the... >> John: A lot of people don't know this, but since you brought it up, this historical trivia: HP Labs, Hewlett Packard Labs, had scientists who actually invented the web with Tim Berners-Lee, I think the HTML founder was an HP Labs scientist. Pretty notable trivia. A lot of people don't know that, so congratulations.
Right now almost all that data on the edge is thrown away. If you, the first person who understands okay I'm going to get 1% more of that data and turn it into real time intelligence, real time action... That will unmake industries and it will remake new industries. >> John: Andrew this the applied research vision, you got to apply R&D to the problem... >> Andrew: Correct. >> That's what he's getting at but you got to also think differently. You got to bring in talent. The young guns. How are you guys bringing in the young guns? What's the, what's the honeypot? >> Well I think you know for us it's, the sell for us, obviously is just the tradition of Hewlett Packard to begin with right? You know we have recognition on that level even it's not just Hewlett Packard Labs as well it's you know just R&D in general right? Kind of it you know the DNA being an engineering company so... But it's you know I think it is creating kind of these opportunities and whether it's internship programs you know just the various things that we're doing whether it's enterprise related, high performance computing... I think this edge opportunity is a really interesting one as a bridge because if you think about all the things that we hear about in enterprise in terms of "Oh you know I need this deep analytics "capability," or you know even a lot of the in memories things that we're talking about, real time response, driving information, right? All of that needs to happen at the edge as well for various opportunities so it's got a lot of the young graduates excited. We host you know hundreds of interns every year and it's real exciting to see kind of the ideas they come in with and you know they're all excited to work in this space. >> Dave: So Kirk you have your machine button, three, of course you got the logo. And then the machine... >> I got the labs logo, I got the machine logo. >> So when I first entered you talked about in the early 1980s. 
When I first got in the business I remember Gene Amdahl. "The best IO is no IO." (laughter) >> Yeah that's right. >> We're here again with this sort of memory-semantics-centric computing. So in terms of the three types of projects that Andrew laid out, the sort of projects you guys pursue... Where does the machine fit? Is it sort of in all three? Or maybe you could talk about that a little bit. >> Kirk: I think it is. So we see those technologies that over the last three years have brought so much new, and the critical thing about this is I think it's also sort of the prototyping of the overall approach, our leaning-in approach here... >> Andrew: That's right. >> It wasn't just researchers. Right? Those 500 people who made that 160 terabyte monster machine possible weren't just from labs. It was engineering teams from across Hewlett Packard Enterprise. It was our supply chain team. It was our services team telling us how these things fit together for real. Now we've had incredible technology experiences, incredible technologist experiences, and what we're seeing is that we have intercepts on conventional platforms where there's the photonics, the persistent memories. Those will make our existing DCIG and SDCG products better almost immediately. But then we also have now these whole-cloth applications, and as we take all of our learnings, drive them into open source software, drive them into the Gen-Z Consortium, we'll see, you know, probably 18, 24 months from now, some of those first optimized silicon designs pop out of that ecosystem. Then we'll be right there to assemble those again, into conventional systems as well as more expansive exascale computing, intelligent edge with large persistent memories and application specific processing as that next generation of gateways. I think we can see these intercept points at every category Andrew talked about.
>> Andrew: And another good point there that kind of magnifies the model we were talking about: if we were sitting here five years ago, we would be talking about things like photonics and non-volatile memory as being those big R projects. Those higher risk, longer term things, right? As those mature, we make more progress, innovation happens, right? It gets pulled into that shorter time frame, that becomes advanced development. >> Dave: And Meg has talked about that... >> Yeah. >> Wanting to get more productivity out of the labs. And she's also pointed out you guys have spent more on R&D in the last several years. But even as we talked about the other day, you want to see a little more D and keep the R going. So my question is, when you get to that point, of being able to support DCIG... Where do you, is it a handoff? Are you guys intimately involved? When you're making decisions about, okay, so Memristor for example, okay this is great, that's still in the R phase, then you bring it in. But now you got to commercialize this, and you got 3D NAND coming out and okay, let's use that, that fits into our framework. So how much do you guys get involved in that handoff? You know, the commercialization of this stuff? >> We get very involved. So it's at the point where, when we think we have something that, hey, maybe this could get into a product, or let's see if there's good intercept here, we work jointly at that point. It's lab engineers, it's the product managers out of the group, engineers out of the business group, they essentially work collectively then on getting it to that next step. So it's kind of just one big R&D effort at that point. >> Dave: And so specifically as it relates to the machine, where do you see it in the near term, let's call near term the next three years, or five years even, what do you see that looking like? Is it this combination of memristors or flash extensions?
What does that look like in commercial terms that we can expect? >> Kirk: So I really think the palette is pretty broad here. I can see these going into existing rack and tower products to allow them to have memory that's composable down to the individual module level. To be able to take that facility to have just the right resources applied at just the right time, with that API that we have in OneView, and extend down to composing the hardware itself. I think we look at those Edgeline systems and want to have just the right kind of analytic capability, large persistent memories at that edge, so we can handle those zettabytes and zettabytes of data in full fidelity, analyzed at the edge, sending back that intelligence to the core, but also taking action at the edge in a timeframe that matters. I also see it coming out and being the basis of our exascale high performance computing. You know, when you want to have an exascale system that has all of the combined capacity of the top 500 systems today but 1/20th of their power, that is going to take rather novel technologies, and everything we've been working on is exactly what's feeding that research, soon to be advanced development, and then soon to be production in supply chain. >> Dave: Great. >> John: So the question I have is, obviously we saw some really awesome Gen 10 stuff here at this show. You guys are seeing that, obviously you're on stage talking about a lot of the cool R&D, but really the reality is that's multiple years in the works. Some of this root of trust silicon technology, that's getting the show buzzed up, everyone's psyched about it. Dreamworks Animation's talking about how inorganic opportunities is helping their business, and they got the security with the root of trust, NIST certified and compliant. Pretty impressive. What's next? What else are you working on? Because this is where the R&D is on your shoulders for that next level of innovation. Where, what do you guys see that?
Because security is a huge deal. That's a great example of how you guys innovated. 'Cause that'll stop the vector of attack in the surface area of IoT; if you can get the servers to lock down and you have firmware that's secure, makes a lot of sense. That's probably the tip of the iceberg. What else is happening with security? >> Kirk: So when we think about security, and our efforts on advanced development research around the machine, what you're seeing here with the ProLiants is making the machines more secure. The inherent platform more secure. But the other thing I would point you to is the application we're running on the prototype. Large scale graph inference. And this is security, because you have a platform like the machine able to digest hundreds and hundreds of terabytes worth of log data to look for that fingerprint, that subtle clue that you have a system that has been compromised. And these are not blatant let's-just-blast-everything-out to some dot dot x-x-x subdomain; this is an advanced persistent threat by a very capable adversary who is very subtle in their reach out from a system that has been compromised to that command and control server. The signs are there if you can look at the data holistically. If you can look at that DNS log, a graph of billions of entries every day, constantly changing, if you can look at that as a graph in totality in a timeframe that matters, then that's an empowering thing for a cyber defense team, and I think that's one of the interesting things that we're adding to this discussion. Not only protect, detect and recover, but giving offensive weapons to our cyber defense team so they can hunt, they can hunt for those events, for system threats. >> John: One of the things, Andrew, I'll get your thoughts and reaction to this, because I'll make an observation and you guys can comment and tell me I'm all wet, fell off the deep end, or what not. Last year HP had great marketing around the machine. I love that Star Trek ad.
It was beautiful and it was just... The Machine is a great marketing technique. I mean, use the machine... So a lot of people set expectations on the machine. You saw articles being written, maybe these people didn't understand it. Little bit pulled back, almost dampened down a little bit in terms of the marketing of the machine, other than the bin. Is that because you don't yet know what it's going to look like? Or there's so many broader possibilities where you're trying to set expectations? 'Cause the machine certainly has a lot of range, and it's almost as if I could read your minds, you don't want to position it too early on what it could do. And that's my observation. Why the pullback? I mean, certainly as a marketer I'd be all over that. >> Andrew: Yeah, I think part of it has been intentional, just in how the ecosystem... we need the ecosystem developed kind of around this at the same time. Meaning, there are a lot of kind of moving parts to it, whether it's around the open source community and kind of getting their heads wrapped around what this new architecture looks like. We've got things like the Gen-Z Consortium where we're pouring a lot of our understanding and knowledge into that. And so we need a lot of partners; we know we're in a day and an age where, look, there's no single company that's going to do every piece and part themselves. So part of it is kind of enough to get out there, to get the buzz, get the excitement, to get other people then on board, and now we have been heads down, especially this last six months, of... >> John: Jamming hard on it. >> Getting it all together. You know, you think about what we showed: we first essentially booted the thing in November, and now you know we've got it running at this scale, that's really been the focus. But we needed a lot of that early engagement, interaction, to get a lot of the other members of the ecosystem kind of on board and starting to contribute.
And really that's where we're at today. >> John: It's almost like you want to let it take its own course organically, because you mentioned, just on the cyber surveillance opportunity around the crunching, you kind of don't know yet what the killer app is, right? >> And that's the great thing of where we're at today. Now that we have kind of the prototype running at scale like this, it allows us to move beyond. Look, we've had the simulators to work with, we've had kind of emulation vehicles, now you've got the real thing to run actual workloads on. You know, we had the announcement around DZNE as kind of an early example, but it really now will allow us to do some refinement that allows us to get to those product concepts. >> Dave: I want to just ask the closing question. So I've had this screen here, it's like the theater, and I've been seeing these great things coming up, and one was "Moore's Law is dead." >> Oh that was my session this morning. >> Another one was blockchain. And unfortunately I couldn't hear it, but I could see the tease. So when you guys come to work in the morning, what's kind of the driving set of assumptions for you? Is it just that the technology is limitless and we're going to go figure it out, or are there things that sort of frame your raison d'etre? That drive your activities and thinking? And what are the fundamental assumptions that you guys use to drive your actions? >> Kirk: So what's been driving me for the last couple years is this exponential growth of information that we create as a species. That seems to have no upper bounding function that tamps it down. At the same time, the timeframe in which we want to get from raw information to insight that we can take action on seems to be shrinking from days, weeks, minutes... Now it's down to microseconds. If I want to have an intelligent power grid, intelligent 3G communication, I have to have microseconds.
So if you look at those two things, and at the same time, we just happen to be the lucky few who are sitting in these seats right when Moore's Law is slowing down and will eventually flatten out. And so all the skills that we've had over the last 28 years of my career, you look at those technologies and you say, "Those aren't the ones that are going to take us forward." This is an opportunity for us to really look and examine every piece of this, because if there was some "can't we just dot dot dot" one thing we could do, we would do it, right? We can't just do one thing. We have to be more holistic if we're going to create the next 20, 30, 40 years of innovation. And that's really what I'm looking at. How do we get back exponential scaling on supply to meet this unending exponential demand? >> Dave: So technically I would imagine that's a very hard thing to balance, because the former says that we're going to have more data than we've ever seen. The latter says we've got to act on it fast, which is a great trend for memory, but the economics are going to be such a challenge to meet, to balance that. >> Kirk: We have to be able to afford the energy, and we have to be able to afford the material cost, and we have to be able to afford the business processes that do all these things. So yeah, you need breakthroughs. And that's really what we've been doing. And I think that's why we're so fortunate at Hewlett Packard Enterprise to have the labs team, but also that world class engineering and that world class supply chain, and a services team that can get us introduced to every interesting customer around the world who has those challenging problems, and can give us that partnership and that insight to get those kind of breakthroughs.
>> Dave: And I wonder if there will be a tipping point, if the tipping point will be, and I'm sure you've thought about this, a change in the application development model that drives so much value and so much productivity that it offsets some of the potential cost issues of changing the development paradigm. >> And I think you're seeing hints of that. Now we saw this when we went from systems of record, OLTP systems, to systems of engagement, mobile systems, and suddenly new ways to develop it. I think now the interesting thing is we move over to systems of action, and we're moving from programmatic to training. And this is this interesting thing: if you have those zettabytes of data, you can't have a pair of human eyeballs in front of that, you have to have a machine learning algorithm. That's the only thing that's voracious enough to consume this data in a timely enough fashion to get us answers, but you can't program it. We saw those old approaches in old school A.I., old school autonomous vehicle programs; they'd go about 10 feet, boom, and they'd flip over, right? Now you know they're on our streets and they are functioning. They're a little bit raw right now, but that improvement cycle is fantastic, because they're training, they're not programming. >> Great opportunity, to your point about Moore's Law, but also all this new functionality that has yet to be defined, is right on the doorstep. Andrew, Kirk, thank you so much for sharing. >> Andrew: Thank you >> Great insight, love Hewlett Packard Labs, love the R&D conversation. Gives us a chance to go play in the wild and dream about the future you guys are out creating. Congratulations, and thanks for spending the time on The Cube, appreciate it. >> Thanks. >> The Cube coverage will continue here live at Las Vegas for HPE Discover 2017, Hewlett Packard Enterprise's annual event. We'll be right back with more, stay with us. (bright music)
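Kirk's description of large-scale graph inference over DNS logs deserves a concrete illustration. The sketch below is not the Labs system he describes, just a toy model of the pattern he is hunting: a compromised host beaconing out to its command-and-control server at suspiciously regular intervals. The field layout, thresholds, and synthetic log are all assumptions made for illustration.

```python
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(dns_log, min_queries=5, max_jitter=2.0):
    """Flag (host, domain) pairs whose queries arrive at suspiciously
    regular intervals -- a classic command-and-control beacon pattern."""
    times = defaultdict(list)
    for ts, host, domain in dns_log:
        times[(host, domain)].append(ts)
    suspects = []
    for key, ts_list in times.items():
        if len(ts_list) < min_queries:
            continue
        ts_list.sort()
        gaps = [b - a for a, b in zip(ts_list, ts_list[1:])]
        # A near-constant gap means a low standard deviation of intervals
        if pstdev(gaps) <= max_jitter:
            suspects.append((*key, round(mean(gaps), 1)))
    return suspects

# Synthetic log: host "10.0.0.5" beacons to evil.example every ~60 s,
# while a normal host queries at irregular times.
log = [(t, "10.0.0.5", "evil.example") for t in range(0, 600, 60)]
log += [(t, "10.0.0.7", "cdn.example") for t in (3, 50, 51, 200, 470)]
print(find_beacons(log))  # → [('10.0.0.5', 'evil.example', 60.0)]
```

A real deployment would run analysis like this over billions of entries held in memory, which is exactly why a memory-driven architecture matters for the use case; the algorithmic idea itself fits in a few lines.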
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kirk | PERSON | 0.99+ |
Andrew | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Andrew Wheeler | PERSON | 0.99+ |
Tim Berners-Lee | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Meg Whitman | PERSON | 0.99+ |
Ray Kersewile | PERSON | 0.99+ |
Hewlett Packard Labs | ORGANIZATION | 0.99+ |
Meg | PERSON | 0.99+ |
2003 | DATE | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Hewlett Packard | ORGANIZATION | 0.99+ |
1994 | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Gene Emdall | PERSON | 0.99+ |
$265 billion | QUANTITY | 0.99+ |
Kirk Bresniker | PERSON | 0.99+ |
Jin Zee Consortium | ORGANIZATION | 0.99+ |
November | DATE | 0.99+ |
three | QUANTITY | 0.99+ |
Dreamworks Animation | ORGANIZATION | 0.99+ |
Last year | DATE | 0.99+ |
Star Trek | TITLE | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
160 terabyte | QUANTITY | 0.99+ |
three day | QUANTITY | 0.99+ |
500 people | QUANTITY | 0.99+ |
HP Labs | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
five year | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
three years | QUANTITY | 0.99+ |
Hewlett Packard Labs | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.98+ |
1% | QUANTITY | 0.98+ |
Moore's Law is dead | TITLE | 0.98+ |
early 1980s | DATE | 0.98+ |
five years | QUANTITY | 0.98+ |
five years ago | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
1/20th | QUANTITY | 0.98+ |
three types | QUANTITY | 0.97+ |
DCIG | ORGANIZATION | 0.97+ |
500 systems | QUANTITY | 0.97+ |
Natalia Vassilieva & Kirk Bresniker, HP Labs - HPE Discover 2017
>> Announcer: Live from Las Vegas, it's the CUBE! Covering HPE Discover 2017. Brought to you by Hewlett Packard Enterprise. >> Hey, welcome back, everyone. We are live here in Las Vegas for SiliconANGLE Media's CUBE exclusive coverage of HPE Discover 2017. I'm John Furrier, my co-host, Dave Vellante. Our next guest is Kirk Bresniker, fellow and VP chief architect of Hewlett Packard Labs, and Natalia Vassilieva, senior research manager, Hewlett Packard Labs. Did I get that right? >> Yes! >> John: Okay, welcome to theCUBE, good to see you. >> Thank you. >> Thanks for coming on, really appreciate you guys coming on. One of the things I'm most excited about here at HPE Discover is, always like to geek out on the Hewlett Packard Labs booth, which is right behind us. If you go to the wide shot, you can see the awesome display. But there's some two things in there that I love. The Machine is in there, which I love the new branding, by the way, love that pyramid coming out of the, the phoenix rising out of the ashes. And also Memristor, really game-changing. This is underlying technology, but what's powering the business trends out there that you guys are kind of doing the R&D on is AI, and machine learning, and software's changing. What's your thoughts as you look at the labs, you look out on the landscape, and you do the R&D, what's the vision? >> One of the things what is so fascinating about the transitional period we're in. We look at the kind of technologies that we've had 'til date, and certainly spent a whole part of my career on, and yet all these technologies that we've had so far, they're all kind of getting about as good as they're going to get. You know, the Moore's Law semiconductor process steps, general-purpose operating systems, general-purpose microprocessors, they've had fantastic productivity growth, but they all have a natural life cycle, and they're all maturing. And part of The Machine research program has been, what do we think is coming next? 
>> And really, what's informing us as to what we have to set as the goals is the kinds of applications that we expect. And those are data-intensive applications, not just petabytes, exabytes, but zettabytes. Tens of zettabytes, hundreds of zettabytes of data out there in all those sensors in the world. And when you want to analyze that data, you can't just push it all back to an individual human; you need to employ machine learning algorithms to go through that data and find those needles in those increasingly enormous haystacks, so that you can get that key correlation. And when you don't have to reduce and redact and summarize data, when you can operate on the data at that intelligent edge, you're going to find those correlations, and that machine learning algorithm is going to be the unbiased and unblinking eye that finds the key relationship that will really have a transformational effect.

>> I think that's interesting. I'd like to ask you just one follow-up question on that, because it reminds me of my youth, around packets, and the buffers, and the speeds and feeds. At some point there was a wire-speed capability: hey, packets are moving, and you can do all this analysis at wire speed. What you're getting at is data processing as fast as the data is coming in and out. If I get that right, is that kind of where you're going with this? Because if you have a potentially infinite amount of data coming in, and the data velocity is so high, how do you know what a needle looks like?

>> I think that's key, and that's why the research Natalia's been doing is so fundamental: we need to be able to process that incredible amount of information and be able to afford to do it.
>> The way it will fail to scale is if you have to take that data, compress it, reduce it, select it down because of some predetermined decision you've made, transmit it to a centralized location, do the analysis there, and then send back the action commands. We need that cycle of intelligence, measurement, analysis, and action, to be microseconds. And that means it needs to happen at the intelligent edge. I think that's where machine learning will be the key: algorithms that you don't program, you train, so that they can work off this enormous amount of data, voraciously consume it, and produce insights.

>> Natalia, tell us about your research in this area. Curious about your thoughts.

>> We started to look at existing machine learning algorithms, and at whether there are limiting factors in today's infrastructure that don't allow those algorithms to progress fast enough. One of the recent advances in AI is the appearance, or revival, of artificial neural networks: deep learning. There is a lot of hype around those types of algorithms. Every speech assistant you use, Siri on your phone, Cortana, or Alexa from Amazon, all of them use deep learning to train their speech recognition systems. If you go to Facebook and it starts proposing to tag the faces of your friends, that face detection and face recognition, that was also deep learning. So that's a revival of the old artificial neural networks. Today we are able to train models for those types of tasks, but we want to move forward. We want to be able to process larger volumes of data, to find more complicated patterns, and to do that, we need more compute power. Today, the only way to add more compute power is to scale out; there is no single compute device on Earth capable of doing all the computation.
>> You need many of them interconnected, all crunching numbers for the same problem. But at some point, the communication between those nodes becomes a bottleneck: each node needs to let its neighbors know what it has achieved, and you can't scale out anymore. Adding yet another node to the cluster won't lead to a reduction in training time. With The Machine, with the memory-driven computing architecture, where all data sits in the same shared pool of memory and all computing nodes can talk to that memory, we don't have that limitation anymore. So we are looking forward to deploying those algorithms on that type of architecture. We envision significant speedups in training, and it will allow us to retrain the model on new data as it comes in, so we no longer have to do training offline.

>> So how does this all work? When HP split into two companies, Hewlett Packard Labs went to HPE and HP Labs went to HP Inc. So what went where? That's the first question. The second question is, how do you decide what to work on?

>> In terms of how we organized ourselves, obviously, things that were around printing and personal systems went to HP Inc. Things that were around analytics, enterprise, hardware, and research went to Hewlett Packard Labs. The one thing that we both found equally interesting was security, 'cause obviously, personal systems, enterprise systems, we all need systems that are increasingly secure because of the advanced, persistent threats that are constantly assaulting everything from our personal systems up through enterprise and public infrastructure. So that's how we've organized ourselves. Now, in terms of what we get to work on, we're in an interesting position. I came to Labs three years ago. I used to be the chief technologist for the server global business unit. I was in the world of big D, tiny R.
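Natalia's earlier point about the scale-out bottleneck can be sketched with a toy cost model of data-parallel training: compute time per step shrinks as nodes are added, but the gradient exchange between nodes grows, so past some cluster size another node no longer reduces step time. The constants below are invented for illustration, not measurements from HPE or figures from this interview.

```python
# Toy model of data-parallel training: per-step time is compute work
# (divided across nodes) plus communication (growing with node count).
# All constants are illustrative, not measured.

def step_time(nodes: int, compute: float = 64.0, comm_per_node: float = 0.5) -> float:
    """Seconds per training step on a cluster of `nodes` machines."""
    return compute / nodes + comm_per_node * nodes

# Past the sweet spot, adding nodes makes each step slower, not faster.
sweet_spot = min(range(1, 65), key=step_time)
for n in (1, 2, 4, 8, 16, 32, 64):
    print(f"{n:3d} nodes -> {step_time(n):6.2f} s/step")
print("fastest at", sweet_spot, "nodes")
```

A shared memory pool, as in the memory-driven architecture she describes, attacks the second term: if all nodes address the same data rather than exchanging copies, the communication cost no longer grows with the cluster.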
>> Natalia and the research team at Labs, they were out there looking five, 10, 15, 20 years ahead. Huge R, and then we would meet together occasionally. One of the things that's happened with our Machine advanced development and research program is that I came to Labs not to become a researcher, but to facilitate that communication: to bring in the engineering and supply chain teams, that technical and production prowess, and the experience from our services teams, who know how things actually get deployed in the real world. And I get to sit them at the bench with Natalia and the researchers, and I get to make everyone unhappy, hopefully in equal amounts. The development teams realize we're going to make progress, that we will end up with fantastic products, both conventional systems and new ones, but it will be a while. That's why we had to build our prototype: to say, no, we need a constructive proof of these ideas. At the same time, Natalia and the research teams were always looking for that next horizon, that next question. Maybe we pulled them a little bit closer, got a few answers out of them rather than just the next question. So part of what we've been doing at Labs is understanding, how do we organize ourselves? How do we work with the Hewlett Packard Enterprise Pathfinder program to find those little startups who need that extra piece of something we can offer as a partnering community? It's really a novel approach for us: how do we fill that gap, how do we still have great conventional products, how do we enable breakthrough new-category products, and have it all in a timeframe that matters?

>> So, a much tighter connection between the R and the D. And then, okay, when Natalia wants to initiate a project, or somebody wants Natalia to initiate a project around AI, how does that work?
Do you say, "Okay, submit an idea," and then it goes through some kind of peer review? And then, how does it get funded? Take us through that.

>> I'll give my perspective; I would love to hear what you have from your side. For me, it's always been organic. The ideas that we had on The Machine, for me, my little thread, one of thousands that have been brought in to get us to this point, started about 2003, when we were midway through BladeSystem c-Class. A category-defining product, an absolute home run in defining what a blade system was going to be. And partway through that, you realize you've got a success on your hands. You think, "Wow, nothing gets better than this!" Then you start to worry: what if nothing gets better than this? And you start thinking about that next set of things. Now, I had some insights of my own, but when you're a technologist and you have an insight, that's a great feeling for a little while, and then it's a little bit of a lonely feeling: no one else understands this but me, and is it always going to be that way? Then you have to find that business opportunity. That's where talking with our field teams, talking with our customers, coming to events like Discover, where you see business opportunities, and you realize, my ingenuity and this business opportunity are a match. Now, the third piece of that is a business leader who can say, "You know what? Your ingenuity and that opportunity can meet in a finite time with finite resources. Let's do it." And really, that's what Meg and the leadership team did with us on The Machine.

>> Kirk, I want to shift gears and talk about the Memristor, because I think that's the showcase everyone's talking about. Actually, The Machine has been talked about for many years now, but the Memristor changes the game. It kind of goes back to old-school analog, right?
>> We're talking about, you know, log n, n-log-n kinds of performance that we've never seen before. So it's a completely different take on memory, and this brings up your vision, and the team's vision, of memory-driven computing, which some are saying can scale machine learning. 'Cause now you have data response times in microseconds, as you said, and provisioning containers in microseconds is actually really amazing. So, the question is, what is memory-driven computing? What does that mean? And what are the challenges in deep learning today?

>> I'll do the machine learning--

>> I will do deep learning.

>> You'll do the machine learning. So, when I think of memory-driven computing, it's the realization that we need a new set of technologies, and it's not just one thing. If it were a matter of "can't we just do X," we would've done that one thing. This is a holistic approach, looking at all the technologies we need to pull together. Now, memories are fascinating, and our Memristor is one example of a new class of memory. But they also--

>> John: It's doing it differently, too, it's not like--

>> It's changing the physics. You want to change the economics of information technology? You change the physics you're using. So here, we're changing physics. And whether it's our work on the Memristor with Western Digital in the resistive-RAM program, or the phase-change memories, or the spin-torque memories, they're all applying new physics. What they all share, though, is that they can continue to scale: in the layers inside a die, the dies inside a package, the packages inside a module. And then when we add photonics, a transformational information communications technology, we're scaling from the package to the enclosure, to the rack, across the aisle, and then across the data center, with all that memory accessible as memory. So that's the first piece: large, persistent memories.
>> The second piece is the fabric, the way we interconnect them so that we can have great computational, memory, and communication devices available on industry open standards; that's the Gen-Z Consortium. The last piece is software: new software, as well as adapting existing productive programming techniques, enabling people to be very productive immediately.

>> Before Natalia gets into her piece, I just want to ask a question, because this is interesting to me. Sorry to get geeky here, but this is really cool, because you're going analog with signaling, going back to the old concepts of signaling theory. You mentioned neural networks; it's almost a hand-in-glove situation with neural networks. So here's the next question: connect the dots to machine learning and neural networks. This seems to be an interesting technology game-changer. Is that right? Am I getting this right? What does this mean?

>> I'll just add one piece, and then we'll hear from Natalia, who's the expert on the machine learning. For me, it's bringing the right ensemble of components together: memory technologies, communication technologies, and, as you say, novel computational technologies. 'Cause transistors are not going to get smaller for very much longer. We have to think of something more clever to do than just stamp out another copy of a standard architecture.

>> Yes, you asked about the challenges of deep learning. If we look at the landscape of deep learning today and the set of tasks solved by those algorithms, we see that although there is a variety of tasks solved, most of them are from the same area. We can analyze images very efficiently, we can analyze video, so it's all visual data; we can also do speech processing. There are examples in other domains, with other data types, but they are much fewer, and there is much less knowledge about which models to train for those applications.
>> One of the challenges for deep learning is to expand the variety of applications in which it can be used. It's known that artificial neural networks are very well suited to data with many hidden patterns underneath, and to multi-dimensional data, like data from sensors. But we still need to learn what the right topology of neural network is for that, and what the right algorithm is to train it. So we need to broaden the scope of applications that can take advantage of deep learning. Another aspect, which I mentioned before, is the computational power of today's devices. If you think about the well-known analogy between artificial neural networks and our brain, the models we train today are much, much smaller than the analogous thing in our brain, by many orders of magnitude. It has been shown that if you increase the size of the model, you can get better accuracy for some tasks, and you can process a larger variety of data. But in order to train those large models, you need more data and you need more compute power. Today, we don't have enough compute power. We actually did some computation: to train a model comparable in size to the human brain in a reasonable time, you would need a compute device capable of performing 10 to the power of 26 floating-point operations per second. We are far, far--

>> John: Can you repeat that again?

>> 10 to the power of 26. We are far, far below that point now.

>> All right, so here's the question for you guys. There's all this deep learning source code out there. It's open bar for open source right now; all this goodness is pouring in. Google's donating code, you guys are donating code. It used to be that you had to build your code from scratch, borrow here and there, and share in open source. Now it's a tsunami of greatness, so I'm just going to build my own deep learning.
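To put Natalia's 10^26 figure in perspective, a one-line division shows why she says we are far below that point. The ten-teraflops-per-accelerator number is an assumed round figure for 2017-era hardware, not something stated in the interview.

```python
# Natalia's target compute rate vs. a single 2017-era accelerator.
# ~10 TFLOP/s per device is an assumed round number, not a quoted spec.

TARGET_FLOP_PER_SEC = 1e26   # rate she cites for brain-scale training
DEVICE_FLOP_PER_SEC = 10e12  # ~10 TFLOP/s per accelerator (assumption)

devices_needed = TARGET_FLOP_PER_SEC / DEVICE_FLOP_PER_SEC
print(f"accelerators needed: {devices_needed:.0e}")  # on the order of ten trillion
```

Even ignoring the inter-node communication bottleneck discussed earlier, that is roughly ten trillion accelerators, which is one way to see why the Labs team looks to new physics and architectures rather than simply scaling out.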
>> How do customers do that? It's too hard.

>> You are right on point about the next challenge of deep learning, which I believe is out there. We have so many efforts to speed up the infrastructure, and we have so many open source libraries. So now the question is: okay, I have my application at hand, what should I choose? What is the right compute node for deep learning? Everybody uses GPUs, but is that true for all models? How many GPUs do I need? What is the optimal number of nodes in the cluster? We have a research effort toward answering those questions as well.

>> And a breathalyzer for all the drunk coders out there, open bar. I mean, a lot of young kids are coming in; this is a great opportunity for everyone. And in all seriousness, we need algorithms for the algorithms.

>> And I think that's where it's so fascinating. We think of some classes of things, like recognizing handwriting or recognizing voice, but think about applying machine learning algorithms to the volume of sensor data, so that not only every item we manufacture, but every factory, can be fully instrumented, with machine learning understanding how it can be optimized. And then, what of the business processes feeding that factory? And what are the overall economic factors feeding that business? Instrumenting all of it, and having this unblinking, unbiased eye examining it to find those hidden correlations, those hidden connections, could yield a much more efficient system at every level of human enterprise.

>> And the data's more diverse now than ever. I'm sorry to interrupt, but in voice you mentioned Siri, and you see Alexa: voice as one dataset. Data diversity is massive, so more needles, and more types of needles, than ever before.

>> In that example that you gave, you need a domain expert.
>> And there's plenty of those, but you also need a big brain to build the model, train the model, and iterate, and there aren't that many of those. Is the state of machine learning and AI going to get to the point where that problem solves itself, or do we just need to train more big brains?

>> Actually, one of the advantages of deep learning is that you don't need that much effort from the domain experts anymore for the step called feature engineering: what you do with your data before you throw a machine learning algorithm at it. The pretty cool thing about deep learning and artificial neural networks is that you can throw almost raw data at them. And there are examples out there: people without any knowledge of medicine won a drug-discovery competition by applying deep neural networks, without knowing all the details about the connections between proteins and the like. Not domain experts, but they were still able to win that competition, just because the algorithm is that good.

>> Kirk, I want to ask you a final question before we break in the segment. Having spent nine years of my career at HP in the '80s and '90s, it's been well known that there's great research at HP; the R&D has been spectacular, though perhaps too much R and not enough applied D, and you mentioned you're bringing that to market faster. So, the question is, what should customers know about Hewlett Packard Labs today? Your mission: obviously memory-centric computing is the key thing, you've got The Machine, you've got the Memristor, you've got a novel way of looking at things. What's the story you'd like to share? Take a minute, close out the segment, and share Hewlett Packard Labs' mission, and what to expect from you guys in terms of your research, your development, your applications. What are you guys bringing out of the kitchen? What's cooking in the oven?
>> I think for us, it is that we've been given an opportunity: an opportunity to take all of those ideas that we have been ruminating on for five, 10, maybe even 15 years, all those things where you thought, "this is really something," and to build a practical working example. We just turned on the prototype with more memory and computation addressable simultaneously than anyone's ever assembled before. And I think that's a real vote of confidence from our leadership team, that they said, "The ideas you guys have are going to change the way the world works, and we want to see you given every opportunity to make that real, and to make it effective." And everything that Hewlett Packard Enterprise has done to focus the company on being that fantastic infrastructure provider and partner is enabling us to take this innovation and make it meaningful. I've been designing printed circuit boards for 28 years now, and I must admit, it is intellectually stimulating on one level, but when you actually meet someone who's changing the face of Alzheimer's research, or changing the way we produce energy as a society, and who has an opportunity to really create a more sustainable world, then you say, "That's really worth it." That's why I get up and come to Labs every day: to work with fantastic researchers like Natalia, with great customers, great partners, and our whole supply chain, the whole team coming together. It's just spectacular.

>> Well, congratulations, and thanks for sharing the insight on theCUBE. Natalia, thank you very much for coming on. Great stuff going on; looking forward to tracking the progress and checking in with you. It's always good to see what's going on in the Labs. That's the headroom, that's the future, that's the bridge to the future. Thanks for coming on theCUBE. Of course, more CUBE coverage here at HPE Discover, with the keynotes coming up.
Meg Whitman on stage with Antonio Neri. Back with more live coverage after this short break. Stay with us. (energetic techno music)