
Sharad Singhal, The Machine & Michael Woodacre, HPE | HPE Discover Madrid 2017

>> Man: Live from Madrid, Spain, it's the Cube! Covering HPE Discover Madrid, 2017. Brought to you by Hewlett Packard Enterprise.
>> Welcome back to Madrid, everybody, this is The Cube, the leader in live tech coverage. My name is Dave Vellante, I'm here with my co-host, Peter Burris, and this is our second day of coverage of HPE's Madrid conference, HPE Discover. Sharad Singhal is back, Director of Machine Software and Applications at Hewlett Packard Labs, HPE.
>> Good to be back.
>> And Mike Woodacre is here, a distinguished engineer from Mission Critical Solutions at Hewlett Packard Enterprise. Gentlemen, welcome to the Cube, welcome back. Good to see you, Mike.
>> Good to be here.
>> Superdome Flex is all the rage at this show! (laughs) You guys are happy about that? You were explaining off-camera that it is the first jointly-engineered product from SGI and HPE, so you hit a milestone.
>> Yeah, I came into Hewlett Packard Enterprise just over a year ago with the SGI acquisition. We were already working on our next-generation in-memory computing platform, so we basically hit the ground running, integrated the engineering teams immediately once we closed the acquisition so we could drive through the finish line, and with the recent product announcement, we're really excited to get that out into the market. It really represents the leading in-memory computing system in the industry.
>> Sharad, high-performance computing has always meant big data, needing big memories, lots of performance... How has, or has, the acquisition of SGI shaped your agenda in any way, or your thinking, or advanced some of the innovations that you guys are coming up with?
>> Actually, it was truly a meeting of the minds when these guys came into HPE. We had been talking about memory-driven computing, the machine prototype, for the last two years. Some of us were aware of it, but a lot of us were not. These guys had been working essentially in parallel on similar concepts. Some of the work we had done, we were thinking in terms of our road maps, and they were looking at the same things; their road maps were looking incredibly similar to what we were talking about. As the engineering teams came together, we brought both the Superdome X technology and the UV300 technology together into this new product, which Mike can talk a lot more about. From my side, I was talking about the machine and the machine research project. When I first met Mike and started talking to him about what they were doing, my immediate reaction was, "Oh wow, wait a minute, this is exactly what I need!" I was talking about something where I could take the machine concepts and deliver products to customers in the 2020 time frame. With the help of Mike and his team, we are now able to take the benefits we are describing in the machine program and make those ideas available to customers right now. I think to me that was the fun part of this journey here.
>> So what are the key problems that your team is attacking with this new offering?
>> The primary use case for the Superdome Flex is really high-performance, in-memory database applications; typically SAP HANA is sort of the industry-leading solution in that space right now. One of the key things with the Superdome Flex, you know, Flex is the operative word, is the flexibility. You can start with a small building block, a four-socket, three-terabyte building block, and then you just connect these boxes together.
The memory footprint just grows linearly. The latency across our fabric stays constant as you add these modules together. We can deliver up to 32 processors and 48 terabytes of in-memory data in a single rack. So it's really the flexibility, sort of a pay-as-you-grow model. As their needs grow, customers don't have to throw out the infrastructure; they can add to it.
>> So when you take a look ultimately at the combination, we talked a little bit about some of the new types of problems that can be addressed, but let's bring it practical to the average enterprise. What can the enterprise do today, as a consequence of this machine, that they couldn't do just a few weeks ago?
>> It sort of builds on the modularity, as I explained. If you ask a CEO today, "what's my database requirement going to be in two or three years?", they're like, "I hope my business is successful, I hope I'm gonna grow my needs, but I really don't know how far that's going to grow." So the flexibility to just add modules and scale up the capacity of memory brings that. The whole concept of in-memory databases is basically bringing your online transaction processing and your data-analytics processing together, so you can do this in real time, and instead of your data going to a data warehouse and looking at how the business was operating days or weeks or months ago, I can see how it's acting right now, with the latest updates of transactions.
>> So this is important. You mentioned two different things. Number one, you mentioned you can envision... or, three things. You can start using modern technology immediately on an extremely modern platform. Number two, you can grow this and scale this as needs follow, because HANA in-memory is not gonna have the same scaling limitations that, you know, Oracle on a bunch of spinning disks had.
>> Mike: Exactly.
>> So you still have the flexibility to learn, and then, very importantly, you can start adding new functions, including automation, because now you can put the analytics and the transaction processing together, close that loop, so you can bring transactions, analytics, boom, into a piece of automation, and scale that in unprecedented ways. That's kind of three things that the business can now think about. Have I got that right?
>> Yeah, that's exactly right. It lets people really understand how their business is operating in real time, look for trends, look for new signatures in how the business is operating. They can basically build on their success, and having this sort of technology gives them a competitive advantage, so they can out-compute or out-compete and get ahead of the competition.
>> But it also presumably leads to new kinds of efficiencies, because you can converge, that converge word that we've heard so much. You can not just converge the hardware and converge the system software management, but you can now increasingly converge tasks, bring those tasks in the system, but also at a business level, down onto the same platform.
>> Exactly, and so moving in-memory is really about bringing real time to the problem. Instead of batch-mode processing, you bring in the real-time aspect. Humans, we're interactive, we like to ask a question, get an answer, get on to the next question in real time. When processes move from batch mode to real time, you just get a step change in the innovation that can occur. We think with this foundation, we're really enabling the industry to step forward.
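To put rough numbers on that pay-as-you-grow model, here is a minimal sketch in Python. The per-chassis constants are assumptions inferred from the figures quoted above (four-socket building blocks starting at three terabytes, scaling to 32 processors and 48 terabytes in a rack, which implies up to six terabytes per fully populated chassis), not figures from an HPE spec sheet.

    # Hypothetical model of the Superdome Flex "pay as you grow" scaling
    # described above. Constants are inferred from the quoted numbers.
    SOCKETS_PER_CHASSIS = 4
    MAX_CHASSIS = 8               # 8 chassis x 4 sockets = 32 processors
    TB_PER_CHASSIS_ENTRY = 3      # entry building block quoted above
    TB_PER_CHASSIS_MAX = 6        # implied by 48 TB across 8 chassis

    def capacity(chassis, tb_per_chassis=TB_PER_CHASSIS_MAX):
        """Return (processors, terabytes) for a given chassis count."""
        if not 1 <= chassis <= MAX_CHASSIS:
            raise ValueError("this model scales from 1 to 8 chassis")
        return chassis * SOCKETS_PER_CHASSIS, chassis * tb_per_chassis

    for n in (1, 2, 4, 8):
        cpus, tb = capacity(n)
        print(f"{n} chassis -> {cpus} processors, {tb} TB of shared memory")

Memory grows linearly with the chassis count, which is Mike's point: capacity gets added without replacing what is already installed.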
>> So let's create a practical example here. Let's apply this platform to a sizeable system that's looking at customer behavior patterns. Then let's imagine how we can take the e-commerce system that's actually handling order, billing, fulfillment and all those other things. We can bring those two things together, not just in a way that might work if we have someone online for five minutes, but right now. Is that kind of one of those examples that we're looking at?
>> Absolutely. You basically have a history of the customers you're working with. In retail, when you go in a store, the store will know your history of transactions with them. They can decide if they want to offer you real-time discounts on particular items. They'll also be taking in other data, weather conditions, to drive their business. Suddenly there's going to be a heat wave, I want more ice cream in the store, or it's gonna be freezing next week, I'm gonna order in more coats and mittens for everyone to buy. So taking in lots of transactional data, not just the actual business transaction, but environmental data, you can accelerate your ability to provide consumers with the things they will need.
>> Okay, so I remember when you guys launched Apollo. Antonio Neri was running the server division, you might have had networking under him. He did a little reveal on the floor. Antonio's actually in the house over there.
>> Mike: (laughs) Next door.
>> There was an astronaut at the reveal. We covered it on the Cube. He's always been very focused on this part of the business, the high-performance computing, and obviously the machine has been a huge project. How has the leadership been? We had a lot of skeptics early on that said you were crazy. What was the conversation like with Meg and Antonio? Were they continuously supportive, were they sometimes skeptical too? What was that like?
>> If you think about the total amount of effort we've put into the machine program, truly speaking, that kind of effort would not be possible if the senior leadership was not behind us inside this company. Right? A lot of us in HP Labs were working on it, but it was not just a labs project; it was a project where our business partners were working on it. We brought together engineering teams from the business groups who understood how products were put together. We had software people working with us who were working inside the business, we had researchers from labs working, we had supply-chain partners working with us inside this project. A project of this scale and scope does not succeed if it's a handful of researchers doing this work. We had enormous support from the business side and from our leadership team. I give enormous thanks to our leadership team for allowing us to do this, because it's an industry thing, not just an HP Enterprise thing. At the same time, with this kind of investment, there's clearly an expectation that we will make it real. It's taken us three years to go from "here is a vague idea from a group of crazy people in labs" to something which actually works and is real. Frankly, the conversation in the last six months has been, "okay, so how do we actually take it to customers?" That's where the partnership with Mike and his team has become so valuable. At this point in time, we have a shared vision of where we need to take this. We have something where we can on-board customers right now. We have something where, frankly, even I'm working on the examples we were talking about earlier today. Not everybody can afford a 16-socket giant machine.
The Superdome Flex allows my customer, or anybody who is playing with an application, to start small, with something that is reasonably affordable, and try that application out. If the application is working, they have the ability to scale up. This is what makes the Superdome Flex such a nice environment to work in for the types of applications I'm worrying about. When we started this program, people would ask us, "when will the machine become a product?" From day one, we said, "the machine product will be something that might become available to you in some form or another by the end of the decade." Well, suddenly, with Mike, I think I can make it happen right now. It's not quite the end of the decade yet, right? So I think that's what excited me about this partnership we have with the Superdome Flex team: the fact that they had the same vision and the same aspirations that we do. It's a platform that allows my current customers, with their current applications, like Mike described within the context of, say, SAP HANA, a scalable platform they can operate now. It's also something that allows them to evolve towards the future and start putting in new applications that they haven't even thought about today. Those were the kinds of applications we were talking about. It makes it possible for them to move into this journey today.
>> So what is the availability of Superdome Flex? Can I buy it today?
>> Mike: You can buy it today. Actually, I had the pleasure of installing the first early-access system in the UK last week. We've been delivering large-memory platforms to Stephen Hawking's team at Cambridge University for the last twenty years, because they really like the in-memory capability to allow them, as they say, to be scientists, not computer scientists, in working through their algorithms and data. Yeah, it's ready for sale today.
>> What's going on with Hawking's team? I don't know if this is fake news or not, but I saw something come across that said he says the world's gonna blow up in 600 years. (laughter) I was like, uh-oh, what's Hawking got going now? (laughs) That's gotta be fun working with those guys.
>> Yeah, it's been fun working with that team. Actually, what I would say, following up on Sharad's comment, it's been really fun this last year, because I had sort of been following the machine from outside when the announcements were made a couple of years ago. Immediately when the acquisition closed, I was like, "tell me about the software you've been developing, tell me about the photonics and all these technologies," because, boy, I can now accelerate where I want to go with the technology we've been developing. Superdome Flex is really the first step on the path. It's a better product than either company could have delivered on their own. Now, over time, we can integrate other learnings and technologies from the machine research program. It's a really exciting time.
>> Excellent. Gentlemen, I always loved the SGI acquisition. Thought it made a lot of sense. Great brand; it kind of put SGI back on the map in a lot of ways. Gentlemen, thanks very much for coming on the Cube.
>> Thank you again.
>> We appreciate you.
>> Mike: Thank you.
>> Thanks for coming on. Alright everybody, we'll be back with our next guest right after this short break. This is the Cube, live from HPE Discover Madrid. Be right back. (energetic synth)

Published Date: Nov 29, 2017


Sharad Singhal, The Machine & Matthias Becker, University of Bonn | HPE Discover Madrid 2017

>> Announcer: Live from Madrid, Spain, it's theCUBE, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise.
>> Welcome back to Madrid, everybody, this is theCUBE, the leader in live tech coverage, and my name is Dave Vellante. I'm here with Peter Burris, and this is day two of HPE Hewlett Packard Enterprise Discover in Madrid, their European version of a show that we also cover in Las Vegas, a kind of six-month cadence of innovation and organizational evolution of HPE that we've been tracking now for several years. Sharad Singhal is here, he covers software architecture for the machine at Hewlett Packard Enterprise, and Matthias Becker, who's a postdoctoral researcher at the University of Bonn. Gentlemen, thanks so much for coming on theCUBE.
>> Thank you.
>> No problem.
>> You know, we talk a lot on theCUBE about how technology helps people make money or save money, but now we're talking about, you know, something just more important, right? We're talking about lives and the human condition and...
>> Peter: Hard problems to solve.
>> Specifically, yeah, hard problems like Alzheimer's. So Sharad, why don't we start with you? Maybe talk a little bit about what this initiative is all about, what the partnership is all about, what you guys are doing.
>> So we started on a project called the Machine Project about three, three and a half years ago, and frankly, at that time, the response we got from a lot of my colleagues in the IT industry was "You guys are crazy." (Dave laughs) Right? We said: we are looking at an enormous amount of data coming at us, we are looking at real-time requirements for larger and larger processing coming up in front of us, and there is no way that the current architectures of the computing environments we create today are going to keep up with this huge flood of data. We have to rethink how we do computing. And the real question for those of us who are in research in Hewlett Packard Labs was: if we were to design a computer today, knowing what we do today, as opposed to what we knew 50 years ago, how would we design the computer? And this computer should not be something which solves problems of the past; this should be a computer which deals with problems of the future. So we are looking for something which will take us through the next 50 years, in terms of computing architectures and what we will do there. In the last three years, we have gone from ideas and paper studies, paper designs, and things which were made out of plastic, to a real working system. Around Las Vegas time, we basically announced that we had the entire system working with actual applications running on it: 160 terabytes of memory, all addressable from any processing core in 40 computing nodes around it. And although we call it memory-driven computing, it's really thinking in terms of data-driven computing. The reason is that the data is now at the center of this computing architecture, as opposed to the processor, and any processor can reach any part of the data directly, as if it were addressing local memory.
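As a rough, single-node stand-in for that idea (standard Python shared memory, not the Machine's fabric or any HPE API), the sketch below lets several worker processes operate directly on one memory-resident dataset instead of each receiving its own copy:

    # Minimal sketch: one dataset in memory, many processes reach it
    # directly. This is ordinary shared memory on a single node, used
    # only to illustrate the data-centric idea described above.
    from multiprocessing import Process, shared_memory
    import numpy as np

    def partial_sum(shm_name, n, start, stop):
        shm = shared_memory.SharedMemory(name=shm_name)
        data = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
        total = data[start:stop].sum()   # no copy, no message passing
        del data                         # release the view before closing
        shm.close()
        print(f"sum of [{start}:{stop}) = {total:.0f}")

    if __name__ == "__main__":
        n = 1_000_000
        shm = shared_memory.SharedMemory(create=True, size=n * 8)
        data = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
        data[:] = 1.0                    # the single, shared copy of the data

        workers = [Process(target=partial_sum,
                           args=(shm.name, n, i * n // 4, (i + 1) * n // 4))
                   for i in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

        del data
        shm.close()
        shm.unlink()

On the prototype, the same pattern spans 40 nodes and 160 terabytes over the memory fabric; this sketch is only the one-node version of the idea.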
That capability provides us with a degree of flexibility and freedom in compute that we never had before, and as a software person, when we started looking at this architecture, our answer was, well, we didn't know we could do this. Now, given that I can do this, and I assume that I can do this, all of us programmers started thinking differently, writing code differently, and we suddenly had essentially a toy to play with, if you will, as programmers, where we said: you know, this algorithm I had written off decades ago because it didn't work, but now I have enough memory that if I were to think about this algorithm today, I would do it differently. And all of a sudden, a new set of algorithms, a new set of programming possibilities opened up. We worked with a number of applications, ranging from just Spark on this kind of an environment, to how you do large-scale simulations, Monte Carlo simulations. And people talk about improvements in performance on the order of, oh, I can get you a 30% improvement. We are saying that in the example applications, we saw anywhere from five, 10, 15 times better, to financial analysis and risk management problems which we can do 10,000 times faster.
>> So many orders of magnitude.
>> Many, many orders.
>> When you don't have to wait for the horrible storage stack. (laughs)
>> That's right, right. And these kinds of results gave us the hope that, as we look forward, these new computing architectures that we are thinking through right now will take us through this data mountain, this data tsunami that we are all facing, in terms of bringing all of the data back and essentially doing real-time work on it.
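The "algorithm I had written off decades ago" remark is essentially about trading memory for compute. A minimal, hypothetical sketch of that trade (not code from the Machine program): when memory is abundant, an expensive function can be evaluated once over its whole input domain, and every later query becomes a table lookup.

    # Hypothetical illustration of trading memory for compute.
    import math

    def expensive_score(i):
        # Stand-in for a costly per-item computation, e.g. one leg of a
        # Monte Carlo risk calculation.
        return sum(math.sin(i * k) for k in range(1, 2000))

    # Classic approach: recompute on every query (tiny memory, slow).
    def query_recompute(i):
        return expensive_score(i)

    # Memory-driven approach: precompute once, then answer from memory.
    N = 10_000
    table = [expensive_score(i) for i in range(N)]   # paid once, up front

    def query_lookup(i):
        return table[i]                              # effectively free

    assert query_lookup(42) == query_recompute(42)

The speedups quoted above come from rethinking applications around exactly this kind of trade, at terabyte rather than gigabyte scale.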
>> Matthias, maybe you could describe the work that you're doing at the University of Bonn, specifically as it relates to Alzheimer's, and how this technology gives you possible hope to solve some problems.
>> So at the University of Bonn, we work very closely with the German Center for Neurodegenerative Diseases, the DZNE, and in their mission they are facing diseases like Alzheimer's, Parkinson's, Multiple Sclerosis, and so on. In particular, Alzheimer's is a really serious disease, and while for many diseases, like cancer, for example, the mortality rates improve, for Alzheimer's there is no improvement in sight. There's a large population that is affected by it, and there is really not much we currently can do, so the DZNE, together with the German government, is focusing its research efforts in this direction. One thing about Alzheimer's is that by the time you show the first symptoms, the disease has already been present for at least a decade. So if you really want to identify sources or biomarkers that will point you in this direction, once you see the first symptoms it's already too late. So at the DZNE they have started a cohort study. In the area around Bonn, they are now collecting data from 30,000 volunteers. They are planning to follow them for 30 years, and in this process we generate a lot of data. Of course we do the usual surveys to learn a bit about the volunteers, and we learn about their environments. But we also do much more detailed analysis: we take blood samples and analyze the complete genome, and we also acquire imaging data from the brain, MRIs at extremely high resolution on some very advanced machines we have. And all this data accumulates, because we do this not only once; we do it repeatedly for every one of the participants in the study, so that we can later analyze the time series. When, in 10 years, someone develops Alzheimer's, we can go back through the data and see: maybe there's something interesting in there, maybe there was one biomarker we were looking for, so that we can predict the disease better in advance. And with this pile of data that we are collecting, we basically need something new to analyze it and deal with it, and when we heard about the machine, we thought immediately: this is a system that we would need.
>> Let me see if I can put this in a little bit of context. So Dave lives in Massachusetts, I used to live there, in Framingham, Massachusetts.
>> Dave: I was actually born in Framingham.
>> You were born in Framingham. And one of the more famous studies is the Framingham Heart Study, which tracked people over many years and discovered things about heart disease and the relationship between smoking and cancer, and other really interesting problems. But they used a paper-based study with an interview base, so for each of those people they might have collected, you know, maybe a megabyte, maybe a megabyte and a half of data. You just described a couple of gigabytes of data per person, 30,000 people, multiple years. So we're talking about being able to find patterns in data about individuals that would number in the petabytes over a period of time. Very rich detail that's possible, but if you don't have something that can help you do it, you've just collected a bunch of data that's sitting there. So is that basically what you're trying to do with the machine: the ability to capture all this data, and then do something with it, so you can generate those important inferences?
>> Exactly. With these large amounts of data, we do not only compare the data sets for a single person; once we find something interesting, we also have to compare it across the whole population that we have captured. So there's really a lot we have to parse and compare.
>> This brings together the idea that it's not just the volume of data; I also have to do analytics across all of that data together, right? So every time a scientist, one of the people doing biology studies or informatics studies, asks a question and says, "I have a hypothesis, this might be a reason for this particular evolution or occurrence of the disease," they then want to go through all of that data and analyze it as they are asking the question. Now, if the amount of compute it takes to actually answer their question takes me three days, I have lost my train of thought. But if I can get that answer in real time, then I get into this flow where I'm asking a question, seeing the answer, making a different hypothesis, seeing a different answer, and this is what my colleagues here were looking for.
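To make that ask-and-answer loop concrete over the cohort data Matthias describes, here is a hypothetical sketch of the kind of query involved. The names and the toy biomarker array are invented; the real DZNE data is whole genomes and high-resolution MRI, not a single number per visit. The point is only that when the whole cohort is memory-resident, a new hypothesis is one more pass over the data rather than a batch job.

    # Hypothetical in-memory cohort scan: flag participants whose (toy)
    # biomarker drifts downward across repeated visits. Invented data.
    import numpy as np

    rng = np.random.default_rng(0)
    N_PARTICIPANTS = 30_000       # cohort size mentioned above
    N_VISITS = 20                 # repeated measurements per person

    # The entire cohort's time series held in RAM at once.
    biomarker = rng.normal(1.0, 0.1, size=(N_PARTICIPANTS, N_VISITS))

    def declining(series, threshold=-0.005):
        """True if a least-squares fit over visits slopes downward."""
        slope = np.polyfit(np.arange(len(series)), series, deg=1)[0]
        return slope < threshold

    flagged = [i for i in range(N_PARTICIPANTS) if declining(biomarker[i])]
    print(f"{len(flagged)} of {N_PARTICIPANTS} participants show a decline")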
>> But if I think about, again, going back to the Framingham Heart Study, you know, I might do a query on a couple of related questions and use a small amount of data. The technology to do that has been around, but when we start looking for patterns across brain scans with time series, we're not talking about a small problem; we're talking about an enormous amount of data that can be looked at in a lot of different ways. I've got one other question for you related to this, because I've got to presume that there's a quid pro quo for getting people into the study. You know, 30,000 people: is it that you'll be able to help them and provide prescriptive advice about how to improve their health as you discover more about what's going on? Have I got that right?
>> So, we're trying to do that, but there are also limits to this, of course.
>> Of course.
>> For us it's basically collecting the data, and people are really willing to donate everything they can from their health data to allow these large studies.
>> To help future generations.
>> So that's not necessarily quid pro quo.
>> Okay, there isn't, okay. But still, the knowledge is enough for them.
>> Yeah, their incentive is that they're gonna help people who have this disease down the road.
>> I mean, if it is not me, if it helps society in general, people are willing to do a lot.
>> Yeah, of course.
>> Oh, sure.
>> Now, the machine is not a product yet that's shipping, right? So how do you get access to it, or is this sort of futures, or...
>> When we started talking to one another about this, we actually did not have the prototype with us. But remember that when we started down this journey for the machine three years ago, we knew back then that we would have hardware somewhere in the future, but as part of my responsibility, I had to deal with the fact that software has to be ready for this hardware. It does me no good to build hardware when there is no software to run on it. So we had actually been working on the software stack, and how to think about applications on that software stack, using emulation and simulation environments, where we have essentially an instruction-level simulator for what the machine, or what that prototype, would have done, and we were running code on top of those simulators. We also had performance simulators, where we'd say, if we write the application this way, this is how much we think we would gain in terms of performance. And all of that code we were writing was actually running on our large-memory machines, Superdome X to be precise. So by the time we started talking to them, we had these emulation environments available, and we had experience using them on our Superdome X platform. So when they came to us and started working with us, we took the software that they brought to us and started working within those emulation environments to see how fast we could make those problems. That's how we started down this track, and most of the results we have shown in the study are all measured results that we are quoting inside this forum, on the Superdome X platform. Even in that emulated environment, which is emulating the machine, of course, on the Superdome X I can only hold 24 terabytes of data in memory. I say "only" 24 terabytes...
>> Only!
>> ...because I'm looking at much larger systems, but an enormously large number of workloads fit very comfortably inside the 24 terabytes.
And for those particular workloads, the programming techniques we are developing work at that scale, right? They won't scale beyond the 24 terabytes, but they'll certainly work at that scale. So between us, we then started looking for problems, and I'll let Matthias comment on the problems that they brought to us, and then we can talk about how we actually solved them.
>> So we work a lot with genomics data, and usually what we do is we have a pipeline, so we connect multiple tools, and we thought, okay, this architecture sounds really interesting to us, but if we want to get started with this, we should pose them a challenge. To see if they could convince us, we went through the literature and took a tool that was advertised as the new optimal solution. Prior work was taking up to six days for processing; this tool was able to cut that to 22 minutes, and we thought, okay, this is a perfect challenge for our collaboration. So we went ahead, we took this tool, we put it on the Superdome X that was already running, and it took five minutes instead of 22. Then we started modifying the code, and in the end we were able to shrink the time down to just 30 seconds, so that's two orders of magnitude faster.
>> We took something which they were able to run in 22 minutes, and that had already been optimized by people in the field who said, "I want this answer fast." When we moved it to our Superdome X platform, the platform is extremely capable, hardware-wise it compares really well to other platforms out there, that time came down to five minutes. But that was just the beginning. Then, as we modified the software based on the emulation results we were seeing underneath, we brought that time down to 13 seconds, which is a hundred times faster. We started this work with them in December of last year. It takes time to set up all of this environment, so the serious coding started around March. By June we had a 9X improvement, which is already about a factor of 10, and since June up to now, we have gotten another factor of 10 on that application. So we are now 100X faster than what the application was able to do before.
>> Dave: Two orders of magnitude in a year?
>> Sharad: In a year.
>> Okay, we're out of time, but where do you see this going? What is the ultimate outcome that you're hoping for?
>> For us, we're really aiming to analyze our data in real time. Oftentimes, when we have biological questions that we address, we analyze our data set, and then in a discussion a new question comes up, and we have to say, "Sorry, we have to process the data, come back in a week." Our idea is to be able to generate these answers instantaneously from our data.
>> And those answers will lead to what? Just better care for individuals with Alzheimer's, or potentially, as you said, making Alzheimer's a memory?
>> So the idea is to identify Alzheimer's long before the first symptoms are shown, because then you can start an effective treatment and have the biggest impact. Once the first symptoms are present, it's not getting any better.
>> Well, thank you for your great work, gentlemen, and best of luck on behalf of society.
>> Thank you very much.
>> Really appreciate you coming on theCUBE and sharing your story.
>> You're welcome.
>> All right, keep it right there, buddy. Peter and I will be back with our next guest right after this short break. This is theCUBE, you're watching live from Madrid, HPE Discover 2017. We'll be right back.

Published Date: Nov 29, 2017
