Ken King & Sumit Gupta, IBM | IBM Think 2018
>> Narrator: Live from Las Vegas, it's theCUBE, covering IBM Think 2018, brought to you by IBM.
>> We're back at IBM Think 2018. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante and I'm here with my co-host, Peter Burris. Ken King is here; he's the general manager of OpenPOWER from IBM, and Sumit Gupta, PhD, who is the VP, HPC, AI, ML for IBM Cognitive. Gentlemen, welcome to theCUBE.
>> Sumit: Thank you.
>> Thank you for having us.
>> So, really, guys, a pleasure. We had dinner last night, talked about Picciano, who runs the OpenPOWER business. Appreciate you guys comin' on, but I've got to ask you, Sumit, I'll start with you. OpenPOWER, Cognitive Systems, a lot of people say, "Well, that's just the Power system. This is the old AIX business; it's just renaming it. It's a branding thing." What do you say?
>> I think we had a fundamental strategy shift where we realized that AI was going to be the dominant workload moving into the future, and the systems that have been designed today or in the past are not the right systems for the AI future. We also believe that it's not just about silicon or even a single server. It's about the software; it's about thinking at the rack level and the data center level. So, fundamentally, Cognitive Systems is about co-designing hardware and software with an open ecosystem of partners who are innovating to maximize data and AI support at the rack level.
>> Somebody was talkin' to Steve Mills probably about 10 years ago, and he said, "Listen, if you're going to compete with Intel, you can copy them; that's not what we're going to do." You know, he didn't like the SPARC strategy. "We have a better strategy," is what he said. "We're going to open it up; we're going to try to get 10% of the market. You know, we'll see if we can get there." But, Ken, I wonder if you could talk about, just from a high level, the strategy, and maybe go into the segments.
>> Yeah, absolutely. You're absolutely right on the strategy. You know, we have completely opened up the architecture. Our focus on growth is around having an ecosystem and an open architecture so everybody can innovate on top of it effectively, and everybody in the ecosystem can profit from it and gain good margins. So, that's the strategy; that's how we designed the OpenPOWER ecosystem. But, you know, our core segments: AIX in Unix is still a very big core segment of ours. Unix itself is flat to declining, but AIX is continuing to take share in that segment through all the new innovations we're delivering. The other segments are all high-growth segments, whether it's SAP HANA, our cognitive infrastructure and modern data platform, or even what we're doing in the hyperscale data centers. Those are all significant growth opportunities for us, and those are all Linux based, and so that is really where a lot of the OpenPOWER initiatives are driving growth for us, leveraging the fact that, through that ecosystem, we're getting a lot of incremental innovation that's delivering competitive differentiation for our platform. I say our platform, but that doesn't mean just IBM; it means all the ecosystem partners as well, and a lot of that was on display on Monday when we had our OpenPOWER Summit.
>> So, talk more about the OpenPOWER Summit. What was that all about? Who was there? Give us some stats on OpenPOWER and the ecosystem.
>> Yeah, absolutely. It was a good day. We're up to well over 300 members. We have over 50 different systems coming out in the market from IBM or our partners, and over 20 different manufacturers out there actually developing OpenPOWER systems.
A lot of statements were made at the summit that we thought were extremely valuable. First of all, we've got the number one server vendor in Europe, Atos, designing and developing POWER9 systems; the number one in Japan, Hitachi; and the number one in China, Inspur. We've got top ODMs like Supermicro, Wistron, and others that are also developing their POWER9 systems. We have a lot of different component providers on the new PCIe Gen 4 and on the open cabinet capabilities; a lot of announcements were made by a number of component partners and accelerator partners at the summit as well. The other thing I'm excited about is that we have over 70 ISVs now on the platform, and a number of statements and announcements were made on Monday from people like MapD, Anaconda, H2O, Kinetica, and others who are leveraging the innovations brought to the platform, like NVLink and the coherency between GPU and CPU, to do accelerated analytics and accelerated GPU database kinds of capabilities. But the thing that had me most excited on Monday was the end users. I've always said it, and the analysts always ask me the question: when are you going to start showing penetration in the market? When are you going to show that you've got a lot of end users deploying this? And there were a lot of statements by a lot of big players on Monday. Google was on stage and publicly said the IO is amazing, the memory bandwidth is amazing.
We are deploying Zaius, which is the POWER9 server, in our data centers, and we're ready for scale; it's now "Google strong," which is basically saying that this thing is hardened and ready for production. But we also (laughs) had a number of other significant ones: Tencent talkin' about deploying OpenPOWER with 30% better efficiency and 30% less server resources required; the cloud arm of Alibaba talkin' about how they're putting it on their X-Dragon, where they have it in a pilot program and they're asking everybody to use it now so they can figure out how to go into production; PayPal made statements about how they're using machine learning and deep learning to do fraud detection; and we even had Limelight, who is not as big a name, but...
>> CDN, yeah.
>> They're a CDN tool provider to people like Netflix and others, talkin' about the great capability with the IO and the ability to reduce the buffering and improve the streaming for all these CDN providers out there. So, we were really excited about all those end users and all the things they're saying. That demonstrates the power of this ecosystem.
>> Alright, just a comment on the architecture, and then I want to get into the Cognitive piece. I mean, you guys did, years ago, go little endian, recognizing you've got to get the software base to be compatible. You mentioned, Ken, bandwidth, IO bandwidth, the CAPI stuff that you've done. So, there are a lot of incentives, especially for the big hyperscale guys, to be able to do more with less. But, to me, let's get into the AI, the Cognitive piece. Bob Picciano comes over from running a $15 billion analytics business, so, obviously, he's got some knowledge. He's bringin' in people like you with all these cool buzzwords in your title. So, talk a little bit about infrastructure for AI and why Power is the right platform.
>> Sure. So, I think we all recognize that the performance advantages, and even power advantages, that we were getting from Dennard scaling, also known as Moore's Law, are over, right? So, people talk about the end of Moore's Law, and that's really the end of gaining processor performance through Dennard scaling and Moore's Law. What we believe is that to continue to meet the performance needs of all of these new AI and data workloads, you need accelerators, and not just compute accelerators; you actually need accelerated networking, you need accelerated storage, you need high-density memory sitting very close to the compute. And, if you really think about it, what's happened is, again, a system view, right? We're not taking a silicon view; we're looking at the system. The minute you start looking past the silicon, you realize you want to get the data to where the compute is, or the compute to where the data is. So, it all becomes about creating bigger, fatter pipelines to move data around, to get it to the right compute piece. For example, we put much more emphasis on a much faster memory system to make sure we are getting data from the system memory to the CPU.
>> Coherently.
>> Coherently; that's the main memory. We put interfaces on POWER9 including NVLink, OpenCAPI, and PCIe Gen 4, and that enabled us to get that data either from the network to the system memory, back out to the network, to storage, or to accelerators like GPUs. We built and embedded these high-speed interconnects into POWER9, into the processor. Nvidia put NVLink into their GPU, and we've been working with partners like Xilinx and Mellanox on getting OpenCAPI onto their components.
>> And we're seeing up to 10x for both memory bandwidth and IO over x86, which is significant. You should talk about how we're seeing up to 4x improvement in training of ML/DL algorithms over x86, which is dramatic in how quickly you can get from data to insight, right?
You can take training and turn it from weeks to days, or days to hours, or even hours to minutes, and that makes a huge difference in what you can do in any industry as far as getting insight out of your data, which is the competitive differentiator in today's environment.
>> Let's talk about this notion of architecture, of systems especially. The basic approach to how we think about building systems has been relatively consistent for a long time. You start with the database manager, you run it on an Intel processor, you build your application, you scale it up based on SMP needs. There have been some variations; we're going into clustering because we do some other things. But you guys are talking about something fundamentally different, and flash memory, the ability to do flash storage, which dramatically changes the relationship between the processor and the data, means that we're not going to see all of the organization of the workloads around the server. It's really going to be much more of a balanced approach. How is Power going to provide that more balanced systems approach as we distribute data, as we distribute processing, as we create a cloud experience that isn't in one place, but is in many places?
>> Well, this ties exactly to the point I made: it's not just accelerated compute, which we've all talked about a lot over the years; it's also accelerated storage, accelerated networking, and accelerated memory. The point being that if you don't have a fast pipeline into the processor from all of this wonderful storage and flash technology, there's going to be a choke point in the network, or there'll be a choke point once the data gets to the server; you're choked then.
So, a lot of our focus has been, first of all, partnering with a company like Mellanox, which builds extremely high-bandwidth, high-speed...
>> And EOF.
>> Right, right, and I'm using one as an example, right.
>> Sure.
>> I'm using one as an example, and that's where the large partnerships come in; we have like 300 partnerships, as Ken talked about, in the OpenPOWER Foundation. Those partnerships exist because we brought together all of these technology providers. We believe that no one company can own the agenda of technology. No one company can invest enough to continue to give us the performance we need to meet the needs of AI workloads, and that's why we want to partner with all these technology vendors, who've all invested billions of dollars, to provide the best systems and software for AI and data.
>> But fundamentally...
>> It's the whole construct of data-centric systems, right?
>> Right.
>> I mean, sometimes you've got to process the data in the network, right? Sometimes you've got to process the data in the storage. It's not just at the CPU; the GPU is a huge place for processing that data.
>> Sure.
>> How you do all that coherently, and how things work together in a system environment, is crucial, versus a vertically integrated capability where the CPU provider continues to put more and more into the processor and disenfranchises the rest of the ecosystem.
>> Well, those are the competing strategies that we want to talk about. You have Intel, who wants to put as much on the die as possible. It's worked quite well for Intel over the years. You had to take a different strategy; if you had tried to take Intel on with that strategy, you would have failed. So, talk about the different philosophies, but really I'm interested in what it means for things like alternative processing and your relationships in your ecosystem.
>> This is not about company strategies, right? I mean, Intel is a semiconductor company and they think like a semiconductor company.
We're a systems and software company, and we think like that, but this is not about company strategy. This is about what the market needs, what client workloads need, and if you start there, you start with a data-centric strategy. You start with data-centric systems. You think about moving data around, and you make sure there is heterogeneity in the compute: there is accelerated compute, and you have very fast networks. We're currently building the US's fastest supercomputers; the project name is Coral, and there are two supercomputers, one at Oak Ridge National Labs and one at Lawrence Livermore. These are the ultimate HPC and AI machines, right? Compute is a very important part of them, but networking and storage are just as important. The file system is just as important. The cluster management software is just as important, because if you are serving data scientists and biologists, they don't want to deal with, "How many servers do I need to launch this job on? How do I manage the jobs, how do I manage the servers?" You want them to just scale, right? So, we do a lot of work on scalability. We do a lot of work in using Apache Spark to enable cluster virtualization and user virtualization.
>> Well, if we think about it, I don't like the term data gravity, it's wrong from a lot of different perspectives, but if we think about it, you guys are trying to build systems in a world that's centered on data, as opposed to a world that's centered on the server.
>> That's exactly right.
>> That's right.
>> You got that, right?
>> That's exactly right.
>> Yeah, absolutely.
>> Alright, you guys got to go, we got to wrap, but I just want to close with: we always say infrastructure matters. You got Z growing, you got Power growing, you got storage growing; it's giving a good tailwind to IBM. So, guys, great work. Congratulations. Got a lot more to do, I know, but thanks for...
>> It's going to be a fun year.
comin' on theCUBE; appreciate it.
>> Thank you very much.
>> Thank you.
>> Appreciate you having us.
>> Alright, keep it right there, everybody. We'll be back with our next guest. You're watching theCUBE live from IBM Think 2018. We'll be right back. (techno beat)
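A back-of-the-envelope illustration of the figures quoted in this interview, such as up to 10x memory bandwidth and roughly 4x faster ML/DL training: for a workload bound by data movement, run time scales inversely with effective bandwidth, and a training speedup divides wall-clock time the same way. The numbers below are hypothetical arithmetic for illustration, not measurements of any IBM system.

```python
def transfer_time_seconds(gigabytes: float, gb_per_second: float) -> float:
    """Time to move a data set at a given effective bandwidth."""
    return gigabytes / gb_per_second

def sped_up(baseline_seconds: float, factor: float) -> float:
    """New run time after an Nx speedup."""
    return baseline_seconds / factor

if __name__ == "__main__":
    # Moving 1 TB of training data at 10 GB/s vs 100 GB/s (hypothetical numbers):
    slow = transfer_time_seconds(1000, 10)    # 100.0 seconds
    fast = transfer_time_seconds(1000, 100)   # 10.0 seconds
    print(slow / fast)                        # prints 10.0

    # A 4x training speedup turns a 28-day run into a 7-day run:
    print(sped_up(28, 4))                     # prints 7.0
```

This is only the first-order arithmetic behind "weeks to days"; real training pipelines mix bandwidth-bound and compute-bound phases, so measured gains depend on the workload.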
Linton Ward, IBM & Asad Mahmood, IBM - DataWorks Summit 2017
>> Narrator: Live from San Jose, in the heart of Silicon Valley, it's theCUBE! Covering DataWorks Summit 2017. Brought to you by Hortonworks.
>> Welcome back to theCUBE. I'm Lisa Martin with my co-host George Gilbert. We are live on day one of the DataWorks Summit in San Jose, in the heart of Silicon Valley. Great buzz at the event, as I'm sure you can see and hear behind us. We're very excited to be joined by a couple of fellows from IBM, a very longstanding Hortonworks partner that announced a phenomenal suite of four new levels of that partnership today. Please welcome Asad Mahmood, Analytics Cloud Solutions Specialist at IBM, and a medical doctor, and Linton Ward, Distinguished Engineer, Power Systems OpenPOWER Solutions at IBM. Welcome, guys; great to have you both on theCUBE for the first time. So, Linton, software has been changing; companies, enterprises all around are really looking for more open solutions, really moving away from proprietary. Talk to us about the OpenPOWER Foundation before we get into the announcements today. What was the genesis of that?
>> Okay, sure. We recognized the need for innovation beyond a single chip, to build out an ecosystem, an innovation collaboration with our system partners. So, ranging from Google, to Mellanox for networking, to Hortonworks for software, we believe that system-level optimization and innovation is what's going to bring the price-performance advantage in the future. Traditional seamless scaling doesn't really bring us there by itself, but that partnership does.
>> So, from today's announcements, a number of announcements that Hortonworks is adopting IBM's data science platforms: really, the theme of the keynote this morning was data science, right? It's the next leg in really transforming an enterprise to be data driven and digitalized. We also saw the announcement about Atlas for data governance. What does that mean from your perspective on the engineering side?
>> Very exciting, you know. In terms of building out solutions of hardware and software, the ability to really harden the Hortonworks Data Platform with servers, storage, and networking is going to bring simplification to on-premises, like people are seeing with the cloud. The ability to create the analyst workbench, or the cognitive workbench, using the Data Science Experience to create a pipeline of data flow and analytic flow is going to be very strong for innovation. Most notable for me is the fact that they're all built on open technologies, leveraging communities that universities can pick up and contribute to; I think we're going to see the pace of innovation really pick up.
>> And on that front, on pace of innovation, you talked about universities. One of the things I thought was a great highlight in the customer panel this morning that Raj Verma hosted was that you had health care, insurance companies, financial services, Duke Energy, and they all talked about one of the great benefits of open source: kids in universities have access to the software for free. So, from a talent attraction perspective, they're really fostering the next generation who will be able to take this to the next level, which I think is a really important point as we look at data science being the next big driver or transformer. And, you know, there are not a lot of really skilled data scientists; how can that change over time? This is one benefit of the open source community that Hortonworks has been dedicated to since the beginning; it's really a great outcome of that.
>> Definitely. I think the ability to take the risk out of a new analytical project is one benefit, and the other benefit is that there's a tremendous amount of interest, not just from young people but among programmers and developers of all types, in creating data science and data engineering skills.
>> If we leave aside the skills for a moment and focus on the operationalization of the models once they're built, how should we think about a trained model? Or, I should break it into two pieces: how should we think about training the models, where the data comes from, and who does it? And then the orchestration and deployment of them: cloud, edge gateway, edge device, that sort of thing.
>> I think it all comes down to exactly what your use case is. You have to identify what use case you're trying to tackle, whether that's applicable to clinical medicine, to finance, to banking, to retail, or to transportation. First you have to have that use case in mind; then you can go about training and developing that model, and for that you need a good, robust data set to allow you to carry out that analysis. Whether you want to do exploratory analysis or predictive analysis needs to be very well defined in your training stage. Once you have that model developed, we have services, such as Watson Machine Learning within Data Science Experience, that allow you to take the model you just developed moments ago and deploy it as a RESTful API that you can then embed into an application, into your solution, and that solution you can use across industries.
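The train-then-deploy pattern just described can be sketched in plain Python: train a model, then expose its scoring function behind a REST-style handler. The toy nearest-centroid classifier, the labels, and the request shape below are invented for illustration; they are not Watson Machine Learning's actual API.

```python
import json
from statistics import mean

# --- "Training": fit a toy nearest-centroid classifier --------------
def train(samples):
    """samples maps label -> list of feature vectors; returns label -> centroid."""
    return {
        label: [mean(col) for col in zip(*vectors)]
        for label, vectors in samples.items()
    }

def predict(model, vector):
    """Return the label whose centroid is closest (squared distance) to vector."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, vector))
    return min(model, key=lambda label: dist2(model[label]))

# --- "Deployment": wrap predict() as a RESTful-style handler --------
def handle_predict(model, request_body):
    """Parse a JSON request body and return a JSON response string."""
    vector = json.loads(request_body)["features"]
    return json.dumps({"prediction": predict(model, vector)})

if __name__ == "__main__":
    # Hypothetical two-class training set.
    model = train({
        "low_risk":  [[1.0, 2.0], [1.2, 1.8]],
        "high_risk": [[8.0, 9.0], [7.5, 9.5]],
    })
    print(handle_predict(model, '{"features": [7.9, 9.2]}'))
    # prints {"prediction": "high_risk"}
```

In a real deployment, `handle_predict` would sit behind a web framework or a managed scoring service; the point is simply that a trained model reduces to a function an application can call over HTTP.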
>> Are there some use cases where you have almost a tiering of models, where some sit right at the edge, like a big device such as a car; then there's the fog level, which is, say, cell towers or other buildings nearby; and then there's something in the cloud that's a master model or an ensemble of models? I don't assume it's like Evel Knievel would say, "Don't try that at home," but is the tooling being built to enable that?
>> So the tooling is already in existence right now. You can actually go ahead right now and build out prototypes, even full-range applications, right on the cloud, and you can do that thanks to Data Science Experience and IBM Bluemix. You can go ahead and do that type of analysis right there, and, not only that, you can allow that analysis to actually guide you along the path from building a model to building a full-range application, and this is all happening at the cloud level. We can talk more about it happening at the on-premise level, but at the cloud level specifically, you can have those applications built on the fly and have them deployed as web apps, mobile apps, et cetera.
>> One of the things you talked about is use cases in certain verticals. IBM has been very strong and vertically focused for a very long time, but you almost answered the question already; I'd like to explore a little bit more about building these models, training the models, in, say, health care or telco, and being able to deploy them. Where are the horizontal benefits there that IBM would be able to deliver faster to other industries?
>> Definitely. I think the main thing is that IBM, first of all, gives you that opportunity, that platform, to say: hey, you have a data set, you have a use case; let's give you the tooling, let's give you the methodology to take you from data, to a model, to ultimately that full-range application. Specifically, I've built some applications for federal health care, addressing clinical medicine and behavioral medicine, and that's allowed me to use IBM tools and some open source technologies as well to go out and build these applications on the fly as prototypes, to show not only the art of the possible when it comes to these technologies, but also to solve problems, because ultimately that's what we're trying to accomplish here. We're trying to find real-world solutions to real-world problems.
>> Linton, let me re-direct something towards you. A lot of people are talking about Moore's law slowing down or even ending, at least in terms of the speed of processors. But if you look at not just the CPU but the FPGA, or an ASIC, or the tensor processing unit, which, I assume, is an ASIC, and you have the high-speed interconnects: if we don't look at just what we can fit on one chip, but, in 3D, at the density of transistors in a rack or in a data center, is that still growing as fast or faster, and what does it mean for the types of models that we can build?
>> That's a great question.
One of the key things that we did with the OpenPOWER Foundation is to open up the interfaces to the chip. So, with NVIDIA we have NVLink, which gives us a substantial increase in bandwidth, and we have created something called OpenCAPI, which is a coherent protocol to get to other types of accelerators. So we believe in hybrid computing in that form; you saw NVIDIA on stage this morning, and we believe that, especially for deep learning, the acceleration provided by GPUs is going to continue to drive substantial growth. It's a very exciting time.
>> Would it be fair to say that we're on the same curve, if we look at it not from the point of view of what we can fit on a little square, but from what we can fit in a data center, or the power available to model things? You know, Jeff Dean at Google said, "If Android users talk into their phones for two to three minutes a day, we need two to three times the data centers we have." Can we grow that price performance faster and enable the sort of things that we did not expect?
>> I think the innovation that you're describing will, in fact, put pressure on data centers. The ability to collect data from autonomous vehicles or other endpoints is really going up. So, we're okay for the near term, but at some point we will have to start looking at other technologies to continue that growth. Right now we're in the throes of what I call fast data versus slow data: keeping the slow data cheaply and getting the fast data closer to the compute is a very big deal for us. So NAND flash and other non-volatile technologies for the fast data are where the innovation is happening right now. But you're right, over time we will continue to collect more and more data, and it will put pressure on the overall technologies.
>> Last question as we get ready to wrap here. Asad, your background is fascinating to me.
Having a medical degree and working in federal health care for IBM, you talked about some of the clinical work that you're doing and the models that you're helping to build. What are some of the mission-critical needs that you're seeing in health care today that are really driving, not just health care organizations to do big data right, but to do data science right?
>> Exactly. I think one of the biggest needs that we hear from the healthcare arena is patient-centric solutions. There are a lot of solutions that hope to address problems faced by physicians on a day-to-day level, but there are not enough applications addressing the pain points that patients are facing on a daily basis. So the applications that I've started building out at IBM are all patient-centric applications that put their data, their symptoms, their diagnosis in their hands alone, and allow them to find out what's going wrong with their body at any particular time during the day, and then find the right healthcare professional or the right doctor best suited to treating that condition, that diagnosis. So I think that's the big need that we've seen from the healthcare market right now, and we're currently addressing it with our cloud analytics technology, which is becoming more and more advanced and sophisticated and is trending towards some of the other technology trends we have on the market right now, including blockchain, which is trending towards more of a decentralized focus for these applications. So they're putting more of the data in the hands of the consumer, in the hands of the patient, and even in the hands of the doctor.
>> Wow, fantastic. Well, you guys, thank you so much for joining us on theCUBE.
Congratulations on your first time being on the show. Asad Mahmood and Linton Ward from IBM, we appreciate your time.
>> Thank you very much.
>> Thank you.
>> And for my co-host George Gilbert, I'm Lisa Martin. You're watching theCUBE live on day one of the DataWorks Summit from Silicon Valley, but stick around; we've got great guests coming up, so we'll be right back.