Monica Livingston, Intel | HPE Discover 2020
>> Narrator: From around the globe, it's theCUBE! Covering HPE Discover Virtual Experience, brought to you by HPE.

>> Artificial intelligence. Monica Livingston, hey Monica, welcome to theCUBE!

>> Hi Lisa, thank you for having me.

>> So, AI is a big topic, but let's just get an understanding: what is Intel's approach to artificial intelligence?

>> Yeah, so at Intel we look at AI as a workload and a tool that is becoming ubiquitous across all of our compute solutions. We have customers using AI in the cloud, in the data center, and at the edge, so our goal is to infuse as much AI performance as we can into our base platform, and then, where acceleration is needed, offer accelerator solutions for those particular areas. An example of infusing AI performance into the base platform is the Intel Deep Learning Boost feature set in our second-generation Intel Xeon Scalable processors. This feature alone provides up to a 30x performance improvement for deep learning inference on the CPU over the previous generation. We are continuing to infuse AI into the base platform with the third-generation Intel Xeon Scalable processors, which are launching later this month, and Intel will continue that leadership by including support for bfloat16. Bfloat16 is a new numeric format that enables deep learning training with similar accuracy while using less data per value, which increases AI throughput.
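The bfloat16 format keeps float32's sign bit and full 8-bit exponent but truncates the mantissa to 7 bits, so each value fits in half the bytes while covering the same numeric range. A minimal Python sketch of that idea (illustrative only; it uses simple truncation, whereas hardware typically rounds to nearest):

```python
import struct

def to_bfloat16(x: float) -> float:
    # Pack the value as a 32-bit float, keep only the top 16 bits
    # (sign, 8-bit exponent, 7-bit mantissa), zero the rest, and
    # unpack back to a float to see the precision that survives.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.14159))  # same range as float32, coarser precision
```

Because the exponent field is unchanged, values that overflow or underflow in float32 behave the same way in bfloat16; only fine-grained precision is traded for the 2x reduction in bytes moved per value.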
Another example is memory. Both inference and training require quite a bit of memory, and with Intel Optane persistent memory, customers are able to place large pools of memory closer to the CPU. That is particularly relevant where data sets are large, as in imaging: lots of high-resolution images, like medical diagnostic or seismic imaging. We are able to run some of these models without tiling. Tiling is what you do when you are memory-constrained: you take the picture, chop it up into little pieces, process each piece, and stitch it back together at the end, which loses a lot of context for the AI model. If you are able to process the entire picture, you get a much better result, and that is the benefit of having that memory accessible to the compute. So when you are buying the latest HPE servers, you will have built-in AI performance with Intel Xeon Scalable processors and Optane for system memory.

>> A couple of things you said piqued my interest. A 30x improvement in performance with Deep Learning Boost is a huge factor, and you also said that your memory solution doesn't require tiling, and I heard "context." Context is key: having context in the data is what lets you understand, interpret, and make inferences. So talk to me about some of these big changes you're releasing. What were some of the customer compelling events or industry opportunities that drove Intel to make such huge performance gains in the second generation?
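The tiling workaround described above can be sketched in a few lines. This is a hypothetical illustration of the mechanics (split, process per tile, stitch), not Intel or HPE code; the point is that any per-tile model only ever sees its own block, which is where cross-tile context is lost:

```python
import numpy as np

def tile(img, t):
    """Split an (H, W) image into t x t blocks; H and W must be divisible by t."""
    h, w = img.shape
    return [img[r:r+t, c:c+t] for r in range(0, h, t) for c in range(0, w, t)]

def stitch(tiles, h, w, t):
    """Reassemble row-major-ordered tiles back into an (h, w) image."""
    out = np.zeros((h, w), dtype=tiles[0].dtype)
    for i, blk in enumerate(tiles):
        r, c = divmod(i, w // t)
        out[r*t:(r+1)*t, c*t:(c+1)*t] = blk
    return out

img = np.arange(16).reshape(4, 4)
pieces = tile(img, 2)              # a memory-constrained model runs per piece here
assert (stitch(pieces, 4, 4, 2) == img).all()
```

With enough memory close to the CPU, the split/stitch steps disappear and the model sees the whole image at once, which is the benefit Livingston describes.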
>> Right. So second generation, these are the processors that are out now, so these are features that our customers are using today; third generation is coming out this month. For second generation and Deep Learning Boost, what's really important is the software optimization: the fact that we're able to use the hooks we've built into the hardware and then use software to make sure we are optimizing performance on those platforms. It's extremely relevant to talk about software in the AI space, because AI solutions can get super expensive. You can easily pay two to three times what you should be paying if you don't have optimized software, because then you're just throwing more and more compute, more and more hardware, at the problem. What's really impactful is being able to run a vast number of AI applications on your base platform. That essentially means you can run them in a mixed-workload environment together with your other applications, and you're not standing up separate infrastructure.
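The "hooks in the hardware plus optimized software" point maps to features like the int8 instructions behind Deep Learning Boost, which software exploits by quantizing fp32 models down to int8. A hedged sketch of the simplest such mapping, symmetric per-tensor post-training quantization (illustrative only, not a specific Intel library API):

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: represent fp32 weights as
    # int8 values plus one fp scale factor, so inference can run on
    # int8 hardware paths instead of fp32.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

w = np.array([0.5, -1.27, 0.01], dtype=np.float32)
q, s = quantize_int8(w)
print(q, q.astype(np.float32) * s)  # dequantized values approximate w
```

Each int8 multiply-accumulate moves a quarter of the data of its fp32 equivalent, which is where the "optimized software on existing hardware" savings come from, at a small and usually tolerable accuracy cost.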
Now, of course, there will be some applications that do need separate infrastructure and dedicated accelerators, and for those we have a host of accelerator options. We have FPGAs today for real-time, low-latency inference, and we have the Movidius VPU for low-power vision applications at the edge. But by and large, classical machine learning, analytics, and deep learning inference can run on the base platform today, and I think that's what's important in ensuring that more and more customers are able to run AI at scale. It's not just a matter of running a POC in a back lab; you can do that on whatever infrastructure you have available, not an issue. But when you're looking to scale, cost becomes significantly important, and that's why it's important for us to build as much performance as is feasible into the base platform and then offer software tools that let customers see that performance.

>> Okay, so we've talked about the technology components: performance, memory, what's needed to scale. I want to look at the business side, because we know a lot of customers in every industry undertake AI projects and run into pitfalls, sometimes failing to even get off the ground. So, conversely, on the business side, what are the pitfalls customers can avoid to get these AI projects designed and launched?

>> Yeah, so on the business side, you really have to start with a very solid business plan for why you're doing AI, and it's even less about just the AI piece: you have to have a very solid business plan for your solution as a whole. If you're doing AI just to do AI, because you saw it's a top trend for 2020 so you must do AI, that's not likely to result in success.
You have to make sure you understand why you're doing AI. If you have a problem that could easily be solved with data analytics, use data analytics; AI should be used where appropriate, as a way to provide true benefit, and if you can demonstrate that, you're a long way toward getting your project off the ground. Then there are several other pitfalls. Data: do you have enough of it, and is it close enough to your compute to be accessible and feasible? Skills: do you have resources skilled in AI who can get your solution off the ground? Operations: do you have a plan for what to do after you've deployed, because these models need to be maintained on a regular basis, so some sort of maintenance program needs to be in place. And then infrastructure: cost can often be prohibitive if you're not able to leverage a good amount of your base infrastructure. That's really where we spend a lot of time with customers, trying to understand what their model is trying to do, whether they can use their base infrastructure, and how much of what they have can be reused. What is their current utilization? Do they have spare cycles in off hours? If their utilization is diurnal and drops at night, can they train their models at night rather than standing up a whole new set of infrastructure that, let's be honest, likely will not be approved by management?

>> And I imagine that's all part of the joint go-to-market strategy that HPE and Intel have: having conversations like that with customers to help them build a robust business plan.

>> Yeah, HPE is fantastic at consulting with customers from beginning to end, looking at solutions, and they have a whole suite of storage solutions as well, which are crucial for AI. Intel works together with HPE to create reference architectures for AI, and then we do joint training as well.
But yes, talking to your HPE rep and leveraging your ecosystem is incredibly important, because the ecosystem is so diverse and there are a lot of resources available, from ISVs to hardware providers to consulting companies that are able to support AI projects.

>> So Monica, the ecosystem is incredibly important, but how do HPE and Intel work together with customers, whether in biotech or manufacturing, to build an ecosystem of partnerships that helps the customer define the business plan for what they want to do, get cross-functional collaboration and buy-in and support, and launch a successful AI project?

>> Yeah, it really does take a village, but both Intel and HPE have an extensive partner network. These are partners we work with to optimize their solutions; in HPE's case, they validate those solutions on HPE hardware to ensure they run smoothly. For our customers, we have the ability to match-make with partners in the ecosystem. Generally, the way it works is that in specific segments we have a list of partners we can draw from, and we introduce them to the customer. The customer generally has a couple of meetings with each to see which one is a better fit, and then they go from there. Essentially, it's making sure that solutions are validated and optimized, and then giving our customers a choice of which partners are the best fit for them.

>> Last question for you, Monica. We're in the middle of COVID-19, and we see things on the news every day about contact tracing, for example, and social distancing. A lot of what's discussed on the news involves human contact tracers, people carrying out manual processes. What are some of the opportunities you see for AI to help drive some of this, because time is of the essence, and yet there's the ethics issue with AI, right?
>> Yes, and the ethics issue is not something AI can solve on its own, unfortunately. The ethics conversation is something we need to have more broadly as a society: from a privacy perspective, how are we going to be mindful and respectful while still being able to use some of the data to protect society, especially in a situation like this? Contact tracing is extremely important, and in areas that have a wide network of cameras installed, it's doable from an algorithmic perspective; several of our partners are looking at it. Actually, the technology itself is not as insurmountable as the logistical, privacy, and ethical aspects, and the regulation around it, making sure it's not used for the wrong purposes. But certainly with COVID there is a new set of AI use cases, and contact tracing is obviously one of them. The other thing we're seeing is companies adapting a lot of their existing AI solutions to account for COVID. Companies doing visual observation, for example facial recognition in metro stations, stadiums, or banks, are now adding features to their systems to detect social distancing or whether somebody is wearing a mask.
The technology itself, again, is not that difficult, but the implementation, the use, and the governance around it are a lot more complex. And I would be remiss not to mention remote learning, which is huge now. I think all of our children are learning remotely at this point, and being able to use AI in curriculums to pinpoint where a child is having a hard time understanding a concept, and then give them more support in that area, is definitely something our partners are looking at. It's something that (webcam scrambles) with my children and the tools they're using: instead of reading to their teacher for their reading test, they're reading to their computer, and the computer is able to pinpoint some very specific issues that maybe a teacher would not see as easily. Then, of course, the teacher has the ability to go back, listen, and make sure there weren't any issues with dialects or anything like that. So it's really an interesting reinforcement of the teacher/student learning, with the added algorithmic input as well.

>> Right, a lot of opportunity is going to come out of COVID, some maybe more accelerated than others, because as you mentioned, it's very complex. Monica, I wish we had more time; this has been a really fascinating conversation about what Intel and HPE are doing with respect to AI. We'd be glad to have you back, because this topic is just too big, but we thank you so much for your time.

>> Thank you.

>> For my guest Monica Livingston, I'm Lisa Martin. You're watching theCUBE's coverage of HPE Discover 2020. Thanks for watching.