Meet the new HPE ProLiant Gen11 Servers
>> Hello, everyone. Welcome to theCUBE's coverage of Compute Engineered For Your Hybrid World, sponsored by HPE and Intel. I'm John Furrier, host of theCUBE. I'm pleased to be joined by Krista Satterthwaite, SVP and general manager for HPE Mainstream Compute, and Lisa Spelman, corporate vice president and general manager of Intel Xeon Products, here to discuss the major announcement. Thanks for joining us today. Thanks for coming on theCUBE. >> Thanks for having us. >> Great to be here. >> Great to see you guys. And exciting announcement. Krista, Compute continues to evolve to meet the challenges of businesses. We're seeing more and more high performance, more Compute, I mean, it's getting more Compute every day. You guys officially announced this next generation of ProLiant Gen11s in November. Can you share and talk about what this means? >> Yeah, so first of all, thanks so much for having me. I'm really excited about this announcement. And yeah, in November we announced our HPE ProLiant NextGen, and it really was about one thing. It's about engineering Compute for customers' hybrid world. And we have three different design principles when we designed this generation. First is intuitive cloud operating experience, and that's with our HPE GreenLake for Compute Ops Management. And that's all about management that is simple, unified, and automated. So it's all about seeing everything from one console. We have a customer that's using this, and they were so surprised at how much they could see, and they were excited because they had servers in multiple locations. This was a hotel, so they had servers everywhere, and they can now see all their different firmware levels. And with that type of visibility, they thought their planning was going to be much, much easier. And then when it comes to updates, they're much quicker and much easier, so it's an exciting thing. Whether you have servers just in the data center, or you have them distributed, you can see and do more than you ever could before with HPE GreenLake for Compute Ops Management. So that's number one. Number two is trusted security by design. Now, when we launched our HPE ProLiant Gen10 servers years ago, we launched groundbreaking, innovative security features, and we haven't stopped, we've continued to enhance that ever since then. And this generation's no exception. So we have new innovations around security. Security is a huge focus area for us, and so we're excited about delivering those. And then lastly, performance for every workload. We have a huge increase in performance with HPE ProLiant Gen11, and we have customers that are clamoring for this additional performance right now. And what's great about this is that it doesn't matter where the bottleneck is, whether it's CPU, memory or IO, we have advancements across the board that are going to make real differences in what customers are going to be able to get out of their workloads. And then we have customers that are trying to build headroom in. So even if they don't need it today, they know that what they put in their environment today needs to last and needs to be built for the future. >> That's awesome. Thanks for the recap. And that's great news for folks looking to power those workloads, more and more optimizations needed. I got to ask though, how is what you guys are announcing today meeting these customer needs for the future, and what are your customers looking for and what are HPE and Intel announcing today? 
>> Yeah, so customers are doing more than ever before with their servers. So they're really pushing things to the max. I'll give you an example. There's a retail customer that is waiting to get their hands on our ProLiant Gen11 servers, because they want to do video streaming in every one of their retail stores. When they were building out what they need, we started talking to 'em about what their needs were today, and they were like, "Forget about what my needs are today. We're buying for headroom. We don't want to touch these servers for a while." So they're maxing things out, because they know the needs are coming. And so what you'll see with this generation is that we've built all of that in so that customers can deploy with confidence and know they have the headroom for all the things they want to do. The applications that we see and what people are trying to do with their servers is light years different than the last big announcement we had, which was our ProLiant Gen10 servers. People are trying to do more than ever before and they're trying to do that at the Edge as well as in the data center. So I'll tell you a little bit about the servers we have. So in partnership with Intel, we're really excited to announce a new batch of servers. And these servers feature the 4th Gen Intel Xeon Scalable processors, bringing a lot more performance and efficiency. And I'll talk about the servers. The first one is the HPE ProLiant DL320 Gen11. Now, I told you about that retail customer that's trying to do video streaming in their stores. This is the server they were looking at. This server is a new server, we didn't have a Gen10 or a Gen10+ version of this server. This is a new server and it's optimized for Edge use cases. It's a rack-based server and it's very, very flexible. So different types of storage, different types of GPU configurations, really designed to take care of many, many use cases at the Edge and doing more at the Edge than ever before. So I mentioned video streaming, but also VDI and analytics at the Edge. The next two servers are some of our most popular servers, our HPE ProLiant DL360 Gen11, and that's our density-optimized server for enterprise. And that is getting an upgrade across the board as well, big, big improvements in terms of performance and expansion. And for those customers that need even more expansion when it comes to, let's say, storage or accelerators, the DL380 Gen11 is a server that's new as well. And that's really for folks that need more expandability than the DL360, which is a 1U server. And then lastly, our ML350, which is a tower server. These tower servers are typically used at remote sites, branch offices, and this particular server holds a world record for energy efficiency for tower servers. So those are some of the servers we have today that we're announcing. I also want to talk a little bit about our Cray portfolio. So we're announcing two new servers with our HPE Cray portfolio. And what's great about this is that these servers make supercomputing more accessible to more enterprise customers. These servers are going to be smaller, they're going to come in at lower price points, and deliver tremendous energy efficiency. So these are the Cray XD servers, and there's more servers to come, but these are the ones that we're announcing with this first iteration. >> Great stuff. I can talk about servers all day long, I love server innovation. I've been following it for many, many years, and you guys know. 
Lisa, we'll bring you in. Servers have been powered by Intel Xeon, we've been talking a lot about the scalable processors. This is your 4th Gen; they're at Gen11 and you're at 4th Gen. Krista mentioned this generation's about security and the Edge, which is essentially becoming like a data center model now, the Edges are exploding. What are some of the design principles that went into the 4th Gen this time around the scalable processor? Can you share the Intel role here? >> Sure. I love what Krista said about headroom. If there's anything we've learned in these past few years, it's that you can plan for today, and you can even plan for tomorrow, but your tomorrow might look a lot different than what you thought it was going to. So to meet these business challenges, as we think about the underlying processor that powers all that amazing server lineup that Krista just went through, we are really looking at delivering that increased performance, the power-efficient compute, and then strong security. And of course, attention to the overall operating cost of the customer environment. Intel's focused on a very workload-first approach to solving our customers' real problems. So this is the applications that they're running every day to drive their digital transformation, and we really like to focus our innovation and leadership on those highest value, and also highest growth, workloads. Some of those that we've uniquely focused on in 4th Gen Xeon are artificial intelligence, high performance computing, network, storage, and as well as the deployments, like you were mentioning, ranging from the cloud all the way out to the Edge. And those are all satisfied by 4th Gen Xeon Scalable. So our strategy for architecting is based off of all of that. And in addition to doing things like adding core count, improving the platform, updating the memory and the IO, all those standard things that you do, we've invested deeply in delivering the industry's CPU with the most built-in accelerators. And I'll just give an example, in artificial intelligence with built-in AMX acceleration, plus the framework optimizations, customers can see a 10X performance improvement gen over gen, and that's on both training and inference. So it further cements Xeon as the world's foundation for inference, and it now delivers performance equivalent to a modern GPU, but all within your CPU. The flexibility that opens up for customers is tremendous, and it's so many new ways to utilize their infrastructure. And like Krista said, I just want to say that best-in-class security and security solutions are an absolute requirement. We believe that starts at the hardware level, and we continue to invest in our security features with that full ecosystem support so that our customers, like HPE, can deliver that full-stack solution to really deliver on that promise. >> I love that scalable processor messaging too around the silicon and all those advanced features, the accelerators. AI's certainly seeing a lot of that in demand now. Krista, similar question to you on your end. How do you guys look at your core design principles around the ProLiant Gen11, and how does that help solve the challenges for your customers that are living in this hybrid world today? >> Yeah, so we see how fast things are changing and we kept that in mind when we decided to design this generation. We already talked about distributed environments. 
We see the intensity of the requirements that are at the Edge, and that's part of what we're trying to address with the new platform that I mentioned. It's also part of what we're trying to address with our management, making sure that people can manage no matter where a server is and get a great experience. The other thing we're realizing when it comes to what's happening is customers are looking at how they operate. Many want to buy as a service, and with HPE GreenLake, we see that becoming more and more popular. With HPE GreenLake, we can offer that to customers, which is really helpful, especially when they're trying to get new technology like this. Sometimes they don't have it in the budget. With something like HPE GreenLake, there's no upfront cost, so they can enjoy this technology without having to come up with a big capital outlay for it. So that's great. Another one is around security; I liked what Lisa said about security starting at the hardware. And that's exactly right, the foundation has to be secure, or you're starting in the wrong place. So that's also something that we feel like we've advanced this time around. This secure root of trust that we started in Gen10, we've extended that to additional partners, so we're excited about that as well. >> That's great, Krista. We're seeing and hearing a lot about customers' challenges at the Edge. Lisa, I want to bring you back in on this one. What are the needs that you see at the Edge from an Intel perspective? How is Intel addressing the Edge? >> Yeah, thanks, John. You know, one of the best things about Xeon is that it can span workloads and environments all the way from the Edge back to the core data center, all within the same software environment. Customers really love that portability. For the Edge, we have seen an explosion of use cases coming from all industries, and I think Krista would say the same. Where we're focused on delivering is that performant-enough compute that can fit into a constrained environment, and those constraints can be physical space, they can be the thermal environment. The Network Edge has been a big focus for us. Not only adding features and integrating acceleration, but investing deeply in that software environment so that more and more critical applications can be ported to Xeon and HPE industry standard servers versus requiring expensive, proprietary systems that were quite frankly not designed for this explosion of use cases that we're seeing. Across a variety of Edge to cloud use cases, we have identified ways to provide step function improvements in both performance and that power efficiency. For example, in this generation, we're delivering an up to 2.9X average improvement in performance per watt versus not using accelerators, an up to 70 watt power savings per CPU opportunity with some unique power management features, and improved total cost of ownership, and just overall power- >> What's the closing thoughts? What should people take away from this announcement around scalable processors, 4th Gen Intel, and then Gen11 ProLiant? What's the walkaway? What's the main super thought here? >> So I can go first. I think the main thought is that, obviously, we have partnered with Intel for many, many years, and we continue to partner. This generation was years in the making. In fact, we've been working on this for years, so we're both very excited that it's finally here. 
But we're laser focused on making sure that customers get the most out of their workloads, the most out of their infrastructure, and that they can meet those challenges that people are throwing at 'em. I think IT is under more pressure than ever before and the demands are there. They're critical to the business success with digital transformation, and our job is to make sure they have everything they need and can meet the business needs as they come at 'em. >> Lisa, your thoughts on this reflection point we're in right now? >> Well, I agree with everything that Krista said. It's just a really exciting time right now. There's a ton of challenges in front of us, but the opportunity to bring technology solutions to our customers' digital transformation is tremendous right now. I think I would also like our customers to take away that between the work that Intel and HPE have done together for generations, they have a community that they can trust. We are committed to delivering customer-led solutions that do solve these business transformation challenges that we know are in front of everyone, and we're pretty excited for this launch. >> Yeah, I'm super enthusiastic right now. I think you guys are on the right track. This title, Compute Engineered for Hybrid World, really kind of highlights the word "Engineered." You're starting to see this distributed computing architecture take shape with the Edge. Cloud, on-premise, computing is everywhere. This is really relevant to your customers, and it's a great announcement. Thanks for taking the time and joining us today. >> Thank you. >> Yeah, thank you. >> This is the first episode of theCUBE's coverage of Compute Engineered For Your Hybrid World. Please continue to check out thecube.net, our site, for the future episodes where we'll discuss how to build high performance AI applications, transforming compute management experiences, and accelerating VDI at the Edge. Also, to learn more about the new HPE ProLiant servers with the 4th Gen Intel Xeon processors, you can go to hpe.com. And check out the URL below, click on it. I'm John Furrier at theCUBE. You're watching theCUBE, the leader in high tech, enterprise coverage. (bright music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Krista | PERSON | 0.99+ |
Lisa Spelman | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
John | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Krista Satterthwaite | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
November | DATE | 0.99+ |
10X | QUANTITY | 0.99+ |
DL360 | COMMERCIAL_ITEM | 0.99+ |
First | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
DL 380 Gen11 | COMMERCIAL_ITEM | 0.99+ |
ProLiant Gen11 | COMMERCIAL_ITEM | 0.99+ |
both | QUANTITY | 0.98+ |
first iteration | QUANTITY | 0.98+ |
ML350 | COMMERCIAL_ITEM | 0.98+ |
first | QUANTITY | 0.98+ |
Xeon | COMMERCIAL_ITEM | 0.98+ |
theCUBE | ORGANIZATION | 0.97+ |
ProLiant Gen11s | COMMERCIAL_ITEM | 0.97+ |
first episode | QUANTITY | 0.97+ |
HPE Mainstream Compute | ORGANIZATION | 0.97+ |
thecube.net | OTHER | 0.97+ |
two servers | QUANTITY | 0.97+ |
4th Gen | QUANTITY | 0.96+ |
Edge | ORGANIZATION | 0.96+ |
Intel Xeon Products | ORGANIZATION | 0.96+ |
hpe.com | OTHER | 0.95+ |
one | QUANTITY | 0.95+ |
4th Gen. | QUANTITY | 0.95+ |
HPE GreenLake | ORGANIZATION | 0.93+ |
Gen10 | COMMERCIAL_ITEM | 0.93+ |
two new servers | QUANTITY | 0.92+ |
up to 70 watt | QUANTITY | 0.92+ |
one thing | QUANTITY | 0.91+ |
HPE ProLiant Gen11 | COMMERCIAL_ITEM | 0.91+ |
one council | QUANTITY | 0.91+ |
HPE ProLiant NextGen | COMMERCIAL_ITEM | 0.89+ |
first one | QUANTITY | 0.87+ |
Cray | ORGANIZATION | 0.86+ |
Gen11 ProLiant | COMMERCIAL_ITEM | 0.85+ |
Edge | TITLE | 0.83+ |
three different design principles | QUANTITY | 0.83+ |
HP GreenLake | ORGANIZATION | 0.82+ |
Number two | QUANTITY | 0.81+ |
HPE Compute Engineered for your Hybrid World - Containers to Deploy Higher Performance AI Applications
>> Hello, everyone. Welcome to theCUBE's coverage of "Compute Engineered for your Hybrid World," sponsored by HPE and Intel. Today we're going to discuss the new 4th Gen Intel Xeon Scalable processors' impact on containers and AI. I'm John Furrier, your host of theCUBE, and I'm joined by three experts to guide us along. We have Jordan Plum, Senior Director of AI and Products for Intel, Bradley Sweeney, Big Data and AI Product Manager, Mainstream Compute Workloads at HPE, and Gary Wang, Containers Product Manager, Mainstream Compute Workloads at HPE. Welcome to the program, gentlemen. Thanks for coming on. >> Thanks John. >> Thank you for having us. >> This segment is going to be talking about containers to deploy high performance AI applications. This is a really important area right now. We're seeing a lot more AI deployed, kind of next gen AI coming. How is HPE supporting and testing and delivering containers for AI? >> Yeah, so what we're doing from HPE's perspective is we're taking these container platforms, combining them with the next generation Intel servers to fully validate the deployment of the containers. So what we're doing is we're publishing the reference architectures. We're creating these automation scripts, and also creating a monitoring and security strategy for these container platforms. So for customers to easily deploy these Kubernetes clusters and to easily secure their Kubernetes environments. >> Gary, give us a quick overview of the new ProLiant DL360 and DL380 Gen11 servers. >> Yeah, for container platforms, what we're seeing mostly is the DL360 and DL380 matching really well for container use cases, especially for AI. The DL360, with the expanded DDR5 memory and the new PCIe 5.0 slots, really, really helps the speed to deploy these container environments and also to grow the data that's required to be stored within these container environments. So for example, with the DL380, if you want to deploy a data fabric, whether it's the Ezmeral Data Fabric or a different vendor's data fabric software, you can do so with the DL360 and DL380 with the new Intel Xeon processors. >> How does HPE help customers with Kubernetes deployments? >> Yeah, like I mentioned earlier, we do a full validation to ensure the container deployment is easy and it's fast. So we create these automation scripts and then we publish them on GitHub for customers to use and to reference. So they can take that and then they can adjust as they need to. But following the deployment guide that we provide will make the Kubernetes deployment much easier, much faster. We also have demo videos that are published, and then a reference architecture document that's published to guide the customer step by step through the process. >> Great stuff. Thanks everyone. We're going to take a quick break here and come back. We're going to do a deep dive on the 4th Gen Intel Xeon Scalable processors and the impact on AI and containers. You're watching theCUBE, the leader in tech coverage. We'll be right back. (intense music) Hey, welcome back to theCUBE's continuing coverage of the "Compute Engineered for your Hybrid World" series. I'm John Furrier with theCUBE, joined by Jordan Plum with Intel, Bradley Sweeney with HPE, and Gary Wang from HPE. We're going to do a drill down and do a deeper dive into the AI containers with the 4th Gen Intel Xeon Scalable processors. We appreciate your time coming in. Jordan, great to see you. 
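To make the validation step Gary describes a bit more concrete, a minimal post-deployment readiness check of the kind those automation scripts might perform could look like the sketch below, written with the standard Kubernetes Python client. The flow and criteria here are illustrative assumptions, not taken from HPE's published GitHub scripts.

```python
# Minimal sketch: verify that every node in a freshly deployed cluster reports Ready.
# Assumes a kubeconfig for the target cluster is already available on this machine.
from kubernetes import client, config

def cluster_is_ready() -> bool:
    config.load_kube_config()          # use config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    not_ready = []
    for node in v1.list_node().items:
        ready = any(c.type == "Ready" and c.status == "True"
                    for c in (node.status.conditions or []))
        if not ready:
            not_ready.append(node.metadata.name)
    if not_ready:
        print("Nodes not ready:", ", ".join(not_ready))
        return False
    print("All nodes Ready")
    return True

if __name__ == "__main__":
    raise SystemExit(0 if cluster_is_ready() else 1)
```

A real validation suite would go further, checking storage classes, GPU device plugins, and monitoring agents, but the pattern of scripted, repeatable checks is the same.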
I got to ask you right out of the gate, what is the view right now in terms of Intel's approach to containers for AI? It's hot right now. AI is booming. You're seeing kind of next gen use cases. What's your approach to containers relative to AI? >> Thanks John, and thanks for the question. With the 4th Generation Xeon Scalable processor launch, we have tested and validated this platform with over 400 deep learning and machine learning models and workloads. These models and workloads are publicly available in the framework repositories and they can be downloaded by anybody. Yet customers are not only looking for model validation, they're looking for model performance, and performance is usually a combination of a given throughput at a target latency. And to do that in the data center all the way to the factory floor, this is not always delivered by these generic proxy models that are publicly available in the industry. >> You know, performance is critical. We're seeing more and more developers saying, "Hey, I want to go faster on a better platform, faster all the time." No one wants to run slower stuff, that's for sure. Can you talk more about the different container approaches Intel is pursuing? >> Sure. First, our approach is to meet the customers where they are and help them build and deploy AI everywhere. Some customers just want to focus on deployment; they have more mature use cases, and they just want to download a model that works, that's high performing, and run. Others are really focused more on development and innovation. They want to build and train models from scratch or at least highly customize them. Therefore we have several container approaches to accelerate the customer's time to solution and help them meet their business SLA along their AI journey. >> So developers can just download these containers and go? >> Yeah, so let me talk about the different kinds of containers we have. We start off with pre-trained containers. We'll have about 55 or more of these containers where the model is actually pre-trained and highly performant; some are optimized for low latency, others are optimized for throughput, and customers can just download these from Intel's website or from HPE and they can just go into production right away. >> That's great. A lot of choice. People can just jump right in. That's awesome. Good choice for developers. They want more, faster velocity. We know that. What else does Intel provide? Can you share some thoughts there? What else do you guys provide developers? >> Yeah, so we talked about how some are just focused on deployment, and maybe they have more mature use cases. Other customers really want to do some more customization or optimization. So we have another class of containers called development containers, and this includes not just the model itself, but it's integrated with the framework and some other capabilities and techniques, like model serving. So now customers can download not only the model but an entire AI stack, and they can do some optimizations, but they can also be sure that Intel has optimized that specific stack on top of the HPE servers. >> So it sounds simple to just get started using the DL model and containers. Is that it? What else are customers looking for? Can you take it a little bit deeper? >> Yeah, not quite. Well, while the customer's ability to reproduce performance on their site that HPE and Intel have measured in our own labs is fantastic. 
That's not actually all the customer is trying to do. They're actually building very complex end-to-end AI pipelines, okay? And a lot of data scientists are really good at building models, really good at building algorithms, but they're less experienced in building end-to-end pipelines, especially 'cause the number of end-to-end use cases is kind of infinite. So we are building end-to-end pipeline containers for use cases like media analytics and sentiment analysis, anomaly detection. Therefore a customer can download these end-to-end containers, right? They can either use them as a reference, just like, see how we built them, and maybe they have some changes in their own data center where they'd like to use different tools, but they can just see, "Okay, this is what's possible with an end-to-end container on top of an HPE server." In other cases, if the overlap in the use case is pretty close, they can actually just take our containers and go directly into production. So all three types of containers that I discussed provide developers an easy starting point to get them up and running quickly and make them productive. And that's a really important point. You talked a lot about performance, John. But really when we talk to data scientists, what they really want to be is productive, right? They're under pressure to change the business, to transform the business, and containers is a great way to get started fast. >> People take productivity, you know, seriously now; developer productivity is the hottest trend, and obviously they want performance. Totally nailed it. Where can customers get these containers? >> Right. Great, thank you John. Our pre-trained model containers, our development containers, and our end-to-end containers are available at intel.com in the developer catalog. But we also post these on many third party marketplaces that other people like to pull containers from. And they're frequently updated. >> Love the developer productivity angle. Great stuff. We've still got more to discuss with Jordan, Bradley, and Gary. We're going to take a short break here. You're watching theCUBE, the leader in high tech coverage. We'll be right back. (intense music) Welcome back to theCUBE's coverage of "Compute Engineered for your Hybrid World." I'm John Furrier with theCUBE, and we'll be discussing and wrapping up our discussion on containers to deploy high performance AI. This is a great segment on really a lot of demand for AI and the applications involved. And we've got the 4th Gen Intel Xeon Scalable processors with HPE Gen11 servers. Bradley, what is the top AI use case that Gen11 HPE ProLiant servers are optimized for? >> Yeah, thanks John. I would have to say intelligent video analytics. It's a use case that's applied across industries and verticals. For example, a smart hospital solution that we conducted with Nvidia and Artisight; in our previous customer success we've seen 5% more hospital procedures and a 16 times return on investment using operating room coordination. With that IVA, with the Gen11 DL380 that we provide using the Intel 4th Gen Xeon processors, it can really support workloads at scale. Whether that is a smart hospital solution, whether that's manufacturing at the Edge or security camera integration, we can do it all with Intel. 
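As a rough picture of what "download and go" might look like for the pre-trained containers Jordan describes, here is a minimal sketch using the Docker SDK for Python. The image reference and port are placeholders rather than a real catalog entry; the actual image tags and run options come from the Intel developer catalog or the marketplace listing.

```python
# Minimal sketch: pull a pre-trained model-serving container and start it locally.
# Requires a running Docker daemon and the docker SDK (pip install docker).
# The image reference below is hypothetical, not a real catalog entry.
import docker

IMAGE = "example.registry.io/pretrained-inference:latest"  # placeholder image tag

def run_inference_container():
    client = docker.from_env()
    client.images.pull(IMAGE)
    container = client.containers.run(
        IMAGE,
        detach=True,
        ports={"8080/tcp": 8080},   # expose the (assumed) serving endpoint
        name="pretrained-inference-demo",
    )
    print("Started container:", container.short_id)
    return container

if __name__ == "__main__":
    run_inference_container()
```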
>> You know, what's really great about AI right now is you're starting to see people figure out kind of where the value is; it does a lot of the heavy lifting on setting things up to make humans more productive. This has clearly now gone to the next level. You're seeing it all in the media now and all these new tools coming out. How does HPE make it easier for customers to manage their AI workloads? I imagine there's going to be a surge in demand. How are you guys making it easier to manage their AI workloads? >> Well, I would say the biggest way we do this is through GreenLake, which is our IT as a service model. So customers deploying AI workloads can get fully-managed services to optimize not only their operations but also their spending and the cost that they're putting towards it. In addition to that, we have our Gen11 ProLiant servers equipped with iLO 6 technology. What this does is allow customers to securely manage their complete server environment from anywhere in the world, remotely. >> Any last thoughts or message on the overall 4th Gen Intel Xeon-based ProLiant Gen11 servers? How will they improve workload performance? >> You know, with this generation, obviously the performance is only getting ramped up as the needs and requirements for customers grow. We partner with Intel to support that. >> Jordan, gimme the last word on the containers' effect on AI applications. Your thoughts as we close out. >> Yeah, great. I think it's important to remember that containers themselves don't deliver performance, right? The AI stack is a very complex set of software that's compiled together, and what we're doing together is to make it easier for customers to get access to that software, to make sure it all works well together, and that it can be easily installed and run on a cloud native infrastructure that's hosted by HPE ProLiant servers. Hence the title of this talk: How to use Containers to Deploy High Performance AI Applications. Thank you. >> Gentlemen, thank you for your time on Compute Engineered for your Hybrid World, sponsored by HPE and Intel. Again, I love this segment, Containers to Deploy Higher Performance AI Applications. This is a great topic. Thanks for your time. >> Thank you. >> Thanks John. >> Okay, I'm John. We'll be back with more coverage. See you soon. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jordan Plum | PERSON | 0.99+ |
Gary | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Gary Wang | PERSON | 0.99+ |
Bradley | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
16 times | QUANTITY | 0.99+ |
5% | QUANTITY | 0.99+ |
Jordan | PERSON | 0.99+ |
Artisight | ORGANIZATION | 0.99+ |
DL 360 | COMMERCIAL_ITEM | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
three experts | QUANTITY | 0.99+ |
DL 380 | COMMERCIAL_ITEM | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Compute Engineered for your Hybrid World | TITLE | 0.98+ |
First | QUANTITY | 0.98+ |
Bradley Sweeney | PERSON | 0.98+ |
over 400 deep learning | QUANTITY | 0.97+ |
intel | ORGANIZATION | 0.97+ |
theCUBE | ORGANIZATION | 0.96+ |
Gen 11 DL 380 | COMMERCIAL_ITEM | 0.95+ |
Xeon | COMMERCIAL_ITEM | 0.95+ |
Today | DATE | 0.95+ |
fourth gen | QUANTITY | 0.92+ |
GitHub | ORGANIZATION | 0.91+ |
380 Gen 11 | COMMERCIAL_ITEM | 0.9+ |
about 55 or more | QUANTITY | 0.89+ |
four gen Xeon | COMMERCIAL_ITEM | 0.88+ |
Big Data | ORGANIZATION | 0.88+ |
Gen 11 | COMMERCIAL_ITEM | 0.87+ |
five slots | QUANTITY | 0.86+ |
Proliant | COMMERCIAL_ITEM | 0.84+ |
GreenLake | ORGANIZATION | 0.75+ |
Compute Engineered for your Hybrid | TITLE | 0.7+ |
Ezmeral | ORGANIZATION | 0.68+ |
HPE Compute Engineered for your Hybrid World - Accelerate VDI at the Edge
>> Hello everyone. Welcome to theCUBE's coverage of Compute Engineered for your Hybrid World, sponsored by HPE and Intel. Today we're going to dive into advanced performance of VDI with the 4th Gen Intel Xeon Scalable processors. Hello, I'm John Furrier, the host of theCUBE. My guests today are Alan Chu, Director of Data Center Performance and Competition for Intel, as well as Denis Kondakov, who's the VDI product manager at HPE, and also joining us is Cynthia Sustiva, CAD/CAM product manager at HPE. Thanks for coming on, really appreciate you guys taking the time. >> Thank you. >> So accelerating VDI to the Edge. That's the topic here today. Let's get into it. Dennis, tell us about the new HPE ProLiant DL320 Gen11 server. >> Okay, absolutely. Hello everybody. So the HPE ProLiant DL320 Gen11 server is the new edge-centric, cost- and density-optimized compact form factor server. It enables customers to modernize and power the next generation of workloads in diverse rack environments at the Edge, in an industry-standard design with flexible scale for advanced graphics and compute. So it is a one-unit, one-processor, rack-optimized server that can be deployed in the enterprise data center as well as at the remote office and at the Edge. >> Cynthia, HPE has announced another server, the ProLiant ML350. What can you tell us about that? >> Yeah, so the HPE ProLiant ML350 Gen11 server is a powerful tower solution for a wide range of workloads. It is ideal for remote office compute, with next-gen performance and expandability and two processors in a tower form factor. This enables the server to be used not only in the data center environment, but also in the open office space as a powerful workstation use case. >> Dennis mentioned both servers are powered by the 4th Gen Intel Xeon Scalable processors. Can you talk about the relationship between Intel and HPE to get this done? How do you guys come together, what's behind the scenes? Share as much as you can. >> Yeah, thanks a lot, John. So without a doubt it takes a lot to put all this together, and I think the partnership that HPE and Intel bring together is a critical point for us to be able to deliver to our customers. And I'm really thrilled to say that these leading Edge solutions that Dennis and Cynthia just talked about are built on the foundation of our 4th Gen Xeon Scalable platform that's trying to meet a wide variety of deployments for today and into the future. So I think the key point is that together we're trying to drive leading performance with built-in acceleration in order to deliver a lot of the business value to our customers, as both HPE and Intel look to scale, drive down costs and deliver new services. >> You got the 4th Gen Xeon, you got Gen11 and multiple ProLiants, a lot of action going on. Again, I love when these next gens come out. Can each of you guys comment and share what are the use cases for each of the systems? Because I think what we're looking at here is the next level innovation. What are some of the use cases on the systems? >> Yeah, so for the ML350, in the modern world where more and more data are generated at the Edge, we need to deploy compute infrastructure where the data is generated. So smaller form factor servers will satisfy the requirements of SMB customers or remote and branch offices to deliver the required performance and redundancy where needed. 
These types of locations can lack dedicated facilities with strict humidity, temperature and noise isolation control. The server, the ML350 Gen11, can be used as a powerful workstation sitting under a desk in the office or open space, as well as a server for virtualized workloads. It is a productivity workhorse with the ability to scale and adapt to any environment. One of the use cases can be hosting a digital workplace for manufacturing, CAD/CAM engineering, or oil and gas industry customers. So this server can be used as a high-end bare metal workstation for local end users, or it can host virtualized desktop solution environments for local and remote users. And to talk about the DL320 Gen11, I will pass it on to Dennis. >> Okay. >> Sure. So when we are talking about Edge locations, we are talking about very specific requirements. So we need to provide solution building blocks that are powerful and performance-efficient, secure, available for scaling up and down in smaller increments compared to the enterprise data center, and of course redundant. So the DL320 Gen11 server is the perfect server to satisfy all of those requirements. So for example, SMB customers can build a VDI solution starting with just two HPE ProLiant DL320 Gen11 servers that will provide sufficient performance for a high-density VDI solution and at the same time be redundant and enabled for scaling up as required. So for VDI use cases, it can be used for high-density general VDI without GPU acceleration or for high-performance VDI with virtual GPUs. So thanks to the modern modular architecture that is used on the server, it can be tailored for GPU or high-density storage deployments with a software-defined compute and storage environment. And to provide greater detail on the Intel view, I'm going to pass it to Alan. >> Thanks a lot, Dennis, and I loved how you're both seeing the importance of how we scale and the applicability of the use cases of both the ML350 and DL320 solutions. So scalability is certainly a key tenet of how we're delivering Intel's Xeon Scalable platform. It is called Xeon Scalable after all. And we know that deployments are happening in all different sorts of environments. And I think, Cynthia, you talked a little bit about the environmental factors that go into how we're designing, and I think a lot of people think of a traditional data center with all the bells and whistles and cooling technology, where at the Edge it sometimes might just be a dusty closet. So we're designing 4th Gen Xeon Scalable to kind of tackle all those different environments and keep that in mind. Our SKUs range from low to high power, general purpose to segment optimized. We're supporting long-life use cases, so all of that goes into account in delivering value to our customers. A lot of the latency-sensitive nature of these Edge deployments also benefits greatly from monolithic architectures, and with our latest CPUs we do maintain quite a bit of that with many of our SKUs, delivering higher frequencies along with SKUs optimized for those specific workloads in networking. So in the end, we're looking to drive scalability. We're looking to drive value in a lot of our end users' most important KPIs, whether it's latency, throughput or efficiency, and 4th Gen Xeon Scalable is looking to deliver that with up to 60 cores, the most built-in accelerators of any CPUs in the market, and really the true technology transitions of the platform with DDR5, PCIe Gen5 and CXL. 
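To make the sizing conversation above a little more concrete, the sketch below walks through the kind of back-of-the-envelope density math a VDI architect might do for a GPU-accelerated configuration. Every number in it (GPU count, frame buffer, vGPU profile, core, oversubscription and memory figures) is an illustrative assumption, not an HPE or Intel sizing recommendation; real designs should follow the published reference architectures.

```python
# Rough, illustrative VDI density estimate: the smallest of the GPU, CPU and memory
# limits is what actually bounds desktops per server. All inputs are assumptions.
def desktops_per_server(gpus=8, framebuffer_gb=16, vgpu_profile_gb=2,
                        cores=2 * 32, vcpus_per_desktop=4, cpu_oversub=2.0,
                        mem_gb=1024, mem_gb_per_desktop=8):
    by_gpu = gpus * (framebuffer_gb // vgpu_profile_gb)        # vGPU slots available
    by_cpu = int(cores * cpu_oversub) // vcpus_per_desktop     # vCPU capacity
    by_mem = mem_gb // mem_gb_per_desktop                      # memory capacity
    return min(by_gpu, by_cpu, by_mem), {"gpu": by_gpu, "cpu": by_cpu, "mem": by_mem}

if __name__ == "__main__":
    total, limits = desktops_per_server()
    print(f"Estimated desktops per server: {total} (limits: {limits})")
```

Changing the assumptions changes which resource binds first; in the GPU-dense configurations described in the discussion that follows, the number of vGPUs is typically the limiting factor rather than cores or memory.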
>> Love the scalability story, love the performance. We're going to take a break. Thanks Cynthia, Dennis. Now we're going to come back in our next segment after a quick break to discuss the performance and the benefits of the 4th Gen Intel Xeon Scalable. You're watching theCUBE, the leader in high tech coverage, be right back. Welcome back around. We're continuing theCUBE's coverage of Compute Engineered for your Hybrid World. I'm John Furrier, and I'm joined by Alan Chu from Intel and Denis Konikoff and Cynthia Sistia from HPE. Welcome back. Cynthia, let's start with you. Can you tell us the benefits of the 4th Gen Intel Xeon Scalable processors for the HPE Gen11 servers? >> Yeah, so HPE ProLiant Gen11 servers support DDR5 memory, which delivers increased bandwidth and lower power consumption. There are 32 DDR5 DIMM slots with up to eight terabytes total on the ML350 and 16 DDR5 DIMM slots with up to two terabytes total on the DL320. So we deliver more memory at a greater bandwidth. Also, PCIe 5.0 delivers increased bandwidth and a greater number of lanes. And when we say an increased number of lanes, we need to remember that each lane delivers more bandwidth than a lane of the previous generation. Plus, a flexible storage configuration on the HPE DL320 Gen11 makes it an ideal server for establishing a software-defined compute and storage solution at the Edge. When we consider a server for VDI workloads, we need to keep the right balance between the number of cores and CPU frequency in order to deliver the desired environment density and an uncompromised user experience. So the new server generation supports a greater number of single-wide and double-wide GPUs to deliver more graphics-accelerated virtual desktops per server unit than ever before. The HPE ProLiant ML350 Gen11 server supports up to four double-wide GPUs or up to eight single-wide GPUs. When designing GPU-accelerated solutions, the number of GPUs available in the system, and consequently the number of vGPUs that can be provisioned for VMs, is the binding factor rather than CPU cores or memory. So HPE ProLiant Gen11 servers with Intel 4th generation Xeon Scalable processors enable us to deliver more virtual desktops per server than ever before. And with that I will pass it on to Alan to provide more details on the new gen CPU performance. >> Thanks Cynthia. So you brought up, I think, a really great point earlier about the importance of achieving the right balance. So between the both of us, Intel and HPE, I'm sure we've heard countless feedback about how we should be optimizing efficiency for our customers, and with 4th Gen Xeon Scalable in HPE ProLiant Gen11 servers I think we achieved just that with our built-in accelerators. So built-in acceleration delivers not only revolutionary performance, but enables significant offload from valuable core execution. That offload unlocks a lot of previously unrealized execution efficiency. So for example, with QuickAssist Technology built in, running NGINX TLS encryption to drive 65,000 connections per second, we can offload enough work to free up to 47% of the cores to do other work. Accelerating AI inference with AMX, that's 10X higher performance, and we're now unlocking real-time inferencing. It's becoming an element in every workload from the data center to the Edge. And lastly, for faster and more efficient database performance with RocksDB, executing with the Intel In-Memory Analytics Accelerator, we're able to deliver 2X the performance per watt versus the prior gen. 
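As a concrete illustration of the AMX point Alan makes, the sketch below runs a standard image model on the CPU in bfloat16 using PyTorch autocast. On a 4th Gen Xeon, bfloat16 matrix operations can be dispatched to the AMX units through the oneDNN backend; the model choice, batch size, and any speedup observed are workload-dependent assumptions rather than a validated Intel or HPE configuration.

```python
# Minimal sketch: CPU inference in bfloat16, which lets PyTorch/oneDNN use AMX tiles
# on 4th Gen Xeon Scalable processors where they are available. For a fair benchmark
# you would add warm-up runs and average over many iterations; this is illustrative.
import time
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()   # weights omitted to keep the example offline
x = torch.randn(8, 3, 224, 224)

def run(bf16: bool) -> float:
    start = time.perf_counter()
    with torch.no_grad():
        if bf16:
            with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
                model(x)
        else:
            model(x)
    return time.perf_counter() - start

fp32_t = run(bf16=False)
bf16_t = run(bf16=True)
print(f"fp32: {fp32_t:.3f}s  bf16 (AMX-eligible): {bf16_t:.3f}s")
```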
So I'll say it's that kind of offload that is really going to enable more and more virtualized desktops or users for any given deployment. >> Thanks everyone. We still got a lot more to discuss with Cynthia, Dennis and Allen, but we're going to take a break. Quick break before wrapping things up. You're watching theCUBE, the leader in tech coverage. We'll be right back. Okay, welcome back everyone to theCUBEs coverage of Compute Engineered for your Hybrid World. I'm John Furrier. We'll be wrapping up our discussion on advanced performance of VDI with the fourth gen Intel Zion scalable processers. Welcome back everyone. Dennis, we'll start with you. Let's continue our conversation and turn our attention to security. Obviously security is baked in from day zero as they say. What are some of the new security features or the key security features for the HP ProLiant Gen 11 server? >> Sure, I would like to start with the balance, right? We were talking about performance, we were talking about density, but Alan mentioned about the balance. So what about the security? The security is really important aspect especially if we're talking about solutions deployed at the H. When the security is not active but other aspects of the environment become non-important. And HP is uniquely positioned to deliver the best in class security solution on the market starting with the trusted supply chain and factories and silicon route of trust implemented from the factory. So the new ISO6 supports added protection leveraging SPDM for component authorization and not only enabled for the embedded server management, but also it is integrated with HP GreenLake compute ops manager that enables environment for secure and optimized configuration deployment and even lifecycle management starting from the single server deployed on the Edge and all the way up to the full scale distributed data center. So it brings uncompromised and trusted solution to customers fully protected at all tiers, hardware, firmware, hypervisor, operational system application and data. And the new intel CPUs play an important role in the securing of the platform. So Alan- >> Yeah, thanks. So Intel, I think our zero trust strategy toward security is a really great and a really strong parallel to all the focus that HPE is also bringing to that segment and market. We have even invested in a lot of hardware enabled security technologies like SGX designed to enhance data protection at rest in motion and in use. SGX'S application isolation is the most deployed, researched and battle tested confidential computing technology for the data center market and with the smallest trust boundary of any solution in market. So as we've talked about a little bit about virtualized use cases a lot of virtualized applications rely also on encryption whether bulk or specific ciphers. And this is again an area where we've seen the opportunity for offload to Intel's quick assist technology to encrypt within a single data flow. I think Intel and HP together, we are really providing security at all facets of execution today. >> I love that Software Guard Extension, SGX, also silicon root of trust. We've heard a lot about great stuff. Congratulations, security's very critical as we see more and more. Got to be embedded, got to be completely zero trust. Final question for you guys. Can you share any messages you'd like to share with the audience each of you, what should they walk away from this? What's in it for them? What does all this mean? >> Yeah, so I'll start. 
Yes, so to wrap it up, HPE ProLiant Gen11 servers are built on 4th generation Xeon Scalable processors to enable high density and extreme performance with high-performance DDR5 memory and PCIe 5.0. Plus, HPE engineered and validated workload solutions provide better ROI in any consumption model preferred by a customer, from Edge to Cloud. >> Dennis? >> Yeah, so you are talking about all of the great features that the new generation servers are bringing to our customers, but at the same time, customer IT organizations should be ready to enable, configure, support, and fine-tune all of these great features for the new server generation. And this is not an obvious task. It requires investments, skills, knowledge and experience. And HPE is ready to step up and help customers at any desired scale with the HPE GreenLake edge-to-cloud platform, which enables customers to have a cloud-like experience, convenience, and flexibility with the security of infrastructure deployed in the private data center or at the Edge. So while consuming HPE solutions, customers have the flexibility to choose the right level of service delivered from HPE GreenLake, starting from hardware as a service, and scaling up or down as required to consume the full stack of hardware and software as a service, with an option to pay per use. >> Awesome. Alan, final word. >> Yeah. What should we walk away with? >> Yeah, thanks. So I'd say that we've talked a lot about the systems here in question with HPE ProLiant Gen11, and they're delivering on a lot of the business outcomes that our customers require in order to optimize for operational efficiency, or maybe just to enable what they want to do with their customers, enabling new features, enabling new capabilities. Underpinning all of that is our 4th Gen Xeon Scalable platform. Whether it's the technology transitions that we're driving with DDR5 and PCIe Gen5, or the raw performance, efficiency and scalability of the platform in the CPU, I think we're here for our customers in delivering on it. >> That's great stuff. Alan, Dennis, Cynthia, thank you so much for taking the time to do a deep dive into the advanced performance of VDI with the 4th Gen Intel Xeon Scalable processors. And congratulations on the Gen11 ProLiant. You've got some great servers there, and again, next gen's here. Thanks for taking the time. >> Thank you so much for having us here. >> Okay, this is theCUBE's coverage of Compute Engineered for your Hybrid World, sponsored by HPE and Intel. I'm John Furrier for theCUBE. Accelerate VDI at the Edge. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Denis Kondakov | PERSON | 0.99+ |
Cynthia | PERSON | 0.99+ |
Dennis | PERSON | 0.99+ |
Denis Konikoff | PERSON | 0.99+ |
Alan Chu | PERSON | 0.99+ |
Cynthia Sustiva | PERSON | 0.99+ |
Alan | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Cynthia Sistia | PERSON | 0.99+ |
John | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
2X | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
10X | QUANTITY | 0.99+ |
60 cores | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
one unit | QUANTITY | 0.99+ |
each lane | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
ProLiant Gen 11 | COMMERCIAL_ITEM | 0.99+ |
each | QUANTITY | 0.99+ |
ML350 | COMMERCIAL_ITEM | 0.99+ |
S&B | ORGANIZATION | 0.99+ |
DL320 Gen 11 | COMMERCIAL_ITEM | 0.98+ |
HPDO 320 Gen 11 | COMMERCIAL_ITEM | 0.98+ |
ML350 Gen 11 | COMMERCIAL_ITEM | 0.98+ |
today | DATE | 0.98+ |
ProLiant ML350 | COMMERCIAL_ITEM | 0.97+ |
two | QUANTITY | 0.97+ |
ProLiant Gen 11 | COMMERCIAL_ITEM | 0.97+ |
DL 320 Gen 11 | COMMERCIAL_ITEM | 0.97+ |
ProLiant DL320 Gen 11 | COMMERCIAL_ITEM | 0.97+ |
single | QUANTITY | 0.97+ |
ProLiant ML350 Gen 11 | COMMERCIAL_ITEM | 0.96+ |
Intels | ORGANIZATION | 0.96+ |
DL320 | COMMERCIAL_ITEM | 0.96+ |
ProLiant DL321 Gen 11 | COMMERCIAL_ITEM | 0.96+ |
ProLiant TL320 Gen 11 | COMMERCIAL_ITEM | 0.96+ |
two processors | QUANTITY | 0.96+ |
Zion | COMMERCIAL_ITEM | 0.95+ |
HPE ProLiant ML 350 Gen 11 | COMMERCIAL_ITEM | 0.95+ |
Zion | TITLE | 0.94+ |
HPE Compute Security - Kevin Depew, HPE & David Chang, AMD
>> Hey everyone, welcome to this event, HPE Compute Security. I'm your host, Lisa Martin. Kevin Depew joins me next, Senior Director, Future Server Architecture at HPE. Kevin, it's great to have you back on the program. >> Thanks, Lisa. I'm glad to be here. >> One of the topics that we're gonna unpack in this segment is all about cybersecurity. And if we think of how dramatically the landscape has changed in the last couple of years, I was looking at some numbers that HPE had provided. Cybercrime will reach $10.5 trillion by 2025, and that's only a couple years away. The average total cost of a data breach is now over $4 million, with 15% year-over-year crime growth predicted over the next five years. It's no longer if we get hit, it's when. It's how often. What's the severity? Talk to me about the current situation with the cybersecurity landscape that you're seeing. >> Yeah, I mean, the numbers you're talking about are just staggering, and that's exactly what we're seeing and that's exactly what we're hearing from our customers. It's just absolutely key. Customers have too much to lose. The dollar cost is just, like I said, staggering. And here at HPE we know we have a huge part to play, but we also know that we need partnerships across the industry to solve these problems. So we have partnered with our various partners to deliver these Gen11 products, whether we're talking about partners like AMD or partners like our NIC vendors and storage card vendors. We know we can't solve the problem alone. And we know the issue is huge, and like you said, the numbers are staggering. So we're really partnering with all the right players to ensure we have a secure solution so we can stay ahead of the bad guys and try to limit the attacks on our customers. >> Right. Limit the damage. What are some of the things that you've seen particularly change in the last 18 months or so? Anything that you can share with us that's eye-opening, more eye-opening than some of the stats we already shared? >> Well, there's been a massive number of attacks just in the last 12 months, but I wouldn't really say it's so much changed, because the amount of attacks has been increasing dramatically over the years for many, many, many years. It's just a very lucrative area for the bad guys, whether it's ransomware or stealing personal data, whatever it is. There's unfortunately a lot of money to be made from it, and a lot of money to be lost by the good guys, the good guys being our customers. So it's not so much that it's changed, it's just that it's accelerating even faster, because it's becoming even more lucrative. So we have to stay ahead of these bad guys. One of the statistics: in Microsoft operating environments, the number of attacks in the last year was up 50% year over year. That's a huge acceleration and we've gotta stay ahead of that. We have to make sure our customers don't get impacted at the level that these staggering numbers of attacks suggest. The bad guys are out there. We've gotta protect our customers from the bad guys. >> Absolutely. The acceleration that you talked about is kind of frightening. It's very eye-opening. We do know that security, you know, we've talked about it for so long as a C-suite priority, a board-level priority. 
We know that, per some of the data that HPE also sent over, organizations are listing cyber risks as a top five concern in their organization. IT budget spend is going up where security is concerned. And so security is on everyone's mind. In fact, theCUBE did, I guess in the middle part of last year, a series on this really focusing on cybersecurity as a board issue, and it went into how companies are structuring security teams, changing their assumptions about the right security model, offense versus defense. But security's gone beyond the board, it's top of mind and it's an integral part of every conversation. So my question for you is, when you're talking to customers, what are some of the key challenges that they're raising? Kevin, the landscape is accelerating, we know it's a matter of time. What are some of those challenges and the key pain points that they're coming to you to help solve? >> Yeah, at the highest level it's simply that security is incredibly important to them. We talked about the numbers. There's so much money to be lost that they come to us and say, security's important for us, what can you do to protect us? What can you do to prevent us from being one of those statistics? So at a high level, that's kind of what we're seeing. With a little more detail, we know that there's customers doing digital transformations. We know that there's customers going hybrid cloud. They've got a lot of initiatives on their own. They've gotta spend a lot of time and a lot of bandwidth tackling things that are important to their business. They just don't have the bandwidth to worry about yet another thing, which is security. So we are doing everything we can, and partnering with everyone we can, to help solve those problems for customers. 
And so security really is pervasive across all of our development organizations, our supply chain organizations, our factories, and our partners. That's what we think is unique about HPE: security is so important to us, and there are a whole lot of pieces of our ProLiant servers that we do ourselves that many others don't. Because we do them ourselves, we can make sure security is in the design from the start and that those pieces work together in a secure manner. We think that gives us an advantage from a security standpoint. >> Security is very much intentional at HPE. I was reading some notes, and you just did a great job of describing that approach: security that is fundamental to defend against increasingly complex threats, an uncompromising focus on state-of-the-art security, and innovation built into your DNA, so that organizations can protect their infrastructure, workloads, and data from the bad guys. Talk to us briefly in our final few minutes, Kevin, about fundamental, uncompromising, and protected, and the value in it for me as an HPE customer. >> When we talk about fundamental, we're talking about the fundamental technologies that are part of our platform. We've integrated TPMs and soldered them down on our platforms. We now have platform certificates as a standard part of the platform. We have IDevID. And probably most importantly, our platforms continue to support what we really believe was a groundbreaking technology, Silicon Root of Trust, and what it's able to do. We have millions of lines of firmware code in our platforms, and with Silicon Root of Trust we can authenticate all of those lines of firmware, whether it's the iLO 6 firmware, our UEFI firmware, the CPLD in the system, or other pieces of firmware. We authenticate all of them to make sure that not a single line of code, not a single bit, has been changed by a bad guy, even if the bad guy has physical access to the platform. That Silicon Root of Trust technology makes sure that when the system boots and hands off to the operating system, and eventually to the customer's application stack, it's starting from a solid foundation, from a system that hasn't been compromised. And we build other things on top of that Silicon Root of Trust, such as the ability to run those scans and authentications at runtime, and the ability to automatically recover if we detect something has been compromised: we can automatically update a compromised piece of firmware to a good one before we ever run it, because we never want to run firmware that's been compromised. That's all part of the Silicon Root of Trust solution, and it's a fundamental piece of the platform.
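To make the Silicon Root of Trust idea a little more concrete, here is a minimal sketch of hardware-anchored firmware verification: an immutable public key checks a signature over a firmware image before that image is allowed to run, so even a single flipped bit is rejected. This is an illustrative assumption of how such a check can work in general, not HPE's implementation; the key size, hash, and signature scheme are placeholders.

```python
# Illustrative sketch only -- not HPE code. Assumes RSA-3072 signatures over SHA-384.
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

def verify_firmware(image: bytes, signature: bytes, root_public_key) -> bool:
    """Return True only if the image verifies against the hardware-anchored key."""
    try:
        root_public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA384())
        return True
    except InvalidSignature:
        return False

# Stand-in for the vendor key whose public half would be anchored in silicon.
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
firmware = b"...iLO / UEFI / CPLD firmware image bytes..."
signature = vendor_key.sign(firmware, padding.PKCS1v15(), hashes.SHA384())

assert verify_firmware(firmware, signature, vendor_key.public_key())
tampered = bytes([firmware[0] ^ 0x01]) + firmware[1:]   # flip a single bit
assert not verify_firmware(tampered, signature, vendor_key.public_key())
```

In the platform Depew describes, the equivalent check happens in silicon before any firmware executes, is repeated at runtime, and a failed verification triggers automatic recovery to a known-good image.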
When we talk about uncompromising, what we're really talking about is that we don't compromise on security. One of the ways we do that is through an extension of our Silicon Root of Trust with a capability called SPDM. This is a technology we saw the need for: we needed to authenticate our option cards and the firmware in those option cards. Silicon Root of Trust protects against many attacks, but one thing it didn't do was verify the option cards and their firmware. We knew that to solve that problem we would have to partner with others in the industry: our NIC vendors, our storage controller vendors, our GPU vendors. So we worked with industry standards bodies and those partners to design a capability that allows us to authenticate all of those devices, and we worked with those vendors to get support both on their side and on our platform side, so that Silicon Root of Trust has now been extended to protect and trust those option cards as well. And when we talk about uncompromising and about protect, we're also talking about our capabilities for protecting against, for example, supply chain attacks. We have our trusted supply chain solution, which allows us to guarantee that what the server is when it leaves our factory will be what it is when it arrives at the customer. If a bad guy does anything in transit from our factory to the customer, the customer will be able to detect it. We enable certain capabilities by default. One is server configuration lock, which can ensure that nothing in the server changed, whether it's firmware, hardware, configurations, or swapped-out processors; we'll detect if a bad guy did any of that, and the customer will know before they deploy the system. That gets enabled by default. With the trusted supply chain we also include an intrusion detection technology option by default, which lets you know whether anybody opened the system up, even when it's not plugged in, and potentially did something malicious to it. We also enable a capability called UEFI Secure Boot, which can authenticate some of the drivers located on the option cards themselves, and iLO high security mode gets enabled by default as well. All of these things are enabled in the platform to ensure that if it's attacked going from our factory to the customer, it will be detected and the customer won't deploy a system that's been maliciously attacked. That's how we protect the customer through those capabilities. >> Got it. Outstanding. You mentioned partners; my last question for you, we've got about a minute left, Kevin: bring AMD into the conversation. Where do they fit in this? >> AMD is an absolutely crucial partner. No one company, even HPE, can do it all themselves. There are a lot of partnerships and a lot of synergies in working with AMD. We've been working with AMD for almost 20 years, since we delivered our first AMD-based ProLiant back in 2004, the HP ProLiant DL585. So we've been working with them a long time. We work with them years ahead of when a processor is announced, and we benefit each other: we look at their designs and help them make their designs better, and they let us know about their technology so we can take advantage of it in our designs. They have a lot of security capabilities, like their memory encryption technologies, the AMD Secure Processor, and Secure Encrypted Virtualization, which is a unique and breakthrough technology to protect virtual machines in hypervisor environments, even from malicious hypervisors. So they have some really great capabilities built into their processor, and we take advantage of those capabilities and ensure they're used in our solutions and in securing the platform.
So it's really such a great partnership. >> A great partnership, and great synergies there. Kevin, thank you so much for joining me on the program, talking about compute security, what HPE is doing to ensure that security is fundamental and uncompromised, and that your customers are protected end to end. We appreciate your insights, we appreciate your time. >> Thank you very much, Lisa. >> We've just had a great conversation with Kevin Depew. Now I get to talk with David Chang, data center solutions marketing lead at AMD. David, welcome to the program. >> Thank you, and thank you for having me. >> One of the hot topics of conversation that we can't avoid is security. Talk to me about some of the things AMD is seeing from the customer's perspective, and why security is so important for businesses across industries. >> Sure. Security is top of mind for almost every customer I'm talking to right now. There are several key market drivers and trends out there today that really call for a better, more innovative approach to security. The high cost of data breaches, for example, costs enterprises in data center downtime, and that is time you're not making money, potentially even leading to a loss of customer confidence in your company's offerings. So there are real costs our customers face every day if they're not prepared and don't have proper security measures set up in the data center. In fact, according to one report, over 400 high-tech threats are being introduced every minute. Numerous new threats pop up every day, and the bad guys are just getting more and more sophisticated. So you have to take measures today and protect yourself end to end with solutions like what AMD and HPE have to offer. >> You talked about some of the costs there; they're exorbitant. I've seen recent figures putting the average cost of a data breach or ransomware attack at over $4 million, plus the cost to brand reputation that you brought up. That's a great point, because nobody wants to be the next headline, and security, I'm sure in your experience, is a board-level conversation; it's absolutely table stakes for every organization. Let's talk about some of the specific things AMD and HPE are doing. I know you have a solid focus on building security features into the EPYC processors. Talk to me a little bit about that focus and some of the great things you're doing there. >> Yeah, we've partnered with HPE for a long time now; I think it's almost 20 years that we've been in business together. We work together to design in security features even before the silicon is born. So we have a great relationship with all our partners, including HPE, and HPE has a really great end-to-end security story that AMD fits into very well. If you think about how security started in the data center, you had strategies around encrypting data in flight, network security and VPNs, and even data at rest on the hard drives.
Encryption has been part of that strategy for a long time, but for ages nobody really thought about the actual data in use: the information being passed from the CPU to memory, and, in virtualized environments, to the virtual machines everybody uses now. For a long time nobody really thought about that third leg of encryption. So AMD comes in and says: as the bad guys get more sophisticated, you have to start worrying about that. For example, people tend to think of memory as non-persistent; after a certain time, the data in memory just goes away. But that's not true anymore, because a lot of memory modules can still retain data up to 90 minutes after power loss, and with something as simple as compressed air or liquid nitrogen you can freeze memory DIMMs long enough to extract the data from a module for up to two or three hours. That's more than enough time to read valuable data, and even encryption keys, off of that memory module. Our world is getting more complex, and with more data out there and an insatiable need for compute and storage, data management becomes all the more important to keep everything running and secure, and creating security against those threats becomes more and more important, especially in virtualized environments like hyperconverged infrastructure or virtual desktops, where it's really hard to keep up with all the different attack surfaces. >> It sounds like what you're describing is that AMD has identified yet another vulnerability, another attack surface in memory, and plugged that hole for organizations that weren't able to do that before. >> Yes. We started out with the belief that security needed to be scalable and able to adapt to changing environments, so we adopted a design philosophy of continuing to build on those security features generation over generation and staying ahead of evolving attacks. A great example is the third-gen EPYC CPU family, where we created a feature called SEV-SNP, which stands for Secure Nested Paging. It's aimed at hypervisor-based attacks, where bad actors write bad data into memory to corrupt the data a guest depends on. SEV-SNP was put in place to help secure against that before it became a problem, and you've heard in the news just recently that it's becoming more and more of an issue. The great news is that we had that feature built in before it became a big problem. >> And now you're on the fourth gen of those EPYC processors. Talk to me about some of the innovations in fourth gen. >> In fourth gen we added on top of that. The base of what we call Infinity Guard is the secure boot and the secure root of trust that we work with HPE on, the strong memory encryption, and SEV, which is Secure Encrypted Virtualization. On the SEV-SNP capabilities I talked about earlier, we've doubled the number of SEV-SNP guests, for an even higher number of confidential VMs to support even more customers than before. We've also added more guest protection from simultaneous multithreading, or SMT, side-channel attacks. And while it's not officially part of Infinity Guard, we've added more AES acceleration, which greatly benefits the security of those confidential VMs with larger numbers of vCPUs, which basically means you can build larger VMs and still be secure. Lastly, we added even stronger AES encryption, going from 128-bit to 256-bit, which is military-grade encryption. That's really the de facto cryptography used for most applications by customers like the US federal government, and it's an essential element for memory security and HPC applications. I always say, if it's good enough for the US government, it's good enough for you. >> Exactly. Well, it's got to be. Talk a little bit about how AMD is doing this together with HPE, a little bit about the partnership, as we round out our conversation. >> Sure, absolutely. Security is only as strong as the layer below it, which is why modern security must be built in rather than bolted on or added after the fact. HPE and AMD developed this layered approach to protecting critical data together. Through our leadership in security features and innovations, we deliver a set of hardware-based features that help decrease potential attack surfaces, with a holistic approach that safeguards critical information across the entire system lifecycle. We provide the confidence of built-in silicon authentication on the world's most secure industry-standard servers, with a 360-degree approach that brings high availability to critical workloads while helping to defend against internal and external threats. Things like HPE's Silicon Root of Trust with the trusted supply chain, which AMD is obviously part of, combined with AMD's Infinity Guard technology, really help provide that end-to-end data protection in today's business. >> And that is so critical for businesses in every industry. As you mentioned, the attackers are getting more and more sophisticated and the vulnerabilities are increasing. The ability to have a partnership like HPE and AMD deliver that end-to-end data protection is table stakes for businesses.
David, thank you so much for joining me on the program, walking us through what AMD is doing with the fourth-gen EPYC processors and how you're working with HPE to enable businesses across industries to get security right. We appreciate your insights. >> Well, thank you again for having me, and we appreciate the partnership with HPE. >> And we want to thank you for watching our special program, HPE Compute Security. I do have a call to action for you: go visit hpe.com/security/compute. Thanks for watching.
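As a footnote to the exchange above: Chang mentioned moving from 128-bit to 256-bit AES. The sketch below, using AES-256-GCM from Python's cryptography library, only illustrates what a 256-bit AES key looks like in use and why data lifted from a frozen DIMM is useless without it; the processor's own memory encryption is done transparently in hardware and uses its own mode of operation, so this is a conceptual example, not AMD's mechanism.

```python
# Illustrative sketch only: AES-256 in GCM mode, not the hardware memory-encryption mode.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, versus the older 128-bit keys
aes = AESGCM(key)
nonce = os.urandom(12)
page = b"data resident in a confidential VM's memory page"

ciphertext = aes.encrypt(nonce, page, None)
assert aes.decrypt(nonce, ciphertext, None) == page
# Without the key, ciphertext recovered via a cold-boot attack reveals nothing useful.
```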
Kevin Depew | HPE ProLiant Gen11 – Trusted Security by Design
>> Hey everyone, welcome to theCUBE. Lisa Martin here with Kevin Depew, Senior Director of Future Server Architecture at HPE. Kevin, it's great to have you on the program. You're going to be breaking down everything that's exciting and compelling about Gen11. How are you today? >> Thanks, Lisa, I'm doing great. >> Good. So let's talk about ProLiant Gen11, the next generation of compute. I read some great stats on hpe.com: Gen11 added 28 new world records while delivering up to 99% higher performance and 43% more energy efficiency than the previous version. That's amazing. Talk to me about Gen11. What makes this update so compelling? >> You talked about some of the stats regarding performance and power efficiency, and those are excellent. We partnered with AMD, and we've got excellent performance and excellent power efficiency on these platforms. But the advantages of this platform go beyond that. Today we're going to talk a lot about cybersecurity, and we've got a lot of security capabilities in these platforms. We've built on top of the security capabilities we've had generation over generation, and we've got some exciting new capabilities we'll be talking about. So whether it's performance, power efficiency, or security, all of those capabilities are in this platform. Security is part of our DNA; we put it into the design from the very beginning, and we've partnered with AMD to deliver what we think is a very compelling story. >> The security piece is absolutely critical. We could have an entire separate conversation on the cybersecurity landscape and the changes there. But one of the things I noticed in the material on Gen11 is that HPE says its security is fundamental. What do you mean by that, and what's new that makes it so fundamental? >> By saying it's fundamental, we mean security is a fundamental part of the platform. You need systems that are reliable, systems with excellent performance, systems with very good power efficiency; those things are all very important to a good server. But security is an absolutely critical part as well, so security is one of the fundamental capabilities of the platform. As I mentioned, we built on top of existing capabilities, capabilities like our Silicon Root of Trust, which ensures that the firmware stack on these platforms is not compromised. Those continue in this platform and have been expanded on. We have our trusted supply chain, and we've expanded on that as well. We have a lot of security capabilities: our platform certificates, our IDevIDs. There are just a lot of security capabilities that are absolutely fundamental to these being a good solution, because, as we said, security is fundamental. It's an absolutely critical part of these platforms. >> Absolutely, for companies in every industry. I want to talk about one of the other ways HPE describes Gen11: uncompromising. I wanted to understand what that means and what the value-add is for customers. >> By uncompromising, we mean we can't compromise on security. Security, as I said before, is fundamental; it can't be compromised. You have to have strong security on these platforms. One of the capabilities we're specifically talking about when we talk about uncompromising is a capability called SPDM.
We've extended our Silicon Root of Trust, which is one of our key technologies and has been there since our Gen10 platforms, through something called SPDM. We saw a problem in the industry with the ability to authenticate option cards and other devices in the system. Silicon Root of Trust verified many pieces of firmware in the platform, but one piece it wasn't verifying was the option cards. We knew we needed to solve this problem, and we knew we couldn't do it a hundred percent on our own, because we needed to work with our partners, whether it's a storage option card, a NIC, or other devices in the future; we needed to make sure we could verify that those were what they were meant to be, that they hadn't been maliciously compromised, and that we could authenticate them. So we worked with industry standards bodies to create the SPDM specification, and what that allows us to do is authenticate the option cards in our systems. That's one of the new capabilities we've added in these platforms. We've gone beyond securing all the things Silicon Root of Trust secured in the past to extending that to the option cards and their firmware as well. So when one of these platforms boots up and hands off to the OS and to the customer's software solution, they can rest assured that the platform hasn't been compromised, that a bad guy hasn't gone in and changed things, and that includes a bad guy with physical access to the platform. That's why we have uncompromised security in these platforms. >> Outstanding. That sounds like great work, and giving customers that peace of mind where security is concerned is table stakes for everybody across the organization. Kevin, you mentioned partners. I know HPE is extending protection to the partner ecosystem; I wanted to get a little more info on that from you. >> Yeah, we've worked with our option card vendors and numerous partners across the industry to support SPDM. We were the ones who went to the industry standards bodies and said, we need to solve this problem, and we had agreement from everybody; everybody agreed this was a problem that had to be solved. But to solve it, you've got to have a partnership. There are a lot of things HPE can solve on our own; this is not one of them. To get a method by which we could authenticate and trust the option cards in the system, we needed to work with our option card vendors, and we also used capabilities we developed with some of our processor vendor partners. So working with partners across the industry, we were able to deliver SPDM. Now we know that an option card, whether it's a storage card, a NIC, or, in the future, GPUs (those may not be there from day one), is what it's intended to be. Because you could mount an attack where you compromise the option card and the firmware in that option card, and option cards have the ability to read and write memory using DMA. If those cards are running firmware created by a bad guy, they can carry out some very costly attacks; there are plenty of statistics showing just how costly cybersecurity attacks are. If option cards have been compromised, you can do some really bad things.
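A simplified sketch may help make the SPDM flow Depew describes concrete: the platform challenges an option card with a fresh nonce, the card signs its firmware measurement together with that nonce using a device key, and the platform verifies both the signature and the expected measurement before trusting the card. Real SPDM (the DMTF DSP0274 specification) adds certificate chains and many more message types; the names and flow below are illustrative assumptions, not HPE's or any vendor's implementation.

```python
# Illustrative sketch only of an SPDM-style challenge/response; not a real SPDM stack.
import os, hashlib
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# --- Device side (e.g., a NIC or storage controller) ---
device_key = ec.generate_private_key(ec.SECP384R1())   # stands in for the card's certified key
card_firmware = b"option card firmware image"
measurement = hashlib.sha384(card_firmware).digest()

def device_respond(nonce: bytes):
    # Sign (measurement || nonce) so a recorded response cannot be replayed.
    return measurement, device_key.sign(measurement + nonce, ec.ECDSA(hashes.SHA384()))

# --- Host side (the platform's root of trust) ---
trusted_pub = device_key.public_key()                   # normally obtained via the card's cert chain
expected = hashlib.sha384(card_firmware).digest()       # known-good measurement

nonce = os.urandom(32)
reported, signature = device_respond(nonce)
try:
    trusted_pub.verify(signature, reported + nonce, ec.ECDSA(hashes.SHA384()))
    trusted = reported == expected
except InvalidSignature:
    trusted = False
print("option card trusted:", trusted)
```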
So this is how we can trust those option cards. And we had to partner with those, those partners in the industry to both define the spec and both sides had to implement to that specification so that we could deliver the solution we're delivering. >>HPE is such a strong partner ecosystem. You did a great job of articulating the value in this for customers. From a security perspective, I know that you're also doing a lot of collaboration and work with amd. Talk to me a little bit about that and the value in it for your joint customers. >>Yeah, absolutely. AMD is a longstanding partner. We actually started working with AMD about 20 years ago when we delivered our first AMD opton based platform, the HP pro, HP Pliant, DL 5 85. So we've got a long engineering relationship with AMD and we've been making products with AMD since they introduced their epic generation processor in 2017. That's when AMD really upped the secure their security game. They created capabilities with their AMD secure processor, their secure encryption virtualization, their memory encryption technologies. And we work with AMD long before platforms actually release. So they come to us with their ideas, their designs, we collaborate with them on things we think are valuable when we see areas where they can do things better, we provide feedback. So we really have a partnership to make these processors better. And it's not something where we just work with them for a short amount of time and deliver a product. >>We're working with them for years before those products come out. So that partnership allows both parties to create better platforms cuz we understand what they're capable of, they understand what our needs are as a, as a server provider. And so we help them make their processors better and they help us make our products better. And that extends in all areas, whether it's performance, power, efficiency, but very importantly in what we're talking about here, security. So they have got an excellent security story with all of their technologies. Again, memory encryption. They, they've got some exceptional technologies there. All their secure encryption, virtualization to secure virtualized environments, those are all things that they excel at. And we take advantage of those in our designs. We make sure that those so work with our servers as part of a solution >>Sounds like a very deeply technically integrated and longstanding relationship that's really symbiotic for both sides. I wanted to get some information from you on HPE server security optimized service. Talk to me about what that is. How does that help HP help its customers get around some of those supply chain challenges that are persistent? >>Yeah, what that is is with our previous generation of products, we announced something called our HPE trusted supply chain and but that was focused on the US market with the solution for gen 11. We've expanded that to other markets. It's, it's available from factories other than the ones in our us it's available for shipping products to other geographies. So what that really was is taking the HPE trusted supply chain and expanding it to additional geographies throughout the world, which provides a big, big benefit for our non-US based customers. And what that is, is we're trying to make sure that the server that we ship out of our factories is indeed exactly what that customer is getting. So try to prevent any possibility of attack in the supply chain going from our factories to the customer. 
And if there is an attack, we can detect it and the customer knows about it, so they won't deploy a system that's been compromised; there have been high-profile cases of supply chain attacks, and we don't want that for customers buying our ProLiant products. So we do things like enable UEFI Secure Boot, which is the ability to authenticate what's called a UEFI option ROM driver on option cards; that's enabled by default, which normally it isn't. We enable the high security mode in our iLO product. We include our intrusion detection technology option, which is normally an optional feature but comes standard when you buy one of the boxes with this trusted supply chain capability. So there are a lot of capabilities that get enabled at the factory. We also enable server configuration lock, which allows a customer to detect whether a bad guy modified anything in the platform while it was in transit from our factory to them. What that allows a customer to do is receive that platform and know it is indeed what it was intended to be and that it hasn't been attacked. And we've now expanded that to many geographies throughout the world. >> Excellent, so much more coverage across the world, which is so incredibly important as cyberattacks continue to rise year over year, ransomware becomes a household word, and the ransoms get even more expensive, especially considering the cybersecurity skills gap. I'm wondering, in what ways does everything you've described with Gen11 and the HPE partner ecosystem, with AMD for example, help customers get around that security skills gap? >> The key thing there is that we care about our customers' security. As I mentioned, security is in our DNA. We consider security in everything we do: every firmware update we make, every hardware design, whatever we're doing, we're always considering what a bad guy could do, what a bad guy could take advantage of, and attempting to prevent it. And AMD does the same thing; you can look at all the technologies in their processors, and they're making sure their processor is secure. We're making sure our platform is secure, so the customer doesn't have to worry about it. The customer can trust us, and they can trust AMD, so they know that's not an area where they have to spend their bandwidth; they can spend their bandwidth on securing other parts of the solution, knowing that the platform and the CPU are secure. Beyond that, we create features and capabilities they can take advantage of. In the case of AMD, a lot of their capabilities are things the software stack and the OS can take advantage of. We have capabilities on our side that they can take advantage of too, whether it's server configuration lock or something else. We try to create features that are easy to use to make their environments more secure. So they can trust the platform, they can trust the processor, and they don't have to worry about those, and then we have features and capabilities that let them solve some of their problems more easily. We're trying to help them with that skills gap by making certain things easier and making certain things that they don't even have to worry about. >> Right.
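To tie together the supply chain protections listed above, here is a hypothetical sketch of the idea behind server configuration lock: fingerprint the as-shipped configuration at the factory, then recompute and compare that fingerprint before deployment, so any change in transit is detected. The field names and digest format are invented for illustration and are not HPE's actual mechanism.

```python
# Illustrative sketch only: a toy configuration-lock digest, not HPE's format.
import copy, hashlib, json

def config_digest(inventory: dict) -> str:
    canonical = json.dumps(inventory, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

factory_inventory = {                        # hypothetical as-shipped inventory
    "bmc_fw": "1.05",
    "uefi_fw": "1.10",
    "option_cards": ["25g-nic-2p", "storage-ctrl"],
    "processors": ["cpu-sku-a", "cpu-sku-a"],
    "secure_boot": True,
}
locked = config_digest(factory_inventory)    # sealed at the factory

received = copy.deepcopy(factory_inventory)  # inventory re-read at the customer site
received["option_cards"][0] = "25g-nic-2p-swapped"   # simulated tampering in transit

if config_digest(received) != locked:
    print("Configuration changed in transit -- do not deploy this server.")
```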
It sounds like you're allowing them to be much more strategic about the security skills they do have. My last question for you, Kevin: is Gen11 available now, and where can folks go to get their hands on it? >> Gen11 was announced earlier this month, and the products will actually be shipping before the end of this year, before the end of 2022. You can go to our website and find out all about our compute security; all of that information is available on our website. >> Awesome. Kevin, it's been a pleasure talking to you, unpacking Gen11, the value in it, why security is fundamental, the uncompromising nature with which HPE and its partners have updated the platform, and the rest-of-world coverage you're enabling. We appreciate your insights and your time, Kevin. >> Thank you very much, Lisa. I appreciate it. >> And we want to let you, the audience, know: check out hpe.com/info/compute for more info on Gen11. Thanks for watching.
Jen Huffstetler, Intel | HPE Discover 2022
>> Announcer: theCUBE presents HPE Discover 2022, brought to you by HPE. >> Hello and welcome back to theCUBE's continuous coverage of HPE Discover 2022 from Las Vegas, from the formerly Sands Convention Center, now the Venetian. John Furrier and Dave Vellante here, excited to welcome in Jen Huffstetler, who's the Chief Product Sustainability Officer at Intel. Jen, welcome to theCUBE, thanks for coming on. >> Thank you very much for having me. >> You're really welcome. So dial back, I don't know, a decade: nobody really cared about this, some people gave it lip service, but corporations generally weren't as in tune. What's changed? Why has it become so top of mind? >> I think in the last year, as we all were working from home, we noticed we had a greater appreciation for the balance in our lives and the impact that climate change was having on the world. So across the globe, with regulation, industry, and even personally, everyone is starting to think about this a little more, and corporations specifically are trying to figure out how they're going to continue to do business in these new regulated environments. >> And IT leaders generally weren't in tune because they weren't paying the power bill; for years it was the facilities people, but then they started to come together. How should leaders in technology, business and tech leaders, IT leaders, CIOs, be thinking about their sustainability goals? >> For IT leaders specifically, they really want to be looking at the footprint of their overall infrastructure, whether that's their on-prem data center or their cloud instances: what can they do to maximize the resources and lower the footprint they contribute to their company's overall footprint? IT really has a critical role to play, because the carbon footprint of the data center, of those products in use, is actually fairly significant. So having a focus there will be key. >> Compute has always been one of those things where, with Intel making chips, heat matters. What are Intel's current goals? Give us an update on where you are. What's the ideal long-term goal, and where are you now? You've had a focus on this for a long, long time. I won't say the goalposts have changed, but the definitions of what this means are changing. What's the current state of Intel's carbon footprint and overall goals? >> Thanks for asking. As you mentioned, we've been invested in lowering our environmental footprint for decades; we've already lowered our carbon footprint by 75% compared with where it would have been without that action. So we're really in that last mile. And that is why, when we recently announced a very ambitious goal, Net-Zero 2040 for our scope one and two manufacturing operations, it was really an industry-leading goal, partly because the technology doesn't even exist yet for the chemistries and for turning sand into silicon into computer chips. By taking this bold goal, we're going to be able to lead the industry, partner with academia, partner with consortia, and that drive is going to have ripple effects across the industry and all the components in semiconductors. >> Is there a changing definition of Net-Zero?
What does that mean? Some people say they're Net-Zero, and maybe in one area they are, but maybe not holistically across the company. As it becomes a broader mandate, society, employees, partners, and Wall Street are all putting pressure on companies. Has the Net-Zero conversation changed a little bit? What's your view on that? >> I think we definitely see it changing, with changing regulations like those coming from the SEC here in the US and in Europe. Net-Zero can't just be lip service anymore; it has to be real reductions in your footprint. We've said as much, including in our supply chain goals, where we've taken on new reduction targets even as our operations are growing. So I think everybody is going through this realization: with growth, how do we keep the footprint lower than it would have been otherwise, keep focusing on those reductions, and rely not just on renewable credits bought in one location and applied to a different geography, but on real, credible offsets for where the product is manufactured or the compute is deployed. >> Jen, you've already reduced by 75%, you're on that last mile. We listened to Pat Gelsinger very closely; until recently he was the most frequent guest on theCUBE, he's been busy I guess. As you apply that discipline to your existing business, and now that Pat's laid out this plan to grow the Foundry business, how does that affect things? Are you able to carry that reduction through to the new foundries? Do you have to rethink it? How does that play in? >> Certainly. The Foundry expansion of our business, with IDM 2.0, is going to include the existing factories that already have the benefit of those decades of investment and focus, and we have clear goals for our new factories in Ohio and in Europe to achieve as well. That's part of the overall plan for Net-Zero 2040, and it's inclusive of our expansion into Foundry, which means many, many more customers are going to be able to benefit from the leadership Intel has here. And as we onboard acquisitions, as any company does, we need to look at the footprint of the acquisition and see what we can do to align it with our overall goals. >> Sustainable IT, for some reason, was always an area of interest to me. When we first started, even before I met you, John, we worked with PG&E to help companies get rebates for installing technologies that would reduce their carbon footprint. >> Jen: Very forward thinking. >> And it was a hard thing to get, but compute was the big deal. There were technologies; I remember virtualization at the time was one, and we would go in and explain to the PG&E engineers how it all worked, because they had metrics they wanted to see. So virtualization was clearly one factor, flash storage was another one. What are the technologies today that people should be paying attention to? >> John: AI's going to have a big impact. >> Reduce the spinning disk, but what are the ones today that are going to have an impact? >> That's a great question. We like to think of the built-in acceleration we have, including some of the early acceleration for virtualization technologies, as foundational. Built-in accelerated compute is green compute, and it allows you to maximize the utilization of the transistors you already have deployed in your data center.
This compute is sitting there and it is ready to be used. What matters most is what you were talking about, John: real-world workload performance. It's not just specsmanship around synthetic benchmarks. With the built-in acceleration we have in Xeon processors, with Intel DL Boost, we're able to achieve 4x the AI performance per watt that we would otherwise. Think about the consolidation you were describing that happened with virtualization; you're effectively doing the same thing with these built-in accelerators, which we have continued to add over time and which have even more coming in our Sapphire Rapids generation. >> And you call that green compute? What does that mean, green compute? >> Well, you are greening your compute. >> John: Okay, got it. >> By increasing the utilization of your resources. If you're able to deploy AI and utilize the telemetry that already exists within the CPU, you can see real results. KDDI in Japan has a great proof point that they've already announced on their 5G data center: they lowered their data center power by 20%. That is real bottom-line impact, as well as carbon-footprint impact, from utilizing those built-in capabilities. >> We've heard some stories earlier in the event here at Discover about cooling innovations, moving the heat to power towns and cities. So you start to see, and you've been following this, hot climates, cold climates, and new ways to recycle energy, to the point where the whole town runs on the data center exhaust; that sounds very sci-fi to me. There's now systems thinking around compute. What's your reaction to that? What's the current view on re-engineering a system to take advantage of that energy, or recycling it? >> When we look at our vision of sustainable compute over this horizon, it's going to be required. We know that compute helps solve society's challenges, and the demand for it is not going away. So how do we take new innovations, looking at a systems level, as compute gets deployed further at the edge? How do we make it efficient, and how do we ensure that compute can be deployed where there is air pollution? Some of these technologies not only enable reuse, they also enable closing in of the solution to make it more robust for edge deployments. It will allow you to place your data center wherever you need it; it no longer needs to reside in one place. And that's going to allow you to get those energy-reuse benefits, whether that's feeding district heating if you're in Northern Europe, or examples where people put greenhouses right next to a data center to start growing food in what were previously food deserts. So I don't think it's science fiction; it is how we need to rethink, as a society, how to utilize everything we have, all the tools at hand. >> There's a commercial on the radio, on the East Coast anyway, I don't know if you've heard it: "What's your one thing?" A gentleman comes on and talks about things you can do to help the environment, and he asks, "What's your one thing?" So what's the one thing, or maybe it's not just one, that IT managers should be doing to affect their carbon footprint? >> The one thing to affect their carbon footprint? There are so many things. >> Dave: Two, three, tell me.
>> I think the single most impactful thing they could do in their infrastructure goes back to John's comment: imagine if the world deployed AI, with all the benefits not only in business outcomes, the revenue, lowering the TCO, but also in lowering the footprint. That's the one thing they could do. If I can throw in a second, it would be to really consider how you get renewable energy into your computing ecosystem. At Intel, where we're at 80% renewable power, our processors are inherently lower carbon because of all the work we've done; others have less than 10% renewable energy. So you want to look for products that are low carbon by design, any Intel-based system, and where you can get renewables from your grid, ask for it and run your workload there. And the next step, getting to truly sustainable computing, is going to take everyone, including every enterprise, thinking differently and really considering what it would look like to bring renewables onto their own site if they don't have access through their local utility; many customers are really starting to evaluate that. >> Well, Jen, it's great to have you on theCUBE, great insight into the current state of the art of sustainability and carbon footprint. My final question for you is about the talent out there. The younger generation coming in wants to work for a company that's mission-driven, we know that; there's the Wall Street pressure on the financial business model, and then the save-the-planet kind of pressure. So there's a lot of talent coming in. Is there awareness at the university level? Can people get degrees in sustainability? There are a lot of people who want to come into this field: what are some of the talent backgrounds of people learning, or who might want to be in this field? What would you recommend? How would you describe onboarding into the career for someone who wants to contribute? Because it's not brand new, but it's going to be globally aware. >> There certainly are degrees with focuses on sustainability, maybe looking holistically at the enterprise. But where I think the globe is really going to benefit, and we haven't really talked about it, is the software inefficiency. As we delivered more and more compute over the last few decades, the programming languages basically got more inefficient, so there's at least 35% inefficiency in the software. So being a software engineer, even if you're not an AI engineer (AI would probably be the highest impact), means focusing on building new applications that are efficient, that utilize the transistors well, and that don't leave zombie services running that aren't being used. So I actually think-- >> So we've got to program in assembly? (all laughing) >> (indistinct) would get really offended. >> Machine language. I have to throw that in, sorry. >> Maybe not that bad. (all laughing) >> That's funny, just a joke. But the question is, what's my career path? What's a hot career in this area? Sustainability, AI, I totally see that. Anything else? Any other career opportunities, hot jobs, or hot areas to work on?
>> I think it takes every architect and every engineer thinking differently about their design, whether it's the design of a building, a processor, or a motherboard. We have a whole set of low-carbon architecture actions underway that we'll take to the ecosystem. So it could really span any engineering discipline; it's a mindset with which you approach the customer problem. >> John: That systems thinking, yeah. >> Yeah, sustainability designed in. Jen, thanks so much for coming back on theCUBE. It's great to have you. >> Thank you. >> All right, Dave Vellante for John Furrier; we're sustaining theCUBE. We're winding down day three of HPE Discover 2022. We'll be right back. (upbeat music)
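A brief technical aside on the Intel DL Boost acceleration Huffstetler mentioned earlier in this segment: DL Boost (AVX-512 VNNI) accelerates the INT8 multiply-accumulate arithmetic used in quantized AI inference, which is where the performance-per-watt gain comes from. The rough numpy sketch below shows that arithmetic in slow, scalar form; the shapes and numbers are made up, it is a conceptual illustration rather than Intel code, and a real deployment would use an optimized inference runtime instead of numpy.

```python
# Illustrative sketch only: the INT8 quantize / multiply-accumulate pattern that VNNI speeds up.
import numpy as np

def quantize_int8(x):
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)   # hypothetical layer weights
activations = rng.standard_normal(256).astype(np.float32)

qw, sw = quantize_int8(weights)
qa, sa = quantize_int8(activations)

acc = qw.astype(np.int32) @ qa.astype(np.int32)   # INT8 products accumulated in INT32
approx = acc.astype(np.float32) * (sw * sa)       # rescale back to FP32

exact = weights @ activations
print("max abs error vs FP32:", float(np.max(np.abs(approx - exact))))
```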
Guido Appenzeller, Intel | HPE Discover 2021
(soft music) >> Welcome back to HPE Discover 2021, the virtual version, my name is Dave Vellante and you're watching theCUBE, and we're here with Guido Appenzeller, who is the CTO of the Data Platforms Group at Intel. Guido, welcome to theCUBE, come on in. >> Aww, thanks Dave, I appreciate it. It's great to be here today. >> So I'm interested in your role at the company, let's talk about that, you're brand new, tell us a little bit about your background. What attracted you to Intel and what's your role here? >> Yeah, so I grew up in the startup ecosystem of Silicon Valley, I came here for my PhD and never left. And I built software companies, worked at software companies, worked at VMware for a little bit. And I think my initial reaction when the Intel recruiter called me was like, hey, you've got the wrong phone number, I'm a software guy, that's probably not who you're looking for. But we had a good conversation, and I think at Intel there's a realization that you need to look at what Intel builds more as an overall system, from an overall systems perspective. The software stack and the hardware components are all getting more and more intricately linked, and you need the software to basically bridge across the different hardware components that Intel is building. So now I'm the CTO for the Data Platforms Group, so that builds the data center products here at Intel. And it's a really exciting job. And these are exciting times at Intel; with Pat, we've got a fantastic CEO at the helm. I've worked with him before at VMware. So a lot of things to do, but I think a very exciting future. >> Well, I mean, the data center is the wheelhouse of Intel. Of course your ascendancy was a function of the PCs and the great volume and how you changed that industry, but really the data center is where, I remember the days people said, Intel will never be in the data center, it's just a toy. And of course, you're the dominant player there now. So your initial focus here is really defining the vision, and I'd be interested in your thoughts on the future, what the data center looks like in the future, where you see Intel playing a role. What are you seeing as the big trends there? Pat Gelsinger talks about the waves, he says, if you don't ride the waves you're going to end up being driftwood. So what are the waves you're driving? What's different about the data center of the future? >> Yeah, that's right. You want to surf the waves, that's the way to do it. So look, I like to look at this sort of in terms of major macro trends, and I think that the biggest thing that's happening in the market right now is the cloud revolution. And I think we're well halfway through or something like that. It's this transition from the classic client-server type model, with enterprises running their own data centers, to more of a cloud model, where something is run by hyperscale operators or maybe run by an enterprise themselves (indistinct), there's a variety of different models, but the provisioning models have changed. It's much more of a turnkey type service. And when we started out on this journey, I think we built data centers the same way that we built them before. Although the way to deliver IT had really changed, it's moving to more of a service model, and we're really now starting to see the hardware diverge, the actual silicon that we need to build and how to address these use cases, diverge.
And so I think one of the things that is probably most interesting for me is really to think through, how does Intel in the future build silicon that's built for clouds, like on-prem clouds, edge clouds, hyperscale clouds, but basically built for these new use cases that have emerged. >> So just a quick, kind of a quick aside, to me the definition of cloud is changing, it's evolving. It used to be this set of remote services in a hyperscale data center; now that experience is coming on-prem, it's connecting across clouds, it's moving out to the edge, it's supporting all kinds of different workloads. How do you see that sort of evolving cloud? >> Yeah, I think the biggest difference to me is that sort of a cloud starts with this idea that the infrastructure operator and the tenant are separate. And that actually has major architectural implications. It's just, this is a perfect analogy, but if I build a single family home, where everything is owned by one party, I want to be able to walk from the kitchen to the living room pretty quickly, if that makes sense. So, my house here actually has an open kitchen, it's the same room, essentially. If you're building a hotel where your primary goal is to have guests, you pick a completely different architecture. The kitchen from your restaurants, where the cooks are busy preparing the food, and the dining room, where the guests are sitting, they are separate. The hotel staff has a dedicated place to work and the guests have dedicated places to mingle, but they don't overlap, typically. I think it's the same thing with architecture in the clouds. So, initially the assumption was it's all one thing, and now suddenly we're starting to see like a much cleaner separation of these different areas. I think a second major influence is that the type of workloads we're seeing is just evolving incredibly quickly. 10 years ago, things were mostly monolithic; today most new workloads are microservice based, and that has a huge impact on where CPU cycles are spent, where we need to put in accelerators, how we build silicon for that. To give you an idea, there's some really good research out of Google and Facebook where they run numbers. And for example, if you just take a standard system and you run an application written in a microservice-based architecture, you can spend anywhere from, I want to say, 25% to in some cases over 80% of your CPU cycles just on overhead, just on marshaling and demarshaling the protocols, and the encryption and decryption of the packets, and your service mesh that sits in between all of these things. That creates a huge amount of overhead. So if, for us, 80% goes into these overhead functions, really our focus needs to be on how do we enable that kind of infrastructure. >> Yeah, so let's talk a little bit more about workloads if we can. The overhead, there's also sort of, as the software, as the data center becomes software defined, thanks to your good work at VMware, it is a lot of cores that are supporting that software-defined data center. And then- >> It's at VMware, yeah. >> And as well, you mentioned microservices, container-based applications, but as well, AI is coming into play. And AI is just kind of amorphous, but it's really data-oriented workloads versus kind of general purpose ERP and finance and HCM. So those workloads are exploding, and then we can maybe talk about the edge. How are you seeing the workload mix shift and how is Intel playing there?
>> I think the trend you're talking about is definitely right, and we're getting more and more data centric; shifting the data around becomes a larger and larger part of the overall workload in the data center. And AI is getting a ton of attention. Look, if I talk to most operators, AI is still an emerging category. We're seeing, I'd say, five, maybe 10% of workloads being AI. It's growing, they're very high value workloads. And they're very challenging workloads, but it's still a smaller part of the overall mix. Now edge is big, and edge is two things, it's big and it's complicated, because the way I think about edge is it's not just one homogeneous market, it's really a collection of separate sub markets. It's very heterogeneous, it runs on a variety of different hardware. Edge can be everything from a little server that's fanless, strapped to a telephone pole with an antenna on top of it, to a microcell, or it can be something that's running inside a car, modern cars have a small little data center inside. It can be something that runs on an industrial factory floor, the network operators, there's a pretty broad range of verticals that all look slightly different in their requirements. And I think it's really interesting, it's one of those areas that really creates opportunities for vendors like HPE to really shine and address this heterogeneity with a broad range of solutions, very excited to work together with them in that space. >> Yeah, so I'm glad you brought HPE into the discussion, 'cause we're here at HPE Discover, I want to connect that. But so when I think about HPE strategy, I see a couple of opportunities for them. Obviously Intel is going to play in every part of the edge, the data center, the near edge and the far edge, and I gauge HPE does as well with Aruba. Aruba is going to go to the far edge. I'm not sure at this point, anyway it's not yet clear to me how far HPE's traditional server business goes to the, inside of automobiles, we'll see, but it certainly will be at the, let's call it the near edge, as a consolidation point- >> Yeah. >> Et cetera. And look, the edge can be a race track, it could be a retail store, it could be defined in so many ways. Where does it make sense to process the data? But, so my question is, what's the role of the data center in this world of edge? How do you see it? >> Yeah, look, I think in a sense what the cloud revolution is doing is that it's showing us, it leads to a polarization of the classic data center into edge and cloud, if that makes sense, it's splitting; before, this was all mingled a little bit together. If my data center is in my basement anyway, what's the edge, what's the data center? It's the same thing. The moment I'm moving some workloads to the cloud, I don't even know where they're running anymore, while some other workloads that have to have a certain sense of locality I need to keep close. And there are some workloads you just can't move into the cloud. If I'm generating lots of video data that I have to process, it's financially completely unattractive to shift all of that to a central location, I want to do this locally. And will I ever connect my smoke detector with my sprinkler system via the cloud? No, I won't; if things go bad, that may not work anymore. So I need something that does this locally. So I think there's many reasons why you want to keep something on premises.
And I think it's a growing market, it's very exciting, we're doing some very good stuff with friends like HPE, they have the ProLiant DL110 Gen10 Plus server with our latest 3rd Generation Xeons on them for Open RAN, which is the radio access network in the telco space. HPE Edgeline servers, also with 3rd Generation Xeons; there are some really nice products there that I think can really help address enterprises, carriers and a number of different organizations on these edge use cases. >> Can you explain, you mentioned Open RAN, vRAN, should we essentially think of that as kind of the software-defined telco? >> Yeah, exactly. It's software-defined cellular. I actually learned a lot about that over the recent months. When I was taking these classes at Stanford, these things were still done in analog, that basically a radio signal would be processed in an analog way and digested. And today, typically, the radio signal is immediately digitized and all the processing of the radio signal happens digitally. And it happens on servers, some of them HPE servers. And it's a really interesting use case where we're basically now able to do something in a much, much more efficient way by moving it to a digital, more modern platform. And it turns out you can actually virtualize these servers and run a number of different cells inside the same server. And it's really complicated because you have to have fantastic real-time guarantees, a very sophisticated software stack. But it's a really fascinating use case. >> A lot of times we have these debates and it's maybe somewhat academic, but I'd love to get your thoughts on it. And the debate is about how much data that is processed and inferred at the edge is actually going to come back to the cloud; most of the data is going to stay at the edge, a lot of it's not even going to be persisted. So that's sort of the negative for the data center, but then the counter to that is there's going to be so much data that even a small percentage of all the data that we're going to create is going to create so much more data back in the cloud, back in the data center. What's your take on that? >> Look, I think there's different applications that are easier to do in certain places. Look, going to a large cloud has a couple of advantages. You have a very complete software ecosystem around you, lots of different services. You have, first, if you need very specialized hardware, if I want to run a big learning task where I need a 1000 machines, and then this runs for a couple of days, and then I don't need to do that for another month or two, for that it's really great. There's on demand infrastructure, having all this capability up there. At the same time it costs money to send the data up there. If I just look at the hardware cost, it's much, much cheaper to build it myself, in my own data center or in the edge. So I think we'll see customers picking and choosing what they want to do where, and there's a role for both, absolutely. And so, I think there's certain categories. At the end of the day, why do I absolutely need to have something at the edge? There's a couple of, I think, good use cases. One is, let me actually rephrase a little bit. I think it's three primary reasons. One is simply bandwidth, where I'm saying, my video data, like I have 100 4K video cameras with 60 frames per second feeds, there's no way I'm going to move that into the cloud. It's just cost prohibitive- >> Right.
>> I have a hard time even getting (indistinct). There might be latency, if I want to reliably react in a very short period of time, I can't do that in the cloud, I need to do this locally, with me. I can't even do this in my data center; this has to be very closely coupled. And then there's this idea of fate sharing. I think, if I want to make sure that if things go wrong, the system is still intact, anything that's sort of an emergency kind of backup, an emergency type procedure; if things go wrong, I can't rely on there being a good internet connection, I need to handle things locally, that's the smoke detector and the sprinkler system. And so for all of these, there's good reasons why we need to move things close to the edge, so I think there'll be a creative tension between the two, but both are huge markets. And I think there's great opportunities ahead for HPE to work on all these use cases. >> Yeah, for sure, top brand in that compute business. So before we wrap up today, thinking about your role, part of your role is a trend spotter. You're kind of driving innovation, right, surfing the waves as you said, skating to the puck, all the- >> I've got my perfect crystal ball right here, yeah. >> Yeah, all the cliches. (Dave chuckles) Puts a little pressure on you, but, so what are some of the things that you're overseeing, that you're looking towards in terms of innovation projects, particularly obviously in the data center space, what's really exciting you? >> Look, there's a lot of them, and pretty much all the interesting ideas I get from talking to customers. You talk to the sophisticated customers, you try to understand the problems that they're trying to solve and can't solve right now, and that gives you ideas. So just to pick a couple, one area I'm probably thinking about a lot is how can we build, in a sense, better accelerators for the infrastructure functions? So, no matter if I run an edge cloud or I run a big public cloud, I want to find ways how I can reduce the amount of CPU cycles I spend on microservice marshaling and demarshaling, service mesh, storage acceleration and things like that. And so clearly, if this is a large chunk of the overall cycle budget, we need to find ways to shrink that, to make this more efficient. So I think this basic infrastructure function acceleration sounds probably as unsexy as any topic could sound, but I think this is actually a really, really interesting area and one of the big levers we have right now in the data center. >> Yeah, I would agree Guido, I think that's actually really exciting because you actually can pick up a lot of the wasted cycles now and that drops right to the bottom line, but please- >> Yeah, exactly. And it's kind of funny, we're still measuring so much with SPECint rates of CPU performance, it's like, well, we may actually be measuring the wrong thing. If 80% of the cycles of my app are spent in overhead, then the speed of the CPU doesn't matter as much, it's other functions that (indistinct). >> Right. >> So that's one. >> The second big one is memory is becoming a bigger and bigger issue, and it's memory cost, 'cause memory prices used to sort of decline at the same rate that our core counts and clock speeds increased, and that's no longer the case. So we've run into some scaling limits, there's some physical scaling limits where memory prices are becoming stagnant. And this has become a major pain point for everybody who's building servers.
So I think we need to find ways how we can leverage memory more efficiently, share memory more efficiently. We have some really cool ideas in that space that we're working on. >> Well, yeah. And Pat, let me just, sorry to interrupt, but Pat hinted at that in your big announcement. He talked about system on package, I think, is what he used to talk about what I call disaggregated memory and better sharing of that memory resource. And that seems to be a clear benefit of value creation for the industry. >> Exactly. If this becomes a larger part of the overall costs for our customers, we want to help them address that issue. And the third one is, we're seeing more and more data center operators that are effectively power limited. So we need to reduce the overall power of systems, or maybe to some degree just figure out better ways of cooling these systems. But I think there's a lot of innovation that can be done there to both make these data centers more economical but also to make them a little more green. Today data centers have gotten big enough that if you look at the total amount of energy that we're spending in this world as mankind, a chunk of that is going just to data centers. And so if we're spending energy at that scale, I think we have to start thinking about how can we build data centers that are more energy efficient, that do the same thing with less energy in the future. >> Well, thank you for laying those out, you guys have been long-term partners with HP and now of course HPE, I'm sure Gelsinger is really happy to have you on board, Guido, I would be, and thanks so much for coming to theCUBE. >> It's great to be here and great to be at the HP show. >> And thanks for being with us for HPE Discover 2021, the virtual version, you're watching theCUBE, the leader in digital tech coverage, be right back. (soft music)
George Hope, HPE, Terry Richardson and Peter Chan, AMD | HPE Discover 2021
>>from the cube studios in Palo alto in boston connecting with thought leaders all around the world. >>This is a cute conversation. Welcome to the cubes coverage of HP discover 2021 I'm lisa martin. I've got three guests with me here. They're going to be talking about the partnership between HP and AMG. Please welcome George hope worldwide Head of partner sales at HP terry, Richardson north american channel chief for AMG and Peter chan, the director of media channel sales at AMG Gentlemen, it's great to have you on the cube. >>Well, thanks for having us lisa. >>All right, >>we're excited to talk to you. We want to start by talking about this partnership terry. Let's go ahead and start with you. H P E and M D have been partners for a very long time, very long history of collaboration. Talk to us about the partnership >>HB named, He do have a rich history of collaboration spinning back to the days of chapter on and then when A M. D brought the first generation AMG equity process department back in 2017, HP was a foundational partner providing valuable engineering and customer insights from day one AmY has a long history of innovation that created a high performance CP roadmap for value partners like HP to leverage in their workload optimized product portfolios, maximizing the synergies between the two companies. We've kicked off initiatives to grow the chain of business together with workload focused solutions and together we define the future. >>Thanks terry George, let's get your perspective as worldwide had a partner sales at HP. Talked to me about H P S perspective of that AMG partnership. >>Yeah, they say it's uh the introduction of the third generation AMG Epic processors, we've we've doubled our A. M. D. Based Pro Lion portfolio. We've even extended it to our follow systems. And with this we have achieved a number of world records across a variety of workloads and are seeing real world results. The third generation am the epic processor delivers strong performance, expand ability and the security our customers need as they continue their digital transformation, We can deliver better outcomes and lay a strong foundation for profitable apartment growth. And we're incorporating unmatched workload optimization and intelligent automation with 360° security. And of course, uh with that as a service experience. >>But as a service experience becoming even more critical as is the security as we've seen some of the groundbreaking numbers and data breaches in 2020 alone. Peter I want to jump over to you now. One of the things that we see H P E and M. D. Talking about our solutions and workloads that are key areas of focus for both companies. Can you explain some of those key solutions and the value that they deliver for your customers? >>Absolutely. It's from computing to HPC to the cloud and everything in between and the young HB have been focused on delivering not just servers but meaningful solutions that can solve customer challenges. For example, we've seen here in India, the DL- 325 has been really powerful for customers that want to deploy video. Hp nmD have worked together with icy partners in the industry to tune the performance and ensure that the user experience is exceptional. Um This just one example of many of course, for instance, the 3 45 with database 3 65 for dense deployments, it's key the 35 That has led the way in big data analytics. 
Um the Apollo 60 500 breaking new path in terms of AI and Machine learning, quite a trending topic and m D H p are always in the news when it comes to groundbreaking HPC solutions and oh by the way, we're able to do this due to an unyielding commitment to the data center and long term laser focused execution on the M the road map. >>Excellent. Thanks. Peter. Let's talk about the channel expansion a little bit more terry with you. You know, you and the team here. Channel Chief focused on the channel. What is A. M. D. Doing specifically to expand your channel capabilities and support all of the Channel partners that work with Andy >>great question lisa Campbell is investing in so many areas around the channel. Let's start with digital transformation. Our Channel partners consistently provided feedback that customers need to do more with less between A and B and H P. E. We have solutions that increase capabilities and deliver faster time to value for the customer looking to do more with less. We have a tool on our website called the and metrics server virtualization, Tco estimation tool and those who have visually see the savings. We also have lots of other resources such as technical documentation, A and E arena for training and general CPU's departments can take advantage of aside from solution examples, AMG is investing in headcount internally and at our channel part race. I'm actually an example of the investment MD is making to build out the channel. One more thing that I'll mention is the investment that are, you know, lisa su and Andy are making to build out the ecosystem from head Count to code development and is investing to have a more powerful user experience with our software partners in the ecosystem. From my discussions with our channel partners, they're glad to see A and d expanding our our channel through the many initiatives and really bringing that ecosystem. >>Here's another question for you as channel chief. I'm just curious in the last year, speaking and you talked about digital transformation. We've seen so much acceleration of the adoption of that since the last 15 months has presented such challenges. Talk to me a little bit about some of the feedback from your channel partners about what you am, D N H B are doing together to help those customers needed to deliver that fast time to value, >>you know, so really it's all about close collaboration. Um we we work very closely with our counterparts at H P. E just to make sure we understand partner and customer requirements and then we work to craft solutions together from engaging, technically to collaborating on on, you know, when products will be shipped and delivered and also just what are we doing to uh to identify the next key workloads and projects that are going to be engaged in together? So it's it's really brought the companies I think even closer together, >>that's excellent as a covid catalyst. As I say, there's a lot of silver linings that we've seen and it sounds like the collaboration terry that you mentioned has become even stronger George. I want to go to you. Let's HP has been around for a long time. My first job in tech was Hewlett Packard by the way, many years ago. I won't mention how long but talk to me about the partnership with AMG from H P s perspective, is this part of H P S D N A? >>Absolutely. Partnering is our D N A. We've had 80 years of collaboration with an ever expanding ecosystem of partners that that all play a key role in our go to market strategy. 
We actually design and test our strategic initiatives in close collaboration with our partners so that we can meet their most pressing needs. We do that through like farmer advisory boards and things of that nature. Um but we have we have one of the most profitable partner programs in the industry, 2-3 times higher rebates than most of our competitors. And we continue to invest in the partner experience in creating that expertise so partners can stand out in a highly competitive market. Uh And Andy is in direct alignment with that strategy. We have strong synergies and a common focus between the two companies. >>And I also imagine George one question and one question to that there's tremendous value in it for your end user customers, especially those that have had to everyone pivot so many times in the last year and have talked to me a little bit about George What you're saying from the customer's perspective. >>Well as Antonio Neri said a couple of years back, the world is going to be hybrid and uh, he was right. We continue uh we continue to see that evolution and we continue to deliver solutions around a hybrid digital world with, with Green Lake and the new wave of digital transformation that we refer to now as the age of insight customers want a cloud experience everywhere. And 70% of today's workloads can easily be re factored for the public cloud or they need to stay physically close to the data and other apps at the emerging edge or in polos are in the data centers. So as a result, most organizations are forced to deal with the complexity of having two divergent operating models and they're paying higher cost to maintain them both with Green Lake, we provide one consistent operating model with visibility and control across public clouds and on prem environments. And that applies to all workloads, you know, whether it's cloud native or non cloud native applications. Um we also have other benefits like no cloud block in or no data. Egress charges, so you have to pay a steep price just to move workloads out of the public cloud. And then we're expanding collaboration opportunities within for our partner ecosystem so that we can bring that cloud experience to a faster growing number of customers worldwide. So we've launched new initiatives uh in support of the core strategy as we accelerate our as a service vision and then work with partners to unlock better customer outcomes with Green Lake and of course, hb compute of which I am d is part of is, is the underlying value added technology. >>Can you expand on some of those customer outcomes as we look at, as I mentioned before, this very dynamic market in which we live. It's all about customer outcomes. What are some of those that from a hybrid cloud environment perspective with Green like that you're helping customers achieve? >>Well, at least Greenland has come out with with about 30 different different offerings that package up some solutions. So you're not just buying infrastructure as a service. We have offerings like HPC as a service. We have offerings like uh, V D I as a service, ml, ops as a service. So we're packaging in technology, some are are some are not ours, but into completing some solutions. So that creates the outcome that the customers are looking for. >>Excellent. Thanks, George and Peter, last question to you again with the hybrid cloud environment being something that we're seeing more and more of the benefits that Green Lake is delivering through the channel. What's your perspective from a. M decide? 
>>Absolutely, Lisa. I think it's clear that with AMD-based systems, customers get the benefit of performance, security, and fast time to value, whether deployed on-prem, in the cloud, or in a hybrid model. So please come try out our HPE systems based on AMD processors and see how we can accelerate and protect your applications. Thank you, Lisa. >>Excellent. Peter, George, Terry, thank you for joining me today. I'm sure there's a lot more that folks are going to be able to learn about what AMD and HPE are doing together on the virtual show floor. We appreciate your time. For my guests, I'm Lisa Martin. You're watching theCUBE's coverage of HPE Discover 2021.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
George | PERSON | 0.99+ |
AMG | ORGANIZATION | 0.99+ |
Andy | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
HP | ORGANIZATION | 0.99+ |
lisa martin | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Peter chan | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
lisa Campbell | PERSON | 0.99+ |
80 years | QUANTITY | 0.99+ |
Hewlett Packard | ORGANIZATION | 0.99+ |
Antonio Neri | PERSON | 0.99+ |
two companies | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
one question | QUANTITY | 0.99+ |
Green | ORGANIZATION | 0.99+ |
both companies | QUANTITY | 0.99+ |
Peter Chan | PERSON | 0.99+ |
H P. E | ORGANIZATION | 0.99+ |
Palo alto | LOCATION | 0.99+ |
three guests | QUANTITY | 0.99+ |
third generation | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
lisa su | PERSON | 0.99+ |
George Hope | PERSON | 0.99+ |
today | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
Peter George terry | PERSON | 0.99+ |
DL- 325 | COMMERCIAL_ITEM | 0.99+ |
2021 | DATE | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.98+ |
AMG Gentlemen | ORGANIZATION | 0.98+ |
Greenland | ORGANIZATION | 0.98+ |
first job | QUANTITY | 0.98+ |
M D | PERSON | 0.98+ |
Richardson | PERSON | 0.98+ |
Green Lake | ORGANIZATION | 0.98+ |
AMD | ORGANIZATION | 0.97+ |
lisa | PERSON | 0.97+ |
One | QUANTITY | 0.97+ |
Apollo 60 500 | COMMERCIAL_ITEM | 0.97+ |
Terry Richardson | PERSON | 0.97+ |
AmY | ORGANIZATION | 0.96+ |
D N H B | ORGANIZATION | 0.96+ |
terry George | PERSON | 0.95+ |
terry | PERSON | 0.94+ |
first generation | QUANTITY | 0.94+ |
H P | ORGANIZATION | 0.93+ |
about 30 different different offerings | QUANTITY | 0.93+ |
boston | LOCATION | 0.93+ |
two divergent operating models | QUANTITY | 0.92+ |
2-3 times | QUANTITY | 0.91+ |
3 65 | OTHER | 0.89+ |
one example | QUANTITY | 0.87+ |
M. D. | PERSON | 0.85+ |
HB | PERSON | 0.84+ |
last 15 months | DATE | 0.84+ |
HPC | ORGANIZATION | 0.82+ |
360° | QUANTITY | 0.81+ |
Guido Appenzeller | HPE Discover 2021
(soft music) >> Welcome back to HPE Discover 2021, the virtual version, my name is Dave Vellante and you're watching theCUBE and we're here with Guido Appenzeller, who is the CTO of the Data Platforms Group at Intel. Guido, welcome to theCUBE, come on in. >> Aww, thanks Dave, I appreciate it. It's great to be here today. >> So I'm interested in your role at the company, let's talk about that, you're brand new, tell us a little bit about your background. What attracted you to Intel and what's your role here? >> Yeah, so I'm, I grew up with the startup ecosystem of Silicon Valley, I came from my PhD and never left. And, built software companies, worked at software companies worked at VMware for a little bit. And I think my initial reaction when the Intel recruiter called me, was like, Hey you got the wrong phone number, I'm a software guy, that's probably not who you're looking for. And, but we had a good conversation but I think at Intel, there's a realization that you need to look at what Intel builds more as this overall system from an overall systems perspective. That the software stack and then the hardware components are all getting more and more intricately linked and, you need the software to basically bridge across the different hardware components that Intel is building. So again, I was the CTO for the Data Platforms Group, so that builds the data center products here at Intel. And it's a really exciting job. And these are exciting times at Intel, with Pat, I've got a fantastic CEO at the helm. I've worked with him before at VMware. So a lot of things to do but I think a very exciting future. >> Well, I mean the, the data centers the wheelhouse of Intel, of course your ascendancy was a function of the PCs and the great volume and how you change that industry but really data centers is where, I remember the days people said, Intel will never be at the data center, it's just the toy. And of course, you're dominant player there now. So your initial focus here is really defining the vision and I'd be interested in your thoughts on the future what the data center looks like in the future where you see Intel playing a role, what are you seeing as the big trends there? Pat Gelsinger talks about the waves, he says, if you don't ride the waves you're going to end up being driftwood. So what are the waves you're driving? What's different about the data center of the future? >> Yeah, that's right. You want to surf the waves, that's the way to do it. So look, I like to look at this and sort of in terms of major macro trends, And I think that the biggest thing that's happening in the market right now is the cloud revolution. And I think we're well halfway through or something like that. And this transition from the classic, client server type model, that way with enterprises running all data centers to more of a cloud model where something is run by hyperscale operators or maybe run by an enterprise themselves of (indistinct) there's a variety of different models. but the provisioning models have changed. It's much more of a turnkey type service. And when we started out on this journey I think the, we built data centers the same way that we built them before. Although, the way to deliver IT have really changed, it's going through more of a service model and we really know starting to see the hardware diverge, the actual silicon that we need to build and how to address these use cases, diverge. 
And so I think one of the things that is probably most interesting for me is really to think through, how does Intel in the future build silicon that's built for clouds, like on-prem clouds, edge clouds, hyperscale clouds, but basically built for these new use cases that have emerged. >> So just a quick, kind of a quick aside, to me the definition of cloud is changing, it's evolving and it used to be this set of remote services in a hyperscale data center, it's now that experience is coming on-prem it's connecting across clouds, it's moving out to the edge it's supporting, all kinds of different workloads. How do you see that sort of evolving cloud? >> Yeah, I think, there's the biggest difference to me is that sort of a cloud starts with this idea that the infrastructure operator and the tenant are separate. And that is actually has major architectural implications, it just, this is a perfect analogy, but if I build a single family home, where everything is owned by one party, I want to be able to walk from the kitchen to the living room pretty quickly, if that makes sense. So, in my house here is actually the open kitchen, it's the same room, essentially. If you're building a hotel where your primary goal is to have guests, you pick a completely different architecture. The kitchen from your restaurants where the cooks are busy preparing the food and the dining room, where the guests are sitting, they are separate. The hotel staff has a dedicated place to work and the guests have a dedicated places to mingle but they don't overlap, typically. I think it's the same thing with architecture in the clouds. So, initially the assumption was it's all one thing and now suddenly we're starting to see like a much cleaner separation of these different areas. I think a second major influence is that the type of workloads we're seeing it's just evolving incredibly quickly, 10 years ago, things were mostly monolithic, today most new workloads are microservice based, and that has a huge impact in where CPU cycles are spent, where we need to put an accelerators, how we build silicon for that to give you an idea, there's some really good research out of Google and Facebook where they run numbers. And for example, if you just take a standard system and you run a microservice based an application but in the microservice-based architecture you can spend anywhere from I want to say 25 in some cases, over 80% of your CPU cycles just on overhead, and just on, marshaling demarshaling the protocols and the encryption and decryption of the packets and your service mesh that sits in between all of these things, that created a huge amount of overhead. So for us might have 80% go into these overhead functions really all focus on this needs to be on how do we enable that kind of infrastructure? >> Yeah, so let's talk a little bit more about workloads if we can, the overhead there's also sort of, as the software as the data center becomes software defined thanks to your good work at VMware, it is a lot of cores that are supporting that software-defined data center. And then- >> It's at VMware, yeah. >> And as well, you mentioned microservices container-based applications, but as well, AI is coming into play. And what is, AI is just kind of amorphous but it's really data-oriented workloads versus kind of general purpose ERP and finance and HCM. So those workloads are exploding, and then we can maybe talk about the edge. How are you seeing the workload mix shift and how is Intel playing there? 
>> I think the trends you're talking about is definitely right, and we're getting more and more data centric, shifting the data around becomes a larger and larger part of the overall workload in the data center. And AI is getting a ton of attention. Look if I talk to the most operators AI is still an emerging category. We're seeing, I'd say five, maybe 10% percent of workloads being AI is growing, they're very high value workloads. So (indistinct) any workloads, but it's still a smaller part of the overall mix. Now edge is big and edge is two things, it's big and it's complicated because of the way I think about edge is it's not just one homogeneous market, it's really a collection of separate sub markets It's, very heterogeneous, it runs on a variety of different hardware. Edge can be everything from a little server, that's (indistinct), it's strapped to a phone, a telephone pole with an antenna on top of it, to (indistinct) microcell, or it can be something that's running inside a car, modern cars has a small little data center inside. It can be something that runs on an industrial factory floor, the network operators, there's pretty broad range of verticals that all looks slightly different in their requirements. And, it's, I think it's really interesting, it's one of those areas that really creates opportunities for vendors like HPE, to really shine and address this heterogeneity with a broad range of solutions, very excited to work together with them in that space. >> Yeah, so I'm glad you brought HPE into the discussion, 'cause we're here at HPE Discover, I want to connect that. But so when I think about HPE strategy, I see a couple of opportunities for them. Obviously Intel is going to play in every part of the edge, the data center, the near edge and the far edge, and I gage HPE does as well with Aruba. Aruba is going to go to the far edge. I'm not sure at this point, anyway it's not yet clear to me how far, HPE's traditional server business goes to the, inside of automobiles, we'll see, but it certainly will be at the, let's call it the near edge as a consolidation point- >> Yeah. >> Et cetera and look the edge can be a race track, it could be a retail store, it could be defined in so many ways. Where does it make sense to process the data? But, so my question is what's the role of the data center in this world of edge? How do you see it? >> Yeah, look, I think in a sense what the cloud revolution is doing is that it's showing us, it leads to polarization of a classic data into edge and cloud, if that makes sense, it's splitting, before this was all mingled a little bit together, if my data centers my basement anyways, what's the edge, what's data center? It's the same thing. The moment I'm moving some workloads to the clouds I don't even know where they're running anymore then some other workloads that have to have a certain sense of locality, I need to keep closely. And there are some workloads you just can't move into the cloud. There's, if I'm generating lots of all the video data that I have to process, it's financially a completely unattractive to shift all of that, to a central location, I want to do this locally. And will I ever connect my smoke detector with my sprinkler system be at the cloud? No I won't (Guido chuckles) this stuff, if things go bad, that may not work anymore. So I need something that's that does this locally. So I think there's many reasons, why you want to keep something on premises. 
And I think it's a growing market, it's very exciting, we're doing some very good stuff with friends like HPE, they have the ProLiant DL, one 10 Gen10 Plus server with our latest a 3rd Generation Xeons on them the Open RAN, which is the radio access network in the telco space. HP Edgeline servers, also a 3rd Generation Xeons there're some really nice products there that I think can really help addressing enterprises, carriers and a number of different organizations, these edge use cases. >> Can you explain, you mentioned Open RAN, vRAN, should we essentially think of that as kind of the software-defined telco? >> Yeah, exactly. It's software-defined cellular. I actually, I learned a lot about that over the recent months. When I was taking these classes at Stanford, these things were still done in analog, that doesn't mean a radio signal will be processed in an analog way and digest it and today typically the radio signal is immediately digitized and all the processing of the radio signal happens digitally. And, it happens on servers, some of them HPE servers. And, it's a really interesting use case where we're basically now able to do something in a much, much more efficient way by moving it to a digital, more modern platform. And it turns out you can actually virtualize these servers and, run a number of different cells, inside the same server. And it's really complicated because you have to have fantastic real-time guarantees versus sophisticated software stack. But it's a really fascinating use case. >> A lot of times we have these debates and it's maybe somewhat academic, but I'd love to get your thoughts on it. And debate is about, how much data that is processed and inferred at the edge is actually going to come back to the cloud, most of the data is going to stay at the edge, a lot of it's not even going to be persisted. And the counter to that is, so that's sort of the negative is at the data center, but then the counter that is there going to be so much data, even a small percentage of all the data that we're going to create is going to create so much more data, back in the cloud, back in the data center. What's your take on that? >> Look, I think there's different applications that are easier to do in certain places. Look, going to a large cloud has a couple of advantages. You have a very complete software ecosystem around you, lots of different services. You'll have first, if you need very specialized hardware, if I wanted to run the bigger learning task where somebody needed a 1000 machines, and then this runs for a couple of days, and then I don't need to do that for another month or two, for that is really great. There's on demand infrastructure, having all this capability up there, at the same time it costs money to send the data up there. If I just look at the hardware cost, it's much much cheaper to build it myself, in my own data center or in the edge. So I think we'll see, customers picking and choosing what they want to do where, and that there's a role for both, absolutely. And so, I think there's certain categories. At the end of the day why do I absolutely need to have something at the edge? There's a couple of, I think, good use cases. One is, let me actually rephrase a little bit. I think it's three primary reasons. One is simply a bandwidth, where I'm saying, my video data, like I have a 100 4K video cameras, with 60 frames per second feeds, there's no way I'm going to move that into the cloud. It's just, cost prohibitive- >> Right. 
>> I have a hard time even getting (indistinct). There might be latency, if I need want to reliably react in a very short period of time, I can't do that in the cloud, I need to do this locally with me. I can't even do this in my data center. This has to be very closely coupled. And, then there's this idea of fade sharing. I think, if I want to make sure that if things go wrong, the system is still intact, anything that's sort of an emergency kind of a backup, an emergency type procedure, if things go wrong, I can't rely on the big good internet connection, I need to handle things, things locally, that's the smoke detector and the sprinkler system. And so for all of these, there's good reasons why we need to move things close to the edge so I think there'll be a creative tension between the two but both are huge markets. And I think there's great opportunities for HP ahead to work on all these use cases. >> Yeah, for sure, top brand is in that compute business. So before we wrap up today, thinking about your role, part of your role is a trend spotter. You're kind of driving innovation righty, surfing the waves as you said, skating to the puck, all the- >> I've got my perfect crystal ball right here, yeah I got. >> Yeah, all the cliches. (Dave chuckles) puts a little pressure on you, but, so what are some of the things that you're overseeing that you're looking towards in terms of innovation projects particularly obviously in the data center space, what's really exciting you? >> Look, there's a lot of them and I pretty much all the interesting ideas I get from talking to customers. You talk to the sophisticated customers, you try to understand the problems that they're trying to solve and they can't solve right now, and that gives you ideas to just to pick a couple, one thing what area I'm probably thinking about a lot is how can we build in a sense better accelerators for the infrastructure functions? So, no matter if I run an edge cloud or I run a big public cloud, I want to find ways how I can reduce the amount of CPU cycles I spend on microservice marshaling demarshaling, service mesh, storage acceleration and these things like that. And so well clearly, if this is a large chunk of the overall cycle budget, we need to find ways to shrink that to make this more efficient. So then I think, so this basic infrastructure function acceleration, sounds probably as unsexy as any topic would sound but I think this is actually really, really interesting area and one of the big levers we have right now in the data center. >> Yeah, I would agree Guido, I think that's actually really exciting because, you actually can pick up a lot of the wasted cycles now and that drops right to the bottom line, but please- >> Yeah, exactly. And it's kind of funny we're still measuring so much with SPEC and rates of CPU's performances, it's like, well, we may actually be measuring the wrong thing. If 80% of the cycles of my app are spent in overhead, then the speed of the CPU doesn't matter as much, it's other functions that (indistinct). >> Right. >> So that's one. >> The second big one is memory is becoming a bigger and bigger issue, and it's memory cost 'cause, memory prices, they used to sort of decline at the same rate that our core counts and then clock speeds increased, that's no longer the case. So we've run to some scaling limits, there's some physical scaling limits where memory prices are becoming stagnant. And this has become a major pain point for everybody who's building servers. 
So I think we need to find ways how we can leverage memory more efficiently, share memory more efficiently. We have some really cool ideas in that space that we're working on. >> Well, yeah. And Pat, let me just sorry to interrupt but Pat hinted to that and your big announcement. He talked about system on package and I think is what you used to talk about what I call disaggregated memory and better sharing of that memory resource. And that seems to be a clear benefit of value creation for the industry. >> Exactly. If this becomes a larger, if for our customers this becomes a larger part of the overall costs, we want to help them address that issue. And the third one is, we're seeing more and more data center operators that effectively power limited. So we need to reduce the overall power of systems, or maybe to some degree just figure out better ways of cooling these systems. But I think there's a lot of innovation that can be done there to both make these data centers more economical but also to make them a little more Green. Today data centers have gotten big enough that if you look at the total amount of energy that we're spending, this world as mankind, a chunk of that is going just to data center. And so if we're spending energy at that scale, I think we have to start thinking about how can we build data centers that are more energy efficient that are also doing the same thing with less energy in the future. >> Well, thank you for laying those out, you guys have been long-term partners with HP and now of course HPE, I'm sure Gelsinger is really happy to have you on board, Guido I would be and thanks so much for coming to theCUBE. >> It's great to be here and great to be at the HP show. >> And thanks for being with us for HPE Discover 2021, the virtual version, you're watching theCUBE the leader in digital tech coverage, be right back. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Guido | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Pat | PERSON | 0.99+ |
Guido Appenzeller | PERSON | 0.99+ |
60 frames | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
1000 machines | QUANTITY | 0.99+ |
100 | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
ORGANIZATION | 0.99+ | |
two | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Gelsinger | PERSON | 0.99+ |
25 | QUANTITY | 0.99+ |
Data Platforms Group | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
third one | QUANTITY | 0.99+ |
one party | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
10 years ago | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
Today | DATE | 0.98+ |
VMware | ORGANIZATION | 0.97+ |
ProLiant DL | COMMERCIAL_ITEM | 0.97+ |
three primary reasons | QUANTITY | 0.96+ |
second | QUANTITY | 0.96+ |
Data Platforms Group | ORGANIZATION | 0.94+ |
10% percent | QUANTITY | 0.93+ |
Open RAN | TITLE | 0.9+ |
over 80% | QUANTITY | 0.89+ |
single family home | QUANTITY | 0.88+ |
HPE Discover | ORGANIZATION | 0.87+ |
HPE | TITLE | 0.85+ |
vRAN | TITLE | 0.85+ |
couple | QUANTITY | 0.82+ |
Stanford | ORGANIZATION | 0.81+ |
4K | QUANTITY | 0.79+ |
telco | ORGANIZATION | 0.79+ |
Aruba | LOCATION | 0.79+ |
second feeds | QUANTITY | 0.78+ |
couple of days | QUANTITY | 0.77+ |
one thing | QUANTITY | 0.77+ |
HPE Discover 2021 | EVENT | 0.75+ |
10 Gen10 Plus | COMMERCIAL_ITEM | 0.75+ |
HP | EVENT | 0.75+ |
Edgeline | COMMERCIAL_ITEM | 0.74+ |
theCUBE | ORGANIZATION | 0.67+ |
Jerome Lecat and Chris Tinker | CUBE Conversation 2021
>>Hello and welcome to this CUBE Conversation. I'm John Furrier, a host of theCUBE, here in Palo Alto, California. We've got two great remote guests to talk about some big news hitting with Scality and Hewlett Packard Enterprise: Jerome Lecat, CEO of Scality, and Chris Tinker, distinguished technologist from HPE, Hewlett Packard Enterprise. Jerome, Chris, great to see you both. CUBE alumni from the original gangster days, as we say, back when we started almost 11 years ago. Great to see you both. >>It's great to be back. >>Great to be here. >>So, really compelling news around this next-generation storage, cloud-native solution. It's really an impact on the next gen, what I call next-gen DevOps meets the modern application world, something we've been covering heavily. There's some big news here around Scality and HPE offering a pretty amazing product. You guys introduced essentially the next-gen piece of it, ARTESCA, which we'll get into in a second. This is a game-changing announcement. You announced it as a continuing evolution; I think it's more of a revolution, because storage is the abstraction layer evolving toward this app-centric world. So talk about the environment we're in, and then we'll get to the announcement, which is object store for modern workloads. Jerome, this whole shift is happening; it's a game changer for storage and for the customers who are going to be deploying workloads. >>Yeah, it is. I personally started working on Scality more than 10 years ago, 15 now, and if we think about it, cloud has really revolutionized IT, and within the cloud we really see layers and layers of technology. We all started around 2006 with Amazon and Google finding ways to do, initially, consumer IT at very large scale, at very low cost, with incredible reliability, and then slowly it crept into the enterprise. At the very beginning, I would say everyone was kind of a wizard trying things and coupling technologies together, and to some degree we were some of the first wizards doing this. But we're now close to 15 years later, and there's a lot of knowledge, a lot of experience, a lot of tools, and this is really a new generation. I'll call it cloud native, you can call it next gen or whatever, but there is now enough experience in the world, both at the development level and at the infrastructure level, to deliver truly distributed, automated systems that run on industry-standard servers. Obviously, good-quality servers deliver a better service, but there is now enough knowledge for this to truly go at scale, and you can call that cloud or cloud native. Really, the core concept here is to deliver scalable IT at very low cost and a very high level of reliability, all based on software. We've participated in this evolution, but we feel that the next draft of what's coming is at a new level, and it was time for us to think, develop, and launch a new product that is specifically adapted to that. And Chris, I will let you comment on this, because some of these customers are yours too, and you can add a customer view to that. >>Well, you know, you're right. I've been, like you have, in this industry for a long time, a little longer than 20 years, at HPE in engineering, and the landscape has changed in how we're doing scale-out, software-defined storage for particular workloads, and where a catalyst has evolved
is analytics. What was normally only done with the three-letter acronyms and massively scaled-out POSIX namespace file systems, parallel file systems, has encroached into the enterprise world, where enterprises needed a way to simplify operations. How do I bring about an application that can run in the public cloud, or on-premise, or hybrid? How do I look at a workload and a stack that align the actual cost to the analytics I'm going to be doing, the workload I'm going to be running, bridge those gaps, spin this up, and keep operations simple? And if you're familiar with these parallel file systems, which by the way we actually have in our portfolio, and I engineer those, they have their own unique challenges. But in the world of the enterprise, where customers are looking to simplify operations and take advantage of new application and analytic workloads, whether it be Spark or whatever it might be, if I want to spin up MongoDB or maybe an Elasticsearch capability, how do I take those technologies and embrace a modern scale-out storage stack without breaking the bank, while still providing simple operations? That's why we look to object storage capabilities, because they bring us this massive parallelization. Thank you. >>Well, before we get into the product, I want to just touch on one thing you mentioned. Chris, you brought up the DevOps piece, next gen, next level, whatever term you use; it is cloud native. Cloud native has proven that DevOps and infrastructure as code are not only legit but being operationalized in all enterprises, and add security in there and you have DevSecOps. This is the reality, and hybrid cloud in particular has been pretty much the consensus, the standard, or de facto standard, whatever you want to call it. That's happening, with multi-cloud on the horizon. So these new workloads bring new architectural changes: cloud, on-premises, and edge. This is the number one story, and the number one challenge all enterprises are now working on: how do I build the architecture for cloud, on-premises, and edge? This is forcing the DevOps team to flex and build new apps. Can you guys talk about that particular trend, and is it relevant here? >>Yeah, let me talk about really storage anywhere and cloud anywhere. Really, the key concept is edge to core to cloud. We all understand now that the edge will host a lot of data, and the edge is many different things. It's obviously a smartphone, whatever that is, but it's also factories, it's production, it's moving machinery, trains, planes, satellites; that's all the edge, cars obviously, and a lot of that data will be both produced and processed there. But from the edge you will want to be able to send that data for analysis, for backup, for logging, to a core. And that core could be regional, maybe not one core for the whole planet, but maybe one per region or per state in the US. And then from there, you will also want to push some of the data to the cloud. One of the things that we see more and more is that the DR data center, the disaster recovery, is not another physical data center; it's actually the cloud, and that's a very efficient, very cost-effective infrastructure.
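(Editor's note: to make the parallel-access point above concrete, here is a minimal, hypothetical sketch of the kind of concurrent S3-style access an object store is designed to serve. The endpoint URL, credentials, bucket, and key prefix are placeholders invented for illustration; they are not details from the interview or from any specific product.)

```python
# Hypothetical sketch: many concurrent GETs against an S3-compatible endpoint.
# Endpoint, credentials, bucket, and key prefix are placeholders, not product details.
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.local",  # any S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

def fetch(key):
    # Each worker issues an independent GET; object stores scale by serving
    # many such requests in parallel instead of funneling them through a
    # single shared file-system namespace.
    body = s3.get_object(Bucket="analytics-data", Key=key)["Body"].read()
    return len(body)

listing = s3.list_objects_v2(Bucket="analytics-data", Prefix="events/")
keys = [obj["Key"] for obj in listing.get("Contents", [])]

with ThreadPoolExecutor(max_workers=32) as pool:
    total_bytes = sum(pool.map(fetch, keys))

print(f"read {len(keys)} objects, {total_bytes} bytes in parallel")
```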
So really it's changing the paradigm of how you think about storage, because you need to integrate these three layers in a consistent approach, especially around the topic of security. You want the data to be secure all along the way, and the data is not just the data itself; it's who can access the data, who can modify the data, and what conditions allow modification or automatic erasure. In some cases it's super important that data be automatically erased after, say, ten years, and all of this needs to be enforced from edge to core to cloud. So that's one of the aspects. Another aspect that resonates for me with what you said is a word you didn't say, but it's actually crucial to this whole revolution: Kubernetes. Kubernetes is now a mature technology, and it's the next level of automated operations for distributed systems, which we didn't have five or ten years ago. That is so powerful that it's going to allow application developers to build much faster systems that can be distributed, again edge to core to cloud, because it's an underlying technology that spans the three layers. >>Chris, your thoughts? Hybrid cloud: I've been having conversations with the HPE folks for years and years on hybrid cloud, and now it's here. >>Well, you know, it's exciting, right? Look at enterprise virtualization, which is a scale-out, general-purpose virtualization workload, or the analytic workloads, or data protection; orchestration is paramount to all of this. If you look at DevSecOps, securing the actual data, the digital asset, is absolutely paramount. And if you look at how we do this, look at the investments we're making, and at the collaborative platform development that goes into our partnership with Scality, we're providing an integral aspect of everything we do, whether we're bringing in Ezmeral, which is our software orchestration, with its control plane controlling Kubernetes, able to control the container clusters and the actual backing store for all the analytics. And we just talked about how a web-scale-out workload that traditionally used a POSIX namespace has now been modernized to take advantage of newer technologies, running on NVMe burst buffers or 100-gig networks, with Slingshot networking at 200 and 400 gigabit. Look at how we actually get the analytics workload to the CPU and have it attached to the data at rest: where is the data, how do we land the data, and how do we align the locality of the actual asset to the compute? This is where we can leverage Azure or Google or your favorite hyperscaler, leverage those technologies and the actual persistent store, and this is where Scality, with this object store capability, has been an industry trendsetter, setting the landscape for how to provide an object store on premises and in hybrid cloud, reaching into the public cloud, able to facilitate data mobility and tie it back to an application.
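(Editor's note: Jerome's point above about data that must be automatically erased after a fixed period, and about controlling who can modify it, maps onto standard S3-style retention APIs. The sketch below is hypothetical; it assumes an S3-compatible endpoint that supports lifecycle rules and Object Lock, with the bucket created with Object Lock enabled, and all names and periods are illustrative only.)

```python
# Hypothetical sketch: expressing retention policies through standard S3 APIs.
# Assumes an S3-compatible endpoint with lifecycle and Object Lock support,
# and a bucket that was created with Object Lock enabled. Names are invented.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.local")

# Automatically expire (delete) log objects roughly ten years after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="edge-logs",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-ten-years",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 3650},
            }
        ]
    },
)

# Conversely, write an object that cannot be modified or deleted until the
# retention date passes (write-once, read-many style protection).
s3.put_object(
    Bucket="edge-logs",
    Key="audit/2021-05-01.json",
    Body=b"{}",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=365),
)
```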
And this is where a lot of things have changed in the world of analytics, because the applications, the newer technologies coming on the market, have taken advantage of this particular protocol, S3, so they can do web-scale, massively parallel, concurrent workloads. >>You know what, let's get into the announcement. I love cool and relevant products, and I think this hits the mark. Scality, you have ARTESCA, which was just announced, and we reported on it: a lightweight, true enterprise-grade object store software for Kubernetes. This is the announcement, Jerome. Tell us about it. What's the big deal? Cool and relevant? Come on. >>This is cool. All right, tell us. >>I'm super excited. I'm not sure it shows on screen, but I'm super, super excited. We introduced the RING 11 years ago, and this is our biggest announcement in the past 11 years, so yes, do pay attention. After looking at all these trends and understanding where we see the future going, we decided that it was time to start from a blank slate. There's not one line of code that's the same as the previous-generation product. They will both coexist; they both have space in the market, and ARTESCA was specifically designed for this cloud-native era. What we see is that people want something lightweight, especially because it has to go to the edge. They still want the enterprise grade that Scality is known for, and it has to be modern. What we really mean by modern is that we see object storage becoming the primary storage for more and more applications, so we have to deliver the performance that primary storage expects. This idea of Scality serving as primary storage is actually not completely new. When we launched Scality 10 years ago, the first application we supported was consumer email, for which we were, and still are today, the primary store. So we know what it is to be the primary store, we know what level of reliability you need to hit, and we know that latency is different from throughput; you really need to optimize both. I think that, still today, we're the only object storage company that protects data with both replication and erasure coding, because we understand that replication is faster, while erasure coding is better suited to larger files, where pure latency matters less. So we've brought all that experience, but really rethought the product for the new generation that is here now. And a little bit more about the product: it's software. Scality is a software company, and that's why we love to partner with HPE, who produce amazing servers. For the record and for history, the very first deployment of Scality in 2010 was on HPE servers, so this is a long love story here. And to come back to ARTESCA, it is lightweight in the sense that it's easy to use. We can start small, from just one server or one VM instance, I mean really small, and it can grow infinitely. The fact that we can start small didn't make us limit the technology; you can go from one to many. And it's cloud native in the sense that it's completely Kubernetes compatible, it's Kubernetes orchestrated, and it will deploy on many Kubernetes distributions. We're obviously talking with Ezmeral, we're also talking with Ponzu and the other Kubernetes distributions, and it will also be able to run in the cloud. I'm not sure there will be many true production deployments of ARTESCA in the cloud, because you already have really good object storage from the cloud providers, but when you are developing something and you want to test it, doing it in the cloud is very practical, so you'll be able to deploy ARTESCA on a cloud Kubernetes distribution. And it's modern object storage in the sense that it's application-centric. A lot of our work is actually validating that our storage is fit for a given application, making sure we understand the requirements of that application so we can guide our customers on how to deploy. It's really designed to be the primary storage for these new workloads. >>The big part of the news is your relationship with Hewlett Packard Enterprise: some exclusivity here as part of this announcement. You mentioned the relationship goes back many, many years; we've covered your relationship in the past. Chris, you know we cover HPE like a blanket. This is big news for HPE as well. What is the relationship? Talk about this exclusivity. Could you share about the partnership and the exclusivity piece? >>Well, the partnership expands into the pan-HPE portfolio. We made a massive investment in edge and IoT devices. So how do we align the cost to the demand? Our customers come to us wanting to look at, say, what we're doing with GreenLake, a consumption-based model; they want to consume the asset without having to make a capital outlay out of the gate. Number two, look at how you deploy technology against demand; it depends on the scale. In a lot of these web-scale, scale-out technologies, putting them on a diet is challenging, meaning how skinny can you get them, getting down into the 50-terabyte range, and then there are the complexities of those technologies as you take a day-one implementation and scale it out over multiple iterations across multiple quarters; the growth becomes a challenge. Working with Scality, we believe we've actually cracked this nut. We figured out, number one, how to start small without limiting customers' ability to scale out incrementally or dramatically, depending on the quarter, the month, whatever the workload is; how do you align it and be able to consume it? So now, whether it be on our Edgeline products or our DL products, and, as was mentioned earlier, we ship a server every few seconds, so that won't be a problem, and then of course into our density-optimized compute with the Apollo products, this is where our two companies have worked on an exclusivity where the Scality software bonds with the HPE ecosystem, and we can of course provide our customers the ability to consume that through our GreenLake financial models or through a traditional CapEx purchase. >>Awesome. So Jerome and Chris, who's the customer here? Obviously there's an exclusive period. Talk about the target customer, and how do customers get the product, how do they get the software, and how does this exclusivity with HPE fit into it? >>Yeah. So there are really three types of customers, and we worked a lot with a UX design firm to optimize the user interface for each of the three types of customers. We really thought about each customer role and providing each of them the best product. The first type of customer is application owners who are deploying an application that requires an object store in the back end. They typically want a simple object store for one application; they want it to be simple and to just work, and frankly they wanted it yesterday. They want to be able to start as small as their application starts; often it's a first deployment, maybe a small one. Applications like backup, like Veeam or Rubrik, or analytics, or the file-services software now available that layers a really great distributed NAS on top of an object store in the back end; and for high-performance computing, the Weka file system is an amazing file system. We also have vertical applications like Broadpeak, for example, who provide origin and video-delivery software for broadcasters. All these applications require an object store in the back end, and you just need a simple, high-performance object store that works well, and ARTESCA is perfect for that. The second type of people we think will be interested in ARTESCA are developers who are building cloud-native, Kubernetes-orchestrated applications. As part of their development stack, when you're developing a cloud-native application, it's getting better and better to target object storage rather than NFS as your persistence layer. Just think about generations of technologies: NFS and file systems were great 25 years ago, and they're amazing technology, but now, when you want to develop a distributed, scalable application, object storage is a better fit because it's of the same generation. So these developers need an object store they can develop against; they want it very lightweight, but they also want a product that their enterprise or their customers can rely on for years and years, and ARTESCA is really great for that. The third type of customer is more the architects, the security architects, who are designing systems with 50 factories, 1,000 planes, or a million cars that each have some local storage, which they want to replicate to the core and possibly also to the cloud. What they're designing are really new-generation workloads that are incredibly distributed but with local storage, and these folks are really grateful for ARTESCA. >>And talk about the HPE exclusive, Chris. How does that fit in? Do they buy through Scality? Can they get it from HPE? Are you guys working together on how customers can procure it? >>Yes, both ways. They can procure it through Scality, and they can procure it through HPE, and it is the software stack running on our density-optimized compute platforms, whichever you choose, to provide enterprise quality. Because it comes back, in all of these use cases, to how we line up to a true enterprise stack: bringing about multi-tenancy, and bringing in things like local erasure coding so that we can get down into the DL325. So with the exclusivity, you actually get choice, and that choice spans our entire portfolio, whether it be the Edgeline platform, the DL325 AMD processing stack, the Intel DL380s, or the Apollos; there are so many choices there that facilitate this, and it allows us to align those two strategies. >>Awesome. And I think the Kubernetes piece is really relevant, because I've been interviewing practitioners, and Kubernetes is maturing fast. It's definitely the centerpiece of cloud native, both below the line, if you will, under the hood for the infrastructure, and for the apps people want to program on top of it. That's critical. I mean, Jerome, this is the future. >>Yeah. And if you don't mind, I'd like to come back for a minute on the exclusive with HPE. We did a six-month exclusive, and the very reason we could do this is because HPE has such a breadth of server portfolio. We can go from really simple, very inexpensive HDD-based machines on the DL380, a machine that retails at a very low price point, really a simple 50-terabyte system, up to the DL325 that Chris mentioned, which is really a powerhouse, all-NVMe storage with very fast processors, or dense, large systems like the Apollo 4500. So it's a very large breadth of portfolio; we support the whole portfolio, and we work together on this. I want to send kudos to HPE for the breadth of the server line. ARTESCA, as mentioned, can be ordered from either company, hand in hand together, so you'll see both of us, and our field teams are working incredibly well together. >>Just on that point, for clarification, was this co-designed by Scality and HPE? Because Chris, you mentioned the configuration of your systems. Can you quickly talk about the co-design? >>From the code base, the software was entirely designed and developed by Scality. From a testing and performance standpoint, it really was joint work, with HPE providing both hardware and manpower so that we could accelerate the testing phase. >>You know, Chris, HPE has just been doing such a great job of staying focused on this. I've been covering it for years, before it was fashionable: the idea of apps working no matter where they live, public cloud, data center, edge. You mentioned Edgeline has been around for a while. App-centric, developer-friendly, cloud-first has been kind of a guiding principle at HPE for many, many years. >>It has, and as our CEO Antonio Neri has said, by 2022 everything in our portfolio will be able to be consumed as a service. This stack gives us the simplicity and consumability of the technology, and its granularity lets us simplify the installation, simplify the actual deployment, and bring it into a cloud ecosystem. But more importantly, for the end customer, they simply get an enterprise-quality product running on a density-optimized stack that they can consume through an orchestrated, simple interface. That's what they're asking for today. They come to me and say, hey, I've got this new app, this new project, and it goes back to who's actually coming to us: it's no longer the IT people, it's the lines of business. It's that entire dimension of business owners coming to us saying, this is my challenge, how can HPE help us? And we rely on our breadth of technology, but also a breadth of partners, and of course Scality, hand in hand with our collaborative business unit and our collaborative storage product engineering group, actually brought this to market. So we're very excited about this solution. >>Chris, thanks for that input, great insight. Jerome, congratulations on a great partnership with HPE, obviously a great joint customer base, and congratulations on the product release here: really moving the ball down the field, as they say, with new functionality, a cloud-native object store, phenomenal. So to wrap up the interview, tell us your vision for Scality and the future of storage. >>Yeah. Scality is going to be an amazing leader; it already is. I have three themes that I think will govern where storage is going. Obviously, as Marc Andreessen said, software is everywhere and software is eating the world, so that's definitely going to be true in the data center, and in storage in particular. But the three trends that are more specific: first, I think that security, performance, and agility are now basic expectations. They're not additional features; they're just table stakes. The second thing, and we've talked about it during this conversation, is edge to core to cloud: you need to think about your platform across edge, core, and cloud. You don't want separate systems, separate designs, separate interface points for the edge, then the core, then the cloud, and then the devices; all of this needs to be integrated in the design. And the third thing I see as a major trend for the next 10 years is data sovereignty. More and more, you need to think about where the data resides, what the legal constraints are, what the level of protection is and who you are protected against, and what your independence strategy is: how do you, as a company, remain independent from the providers you depend on? And I say companies, but this is also true for public services. So those, for me, are the three big trends. I do believe that software-defined, distributed architectures are necessary for these, but you also need to think about being truly enterprise grade, and that has been one of our focuses with the design of ARTESCA: how do we combine a lightweight product with all of the security and sovereignty requirements that we expect to see over the next 10 years? >>That's awesome. Congratulations on the news, Scality ARTESCA, the big release with the HPE exclusive for six months. Chris Tinker, distinguished technologist at HPE, great to see you. Jerome Lecat, CEO of Scality, great to see you as well. Congratulations on the big news. I'm John Furrier for theCUBE. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jerome | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Chris Tinker | PERSON | 0.99+ |
two companies | QUANTITY | 0.99+ |
Hewlett Packard | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Jessica | PERSON | 0.99+ |
Mark Andrews | PERSON | 0.99+ |
US | LOCATION | 0.99+ |
1000 planes | QUANTITY | 0.99+ |
2000 | DATE | 0.99+ |
jeremy | PERSON | 0.99+ |
200 | QUANTITY | 0.99+ |
50 factories | QUANTITY | 0.99+ |
Jerome Lecat | PERSON | 0.99+ |
Tesco | ORGANIZATION | 0.99+ |
six months | QUANTITY | 0.99+ |
100 gig | QUANTITY | 0.99+ |
three types | QUANTITY | 0.99+ |
jerome | PERSON | 0.99+ |
katz | PERSON | 0.99+ |
six month | QUANTITY | 0.99+ |
chris | PERSON | 0.99+ |
50 terabyte | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
10 years | QUANTITY | 0.99+ |
$4 | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
chris tucker | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
Hewlett Packard Enterprises | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
each | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Palo alto California | LOCATION | 0.99+ |
10 years ago | DATE | 0.99+ |
First | QUANTITY | 0.99+ |
11 years ago | DATE | 0.99+ |
Edge Co | ORGANIZATION | 0.99+ |
chris Tinker | PERSON | 0.99+ |
third thing | QUANTITY | 0.99+ |
a million cars | QUANTITY | 0.98+ |
15 years later | DATE | 0.98+ |
L 3 80 | COMMERCIAL_ITEM | 0.98+ |
two strategies | QUANTITY | 0.98+ |
one application | QUANTITY | 0.98+ |
25 years ago | DATE | 0.98+ |
second thing | QUANTITY | 0.98+ |
first application | QUANTITY | 0.98+ |
second | QUANTITY | 0.98+ |
third type | QUANTITY | 0.98+ |
2022 | DATE | 0.98+ |
one server | QUANTITY | 0.98+ |
first department | QUANTITY | 0.97+ |
five | DATE | 0.97+ |
three themes | QUANTITY | 0.97+ |
one thing | QUANTITY | 0.97+ |
three letter | QUANTITY | 0.97+ |
Both ways | QUANTITY | 0.97+ |
one line | QUANTITY | 0.97+ |
today | DATE | 0.96+ |
Apollo 4500 | COMMERCIAL_ITEM | 0.96+ |
H P E. | ORGANIZATION | 0.96+ |
11 VM | QUANTITY | 0.96+ |
Compute Session 03
>> Hello, and welcome to this session on experiencing secure, agile hybrid cloud for your apps and data. My name is Andrew Labor. I'm a worldwide business unit product manager, HCI Solutions, with HPE, and I'm joined by my teammate Jeff Corcoran, who is in Go-to-Market Program Solutions for HPE as well. And with that, let's just dive right into it. Well, everybody has apps and data. They're all over the place. They live on your phones, your computers and the cloud, and servers are everywhere — apps and data are all over the place. So what can we really do about moving forward with modernizing that? We have expectations for personalized, instant and engaging experiences that are the benchmark of your experience, and more speed and agility are more paramount than ever. You see a world where apps and data, like I mentioned, are live and all over the place, and that data explosion is happening at the edge, where 75% of data is now created, moving us from a data center to many locations and many centers of that data. We have a digital transformation that has reached only a fraction of that. And we have modern cloud experiences for speed and agility, and we want to really push that into an on-premises reality where data has the gravity, security, formats and compliance that you require. You really want that data transformation that somehow remains elusive for most outside of the public cloud. We want that true private, on-premises cloud infrastructure that translates to your hybrid cloud, where you already have your apps and data live in the public cloud. And as I mentioned, 70% of the apps are out in the public cloud, and we really want to be able to bring that into the local environment as well. On-premises gives you more flexibility and more agility, and only HPE brings the cloud experience to apps and data everywhere. We define that right mix for you to move your data to the local environment, and with that we have an approach that's any cloud, anywhere, and we have the expertise to help you define that right mix of cloud for your enterprise. We also create modern, agile platforms for innovation, where we bring your non-native, traditional apps that are slowing you down into a modern, enabled cloud experience together with cloud-native apps, to achieve that speed and agility I mentioned, being able to create a consistent strategy for you and your infrastructure. We also consume everything as a service, everywhere. We bring the modern cloud experience to you and your apps and data with self-service ease, being able to scale up or down depending on usage and flexibility, with pay-per-use, all managed for you with HPE GreenLake services, the market-leading infrastructure-as-a-service platform for well over a decade. We also unify that hybrid cloud estate, being able to move operations to a cloud-native CloudOps process, managed for you with one unified management platform, HPE GreenLake Central. This helps you manage and unify your applications across cloud-native and non-cloud-native workloads, and drive insights and control for operational excellence. And we do that by defining the right mix of cloud for you with HPE Pointnext Services; we're able to assess applications to determine the right mix for your business objectives.
With HPE Pointnext Services, we have cloud and technology experts on hand and ready to task for you, to assess your existing IT infrastructure strategy, identify trapped capital that you might not even notice is there, and help you assess your people and teams to identify critical gaps in your cloud journey. Finally, HPE Pointnext Services experts can determine the right mix of cloud strategy for you, and help you move and migrate your data into an environment that's optimized for every workload. And we do that by creating a modern, agile platform for innovation, and we achieve the speed and agility you want with our portfolio of software-defined, rack-optimized HPE ProLiant and HPE Synergy infrastructure, using that composable cloud — a composable infrastructure platform that we support through our intellectual property and through leading partner cloud solutions. And who is that? That's VMware, with Cloud Foundation. VMware Cloud Foundation is the perfect blend with HPE Synergy and HPE ProLiant to create that universal hybrid cloud platform for both modern and traditional applications. Cloud Foundation is characterized by many tenets, such as developer-ready infrastructure, which creates that automated, full-stack experience to help you get ready to do your development across apps and infrastructure; universal platforms, a single platform for virtual machines and containers; application-focused management, to simplify your management of multiple application resources; and a foundation for that hybrid cloud I described, being able to extend that same software stack to the public cloud — you connect to your flavor of choice for public cloud consumption. And together with HPE solutions and VMware Cloud Foundation, we create that perfect platform for a consistent hybrid cloud experience, from mid-market to large enterprise customers. We are transforming that traditional IT to a virtualized data center. Our goal is to help you move quickly and be agile as you digitally transform the software-defined data center supporting that hybrid infrastructure. HPE and VMware have been working together for years, and we are providing a simple experience for hybrid cloud that you can create and deliver to show value instantly and continuously, achieving faster innovation, consistent operations and reduced costs. And how do we do that together? With HPE solutions for VMware Cloud Foundation, we've revolutionized the data center by building a single, consistent hybrid cloud experience that delivers greater agility and simplicity, with five times faster automation tools for building out your infrastructure and getting to market quicker, and by validating the solution stack, where we have end-to-end, fully tested and validated solutions that reduce your complexity and allow you to consolidate your VMs and your containers into one environment, seamlessly. We also integrate management. We have unique, deep management integration and automation through firmware lifecycle management, via vLCM on the VMware side, to simplify IT and deliver more agility to your infrastructure as well as your software-defined data center. And then we also have services with HPE Pointnext that accelerate time to deployment, using HPE GreenLake and providing an as-a-service experience where we bring the cloud to you. And we bring that with an enhanced, holistic 360-degree view of security that begins in the manufacturing supply chain of our servers and concludes with safeguarded end-of-life decommissioning.
We power that with the recently announced Gen10 Plus servers, HPE ProLiant and HPE Synergy, and integrate Silicon Root of Trust technology, offering protection, detection and recovery from attacks, industry-leading encryption and firmware protection. And finally, all of that is brought together with HPE OneView. We take HPE OneView as the management solution, which transforms all of the compute, storage and networking into one software-defined infrastructure. Through HPE OneView we offer a template-driven approach for deploying, provisioning, updating and integrating compute, storage and networking, all together in one infrastructure. HPE OneView uses those software templates — a single line of code — to deploy, manage and compose all of the physical resources required for that application, virtual host or container infrastructure. We deliver the flexibility to compose different tiers of storage, as well as types of provisioning, with HPE OneView through direct or attached fabric, using Cloud Foundation and HPE Primera. And now I'd like to ask my coworker Jeff to dive into some customer experiences around the hybrid cloud. Jeff, take it away. >> Thanks, Andrew. I think a great way to follow up and talk about our solutions is to really look at how one of our customers is enabling this transformation. Wedbush Securities is one of the leading financial services firms in the US, providing private and institutional clients securities brokerage, wealth management and investment banking services. The company is headquartered in Los Angeles, California and has about 100 offices across the United States. To meet increasingly rigorous financial regulations, ensure more resilient operations, mitigate the threat of earthquakes in the Los Angeles area and increase operational efficiencies, Wedbush was looking for a transformation, a change to the way they were operating. To do this, Wedbush partnered with Lumen and HPE to develop a new private, cloud-based data center using Lumen Private Cloud on VMware Cloud Foundation. This was located in Lumen's Dallas hosting center, using HPE ProLiant DL360 Gen10s to create a hyperconverged, high-performance infrastructure with integrated software-defined networking and security. To date, Wedbush has migrated its entire production facility to this private cloud. The virtual machines support a range of business applications, including Refinitiv, Thomson Reuters and other financial systems. They're also hosting Wedbush's in-house broker management tool, Microsoft SQL Server and MongoDB. Now, how did this impact them? They were able to cut their financial reporting time from five hours down to 58 minutes. At the same time, they reduced the time it takes to deploy these infrastructure resources by 50%. So this allows them to deploy a modern IT infrastructure for performance, reliability and efficiency improvements. The net impact on their business was that it reduced their analytics costs by 27%, increased their business agility, allows them to develop new lines of business faster, and increases their compliance with the new FINRA financial regulations — with HPE GreenLake, the cloud that comes to you.
HPE GreenLake brings that cloud experience — self-serve, pay-per-use, scale up and down, and managed for you by HPE and our partners — to apps and data everywhere, whether they're at the edge, in colocations or in data centers, enabling you to free up capital, boost operational and financial flexibility, and free up talent to accelerate what's next for you and your business. With HPE GreenLake, customers get cloud services that are production-ready, elastic for any scale, with a simple experience delivered to customer locations in as little as 14 days. Now, let's take a look at how some of our customers are experiencing the benefit of HPE GreenLake. As the voice of Austrian business, the Austrian Economic Chamber delivers advocacy and support to over 500,000 companies and trade groups, thereby helping to foster the country's robust economic growth. However, a policy of fiscal prudence led to a mandated 30% cost reduction, and the chamber's IT service provider needed to cut costs without compromising service levels. To do this, they turned to HPE to pair a future-proof composable infrastructure with a consumption-based support model in HPE GreenLake. Now, both the internal and regional chamber offices are getting better performance, and faster access to IT services is enabling them to focus more than ever on boosting critical Austrian economic forces and sectors. HPE is here to help you accelerate your transformation. We just talked about GreenLake, which enables you to deploy any workload as a service, and with HPE GreenLake services you can bring that cloud-like speed, agility and as-a-service model to where your data and apps live today. It enables you to transform the way you do business with one experience and one operating model across your distributed clouds, for apps and data at the edge, in colocations and in data centers. With HPE Pointnext Services, they have conducted over 11,000 IT projects and over 1.4 million customer interactions each and every year. HPE Pointnext Services' 15,000-plus experts and its vast ecosystem of solution partners and channel partners are uniquely able to help you at every stage of your digital transformation journey, because we address some of the biggest areas of concern that can slow you down. We bring together technology and expertise to help you drive your business forward. Lastly, with HPE Financial Services, flexibility and investment capacity are key considerations for businesses driving digital transformation initiatives. In order to forge a path forward, you need access to flexible payment options that allow you to match your IT costs to usage — from helping release capital from existing infrastructure, to deferring payments, to providing pre-owned technology to relieve capital strain. HPE Financial Services unlocks the value of your entire estate, from edge to cloud to end user, with multi-vendor solutions, consistently and sustainably around the world. HPEFS helps you create the financial capacity to transform your business. Why HPE? We have the experience to get you there — over 1,000 successful cloud migrations. We have the expertise to help you at any stage, to accelerate adoption of any cloud or financial model, and to help you deploy the cloud experience for your apps and data. We're open to any cloud strategy, with deep expertise across Azure, AWS and Google Cloud. We have unbiased expertise and IP to accelerate your right mix of clouds for your enterprise, and we can tie that all together with IT
as a service from our market-leading platform, HPE GreenLake. After you've viewed this session, we have a lot of resources that you can use to help you continue your digital transformation and educate yourself. You'll find links here on the slide to a lot of different products and solution areas, as well as the social media channels we use to engage with you. Thank you for joining. We hope you find this session useful. Have a great day.
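Earlier in the session, Andrew described HPE OneView's template-driven approach: composing compute, storage and networking from software templates with what amounts to a single declarative request. The sketch below is a hypothetical illustration of that idea only — the appliance address, credentials, API version, endpoint paths and field names are assumptions made for the example, so the actual HPE OneView REST API documentation should be consulted for the real resource model.

```python
# Hypothetical sketch: compose a server from a profile template over a REST API,
# in the spirit of the template-driven approach described in the session.
# Endpoints, headers, and field names are illustrative assumptions, not a verified API.
import requests

ONEVIEW = "https://oneview.example.local"   # placeholder appliance address

# Authenticate and obtain a session token (request body shown as an assumption).
auth = requests.post(
    f"{ONEVIEW}/rest/login-sessions",
    json={"userName": "admin", "password": "example-password"},
    verify=False,
).json()
headers = {"Auth": auth["sessionID"], "X-API-Version": "2000"}

# One declarative request composes the physical server from a template.
profile = {
    "name": "edge-video-node-01",
    "serverProfileTemplateUri": "/rest/server-profile-templates/EXAMPLE-TEMPLATE-ID",
    "serverHardwareUri": "/rest/server-hardware/EXAMPLE-BAY-ID",
}
resp = requests.post(
    f"{ONEVIEW}/rest/server-profiles", json=profile, headers=headers, verify=False
)
print(resp.status_code, resp.json())
```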
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Corcoran | PERSON | 0.99+ |
Andrew | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Jeff | PERSON | 0.99+ |
US | LOCATION | 0.99+ |
50% | QUANTITY | 0.99+ |
lumen | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
70 | QUANTITY | 0.99+ |
five hours | QUANTITY | 0.99+ |
Wedbush | ORGANIZATION | 0.99+ |
14 days | QUANTITY | 0.99+ |
27% | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
Los Angeles | LOCATION | 0.99+ |
58 minutes | QUANTITY | 0.99+ |
30 cost | QUANTITY | 0.99+ |
HP Green Lake | ORGANIZATION | 0.99+ |
los Angeles California | LOCATION | 0.99+ |
Wedbush Security | ORGANIZATION | 0.99+ |
Green Lake | ORGANIZATION | 0.99+ |
over 500,000 companies | QUANTITY | 0.99+ |
Hp Green | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
HP Green Lake | ORGANIZATION | 0.98+ |
75 of data | QUANTITY | 0.98+ |
one experience | QUANTITY | 0.98+ |
Hc I Solutions | ORGANIZATION | 0.98+ |
Reuters | ORGANIZATION | 0.98+ |
BMR Cloud Foundation | ORGANIZATION | 0.98+ |
about 100 offices | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
Andrew labor | PERSON | 0.97+ |
Hp Point | ORGANIZATION | 0.97+ |
one environment | QUANTITY | 0.95+ |
15,000 plus experts | QUANTITY | 0.95+ |
Over 1000 successful cloud migrations | QUANTITY | 0.95+ |
HPD Point | ORGANIZATION | 0.95+ |
over 1.4 million customer | QUANTITY | 0.95+ |
HP Point | ORGANIZATION | 0.95+ |
Hp Green Lake Central | ORGANIZATION | 0.94+ |
Finra | ORGANIZATION | 0.94+ |
Austrian | OTHER | 0.94+ |
each | QUANTITY | 0.93+ |
single | QUANTITY | 0.93+ |
five times | QUANTITY | 0.92+ |
Reliant | ORGANIZATION | 0.92+ |
H. P. S. Energy | ORGANIZATION | 0.91+ |
single platform | QUANTITY | 0.9+ |
one software | QUANTITY | 0.89+ |
360° | QUANTITY | 0.89+ |
Thompson | ORGANIZATION | 0.89+ |
HP keep | ORGANIZATION | 0.88+ |
HB | ORGANIZATION | 0.88+ |
today | DATE | 0.88+ |
Green Lake | COMMERCIAL_ITEM | 0.86+ |
Next | COMMERCIAL_ITEM | 0.85+ |
NHP | ORGANIZATION | 0.85+ |
Azure | TITLE | 0.82+ |
single line | QUANTITY | 0.8+ |
I. T. Services | ORGANIZATION | 0.79+ |
Gen 10 | QUANTITY | 0.79+ |
Point Next | COMMERCIAL_ITEM | 0.79+ |
over 11,000. | QUANTITY | 0.78+ |
Lake | ORGANIZATION | 0.78+ |
Web Bush | ORGANIZATION | 0.76+ |
Green | COMMERCIAL_ITEM | 0.73+ |
Wikibon Presents: Software is Eating the Edge | The Entangling of Big Data and IIoT
>> So as folks make their way over from Javits I'm going to give you the least interesting part of the evening and that's my segment in which I welcome you here, introduce myself, lay out what what we're going to do for the next couple of hours. So first off, thank you very much for coming. As all of you know Wikibon is a part of SiliconANGLE which also includes theCUBE, so if you look around, this is what we have been doing for the past couple of days here in the TheCUBE. We've been inviting some significant thought leaders from over on the show and in incredibly expensive limousines driven them up the street to come on to TheCUBE and spend time with us and talk about some of the things that are happening in the industry today that are especially important. We tore it down, and we're having this party tonight. So we want to thank you very much for coming and look forward to having more conversations with all of you. Now what are we going to talk about? Well Wikibon is the research arm of SiliconANGLE. So we take data that comes out of TheCUBE and other places and we incorporated it into our research. And work very closely with large end users and large technology companies regarding how to make better decisions in this incredibly complex, incredibly important transformative world of digital business. What we're going to talk about tonight, and I've got a couple of my analysts assembled, and we're also going to have a panel, is this notion of software is eating the Edge. Now most of you have probably heard Marc Andreessen, the venture capitalist and developer, original developer of Netscape many years ago, talk about how software's eating the world. Well, if software is truly going to eat the world, it's going to eat at, it's going to take the big chunks, big bites at the Edge. That's where the actual action's going to be. And what we want to talk about specifically is the entangling of the internet or the industrial internet of things and IoT with analytics. So that's what we're going to talk about over the course of the next couple of hours. To do that we're going to, I've already blown the schedule, that's on me. But to do that I'm going to spend a couple minutes talking about what we regard as the essential digital business capabilities which includes analytics and Big Data, and includes IIoT and we'll explain at least in our position why those two things come together the way that they do. But I'm going to ask the august and revered Neil Raden, Wikibon analyst to come on up and talk about harvesting value at the Edge. 'Cause there are some, not now Neil, when we're done, when I'm done. So I'm going to ask Neil to come on up and we'll talk, he's going to talk about harvesting value at the Edge. And then Jim Kobielus will follow up with him, another Wikibon analyst, he'll talk specifically about how we're going to take that combination of analytics and Edge and turn it into the new types of systems and software that are going to sustain this significant transformation that's going on. And then after that, I'm going to ask Neil and Jim to come, going to invite some other folks up and we're going to run a panel to talk about some of these issues and do a real question and answer. So the goal here is before we break for drinks is to create a community feeling within the room. That includes smart people here, smart people in the audience having a conversation ultimately about some of these significant changes so please participate and we look forward to talking about the rest of it. 
All right, let's get going! What is digital business? One of the nice things about being an analyst is that you can reach back on people who were significantly smarter than you and build your points of view on the shoulders of those giants including Peter Drucker. Many years ago Peter Drucker made the observation that the purpose of business is to create and keep a customer. Not better shareholder value, not anything else. It is about creating and keeping your customer. Now you can argue with that, at the end of the day, if you don't have customers, you don't have a business. Now the observation that we've made, what we've added to that is that we've made the observation that the difference between business and digital business essentially is one thing. That's data. A digital business uses data to differentially create and keep customers. That's the only difference. If you think about the difference between taxi cab companies here in New York City, every cab that I've been in in the last three days has bothered me about Uber. The reason, the difference between Uber and a taxi cab company is data. That's the primary difference. Uber uses data as an asset. And we think this is the fundamental feature of digital business that everybody has to pay attention to. How is a business going to use data as an asset? Is the business using data as an asset? Is a business driving its engagement with customers, the role of its product et cetera using data? And if they are, they are becoming a more digital business. Now when you think about that, what we're really talking about is how are they going to put data to work? How are they going to take their customer data and their operational data and their financial data and any other kind of data and ultimately turn that into superior engagement or improved customer experience or more agile operations or increased automation? Those are the kinds of outcomes that we're talking about. But it is about putting data to work. That's fundamentally what we're trying to do within a digital business. Now that leads to an observation about the crucial strategic business capabilities that every business that aspires to be more digital or to be digital has to put in place. And I want to be clear. When I say strategic capabilities I mean something specific. When you talk about, for example technology architecture or information architecture there is this notion of what capabilities does your business need? Your business needs capabilities to pursue and achieve its mission. And in the digital business these are the capabilities that are now additive to this core question, ultimately of whether or not the company is a digital business. What are the three capabilities? One, you have to capture data. Not just do a good job of it, but better than your competition. You have to capture data better than your competition. In a way that is ultimately less intrusive on your markets and on your customers. That's in many respects, one of the first priorities of the internet of things and people. The idea of using sensors and related technologies to capture more data. Once you capture that data you have to turn it into value. You have to do something with it that creates business value so you can do a better job of engaging your markets and serving your customers. And that essentially is what we regard as the basis of Big Data. 
Including operations, including financial performance and everything else, but ultimately it's taking the data that's being captured and turning it into value within the business. The last point here is that once you have generated a model, or an insight or some other resource that you can act upon, you then have to act upon it in the real world. We call that systems of agency, the ability to enact based on data. Now I want to spend just a second talking about systems of agency 'cause we think it's an interesting concept and it's something Jim Kobielus is going to talk about a little bit later. When we say systems of agency, what we're saying is increasingly machines are acting on behalf of a brand. Or systems, combinations of machines and people are acting on behalf of the brand. And this whole notion of agency is the idea that ultimately these systems are now acting as the business's agent. They are at the front line of engaging customers. It's an extremely rich proposition that has subtle but crucial implications. For example I was talking to a senior decision maker at a business today and they made a quick observation, they talked about they, on their way here to New York City they had followed a woman who was going through security, opened up her suitcase and took out a bird. And then went through security with the bird. And the reason why I bring this up now is as TSA was trying to figure out how exactly to deal with this, the bird started talking and repeating things that the woman had said and many of those things, in fact, might have put her in jail. Now in this case the bird is not an agent of that woman. You can't put the woman in jail because of what the bird said. But increasingly we have to ask ourselves as we ask machines to do more on our behalf, digital instrumentation and elements to do more on our behalf, it's going to have blow back and an impact on our brand if we don't do it well. I want to draw that forward a little bit because I suggest there's going to be a new lifecycle for data. And the way that we think about it is we have the internet or the Edge which is comprised of things and crucially people, using sensors, whether they be smaller processors in control towers or whether they be phones that are tracking where we go, and this crucial element here is something that we call information transducers. Now a transducer in a traditional sense is something that takes energy from one form to another so that it can perform new types of work. By information transducer I essentially mean it takes information from one form to another so it can perform another type of work. This is a crucial feature of data. One of the beauties of data is that it can be used in multiple places at multiple times and not engender significant net new costs. It's one of the few assets that you can say about that. So the concept of an information transducer's really important because it's the basis for a lot of transformations of data as data flies through organizations. So we end up with the transducers storing data in the form of analytics, machine learning, business operations, other types of things, and then it goes back and it's transduced, back into to the real world as we program the real world and turning into these systems of agency. So that's the new lifecycle. And increasingly, that's how we have to think about data flows. Capturing it, turning it into value and having it act on our behalf in front of markets. 
That could have enormous implications for how ultimately money is spent over the next few years. So Wikibon does a significant amount of market research in addition to advising our large user customers. And that includes doing studies on cloud, public cloud, but also studies on what's happening within the analytics world. And if you take a look at it, what we basically see happening over the course of the next few years is significant investments in software and also services to get the word out. But we also expect there's going to be a lot of hardware. A significant amount of hardware that's ultimately sold within this space. And that's because of something that we call true private cloud. This concept of ultimately a business increasingly being designed and architected around the idea of data assets means that the reality, the physical realities of how data operates, how much it costs to store it or move it, the issues of latency, the issues of intellectual property protection as well as things like the regulatory regimes that are being put in place to govern how data gets used in between locations. All of those factors are going to drive increased utilization of what we call true private cloud. On premise technologies that provide the cloud experience but act where the data naturally needs to be processed. I'll come a little bit more to that in a second. So we think that it's going to be a relatively balanced market, a lot of stuff is going to end up in the cloud, but as Neil and Jim will talk about, there's going to be an enormous amount of analytics that pulls an enormous amount of data out to the Edge 'cause that's where the action's going to be. Now one of the things I want to also reveal to you is we've done a fair amount of data, we've done a fair amount of research around this question of where or how will data guide decisions about infrastructure? And in particular the Edge is driving these conversations. So here is a piece of research that one of our cohorts at Wikibon did, David Floyer. Taking a look at IoT Edge cost comparisons over a three year period. And it showed on the left hand side, an example where the sensor towers and other types of devices were streaming data back into a central location in a wind farm, stylized wind farm example. Very very expensive. Significant amounts of money end up being consumed, significant resources end up being consumed by the cost of moving the data from one place to another. Now this is even assuming that latency does not become a problem. The second example that we looked at is if we kept more of that data at the Edge and processed at the Edge. And literally it is a 85 plus percent cost reduction to keep more of the data at the Edge. Now that has enormous implications, how we think about big data, how we think about next generation architectures, et cetera. But it's these costs that are going to be so crucial to shaping the decisions that we make over the next two years about where we put hardware, where we put resources, what type of automation is possible, and what types of technology management has to be put in place. Ultimately we think it's going to lead to a structure, an architecture in the infrastructure as well as applications that is informed more by moving cloud to the data than moving the data to the cloud. That's kind of our fundamental proposition is that the norm in the industry has been to think about moving all data up to the cloud because who wants to do IT? It's so much cheaper, look what Amazon can do. 
Or what AWS can do. All true statements. Very very important in many respects. But most businesses today are starting to rethink that simple proposition and asking themselves do we have to move our business to the cloud, or can we move the cloud to the business? And increasingly what we see happening as we talk to our large customers about this, is that the cloud is being extended out to the Edge, we're moving the cloud and cloud services out to the business. Because of economic reasons, intellectual property control reasons, regulatory reasons, security reasons, any number of other reasons. It's just a more natural way to deal with it. And of course, the most important reason is latency. So with that as a quick backdrop, if I may quickly summarize, we believe fundamentally that the difference today is that businesses are trying to understand how to use data as an asset. And that requires an investment in new sets of technology capabilities that are not cheap, not simple and require significant thought, a lot of planning, lot of change within an IT and business organizations. How we capture data, how we turn it into value, and how we translate that into real world action through software. That's going to lead to a rethinking, ultimately, based on cost and other factors about how we deploy infrastructure. How we use the cloud so that the data guides the activity and not the choice of cloud supplier determines or limits what we can do with our data. And that's going to lead to this notion of true private cloud and elevate the role the Edge plays in analytics and all other architectures. So I hope that was perfectly clear. And now what I want to do is I want to bring up Neil Raden. Yes, now's the time Neil! So let me invite Neil up to spend some time talking about harvesting value at the Edge. Can you see his, all right. Got it. >> Oh boy. Hi everybody. Yeah, this is a really, this is a really big and complicated topic so I decided to just concentrate on something fairly simple, but I know that Peter mentioned customers. And he also had a picture of Peter Drucker. I had the pleasure in 1998 of interviewing Peter and photographing him. Peter Drucker, not this Peter. Because I'd started a magazine called Hired Brains. It was for consultants. And Peter said, Peter said a number of really interesting things to me, but one of them was his definition of a customer was someone who wrote you a check that didn't bounce. He was kind of a wag. He was! So anyway, he had to leave to do a video conference with Jack Welch and so I said to him, how do you charge Jack Welch to spend an hour on a video conference? And he said, you know I have this theory that you should always charge your client enough that it hurts a little bit or they don't take you seriously. Well, I had the chance to talk to Jack's wife, Suzie Welch recently and I told her that story and she said, "Oh he's full of it, Jack never paid "a dime for those conferences!" (laughs) So anyway, all right, so let's talk about this. To me, things about, engineered things like the hardware and network and all these other standards and so forth, we haven't fully developed those yet, but they're coming. As far as I'm concerned, they're not the most interesting thing. The most interesting thing to me in Edge Analytics is what you're going to get out of it, what the result is going to be. Making sense of this data that's coming. 
And while we're on data, something I've been thinking a lot lately because everybody I've talked to for the last three days just keeps talking to me about data. I have this feeling that data isn't actually quite real. That any data that we deal with is the result of some process that's captured it from something else that's actually real. In other words it's proxy. So it's not exactly perfect. And that's why we've always had these problems about customer A, customer A, customer A, what's their definition? What's the definition of this, that and the other thing? And with sensor data, I really have the feeling, when companies get, not you know, not companies, organizations get instrumented and start dealing with this kind of data what they're going to find is that this is the first time, and I've been involved in analytics, I don't want to date myself, 'cause I know I look young, but the first, I've been dealing with analytics since 1975. And everything we've ever done in analytics has involved pulling data from some other system that was not designed for analytics. But if you think about sensor data, this is data that we're actually going to catch the first time. It's going to be ours! We're not going to get it from some other source. It's going to be the real deal, to the extent that it's the real deal. Now you may say, ya know Neil, a sensor that's sending us information about oil pressure or temperature or something like that, how can you quarrel with that? Well, I can quarrel with it because I don't know if the sensor's doing it right. So we still don't know, even with that data, if it's right, but that's what we have to work with. Now, what does that really mean? Is that we have to be really careful with this data. It's ours, we have to take care of it. We don't get to reload it from source some other day. If we munge it up it's gone forever. So that has, that has very serious implications, but let me, let me roll you back a little bit. The way I look at analytics is it's come in three different eras. And we're entering into the third now. The first era was business intelligence. It was basically built and governed by IT, it was system of record kind of reporting. And as far as I can recall, it probably started around 1988 or at least that's the year that Howard Dresner claims to have invented the term. I'm not sure it's true. And things happened before 1988 that was sort of like BI, but 88 was when they really started coming out, that's when we saw BusinessObjects and Cognos and MicroStrategy and those kinds of things. The second generation just popped out on everybody else. We're all looking around at BI and we were saying why isn't this working? Why are only five people in the organization using this? Why are we not getting value out of this massive license we bought? And along comes companies like Tableau doing data discovery, visualization, data prep and Line of Business people are using this now. But it's still the same kind of data sources. It's moved out a little bit, but it still hasn't really hit the Big Data thing. Now we're in third generation, so we not only had Big Data, which has come and hit us like a tsunami, but we're looking at smart discovery, we're looking at machine learning. We're looking at AI induced analytics workflows. And then all the natural language cousins. You know, natural language processing, natural language, what's? Oh Q, natural language query. Natural language generation. Anybody here know what natural language generation is? 
Yeah, so what you see now is you do some sort of analysis and that tool comes up and says this chart is about the following and it used the following data, and it's blah blah blah blah blah. I think it's kind of wordy and it's going to refined some, but it's an interesting, it's an interesting thing to do. Now, the problem I see with Edge Analytics and IoT in general is that most of the canonical examples we talk about are pretty thin. I know we talk about autonomous cars, I hope to God we never have them, 'cause I'm a car guy. Fleet Management, I think Qualcomm started Fleet Management in 1988, that is not a new application. Industrial controls. I seem to remember, I seem to remember Honeywell doing industrial controls at least in the 70s and before that I wasn't, I don't want to talk about what I was doing, but I definitely wasn't in this industry. So my feeling is we all need to sit down and think about this and get creative. Because the real value in Edge Analytics or IoT, whatever you want to call it, the real value is going to be figuring out something that's new or different. Creating a brand new business. Changing the way an operation happens in a company, right? And I think there's a lot of smart people out there and I think there's a million apps that we haven't even talked about so, if you as a vendor come to me and tell me how great your product is, please don't talk to me about autonomous cars or Fleet Managing, 'cause I've heard about that, okay? Now, hardware and architecture are really not the most interesting thing. We fell into that trap with data warehousing. We've fallen into that trap with Big Data. We talk about speeds and feeds. Somebody said to me the other day, what's the narrative of this company? This is a technology provider. And I said as far as I can tell, they don't have a narrative they have some products and they compete in a space. And when they go to clients and the clients say, what's the value of your product? They don't have an answer for that. So we don't want to fall into this trap, okay? Because IoT is going to inform you in ways you've never even dreamed about. Unfortunately some of them are going to be really stinky, you know, they're going to be really bad. You're going to lose more of your privacy, it's going to get harder to get, I dunno, mortgage for example, I dunno, maybe it'll be easier, but in any case, it's not going to all be good. So let's really think about what you want to do with this technology to do something that's really valuable. Cost takeout is not the place to justify an IoT project. Because number one, it's very expensive, and number two, it's a waste of the technology because you should be looking at, you know the old numerator denominator thing? You should be looking at the numerators and forget about the denominators because that's not what you do with IoT. And the other thing is you don't want to get over confident. Actually this is good advice about anything, right? But in this case, I love this quote by Derek Sivers He's a pretty funny guy. He said, "If more information was the answer, "then we'd all be billionaires with perfect abs." I'm not sure what's on his wishlist, but you know, I would, those aren't necessarily the two things I would think of, okay. Now, what I said about the data, I want to explain some more. Big Data Analytics, if you look at this graphic, it depicts it perfectly. It's a bunch of different stuff falling into the funnel. All right? It comes from other places, it's not original material. 
And when it comes in, it's always used as second hand data. Now what does that mean? That means that you have to figure out the semantics of this information and you have to find a way to put it together in a way that's useful to you, okay. That's Big Data. That's where we are. How is that different from IoT data? It's like I said, IoT is original. You can put it together any way you want because no one else has ever done that before. It's yours to construct, okay. You don't even have to transform it into a schema because you're creating the new application. But the most important thing is you have to take care of it 'cause if you lose it, it's gone. It's the original data. It's the same way, in operational systems for a long long time we've always been concerned about backup and security and everything else. You better believe this is a problem. I know a lot of people think about streaming data, that we're going to look at it for a minute, and we're going to throw most of it away. Personally I don't think that's going to happen. I think it's all going to be saved, at least for a while. Now, the governance and security, oh, by the way, I don't know where you're going to find a presentation where somebody uses a newspaper clipping about Vladimir Lenin, but here it is, enjoy yourselves. I believe that when people think about governance and security today they're still thinking along the same grids that we thought about it all along. But this is very very different and again, I'm sorry I keep thrashing this around, but this is treasured data that has to be carefully taken care of. Now when I say governance, my experience has been over the years that governance is something that IT does to make everybody's lives miserable. But that's not what I mean by governance today. It means a comprehensive program to really secure the value of the data as an asset. And you need to think about this differently. Now the other thing is you may not get to think about it differently, because some of the stuff may end up being subject to regulation. And if the regulators start regulating some of this, then that'll take some of the degrees of freedom away from you in how you put this together, but you know, that's the way it works. Now, machine learning, I think I told somebody the other day that claims about machine learning in software products are as common as twisters in trail parks. And a lot of it is not really what I'd call machine learning. But there's a lot of it around. And I think all of the open source machine learning and artificial intelligence that's popped up, it's great because all those math PhDs who work at Home Depot now have something to do when they go home at night and they construct this stuff. But if you're going to have machine learning at the Edge, here's the question, what kind of machine learning would you have at the Edge? As opposed to developing your models back at say, the cloud, when you transmit the data there. The devices at the Edge are not very powerful. And they don't have a lot of memory. So you're only going to be able to do things that have been modeled or constructed somewhere else. But that's okay. Because machine learning algorithm development is actually slow and painful. So you really want the people who know how to do this working with gobs of data creating models and testing them offline. And when you have something that works, you can put it there. Now there's one thing I want to talk about before I finish, and I think I'm almost finished. 
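Neil's point that edge devices can only run models "constructed somewhere else" is the familiar pattern of training in the cloud and shipping only a compact inference artifact to the device. The sketch below is a minimal illustration of that pattern, assuming TensorFlow/Keras and TensorFlow Lite; the sensor features, labels, threshold and file names are hypothetical stand-ins rather than anything from the talk.

```python
# Minimal sketch of "train offline in the cloud, run inference at the edge".
import numpy as np
import tensorflow as tf

# --- In the cloud / data lake: train a small anomaly classifier on historical sensor data.
X_train = np.random.rand(10_000, 8).astype("float32")   # 8 hypothetical sensor features
y_train = (X_train[:, 0] > 0.9).astype("float32")       # stand-in "anomaly" label

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=3, batch_size=64, verbose=0)

# --- Convert the trained model into a compact TensorFlow Lite flatbuffer for the device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("anomaly_model.tflite", "wb") as f:
    f.write(converter.convert())

# --- On the (low-power) edge device: load the flatbuffer and run inference only.
interpreter = tf.lite.Interpreter(model_path="anomaly_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

reading = np.random.rand(1, 8).astype("float32")         # one fresh sensor reading
interpreter.set_tensor(inp["index"], reading)
interpreter.invoke()
print("anomaly score:", interpreter.get_tensor(out["index"])[0, 0])
```

On a genuinely constrained device, the same flatbuffer would typically be loaded with the lighter-weight tflite_runtime package rather than full TensorFlow.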
I wrote a book about 10 years ago about automated decision making and the conclusion that I came up with was that little decisions add up, and that's good. But it also means you don't have to get them all right. But you don't want computers or software making decisions unattended if it involves human life, or frankly any life. Or the environment. So when you think about the applications that you can build using this architecture and this technology, think about the fact that you're not going to be doing air traffic control, you're not going to be monitoring crossing guards at the elementary school. You're going to be doing things that may seem fairly mundane. Managing machinery on the factory floor, I mean that may sound great, but really isn't that interesting. Managing well heads, drilling for oil, well I mean, it's great to the extent that it doesn't cause wells to explode, but they don't usually explode. What it's usually used for is to drive the cost out of preventative maintenance. Not very interesting. So use your heads. Come up with really cool stuff. And any of you who are involved in Edge Analytics, the next time I talk to you I don't want to hear about the same five applications that everybody talks about. Let's hear about some new ones. So, in conclusion, I don't really have anything in conclusion except that Peter mentioned something about limousines bringing people up here. On Monday I was slogging up and down Park Avenue and Madison Avenue with my client and we were visiting all the hedge funds there because we were doing a project with them. And in the miserable weather I looked at him and I said, for godsake Paul, where's the black car? And he said, that was the 90s. (laughs) Thank you. So, Jim, up to you. (audience applauding) This is terrible, go that way, this was terrible coming that way. >> Woo, don't want to trip! And let's move to, there we go. Hi everybody, how ya doing? Thanks Neil, thanks Peter, those were great discussions. So I'm the third leg in this relay race here, talking about of course how software is eating the world. And focusing on the value of Edge Analytics in a lot of real world scenarios. Programming the real world for, to make the world a better place. So I will talk, I'll break it out analytically in terms of the research that Wikibon is doing in the area of the IoT, but specifically how AI intelligence is being embedded really to all material reality potentially at the Edge. But mobile applications and industrial IoT and the smart appliances and self driving vehicles. I will break it out in terms of a reference architecture for understanding what functions are being pushed to the Edge to hardware, to our phones and so forth to drive various scenarios in terms of real world results. So I'll move a pace here. So basically AI software or AI microservices are being infused into Edge hardware as we speak. What we see is more vendors of smart phones and other, real world appliances and things like smart driving, self driving vehicles. What they're doing is they're instrumenting their products with computer vision and natural language processing, environmental awareness based on sensing and actuation and those capabilities and inferences that these devices just do to both provide human support for human users of these devices as well as to enable varying degrees of autonomous operation. So what I'll be talking about is how AI is a foundation for data driven systems of agency of the sort that Peter is talking about. 
Infusing data driven intelligence into everything or potentially so. As more of this capability, all these algorithms for things like, ya know for doing real time predictions and classifications, anomaly detection and so forth, as this functionality gets diffused widely and becomes more commoditized, you'll see it burned into an ever-wider variety of hardware architecture, neuro synaptic chips, GPUs and so forth. So what I've got here in front of you is a sort of a high level reference architecture that we're building up in our research at Wikibon. So AI, artificial intelligence is a big term, a big paradigm, I'm not going to unpack it completely. Of course we don't have oodles of time so I'm going to take you fairly quickly through the high points. It's a driver for systems of agency. Programming the real world. Transducing digital inputs, the data, to analog real world results. Through the embedding of this capability in the IoT, but pushing more and more of it out to the Edge with points of decision and action in real time. And there are four capabilities that we're seeing in terms of AI enabled, enabling capabilities that are absolutely critical to software being pushed to the Edge are sensing, actuation, inference and Learning. Sensing and actuation like Peter was describing, it's about capturing data from the environment within which a device or users is operating or moving. And then actuation is the fancy term for doing stuff, ya know like industrial IoT, it's obviously machine controlled, but clearly, you know self driving vehicles is steering a vehicle and avoiding crashing and so forth. Inference is the meat and potatoes as it were of AI. Analytics does inferences. It infers from the data, the logic of the application. Predictive logic, correlations, classification, abstractions, differentiation, anomaly detection, recognizing faces and voices. We see that now with Apple and the latest version of the iPhone is embedding face recognition as a core, as the core multifactor authentication technique. Clearly that's a harbinger of what's going to be universal fairly soon which is that depends on AI. That depends on convolutional neural networks, that is some heavy hitting processing power that's necessary and it's processing the data that's coming from your face. So that's critically important. So what we're looking at then is the AI software is taking root in hardware to power continuous agency. Getting stuff done. Powered decision support by human beings who have to take varying degrees of action in various environments. We don't necessarily want to let the car steer itself in all scenarios, we want some degree of override, for lots of good reasons. They want to protect life and limb including their own. And just more data driven automation across the internet of things in the broadest sense. So unpacking this reference framework, what's happening is that AI driven intelligence is powering real time decisioning at the Edge. Real time local sensing from the data that it's capturing there, it's ingesting the data. Some, not all of that data, may be persistent at the Edge. Some, perhaps most of it, will be pushed into the cloud for other processing. When you have these highly complex algorithms that are doing AI deep learning, multilayer, to do a variety of anti-fraud and higher level like narrative, auto-narrative roll-ups from various scenes that are unfolding. 
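The four capabilities Jim lists — sensing, actuation, inference and learning — amount to a control loop running on or near the device, with the learning itself largely happening elsewhere. The skeleton below is one way to picture how they fit together; every function is a hypothetical placeholder, so it illustrates the shape of the loop rather than any real device API.

```python
# Skeleton of an edge "system of agency" loop: sense -> infer -> actuate,
# with the learned model periodically refreshed from the cloud.
import time
import random

def read_sensor() -> float:
    """Sensing: capture one reading from the environment (stubbed with a random value)."""
    return random.uniform(0.0, 100.0)

def load_model():
    """Learning happens elsewhere: the edge only pulls down a model trained in the cloud."""
    return {"threshold": 75.0}   # stand-in for a real trained model

def infer(model, reading: float) -> bool:
    """Inference: decide locally, at the point of action, whether to act."""
    return reading > model["threshold"]

def actuate(alarm: bool) -> None:
    """Actuation: do something in the real world (here, just print)."""
    print("shutting valve" if alarm else "normal operation")

model = load_model()
for tick in range(10):                # in a real device this loop runs indefinitely
    actuate(infer(model, read_sensor()))
    if tick % 5 == 0:                 # periodically refresh the model pushed from the cloud
        model = load_model()
    time.sleep(0.1)
```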
A lot of this processing is going to begin to happen in the cloud, but a fair amount of the more narrowly scoped inferences that drive real time decision support at the point of action will be done on the device itself. Contextual actuation, so it's the sensor data that's captured by the device along with other data that may be coming down in real time streams through the cloud will provide the broader contextual envelope of data needed to drive actuation, to drive various models and rules and so forth that are making stuff happen at the point of action, at the Edge. Continuous inference. What it all comes down to is that inference is what's going on inside the chips at the Edge device. And what we're seeing is a growing range of hardware architectures, GPUs, CPUs, FPGAs, ASIC, Neuro synaptic chips of all sorts playing in various combinations that are automating more and more very complex inference scenarios at the Edge. And not just individual devices, swarms of devices, like drones and so forth are essentially an Edge unto themselves. You'll see these tiered hierarchies of Edge swarms that are playing and doing inferences of ever more complex dynamic nature. And much of this will be, this capability, the fundamental capabilities that is powering them all will be burned into the hardware that powers them. And then adaptive learning. Now I use the term learning rather than training here, training is at the core of it. Training means everything in terms of the predictive fitness or the fitness of your AI services for whatever task, predictions, classifications, face recognition that you, you've built them for. But I use the term learning in a broader sense. It's what's make your inferences get better and better, more accurate over time is that you're training them with fresh data in a supervised learning environment. But you can have reinforcement learning if you're doing like say robotics and you don't have ground truth against which to train the data set. You know there's maximize a reward function versus minimize a loss function, you know, the standard approach, the latter for supervised learning. There's also, of course, the issue, or not the issue, the approach of unsupervised learning with cluster analysis critically important in a lot of real world scenarios. So Edge AI Algorithms, clearly, deep learning which is multilayered machine learning models that can do abstractions at higher and higher levels. Face recognition is a high level abstraction. Faces in a social environment is an even higher level of abstraction in terms of groups. Faces over time and bodies and gestures, doing various things in various environments is an even higher level abstraction in terms of narratives that can be rolled up, are being rolled up by deep learning capabilities of great sophistication. Convolutional neural networks for processing images, recurrent neural networks for processing time series. Generative adversarial networks for doing essentially what's called generative applications of all sort, composing music, and a lot of it's being used for auto programming. These are all deep learning. There's a variety of other algorithm approaches I'm not going to bore you with here. Deep learning is essentially the enabler of the five senses of the IoT. Your phone's going to have, has a camera, it has a microphone, it has the ability to of course, has geolocation and navigation capabilities. It's environmentally aware, it's got an accelerometer and so forth embedded therein. 
The reason that your phone and all of the devices are getting scary sentient is that they have the sensory modalities and the AI, the deep learning that enables them to make environmentally correct decisions in the wider range of scenarios. So machine learning is the foundation of all of this, but there are other, I mean of deep learning, artificial neural networks is the foundation of that. But there are other approaches for machine learning I want to make you aware of because support vector machines and these other established approaches for machine learning are not going away but really what's driving the show now is deep learning, because it's scary effective. And so that's where most of the investment in AI is going into these days for deep learning. AI Edge platforms, tools and frameworks are just coming along like gangbusters. Much development of AI, of deep learning happens in the context of your data lake. This is where you're storing your training data. This is the data that you use to build and test to validate in your models. So we're seeing a deepening stack of Hadoop and there's Kafka, and Spark and so forth that are driving the training (coughs) excuse me, of AI models that are power all these Edge Analytic applications so that that lake will continue to broaden in terms, and deepen in terms of a scope and the range of data sets and the range of modeling, AI modeling supports. Data science is critically important in this scenario because the data scientist, the data science teams, the tools and techniques and flows of data science are the fundamental development paradigm or discipline or capability that's being leveraged to build and to train and to deploy and iterate all this AI that's being pushed to the Edge. So clearly data science is at the center, data scientists of an increasingly specialized nature are necessary to the realization to this value at the Edge. AI frameworks are coming along like you know, a mile a minute. TensorFlow has achieved a, is an open source, most of these are open source, has achieved sort of almost like a defacto standard, status, I'm using the word defacto in air quotes. There's Theano and Keras and xNet and CNTK and a variety of other ones. We're seeing range of AI frameworks come to market, most open source. Most are supported by most of the major tool vendors as well. So at Wikibon we're definitely tracking that, we plan to go deeper in our coverage of that space. And then next best action, powers recommendation engines. I mean next best action decision automation of the sort of thing Neil's covered in a variety of contexts in his career is fundamentally important to Edge Analytics to systems of agency 'cause it's driving the process automation, decision automation, sort of the targeted recommendations that are made at the Edge to individual users as well as to process that automation. That's absolutely necessary for self driving vehicles to do their jobs and industrial IoT. So what we're seeing is more and more recommendation engine or recommender capabilities powered by ML and DL are going to the Edge, are already at the Edge for a variety of applications. Edge AI capabilities, like I said, there's sensing. And sensing at the Edge is becoming ever more rich, mixed reality Edge modalities of all sort are for augmented reality and so forth. We're just seeing a growth in certain, the range of sensory modalities that are enabled or filtered and analyzed through AI that are being pushed to the Edge, into the chip sets. 
Actuation, that's where robotics comes in. Robotics is coming into all aspects of our lives. And you know, it's brainless without AI, without deep learning and these capabilities. Inference, autonomous Edge decisioning: like I said, there's a growing range of inferences being done at the Edge. And that's where it has to happen, 'cause that's the point of decision.

Learning and training: most training will continue to be done in the cloud because it's very data intensive. It's a grind to train and optimize an AI algorithm to do its job. It's not something that you necessarily want to do, or can do, at Edge devices, so the models that are built and trained in the cloud are pushed down through a devops process to the Edge, and that's the way it will work in pretty much most AI environments, most Edge analytics environments. You centralize the modeling, you decentralize the execution of the inference models. The training engines will be in the cloud.

Edge AI applications: I'll just run you through a core list of the ones that have already come into the mainstream at the Edge. Multifactor authentication; clearly the Apple announcement of face recognition is just a harbinger of the fact that that's coming to every device. Computer vision, speech recognition, NLP, digital assistants and chatbots powered by natural language processing and understanding, it's all AI powered. And it's becoming very mainstream. Emotion detection, face recognition, you know, I could go on and on, but these are the core things that everybody has access to, or will by 2020, on core devices, mass market devices. Developers, designers and hardware engineers are coming together to pool their expertise to build and train not just the AI, but also the entire package of hardware and UX and the orchestration of real world business scenarios or life scenarios that all this embedded intelligence enables. And much of what they build in terms of AI will be containerized as microservices through Docker and orchestrated through Kubernetes as full cloud services in an increasingly distributed fabric. That's coming along very rapidly. We can see a fair amount of that already on display at Strata in terms of what the vendors are doing or announcing or who they're working with.

The hardware itself, at the Edge: some data will be persistent, needs to be persistent, to drive inference and to drive a variety of different application scenarios that need some degree of historical data related to what the device in question happens to be sensing or has sensed in the immediate past. The hardware itself is geared towards both sensing and, increasingly, persistence and Edge driven actuation of real world results. The whole notion of drones and robotics being embedded into everything that we do, that's where that comes in. That has to be powered by low cost, low power commodity chip sets of various sorts. What we see right now in terms of chip sets is GPUs; Nvidia has gone really far, and GPUs have come along very fast in terms of powering inference engines, you know, like the Tesla cars and so forth. GPUs are in many ways the core hardware substrate for inference engines in DL so far. But to become a mass market phenomenon it's got to get cheaper and lower powered and more commoditized, and so we see a fair number of CPUs being used as the hardware for Edge Analytics applications.
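One common way to realize the "centralize the modeling, decentralize the inference" pattern described here is to convert a cloud-trained model into a compact artifact and run it with a lightweight interpreter on the device. The sketch below uses TensorFlow Lite as one plausible example; the tiny stand-in model, the file name, and the random "camera frame" are all illustrative assumptions, not a prescription for any particular product.

import numpy as np
import tensorflow as tf

# --- Cloud side: convert a trained Keras model to a compact TensorFlow Lite artifact ---
# `cnn` stands in for whatever model was actually trained centrally.
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
converter = tf.lite.TFLiteConverter.from_keras_model(cnn)
tflite_bytes = converter.convert()
with open("edge_model.tflite", "wb") as f:   # this artifact is what gets pushed to devices
    f.write(tflite_bytes)

# --- Edge side: load the artifact and run inference on locally captured data ---
interpreter = tf.lite.Interpreter(model_path="edge_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(1, 96, 96, 3).astype(np.float32)   # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print("class probabilities:", scores)

In a real deployment the conversion step sits in the devops pipeline that pushes models down, and the interpreter loop runs continuously against whatever the device is sensing, which also covers the mixed hardware scenarios discussed next (train on one architecture in the cloud, infer on another at the Edge).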
Some vendors are fairly big on FPGAs; I believe Microsoft has gone fairly far with FPGAs inside its DL strategy. ASICs, I mean, there are neurosynaptic chips, like IBM's got one. There are at least a few dozen vendors of neurosynaptic chips on the market, so at Wikibon we're going to track that market as it develops. And what we're seeing is a fair number of scenarios where it's a mixed environment, where you use one chip set architecture on the inference side of the Edge, and other chip set architectures driving the DL as processed in the cloud, playing together within a common architecture. And we see a fair number of DL environments where the actual training is done in the cloud on Spark using CPUs, parallelized in memory, but pushing TensorFlow models that might be trained through Spark down to the Edge, where the inferences are done on FPGAs and GPUs. Those kinds of mixed hardware scenarios are very, very likely to be standard going forward in lots of areas.

So analytics at the Edge powering continuous results is what it's all about. The whole point is really not moving the data; it's putting the inference at the Edge and working from the data that's already captured and persistent there for the duration of whatever action or decision or result needs to be powered from the Edge. Like Neil said, cost takeout alone is not worth doing; cost takeout alone is not the rationale for putting AI at the Edge. It's getting new stuff done, new kinds of things done, in an automated, consistent, intelligent, contextualized way to make our lives better and more productive. Security and governance are becoming more important. Governance of the models, governance of the data, governance in a devops context in terms of version control over all those DL models that are built, that are trained, that are containerized and deployed. Continuous iteration and improvement of those to help them learn to make our lives better and easier. With that said, I'm going to hand it over now. It's five minutes after the hour. We're going to get going with the Influencer Panel, so what we'd like to do is I'll call Peter, and Peter's going to call our influencers. >> All right, am I live yet? Can you hear me? All right so, let me jump back in control here. Again, the objective here is to have the community take on some things. And so what I want to do is invite five other people up. Neil, why don't you come on up as well. Start with Neil. You can sit here. On the far right hand side, Judith, Judith Hurwitz. >> Neil: I'm glad I'm on the left side. >> From the Hurwitz Group. >> From the Hurwitz Group. Jennifer Shin who's affiliated with UC Berkeley. Jennifer are you here? >> She's here, Jennifer where are you? >> She was here a second ago. >> Neil: I saw her walk out she may have, >> Peter: All right, she'll be back in a second. >> Here's Jennifer! >> Here's Jennifer! >> Neil: With 8 Path Solutions, right? >> Yep. >> Yeah 8 Path Solutions. >> Just get my mic. >> Take your time Jen. >> Peter: All right, Stephanie McReynolds. Far left. And finally Joe Caserta, Joe come on up. >> Stephie's with Elysian >> And to the left. So what I want to do is start by having everybody just go around and introduce themselves quickly. Judith, why don't we start there. >> I'm Judith Hurwitz, I'm president of Hurwitz and Associates. We're an analyst research and thought leadership firm. I'm the co-author of eight books. Most recent is Cognitive Computing and Big Data Analytics.
I've been in the market for a couple years now. >> Jennifer. >> Hi, my name's Jennifer Shin. I'm the founder and Chief Data Scientist 8 Path Solutions LLC. We do data science analytics and technology. We're actually about to do a big launch next month, with Box actually. >> We're apparent, are we having a, sorry Jennifer, are we having a problem with Jennifer's microphone? >> Man: Just turn it back on? >> Oh you have to turn it back on. >> It was on, oh sorry, can you hear me now? >> Yes! We can hear you now. >> Okay, I don't know how that turned back off, but okay. >> So you got to redo all that Jen. >> Okay, so my name's Jennifer Shin, I'm founder of 8 Path Solutions LLC, it's a data science analytics and technology company. I founded it about six years ago. So we've been developing some really cool technology that we're going to be launching with Box next month. It's really exciting. And I have, I've been developing a lot of patents and some technology as well as teaching at UC Berkeley as a lecturer in data science. >> You know Jim, you know Neil, Joe, you ready to go? >> Joe: Just broke my microphone. >> Joe's microphone is broken. >> Joe: Now it should be all right. >> Jim: Speak into Neil's. >> Joe: Hello, hello? >> I just feel not worthy in the presence of Joe Caserta. (several laughing) >> That's right, master of mics. If you can hear me, Joe Caserta, so yeah, I've been doing data technology solutions since 1986, almost as old as Neil here, but been doing specifically like BI, data warehousing, business intelligence type of work since 1996. And been doing, wholly dedicated to Big Data solutions and modern data engineering since 2009. Where should I be looking? >> Yeah I don't know where is the camera? >> Yeah, and that's basically it. So my company was formed in 2001, it's called Caserta Concepts. We recently rebranded to only Caserta 'cause what we do is way more than just concepts. So we conceptualize the stuff, we envision what the future brings and we actually build it. And we help clients large and small who are just, want to be leaders in innovation using data specifically to advance their business. >> Peter: And finally Stephanie McReynolds. >> I'm Stephanie McReynolds, I had product marketing as well as corporate marketing for a company called Elysian. And we are a data catalog so we help bring together not only a technical understanding of your data, but we curate that data with human knowledge and use automated intelligence internally within the system to make recommendations about what data to use for decision making. And some of our customers like City of San Diego, a large automotive manufacturer working on self driving cars and General Electric use Elysian to help power their solutions for IoT at the Edge. >> All right so let's jump right into it. And again if you have a question, raise your hand, and we'll do our best to get it to the floor. But what I want to do is I want to get seven questions in front of this group and have you guys discuss, slog, disagree, agree. Let's start here. What is the relationship between Big Data AI and IoT? Now Wikibon's put forward its observation that data's being generated at the Edge, that action is being taken at the Edge and then increasingly the software and other infrastructure architectures need to accommodate the realities of how data is going to work in these very complex systems. That's our perspective. Anybody, Judith, you want to start? 
>> Yeah, so I think that if you look at AI machine learning, all these different areas, you have to be able to have the data learned. Now when it comes to IoT, I think one of the issues we have to be careful about is not all data will be at the Edge. Not all data needs to be analyzed at the Edge. For example if the light is green and that's good and it's supposed to be green, do you really have to constantly analyze the fact that the light is green? You actually only really want to be able to analyze and take action when there's an anomaly. Well if it goes purple, that's actually a sign that something might explode, so that's where you want to make sure that you have the analytics at the edge. Not for everything, but for the things where there is an anomaly and a change. >> Joe, how about from your perspective? >> For me I think the evolution of data is really becoming, eventually oxygen is just, I mean data's going to be the oxygen we breathe. It used to be very very reactive and there used to be like a latency. You do something, there's a behavior, there's an event, there's a transaction, and then you go record it and then you collect it, and then you can analyze it. And it was very very waterfallish, right? And then eventually we figured out to put it back into the system. Or at least human beings interpret it to try to make the system better and that is really completely turned on it's head, we don't do that anymore. Right now it's very very, it's synchronous, where as we're actually making these transactions, the machines, we don't really need, I mean human beings are involved a bit, but less and less and less. And it's just a reality, it may not be politically correct to say but it's a reality that my phone in my pocket is following my behavior, and it knows without telling a human being what I'm doing. And it can actually help me do things like get to where I want to go faster depending on my preference if I want to save money or save time or visit things along the way. And I think that's all integration of big data, streaming data, artificial intelligence and I think the next thing that we're going to start seeing is the culmination of all of that. I actually, hopefully it'll be published soon, I just wrote an article for Forbes with the term of ARBI and ARBI is the integration of Augmented Reality and Business Intelligence. Where I think essentially we're going to see, you know, hold your phone up to Jim's face and it's going to recognize-- >> Peter: It's going to break. >> And it's going to say exactly you know, what are the key metrics that we want to know about Jim. If he works on my sales force, what's his attainment of goal, what is-- >> Jim: Can it read my mind? >> Potentially based on behavior patterns. >> Now I'm scared. >> I don't think Jim's buying it. >> It will, without a doubt be able to predict what you've done in the past, you may, with some certain level of confidence you may do again in the future, right? And is that mind reading? It's pretty close, right? >> Well, sometimes, I mean, mind reading is in the eye of the individual who wants to know. And if the machine appears to approximate what's going on in the person's head, sometimes you can't tell. So I guess, I guess we could call that the Turing machine test of the paranormal. >> Well, face recognition, micro gesture recognition, I mean facial gestures, people can do it. 
Maybe not better than a coin toss, but if it can be seen visually and captured and analyzed, conceivably some degree of mind reading can be built in. I can see when somebody's angry looking at me so, that's a possibility. That's kind of a scary possibility in a surveillance society, potentially. >> Neil: Right, absolutely. >> Peter: Stephanie, what do you think? >> Well, I hear a world of it's the bots versus the humans being painted here and I think that, you know at Elysian we have a very strong perspective on this and that is that the greatest impact, or the greatest results is going to be when humans figure out how to collaborate with the machines. And so yes, you want to get to the location more quickly, but the machine as in the bot isn't able to tell you exactly what to do and you're just going to blindly follow it. You need to train that machine, you need to have a partnership with that machine. So, a lot of the power, and I think this goes back to Judith's story is then what is the human decision making that can be augmented with data from the machine, but then the humans are actually training the training side and driving machines in the right direction. I think that's when we get true power out of some of these solutions so it's not just all about the technology. It's not all about the data or the AI, or the IoT, it's about how that empowers human systems to become smarter and more effective and more efficient. And I think we're playing that out in our technology in a certain way and I think organizations that are thinking along those lines with IoT are seeing more benefits immediately from those projects. >> So I think we have a general agreement of what kind of some of the things you talked about, IoT, crucial capturing information, and then having action being taken, AI being crucial to defining and refining the nature of the actions that are being taken Big Data ultimately powering how a lot of that changes. Let's go to the next one. >> So actually I have something to add to that. So I think it makes sense, right, with IoT, why we have Big Data associated with it. If you think about what data is collected by IoT. We're talking about a serial information, right? It's over time, it's going to grow exponentially just by definition, right, so every minute you collect a piece of information that means over time, it's going to keep growing, growing, growing as it accumulates. So that's one of the reasons why the IoT is so strongly associated with Big Data. And also why you need AI to be able to differentiate between one minute versus next minute, right? Trying to find a better way rather than looking at all that information and manually picking out patterns. To have some automated process for being able to filter through that much data that's being collected. >> I want to point out though based on what you just said Jennifer, I want to bring Neil in at this point, that this question of IoT now generating unprecedented levels of data does introduce this idea of the primary source. Historically what we've done within technology, or within IT certainly is we've taken stylized data. There is no such thing as a real world accounting thing. It is a human contrivance. And we stylize data and therefore it's relatively easy to be very precise on it. But when we start, as you noted, when we start measuring things with a tolerance down to thousandths of a millimeter, whatever that is, metric system, now we're still sometimes dealing with errors that we have to attend to. 
So, the reality is we're not just dealing with stylized data, we're dealing with real data, and it's more, more frequent, but it also has special cases that we have to attend to as in terms of how we use it. What do you think Neil? >> Well, I mean, I agree with that, I think I already said that, right. >> Yes you did, okay let's move on to the next one. >> Well it's a doppelganger, the digital twin doppelganger that's automatically created by your very fact that you're living and interacting and so forth and so on. It's going to accumulate regardless. Now that doppelganger may not be your agent, or might not be the foundation for your agent unless there's some other piece of logic like an interest graph that you build, a human being saying this is my broad set of interests, and so all of my agents out there in the IoT, you all need to be aware that when you make a decision on my behalf as my agent, this is what Jim would do. You know I mean there needs to be that kind of logic somewhere in this fabric to enable true agency. >> All right, so I'm going to start with you. Oh go ahead. >> I have a real short answer to this though. I think that Big Data provides the data and compute platform to make AI possible. For those of us who dipped our toes in the water in the 80s, we got clobbered because we didn't have the, we didn't have the facilities, we didn't have the resources to really do AI, we just kind of played around with it. And I think that the other thing about it is if you combine Big Data and AI and IoT, what you're going to see is people, a lot of the applications we develop now are very inward looking, we look at our organization, we look at our customers. We try to figure out how to sell more shoes to fashionable ladies, right? But with this technology, I think people can really expand what they're thinking about and what they model and come up with applications that are much more external. >> Actually what I would add to that is also it actually introduces being able to use engineering, right? Having engineers interested in the data. Because it's actually technical data that's collected not just say preferences or information about people, but actual measurements that are being collected with IoT. So it's really interesting in the engineering space because it opens up a whole new world for the engineers to actually look at data and to actually combine both that hardware side as well as the data that's being collected from it. >> Well, Neil, you and I have talked about something, 'cause it's not just engineers. We have in the healthcare industry for example, which you know a fair amount about, there's this notion of empirical based management. And the idea that increasingly we have to be driven by data as a way of improving the way that managers do things, the way the managers collect or collaborate and ultimately collectively how they take action. So it's not just engineers, it's supposed to also inform business, what's actually happening in the healthcare world when we start thinking about some of this empirical based management, is it working? What are some of the barriers? >> It's not a function of technology. What happens in medicine and healthcare research is, I guess you can say it borders on fraud. (people chuckling) No, I'm not kidding. I know the New England Journal of Medicine a couple of years ago released a study and said that at least half their articles that they published turned out to be written, ghost written by pharmaceutical companies. 
(man chuckling) Right, so I think the problem is that when you do a clinical study, the one that really killed me about 10 years ago was the women's health initiative. They spent $700 million gathering this data over 20 years. And when they released it they looked at all the wrong things deliberately, right? So I think that's a systemic-- >> I think you're bringing up a really important point that we haven't brought up yet, and that is is can you use Big Data and machine learning to begin to take the biases out? So if you let the, if you divorce your preconceived notions and your biases from the data and let the data lead you to the logic, you start to, I think get better over time, but it's going to take a while to get there because we do tend to gravitate towards our biases. >> I will share an anecdote. So I had some arm pain, and I had numbness in my thumb and pointer finger and I went to, excruciating pain, went to the hospital. So the doctor examined me, and he said you probably have a pinched nerve, he said, but I'm not exactly sure which nerve it would be, I'll be right back. And I kid you not, he went to a computer and he Googled it. (Neil laughs) And he came back because this little bit of information was something that could easily be looked up, right? Every nerve in your spine is connected to your different fingers so the pointer and the thumb just happens to be your C6, so he came back and said, it's your C6. (Neil mumbles) >> You know an interesting, I mean that's a good example. One of the issues with healthcare data is that the data set is not always shared across the entire research community, so by making Big Data accessible to everyone, you actually start a more rational conversation or debate on well what are the true insights-- >> If that conversation includes what Judith talked about, the actual model that you use to set priorities and make decisions about what's actually important. So it's not just about improving, this is the test. It's not just about improving your understanding of the wrong thing, it's also testing whether it's the right or wrong thing as well. >> That's right, to be able to test that you need to have humans in dialog with one another bringing different biases to the table to work through okay is there truth in this data? >> It's context and it's correlation and you can have a great correlation that's garbage. You know if you don't have the right context. >> Peter: So I want to, hold on Jim, I want to, >> It's exploratory. >> Hold on Jim, I want to take it to the next question 'cause I want to build off of what you talked about Stephanie and that is that this says something about what is the Edge. And our perspective is that the Edge is not just devices. That when we talk about the Edge, we're talking about human beings and the role that human beings are going to play both as sensors or carrying things with them, but also as actuators, actually taking action which is not a simple thing. So what do you guys think? What does the Edge mean to you? Joe, why don't you start? >> Well, I think it could be a combination of the two. And specifically when we talk about healthcare. So I believe in 2017 when we eat we don't know why we're eating, like I think we should absolutely by now be able to know exactly what is my protein level, what is my calcium level, what is my potassium level? And then find the foods to meet that. 
What have I depleted versus what I should have, and eat very very purposely and not by taste-- >> And it's amazing that red wine is always the answer. >> It is. (people laughing) And tequila, that helps too. >> Jim: You're a precision foodie is what you are. (several chuckle) >> There's no reason why we should not be able to know that right now, right? And when it comes to healthcare is, the biggest problem or challenge with healthcare is no matter how great of a technology you have, you can't, you can't, you can't manage what you can't measure. And you're really not allowed to use a lot of this data so you can't measure it, right? You can't do things very very scientifically right, in the healthcare world and I think regulation in the healthcare world is really burdening advancement in science. >> Peter: Any thoughts Jennifer? >> Yes, I teach statistics for data scientists, right, so you know we talk about a lot of these concepts. I think what makes these questions so difficult is you have to find a balance, right, a middle ground. For instance, in the case of are you being too biased through data, well you could say like we want to look at data only objectively, but then there are certain relationships that your data models might show that aren't actually a causal relationship. For instance, if there's an alien that came from space and saw earth, saw the people, everyone's carrying umbrellas right, and then it started to rain. That alien might think well, it's because they're carrying umbrellas that it's raining. Now we know from real world that that's actually not the way these things work. So if you look only at the data, that's the potential risk. That you'll start making associations or saying something's causal when it's actually not, right? So that's one of the, one of the I think big challenges. I think when it comes to looking also at things like healthcare data, right? Do you collect data about anything and everything? Does it mean that A, we need to collect all that data for the question we're looking at? Or that it's actually the best, more optimal way to be able to get to the answer? Meaning sometimes you can take some shortcuts in terms of what data you collect and still get the right answer and not have maybe that level of specificity that's going to cost you millions extra to be able to get. >> So Jennifer as a data scientist, I want to build upon what you just said. And that is, are we going to start to see methods and models emerge for how we actually solve some of these problems? So for example, we know how to build a system for stylized process like accounting or some elements of accounting. We have methods and models that lead to technology and actions and whatnot all the way down to that that system can be generated. We don't have the same notion to the same degree when we start talking about AI and some of these Big Datas. We have algorithms, we have technology. But are we going to start seeing, as a data scientist, repeatability and learning and how to think the problems through that's going to lead us to a more likely best or at least good result? >> So I think that's a bit of a tough question, right? Because part of it is, it's going to depend on how many of these researchers actually get exposed to real world scenarios, right? Research looks into all these papers, and you come up with all these models, but if it's never tested in a real world scenario, well, I mean we really can't validate that it works, right? 
So I think it is dependent on how much of this integration there's going to be between the research community and industry and how much investment there is. Funding is going to matter in this case. If there's no funding in the research side, then you'll see a lot of industry folk who feel very confident about their models that, but again on the other side of course, if researchers don't validate those models then you really can't say for sure that it's actually more accurate, or it's more efficient. >> It's the issue of real world testing and experimentation, A B testing, that's standard practice in many operationalized ML and AI implementations in the business world, but real world experimentation in the Edge analytics, what you're actually transducing are touching people's actual lives. Problem there is, like in healthcare and so forth, when you're experimenting with people's lives, somebody's going to die. I mean, in other words, that's a critical, in terms of causal analysis, you've got to tread lightly on doing operationalizing that kind of testing in the IoT when people's lives and health are at stake. >> We still give 'em placebos. So we still test 'em. All right so let's go to the next question. What are the hottest innovations in AI? Stephanie I want to start with you as a company, someone at a company that's got kind of an interesting little thing happening. We start thinking about how do we better catalog data and represent it to a large number of people. What are some of the hottest innovations in AI as you see it? >> I think it's a little counter intuitive about what the hottest innovations are in AI, because we're at a spot in the industry where the most successful companies that are working with AI are actually incorporating them into solutions. So the best AI solutions are actually the products that you don't know there's AI operating underneath. But they're having a significant impact on business decision making or bringing a different type of application to the market and you know, I think there's a lot of investment that's going into AI tooling and tool sets for data scientists or researchers, but the more innovative companies are thinking through how do we really take AI and make it have an impact on business decision making and that means kind of hiding the AI to the business user. Because if you think a bot is making a decision instead of you, you're not going to partner with that bot very easily or very readily. I worked at, way at the start of my career, I worked in CRM when recommendation engines were all the rage online and also in call centers. And the hardest thing was to get a call center agent to actually read the script that the algorithm was presenting to them, that algorithm was 99% correct most of the time, but there was this human resistance to letting a computer tell you what to tell that customer on the other side even if it was more successful in the end. And so I think that the innovation in AI that's really going to push us forward is when humans feel like they can partner with these bots and they don't think of it as a bot, but they think about as assisting their work and getting to a better result-- >> Hence the augmentation point you made earlier. >> Absolutely, absolutely. >> Joe how 'about you? What do you look at? What are you excited about? >> I think the coolest thing at the moment right now is chat bots. Like to be able, like to have voice be able to speak with you in natural language, to do that, I think that's pretty innovative, right? 
And I do think that eventually, for the average user, not for techies like me, but for the average user, I think keyboards are going to be a thing of the past. I think we're going to communicate with computers through voice, and I think this is the very, very beginning of that, and it's an incredible innovation. >> Neil? >> Well, I think we all have myopia here. We're all thinking about commercial applications. Big, big things are happening with AI in the intelligence community, in military, the defense industry, in all sorts of things. Meteorology. And that's where, well, hopefully not on an every day basis with military, you really see the effect of this. But I was involved in a project a couple of years ago where we were developing AI software to detect artillery pieces in terrain from satellite imagery. I don't have to tell you what country that was. I think you can probably figure that one out, right? But there are legions of people in many, many companies that are involved in that industry. So if you're talking about the dollars spent on AI, I think the stuff that we do in our industries is probably fairly small. >> Well it reminds me of an application I actually thought was interesting about AI related to that, AI being applied to removing mines from war zones. >> Why not? >> Which is not a bad thing for a whole lot of people. Judith what do you look at? >> So I'm looking at things like being able to have pre-trained data sets in specific solution areas. I think that that's something that's coming. Also the ability to really have a machine assist you in selecting the right algorithms based on what your data looks like and the problems you're trying to solve. Those are some of the things that data scientists still spend a lot of their time on, but they can be augmented; basically we have to move to levels of abstraction before this becomes truly ubiquitous across many different areas. >> Peter: Jennifer? >> So I'm going to say computer vision. >> Computer vision? >> Computer vision. So computer vision ranges from image recognition to being able to say what content is in the image. Is it a dog, is it a cat, is it a blueberry muffin? Like that sort of popular post out there with a blueberry muffin versus, I think, a chihuahua, where it compares the two. And can the AI really actually detect the difference, right? So I think that's really where a lot of people who are in both the AI space as well as data science are looking for the new innovations. For instance, Cloud Vision, I think that's what Google still calls it. The Vision API they've released in beta allows you to use an API to send your image and then have it be recognized, right, by their API. There's another startup in New York called Clarifai that also does a similar thing, and as you know Amazon has their Rekognition platform as well. So from images, being able to detect what's in the content, as well as from videos, being able to say things like how many people are entering a frame? How many people enter the store? Not having to actually go look at it and count it, but having a computer actually tally that information for you, right? >> There's actually an extra piece to that. So if I have a picture of a stop sign, and I'm an automated car, is it a picture on the back of a bus of a stop sign, or is it a real stop sign? So that's going to be one of the complications. >> Doesn't matter to a New York City cab driver. How 'about you Jim?
>> Probably not. (laughs) >> Hottest thing in AI is Generative Adversarial Networks, GANs. What's hot about that? Well, I'll be very quick: most AI, most deep learning, machine learning, is analytical, it's distilling or inferring insights from the data. Generative takes that same algorithmic basis but uses it to build stuff. In other words, to create realistic looking photographs, to compose music, to build CAD/CAM models, essentially, that can be constructed on 3D printers. So GANs are a huge research focus all around the world, and they're increasingly used for natural language generation. In other words, it's institutionalizing, or providing a foundation for, nailing the Turing test every single time: building something with machines that looks like it was constructed by a human, and doing it over and over again to fool humans. I mean, you can imagine the fraud potential. But you can also imagine just the sheer, like it's going to shape the world, GANs. >> All right so I'm going to say one thing, and then we're going to ask if anybody in the audience has an idea. So the thing that I find interesting is traditional programs, or when you tell a machine to do something, you don't need incentives. When you tell a human being something, you have to provide incentives. Like how do you get someone to actually read the text. And this whole question of elements within AI that incorporate incentives as a way of trying to guide human behavior is absolutely fascinating to me. Whether it's gamification, or even some things we're thinking about with block chain and bitcoins and related types of stuff. To my mind that's going to have an enormous impact, some good, some bad. Anybody in the audience? I don't want to lose everybody here. What do you think sir? And I'll try to do my best to repeat it. Oh we have a mic. >> So my question's about, okay, so the question's pretty much about what Stephanie's talking about, which is human in the loop training, right? I come from a computer vision background. That's the problem: we need millions of images trained, we need humans to do that. And the workforce is essentially people that aren't necessarily part of the AI community, they're people that are just able to use that data and analyze the data and label that data. That's something that I think is a big problem everyone in the computer vision industry at least faces. I was wondering-- >> So again, but the problem is the difficulty of methodologically bringing together people who understand it, people who have domain expertise and people who have algorithm expertise, and getting them working together? >> I think the expertise issue comes in healthcare, right? In healthcare you need experts to be labeling your images. With contextual information, where essentially augmented reality applications are coming in, you have ARKit and everything coming out, but there is a lack of context based intelligence. And all of that comes through training images, and all of that requires people to do it. And that's kind of the foundational basis of AI coming forward: it's not necessarily an algorithm, right? It's how well the data is labeled. Who's doing the labeling and how do we ensure that it happens? >> Great question. So for the panel. So if you think about it, a consultant talks about being on the bench. How much time are they going to have to spend on trying to develop additional business? How much time should we set aside for executives to help train some of the assistants?
>> I think that the key is not, to think of the problem a different way is that you would have people manually label data and that's one way to solve the problem. But you can also look at what is the natural workflow of that executive, or that individual? And is there a way to gather that context automatically using AI, right? And if you can do that, it's similar to what we do in our product, we observe how someone is analyzing the data and from those observations we can actually create the metadata that then trains the system in a particular direction. But you have to think about solving the problem differently of finding the workflow that then you can feed into to make this labeling easy without the human really realizing that they're labeling the data. >> Peter: Anybody else? >> I'll just add to what Stephanie said, so in the IoT applications, all those sensory modalities, the computer vision, the speech recognition, all that, that's all potential training data. So it cross checks against all the other models that are processing all the other data coming from that device. So that the natural language process of understanding can be reality checked against the images that the person happens to be commenting upon, or the scene in which they're embedded, so yeah, the data's embedded-- >> I don't think we're, we're not at the stage yet where this is easy. It's going to take time before we do start doing the pre-training of some of these details so that it goes faster, but right now, there're not that many shortcuts. >> Go ahead Joe. >> Sorry so a couple things. So one is like, I was just caught up on your incentivizing programs to be more efficient like humans. You know in Ethereum that has this notion, which is bot chain, has this theory, this concept of gas. Where like as the process becomes more efficient it costs less to actually run, right? It costs less ether, right? So it actually is kind of, the machine is actually incentivized and you don't really know what it's going to cost until the machine processes it, right? So there is like some notion of that there. But as far as like vision, like training the machine for computer vision, I think it's through adoption and crowdsourcing, so as people start using it more they're going to be adding more pictures. Very very organically. And then the machines will be trained and right now is a very small handful doing it, and it's very proactive by the Googles and the Facebooks and all of that. But as we start using it, as they start looking at my images and Jim's and Jen's images, it's going to keep getting smarter and smarter through adoption and through very organic process. >> So Neil, let me ask you a question. Who owns the value that's generated as a consequence of all these people ultimately contributing their insight and intelligence into these systems? >> Well, to a certain extent the people who are contributing the insight own nothing because the systems collect their actions and the things they do and then that data doesn't belong to them, it belongs to whoever collected it or whoever's going to do something with it. But the other thing, getting back to the medical stuff. It's not enough to say that the systems, people will do the right thing, because a lot of them are not motivated to do the right thing. The whole grant thing, the whole oh my god I'm not going to go against the senior professor. 
A lot of these, I knew a guy who was a doctor at University of Pittsburgh and they were doing a clinical study on the tubes that they put in little kids' ears who have ear infections, right? And-- >> Google it! Who helps out? >> Anyway, I forget the exact thing, but he came out and said that the principle investigator lied when he made the presentation, that it should be this, I forget which way it went. He was fired from his position at Pittsburgh and he has never worked as a doctor again. 'Cause he went against the senior line of authority. He was-- >> Another question back here? >> Man: Yes, Mark Turner has a question. >> Not a question, just want to piggyback what you're saying about the transfixation of maybe in healthcare of black and white images and color images in the case of sonograms and ultrasound and mammograms, you see that happening using AI? You see that being, I mean it's already happening, do you see it moving forward in that kind of way? I mean, talk more about that, about you know, AI and black and white images being used and they can be transfixed, they can be made to color images so you can see things better, doctors can perform better operations. >> So I'm sorry, but could you summarize down? What's the question? Summarize it just, >> I had a lot of students, they're interested in the cross pollenization between AI and say the medical community as far as things like ultrasound and sonograms and mammograms and how you can literally take a black and white image and it can, using algorithms and stuff be made to color images that can help doctors better do the work that they've already been doing, just do it better. You touched on it like 30 seconds. >> So how AI can be used to actually add information in a way that's not necessarily invasive but is ultimately improves how someone might respond to it or use it, yes? Related? I've also got something say about medical images in a second, any of you guys want to, go ahead Jennifer. >> Yeah, so for one thing, you know and it kind of goes back to what we were talking about before. When we look at for instance scans, like at some point I was looking at CT scans, right, for lung cancer nodules. In order for me, who I don't have a medical background, to identify where the nodule is, of course, a doctor actually had to go in and specify which slice of the scan had the nodule and where exactly it is, so it's on both the slice level as well as, within that 2D image, where it's located and the size of it. So the beauty of things like AI is that ultimately right now a radiologist has to look at every slice and actually identify this manually, right? The goal of course would be that one day we wouldn't have to have someone look at every slice to like 300 usually slices and be able to identify it much more automated. And I think the reality is we're not going to get something where it's going to be 100%. And with anything we do in the real world it's always like a 95% chance of it being accurate. So I think it's finding that in between of where, what's the threshold that we want to use to be able to say that this is, definitively say a lung cancer nodule or not. I think the other thing to think about is in terms of how their using other information, what they might use is a for instance, to say like you know, based on other characteristics of the person's health, they might use that as sort of a grading right? So you know, how dark or how light something is, identify maybe in that region, the prevalence of that specific variable. 
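As a toy illustration of the per-slice scoring and thresholding Jennifer describes, the sketch below takes hypothetical nodule probabilities for a handful of CT slices, applies an operating threshold, and flags the slices a radiologist would review. The scores and the 0.5 threshold are made up for illustration; choosing a real operating point is exactly the sensitivity-versus-false-positive judgment discussed above.

import numpy as np

# Hypothetical per-slice nodule probabilities from a trained model, one value per CT slice.
# A real scan has on the order of 300 slices; 12 are shown here for illustration.
slice_scores = np.array([0.02, 0.05, 0.11, 0.08, 0.62, 0.91, 0.88, 0.47, 0.09, 0.03, 0.01, 0.02])

# The operating threshold trades sensitivity against false positives; setting it is a
# clinical and statistical judgment call, not something the model decides on its own.
THRESHOLD = 0.5

flagged = np.where(slice_scores >= THRESHOLD)[0]
if flagged.size:
    print(f"Slices flagged for radiologist review: {flagged.tolist()}")
    print(f"Peak score {slice_scores.max():.2f} at slice {int(slice_scores.argmax())}")
else:
    print("No slices exceed the review threshold.")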
So that's usually how they integrate that information into something that's already existing in the computer vision sense. I think that's, the difficulty with this of course, is being able to identify which variables were introduced into data that does exist. >> So I'll make two quick observations on this then I'll go to the next question. One is radiologists have historically been some of the highest paid physicians within the medical community partly because they don't have to be particularly clinical. They don't have to spend a lot of time with patients. They tend to spend time with doctors which means they can do a lot of work in a little bit of time, and charge a fair amount of money. As we start to introduce some of these technologies that allow us to from a machine standpoint actually make diagnoses based on those images, I find it fascinating that you now see television ads promoting the role that the radiologist plays in clinical medicine. It's kind of an interesting response. >> It's also disruptive as I'm seeing more and more studies showing that deep learning models processing images, ultrasounds and so forth are getting as accurate as many of the best radiologists. >> That's the point! >> Detecting cancer >> Now radiologists are saying oh look, we do this great thing in terms of interacting with the patients, never have because they're being dis-intermediated. The second thing that I'll note is one of my favorite examples of that if I got it right, is looking at the images, the deep space images that come out of Hubble. Where they're taking data from thousands, maybe even millions of images and combining it together in interesting ways you can actually see depth. You can actually move through to a very very small scale a system that's 150, well maybe that, can't be that much, maybe six billion light years away. Fascinating stuff. All right so let me go to the last question here, and then I'm going to close it down, then we can have something to drink. What are the hottest, oh I'm sorry, question? >> Yes, hi, my name's George, I'm with Blue Talon. You asked earlier there the question what's the hottest thing in the Edge and AI, I would say that it's security. It seems to me that before you can empower agency you need to be able to authorize what they can act on, how they can act on, who they can act on. So it seems if you're going to move from very distributed data at the Edge and analytics at the Edge, there has to be security similarly done at the Edge. And I saw (speaking faintly) slides that called out security as a key prerequisite and maybe Judith can comment, but I'm curious how security's going to evolve to meet this analytics at the Edge. >> Well, let me do that and I'll ask Jen to comment. The notion of agency is crucially important, slightly different from security, just so we're clear. And the basic idea here is historically folks have thought about moving data or they thought about moving application function, now we are thinking about moving authority. So as you said. That's not necessarily, that's not really a security question, but this has been a problem that's been in, of concern in a number of different domains. How do we move authority with the resources? And that's really what informs the whole agency process. But with that said, Jim. >> Yeah actually I'll, yeah, thank you for bringing up security so identity is the foundation of security. Strong identity, multifactor, face recognition, biometrics and so forth. 
Clearly AI, machine learning, deep learning are powering a new era of biometrics and you know it's behavioral metrics and so forth that's organic to people's use of devices and so forth. You know getting to the point that Peter was raising is important, agency! Systems of agency. Your agent, you have to, you as a human being should be vouching in a secure, tamper proof way, your identity should be vouching for the identity of some agent, physical or virtual that does stuff on your behalf. How can that, how should that be managed within this increasingly distributed IoT fabric? Well a lot of that's been worked. It all ran through webs of trust, public key infrastructure, formats and you know SAML for single sign and so forth. It's all about assertion, strong assertions and vouching. I mean there's the whole workflows of things. Back in the ancient days when I was actually a PKI analyst three analyst firms ago, I got deep into all the guts of all those federation agreements, something like that has to be IoT scalable to enable systems agency to be truly fluid. So we can vouch for our agents wherever they happen to be. We're going to keep on having as human beings agents all over creation, we're not even going to be aware of everywhere that our agents are, but our identity-- >> It's not just-- >> Our identity has to follow. >> But it's not just identity, it's also authorization and context. >> Permissioning, of course. >> So I may be the right person to do something yesterday, but I'm not authorized to do it in another context in another application. >> Role based permissioning, yeah. Or persona based. >> That's right. >> I agree. >> And obviously it's going to be interesting to see the role that block chain or its follow on to the technology is going to play here. Okay so let me throw one more questions out. What are the hottest applications of AI at the Edge? We've talked about a number of them, does anybody want to add something that hasn't been talked about? Or do you want to get a beer? (people laughing) Stephanie, you raised your hand first. >> I was going to go, I bring something mundane to the table actually because I think one of the most exciting innovations with IoT and AI are actually simple things like City of San Diego is rolling out 3200 automated street lights that will actually help you find a parking space, reduce the amount of emissions into the atmosphere, so has some environmental change, positive environmental change impact. I mean, it's street lights, it's not like a, it's not medical industry, it doesn't look like a life changing innovation, and yet if we automate streetlights and we manage our energy better, and maybe they can flicker on and off if there's a parking space there for you, that's a significant impact on everyone's life. >> And dramatically suppress the impact of backseat driving! >> (laughs) Exactly. >> Joe what were you saying? >> I was just going to say you know there's already the technology out there where you can put a camera on a drone with machine learning within an artificial intelligence within it, and it can look at buildings and determine whether there's rusty pipes and cracks in cement and leaky roofs and all of those things. And that's all based on artificial intelligence. And I think if you can do that, to be able to look at an x-ray and determine if there's a tumor there is not out of the realm of possibility, right? >> Neil? >> I agree with both of them, that's what I meant about external kind of applications. 
Instead of figuring out what to sell our customers. Which is most what we hear. I just, I think all of those things are imminently doable. And boy street lights that help you find a parking place, that's brilliant, right? >> Simple! >> It improves your life more than, I dunno. Something I use on the internet recently, but I think it's great! That's, I'd like to see a thousand things like that. >> Peter: Jim? >> Yeah, building on what Stephanie and Neil were saying, it's ambient intelligence built into everything to enable fine grain microclimate awareness of all of us as human beings moving through the world. And enable reading of every microclimate in buildings. In other words, you know you have sensors on your body that are always detecting the heat, the humidity, the level of pollution or whatever in every environment that you're in or that you might be likely to move into fairly soon and either A can help give you guidance in real time about where to avoid, or give that environment guidance about how to adjust itself to your, like the lighting or whatever it might be to your specific requirements. And you know when you have a room like this, full of other human beings, there has to be some negotiated settlement. Some will find it too hot, some will find it too cold or whatever but I think that is fundamental in terms of reshaping the sheer quality of experience of most of our lived habitats on the planet potentially. That's really the Edge analytics application that depends on everybody having, being fully equipped with a personal area network of sensors that's communicating into the cloud. >> Jennifer? >> So I think, what's really interesting about it is being able to utilize the technology we do have, it's a lot cheaper now to have a lot of these ways of measuring that we didn't have before. And whether or not engineers can then leverage what we have as ways to measure things and then of course then you need people like data scientists to build the right model. So you can collect all this data, if you don't build the right model that identifies these patterns then all that data's just collected and it's just made a repository. So without having the models that supports patterns that are actually in the data, you're not going to find a better way of being able to find insights in the data itself. So I think what will be really interesting is to see how existing technology is leveraged, to collect data and then how that's actually modeled as well as to be able to see how technology's going to now develop from where it is now, to being able to either collect things more sensitively or in the case of say for instance if you're dealing with like how people move, whether we can build things that we can then use to measure how we move, right? Like how we move every day and then being able to model that in a way that is actually going to give us better insights in things like healthcare and just maybe even just our behaviors. >> Peter: Judith? >> So, I think we also have to look at it from a peer to peer perspective. So I may be able to get some data from one thing at the Edge, but then all those Edge devices, sensors or whatever, they all have to interact with each other because we don't live, we may, in our business lives, act in silos, but in the real world when you look at things like sensors and devices it's how they react with each other on a peer to peer basis. >> All right, before I invite John up, I want to say, I'll say what my thing is, and it's not the hottest. 
It's the one I hate the most. I hate AI generated music. (people laughing) Hate it. All right, I want to thank all the panelists, every single person, some great commentary, great observations. I want to thank you very much. I want to thank everybody that joined. John in a second you'll kind of announce who's the big winner. But the one thing I want to do is, is I was listening, I learned a lot from everybody, but I want to call out the one comment that I think we all need to remember, and I'm going to give you the award Stephanie. And that is increasing we have to remember that the best AI is probably AI that we don't even know is working on our behalf. The same flip side of that is all of us have to be very cognizant of the idea that AI is acting on our behalf and we may not know it. So, John why don't you come on up. Who won the, whatever it's called, the raffle? >> You won. >> Thank you! >> How 'about a round of applause for the great panel. (audience applauding) Okay we have a put the business cards in the basket, we're going to have that brought up. We're going to have two raffle gifts, some nice Bose headsets and speaker, Bluetooth speaker. Got to wait for that. I just want to say thank you for coming and for the folks watching, this is our fifth year doing our own event called Big Data NYC which is really an extension of the landscape beyond the Big Data world that's Cloud and AI and IoT and other great things happen and great experts and influencers and analysts here. Thanks for sharing your opinion. Really appreciate you taking the time to come out and share your data and your knowledge, appreciate it. Thank you. Where's the? >> Sam's right in front of you. >> There's the thing, okay. Got to be present to win. We saw some people sneaking out the back door to go to a dinner. >> First prize first. >> Okay first prize is the Bose headset. >> Bluetooth and noise canceling. >> I won't look, Sam you got to hold it down, I can see the cards. >> All right. >> Stephanie you won! (Stephanie laughing) Okay, Sawny Cox, Sawny Allie Cox? (audience applauding) Yay look at that! He's here! The bar's open so help yourself, but we got one more. >> Congratulations. Picture right here. >> Hold that I saw you. Wake up a little bit. Okay, all right. Next one is, my kids love this. This is great, great for the beach, great for everything portable speaker, great gift. >> What is it? >> Portable speaker. >> It is a portable speaker, it's pretty awesome. >> Oh you grabbed mine. >> Oh that's one of our guys. >> (lauging) But who was it? >> Can't be related! Ava, Ava, Ava. Okay Gene Penesko (audience applauding) Hey! He came in! All right look at that, the timing's great. >> Another one? (people laughing) >> Hey thanks everybody, enjoy the night, thank Peter Burris, head of research for SiliconANGLE, Wikibon and he great guests and influencers and friends. And you guys for coming in the community. Thanks for watching and thanks for coming. Enjoy the party and some drinks and that's out, that's it for the influencer panel and analyst discussion. Thank you. (logo music)
Michael Greene, Intel - #SparkSummit - #theCUBE
>> Announcer: Live from San Francisco, it's the Cube covering Spark Summit 2017. Brought to you by Databricks. >> Welcome back to the Cube. Continuing our coverage here at Spark Summit 2017. What a great lineup of guests. I can't wait to introduce this gentleman. We have Intel's VP of the Software and Services Group, Mr. Michael Greene. Michael, welcome. >> Thank you for having me. >> All right, we also have George with us over here, and George and I will both be peppering you with questions. Are you ready for that? >> I am. I've got the salt to go with the pepper. (laughs) >> Well, you just got off the stage. You did the keynote this morning. What do you think was the most important message you delivered in your keynote? >> Well, it was interesting. One of the things that we're looking at with BigDL, so the BigDL framework, was we're hearing a lot about the challenges of making sure that these AI-type workloads scale easily. And one of the things when we open-sourced BigDL, we really were designing it to leverage Spark's ability to scale massively from the beginning. So I thought that connected with several of the keynotes ahead of me: if this is your challenge, here is one of many solutions, but a very good one, that will let you take advantage of the scale that people have in their infrastructure - lots of Xeons out there - and also make sure you fully utilize it running the workloads of the future, AI.
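The scale-out point Greene makes about BigDL comes down to a familiar Spark pattern: broadcast a trained model to every executor once, then let the cluster score data partitions in parallel. The sketch below is not BigDL's actual API (which ships its own layers, criteria, and optimizer bindings); it's a minimal PySpark illustration of that broadcast-and-score pattern, with a hand-rolled dot product standing in for a real network.

```python
# A minimal sketch of broadcast-and-score on Spark. The "model" is a toy
# weight vector, not BigDL; only standard PySpark calls are used.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scoring-sketch").getOrCreate()
sc = spark.sparkContext

weights = [0.4, -1.2, 0.7]               # stand-in for a trained model's parameters
bcast_weights = sc.broadcast(weights)    # shipped once to every executor

def score_partition(rows):
    w = bcast_weights.value
    for features in rows:
        # toy "forward pass": a dot product standing in for a real network
        yield sum(x * wi for x, wi in zip(features, w))

data = sc.parallelize([[1.0, 0.5, 2.0], [0.1, 0.2, 0.3]], numSlices=2)
print(data.mapPartitions(score_partition).collect())

spark.stop()
```

The shape is the same whether the per-partition work is a toy dot product or a full forward pass: the driver coordinates, and the executors do the math on the Xeons already sitting in the cluster.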
>> Okay, so Intel is not just a hardware company. You do software, right? (laughs) >> Well, you know, Intel's a solutions company, right? And hardware's awesome, but hardware without software is a brick. Maybe a warm one, but it doesn't do much- >> Not a data brick. >> That's right, not a data brick, just a brick. >> And not melted down, either. >> That's right, that's right. So sand without software doesn't go very far. And I see it as software is used to ignite the hardware so that you actually get useful productivity out of it. So from a software solutions standpoint, customers have problems to solve. It's rare that they come in and say, "Nope, I just need a nail," right? They're usually like, "I need a home." Well, you can't just provide the nail, you have to provide all the pieces, and one of the things that's exciting for me being part of Intel is that we provide silicon, of course, right? Xeon processors, accelerators, and now software, tools, and frameworks, to make sure that a customer can actually really get the value of the entire solution. >> Host: Okay, go ahead, George. >> So Michael, help those of us who've been watching from afar but aren't up-to-date on the day-to-day tactics and strategy of what Intel's doing with (mumbles): where does BigDL fit? And then the acquisition of the floating point (mumbles) technology, so that there's special purpose acceleration on the chip - how do those two work together, along with the rest of the ecosystem? >> Sure, great question. So if you think of Intel, really, we're always looking at how we can leverage Moore's Law to get more and more integrated into the solution. And if you quickly step through a brief history, at one point we had a 386, which was a great integer processor, which was partnered with a 387 for floating point acceleration. The 486 combined those, because we were able to leverage Moore's Law to bring the two together. We got a lot of reuse of the instruction set along with the acceleration. As we bring in - Altera was recently integrated into Intel - they come with a suite of incredible FPGAs and accelerators, as well as another company, Nervana, that also brings accelerators, and we're looking at those special-case opportunities to accelerate the user experience. So we're going to continue to follow that trend and make sure that you have the general purpose capabilities as new workloads come in, and we really see a lot of growth in AI. As I think I said in the keynote, about 12x growth by 2020. We need to make sure that we have the silicon as well as the software, and that's where BigDL pulls those two together to make sure that we're getting the full benefit of the solution. >> So a couple years ago, we were told that Intel actually thought there were going to be more Hadoop servers - and Hadoop is an umbrella term for the ecosystem - than database servers in three to five years' time. When you look at deep learning, because we know it's so much more compute-intensive than traditional statistical machine learning, if you look out three to five years, how much of the compute cycles, share of workloads, do you see deep learning comprising? >> I think that maybe in the last year, deep learning, or AI, as a workload was about seven percent. But if you're growing by 12x, it's definitely growing quickly. So what we're expecting is that AI will become inherent in pretty much every application. An example of this is, at one point, facial detection was the new thing. Now you can't buy a camera that doesn't do that. So if you pull up your camera and you see the little square show up, it's just commonplace. We're expecting that AI will just become an integral part of solutions, not a solution in and of itself. It's there to make software solutions smarter, it's there to make them go further. It's not there to be independent. It's like, "Wow, we've identified a cat." That's cool, but if we're identifying problems, or making sure that the autonomous delivery systems don't kill a cat, there's a little bit more that needs to go on, so it's going to be part of the solution. >> What about the trade-off between processing at the edge and learning in the cloud? I mean, you can learn on the edge, you can learn in the cloud, you can do the analysis at either end of the runtime. How do you guys see that being split up in the future? >> Absolutely. I think that for deep learning training, there are always opportunities to go through vast amounts of data to figure out how to identify what's interesting, identify new insights. Once you have those models trained, then you want to use them everywhere, and where that makes sense, we're switching from training to inference. Inference at the edge allows you to be more real-time. In some cases, if you imagine a smart camera, even from a smart camera point of view, do I send all the data stream to the data center? Well, maybe not. Let's assume that it's being used for highway patrol. If you identify the car speeding, then send the information - except leave me out. (laughs) Kidding on that. But it's that kind of piece where you allow both sides to be smart. More information for the continual training in the cloud, but also more ability to add compute to the edge so that we can do some really cool activities right at the edge, in real time, without having to send all the information.
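The highway-patrol camera example is the canonical edge-inference pattern: score every frame locally, and only ship the events that matter upstream. A minimal sketch follows, with detect_speed() and send_to_cloud() as hypothetical placeholders rather than any real camera or telemetry API.

```python
# A minimal sketch of edge-side filtering: infer locally, upload only the
# events that cross a threshold. All names here are illustrative placeholders.
SPEED_LIMIT_KPH = 100.0

def detect_speed(frame):
    # placeholder for on-device inference (e.g., tracking a plate across frames)
    return frame.get("estimated_kph", 0.0)

def send_to_cloud(event):
    # placeholder for the uplink; a real device might batch, compress, and retry
    print("uploading:", event)

def process(frames):
    for frame in frames:
        kph = detect_speed(frame)
        if kph > SPEED_LIMIT_KPH:   # only interesting events ever leave the edge
            send_to_cloud({"ts": frame["ts"], "kph": kph})

process([{"ts": 1, "estimated_kph": 72.0}, {"ts": 2, "estimated_kph": 131.5}])
```

The design choice is exactly the trade-off Greene describes: the raw stream stays on the device, and only the distilled events feed the continual training loop in the cloud.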
>> If you had to describe to people working on architectures for the new distributed computing in IoT, what would an edge device look like in its hardware footprint, in terms of compute, memory, and connectivity? >> So in terms of connectivity, we're expecting an explosion of 5G. A lot of high bandwidth, multiple things being connected with some type of communication, 5G capability. It won't just be about, let's just say, cars feeding back where they are from their GPS; it's going to be cars talking to other cars. Maybe one needs to move over a lane. Can they adjust? We're talking an autonomous world. There's going to be so much interconnection through 5G, so I expect to see 5G show up in most edge devices. And to your point, I think it's very important to add that we expect edge devices to all have some kind of compute capability. Not just sensors, but the ability to sense and make some decisions based on what they're sensing. We're going to continue to see more and more compute go to the edge devices. So again, as we look at leveraging the power of Moore's Law, we're going to be able to move that compute outward. Today the cloud is just incredible with its collective compute power, but that will slowly move out. We've seen that from mainframes to workstations to PCs, to phones, and to edge devices. I think that trend will continue, and we'll continue to see bigger data centers and other use cases that require deeper analysis. So from a developer's point of view, if you're working on an edge device, make sure it has great connectivity and compute. >> So one last follow-up from me. Google is making a special effort to build their own framework, open source TensorFlow, and then marry it to specialized hardware, tensor processing units. So, specialization versus generalization. For someone who's running TPUs in the cloud, do you have a sense for whether, if they're training TensorFlow models or TensorFlow-based models, there would be an advantage for that narrow set running on tensor processing units? Or would that be supported just as well on Intel hardware? >> You know, specialization is anything that's purpose-built. As you said, it's just not general purpose, but as I mentioned, over time, the specialized capabilities slide into general purpose opportunities. Recently, we added AES-NI, which accelerates an encryption algorithm, into our processors, very specialized for encryption/decryption. But because it's so generally used now, it's just part of our processor offering, it's just part of our instruction set. I expect to continue to see that trend, so many things may start off specialized, which is great, it's a great way to innovate, and then, over time, if it becomes general purpose, or if it's so specialized that everyone's using it, it slides into the general purpose opportunity. I think that will be a continuation. We've seen that since the dawn of the computer: specialized memory, specialized compute, specialized floating point capabilities are now just generally available. And so when we deploy things like BigDL, a lot of the benefit is that we know the Xeon processor has so much capability because it has pulled in, over time, the best of the specialized use cases that are now generally used.
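Greene's AES-NI point is easy to see from software: what started as a specialized capability now shows up as an ordinary CPU feature flag. A minimal, Linux-only sketch that checks for it by reading /proc/cpuinfo; on other operating systems you would query CPUID through a different interface.

```python
# A minimal sketch, assuming a Linux system: the kernel lists CPU feature
# flags in /proc/cpuinfo, and "aes" indicates AES-NI support.
def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split(":", 1)[1].split()
    return False

print("AES-NI available:", has_aes_ni())
```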
>> Great deep-dive questions, George. We have a couple of minutes left, so I know you brought a lot to this conference. They put you up on stage. So what were you hoping to gain from the conference? Maybe you came here to learn, or have you had any interesting conversations so far? >> You know, what I'm always excited about at these conferences is that the open-source community is one that is so incredibly adaptive and innovative, so we're always out there looking to see where the world is going. By doing that, we're learning - because again, where the software goes, we want to make sure that the hardware that supports it is there to meet those needs. So today, we're learning about new frameworks coming out, the next Spark on the roadmap, what they're looking at doing. I expect that we'll hear a little more about scripting languages as well. All of that is just fantastic, because I've come to expect a lot of innovation, but I'm still impressed by the amount of innovation. So it's good to be in the right place, and as we approach things from an Intel point of view, we know we approach it from a portfolio solution set. It's not just silicon, it's not just an accelerator; it's from the hardware through the software solution. So we know that we can really help to accelerate and usher in the next compute paradigm. So this has been fun. >> That would be a great ending, but I've got to ask you this. When you're sitting in this chair next year at Spark 2018, what do you hope to be talking about? >> Well, one of the things that we're looking at and talking about is these massive amounts of data. I would love to be here next year talking more about the new memory technologies that are coming out that allow for tremendously more storage at incredible speeds, better SSDs, and how they will impact the performance of the overall solution. And of course, we're going to continue to accelerate our processing cores and accelerators for unique capabilities. I want to come back in and say, "Wow, what did we 10x this year?" That's always fun. It's a great challenge to the engineering team, who just heard that and said, "Ugh, he's starting off with 10x again?" (laughs) >> Great, Michael. That's a great wrap-up, too. We appreciate you coming on and sharing with the Cube audience the exciting things happening at Intel with Spark. >> Well, thank you for the time. I really appreciate it. >> All right, and thank you all for joining us for this segment. We'll be back with more guests in just a few. You're watching the Cube. (electronic music)