Google's PoV on Confidential Computing


 

>> Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start, and then Patricia you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start. I own a number of interesting areas at Google, and it's usually infrastructure security that I own — we're talking about encryption, and confidential computing is part of that portfolio. An additional area that I contribute to, together with my team, for Google and our customers is secure software supply chain, because you need to trust your software. If it's operating in your confidential environment, you need an end-to-end story — being able to believe that your software and your environment are doing what you expect. That's my role. >> Got it, okay. Patricia? >> Well, I am a technical director in the Office of the CTO, OCTO for short, in Google Cloud. We are a global team; we include former CTOs like myself and senior technologists from large corporations, institutions, and a lot of successful startups as well. And we have two main goals. First, we work side by side with some of our largest, most strategic customers and help them solve complex engineering and technical problems. And second, we advise Google and Google Cloud engineering and product management on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent, thank you for that, both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool — one of the tools in our toolbox — and it's a way to help our customers complete this very interesting end-to-end lifecycle of their data. When customers bring data to the cloud and want to protect it, they protect it as they ingest it into the cloud, and they protect it at rest when they store it in the cloud. But what was missing for many, many years was the ability for us to continue protecting our customers' data and workloads while they are running them. Data is not brought to the cloud to sit in a huge graveyard; we need to ensure that this data is actually indexed, that insights are drawn from it. You have to process this data, and that's where confidential computing helps. Now we have end-to-end protection of our customers' data when they bring their workloads and data to the cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain — do you think it's transformative for customers, and if so, why? >> Yeah, I would maybe like to offer one thought, one intuition behind why confidential computing matters. At the end of the day, it reduces more and more the customer's trust boundary and the attack surface. It's about shrinking that periphery — the boundary within which the customer needs to worry about trust and safety.
And in a way it's a natural progression of using encryption to secure and protect data: in the same way that we are encrypting data in transit and at rest, now we are also encrypting data while it's in use. And among the other benefits, I would say one of the most transformative is that organizations will be able to collaborate with each other and still retain the confidentiality of the data. And that is across industries. It's very beneficial for highly regulated industries, but it applies to all industries. If you look at finance, for example, where banks are trying to detect fraud — specifically double financing, where a customer is trying to get financing on an asset, let's say a boat or a house, and then goes to another bank and gets financing on that same asset — now banks will be able to collaborate and detect that fraud while preserving the confidentiality and privacy of the data. >> Interesting, and I want to understand that a little bit more, but I'm going to push you a little bit on this, Nelly, if I can, because there's a narrative out there — I talked about this upfront — that says confidential computing is a marketing ploy by cloud providers that are just trying to placate people who are scared of the cloud. I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is that confidential computing is just memory encryption, it doesn't address many other problems, and it is overhyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine — it's a crazy statement. But most importantly, that narrative is mixing multiple concepts. Exactly as Patricia said, we need to look at the end-to-end story, not just the mechanism of how confidential computing executes and protects customers' data, and why it's so critically important. What confidential computing is able to do, in addition to isolating our tenants in a multi-tenant cloud environment, is offer additional, stronger isolation — we call it cryptographic isolation. It's why customers will have more trust toward the other tenants running on the same host, but also toward us, because they don't need to worry about threats and malicious attempts to penetrate the environment. So what confidential computing helps us offer our customers is stronger isolation between tenants in this multi-tenant environment, but also — incredibly important — stronger isolation of our customers, the tenants, from us. We also write code; we as software providers will also make mistakes or have some zero days, sometimes introduced by us, sometimes introduced by our adversaries. What I'm trying to say is that by creating this cryptographic layer of isolation between us and our tenants, and among those tenants, we're really providing meaningful security to our customers and eliminating some of the worries they have about running in multi-tenant spaces, or even about collaborating together on very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you, appreciate that. And you know, I think malicious code is often a threat model missed in these narratives. Operator access — maybe I trust my cloud provider, but if I can fence off your access, even better; I'll sleep better at night.
Separating the code from the data — everybody's doing it: Arm, Intel, AMD, NVIDIA, others. I wonder, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs; maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google, and the industry's way of dealing with confidential computing, is to ensure that three main properties are actually preserved. Customers don't need to change their code; they can operate in those VMs exactly as they would with normal, non-confidential VMs. We give them this opportunity of lift and shift, of not changing their apps, while performing with very, very low latency and scaling as any cloud can — something that Google actually pioneered in confidential computing. I think we need to open up and explain how this magic was actually done, because, as I said, the whole entire system had to change to be able to provide it. I would start with the concept of a root of trust, where we ensure that this machine — the whole entire host — has an integrity guarantee, meaning nobody is changing my code at the lowest level of the system. We introduced this in 2017 with a chip called Titan. It's a specific ASIC, a dedicated chip on every single motherboard that we have, which ensures that your low-level firmware, your system code, your kernel — the most privileged software — is properly configured and not changed, not tampered with. We do it for everybody, confidential computing included. But for confidential computing, what we had to change is that we bring in AMD — and in the future other silicon vendors — and we have to trust their firmware and their way of dealing with our confidential environments. That's why we have an obligation to validate the integrity not only of our own software and firmware, but also of the firmware and software of our silicon vendors. So when we boot this machine, as you can see, we validate that the integrity of all of this system is in place, meaning nobody touched it, nobody changed it, nobody modified it. Then we have the concept of the secure processor. It's a special ASIC-based security processor that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker in our Spark capability — we offer all of that — and those keys are not available to us. They're the best keys ever in the encryption space, because when we talk about encryption, the first question I receive all the time is: where's the key, and who has access to it? Because if you have access to the key, then it doesn't matter how well you encrypt the data. But in the case of confidential computing — and this is why it's quite a revolutionary technology — we as cloud providers don't have access to the keys. They sit in the hardware and are fed to the memory controller. It means that when the hypervisor, which also knows about these wonderful things, says, "I need to get access to the memory of this particular VM," it cannot decrypt the data; it doesn't have access to the key. Those keys are random, ephemeral, per VM, and — most importantly — not exportable from the hardware. And it means you now have this very interesting guarantee: nobody, not even the cloud provider, will be able to get access to your memory.
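To make Nelly's point concrete — that the workload itself doesn't change and confidentiality is simply a property you request of the VM — here is a minimal sketch of creating a Confidential VM programmatically. It assumes the google-cloud-compute Python client; the confidential_instance_config field mirrors the public Compute Engine API, but the project, zone, machine type, and image below are placeholders, and the exact client surface should be checked against current documentation.

```python
# Minimal sketch: requesting a Confidential VM on Google Cloud.
# Assumes the google-cloud-compute client library; field names follow the
# public Compute Engine API (confidentialInstanceConfig), but verify against
# current docs. Project, zone, machine type, and image are placeholders.
from google.cloud import compute_v1


def create_confidential_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/n2d-standard-4",  # AMD SEV-capable family
        # The only confidential-computing-specific part of the request:
        confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
            enable_confidential_compute=True
        ),
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    # Any Confidential-VM-supported image (placeholder)
                    source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts"
                ),
            )
        ],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # wait for the create operation to finish
```

The application inside the VM is untouched; the only change versus an ordinary instance request is the confidential-compute flag.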
And what we do, again, as you can see, is that our customers don't need to change their applications. Their VMs run exactly as they should run, and from inside the VM you actually see your memory in the clear; it's not encrypted. But God forbid somebody tries to read it from outside of my confidential box — no, you will not be able to do it; all you'll see is ciphertext. That's exactly what this combination of multiple hardware pieces and software pieces has to do. So the OS is also modified, and it's modified in such a way as to provide integrity. It means even the OS that you're running in your VM is not modifiable, and you as the customer can verify that. But the most interesting thing, I guess, is how to ensure the performance of this environment — because you can imagine, Dave, that encryption adds overhead: additional time, additional latency. We were able to mitigate all of that by providing an incredibly interesting capability in the OS itself. So our customers get no changes needed, fantastic performance, and scale as they would expect from a cloud provider like Google. >> Okay, thank you. Excellent, appreciate that explanation. So again, the narrative on this is, well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance. Key management, as they say, is key — and now humans aren't managing the keys; the machines are managing them. So Patricia, my question to you is: compared with the pre-confidential-computing days, what are the new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantees of confidentiality and integrity of the data and of the code. If you look at code and data confidentiality, the customer cares because they want to know whether their systems are protected from outside or unauthorized access — and as we covered with Nelly, they are: confidential computing ensures that the application and data internals remain secret. The code is looking at the data only while the memory is decrypted, with a key that is ephemeral, per VM, and generated on demand. Then you have the second point, code and data integrity, where customers want to know whether their data was corrupted, tampered with, or impacted by outside actors. What confidential computing ensures is that the application internals are not tampered with — the application, the workload as we call it, that is processing the data has also not been tampered with and preserves its integrity. I would also say that this is all verifiable. You have attestation, and this attestation generates a log trail, and the log trail provides proof that integrity was preserved. And I think it also offers a guarantee of what we call sealing — this idea that the secrets have been preserved and not tampered with. Confidentiality and integrity of code and data. >> Got it, okay, thank you. You know, Nelly, you mentioned — I think I heard you say — that for the applications it's transparent: you don't have to change the application, it just comes for free, essentially. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this?
The ecosystem — or, maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> A fantastic question, by the way. And it's a very difficult and definitely complicated world, because to be able to provide these guarantees, a lot of work was done by the community, and Google very much operates in the open. So for our operating systems, we work in the OS repositories with OS vendors to ensure that all the capabilities we need are part of their kernels, part of their releases, and available for customers to understand and even explore, if they want to have fun exploring a lot of code. Together with our silicon vendors, we have also modified the kernel — the host kernel — to support this capability, and that means working with this community to ensure that all of those patches are there. We also worked with every single silicon vendor, as you've seen, and that's where I feel Google contributed quite a bit: we moved our industry, our community, our vendors to understand the value of easy-to-use confidential computing and of removing barriers. And now — I don't know if you noticed — Intel is following with Trust Domain Extensions, a very similar architecture, and no surprise: it's again a lot of work done with our partners to convince them, work with them, and make this capability available. The same with Arm: this year — actually last year — Arm announced their future design for confidential computing, called the Confidential Compute Architecture, and it's also influenced very heavily by similar ideas from Google and the industry overall. There's also a lot of work we're doing in the Confidential Computing Consortium — for example, to ensure interop, as you mentioned, between the different confidential environments of cloud providers. We want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. If a workload is running on a different cloud provider, you need to be able to trust your receiver when you are sharing your sensitive data, workloads, or secrets with them. So we're coming together as a community around attestation — community-based systems that we want to build and influence, working with Arm and every other cloud provider to ensure that they can interoperate. It means it doesn't matter where confidential workloads are hosted; they can exchange data in a secure, verifiable way that is controlled by customers. And to do that, we need to continue what we are doing: working in the open and contributing our ideas, and the ideas of our partners, so that confidential computing becomes what we think it has to become — a utility. It doesn't need to be so special; that's what we want it to become. >> Thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing, you think about sharing across the ecosystem and different regions, and then of course data sovereignty comes up. Typically public policy lags the technology industry, and sometimes that's problematic. I know there's a lot of discussion about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment with the pace of technology.
One of the frequent examples is, when you delete data, can you actually prove the data is deleted with a hundred percent certainty? You've got to prove that, and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty, and I don't want to give the impression that confidential computing addresses all of it. That's why I want to step back and say: digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption, and access to your data; operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations — so if there are any updates to the hardware or software stack, any operations, there is full transparency, full visibility; and then the third pillar is around software sovereignty, where the customer wants to ensure they can run their workloads without dependency on the provider's software. That's sometimes referred to as survivability — that you can actually survive if you are untethered from the cloud, and that you can use open source. Now let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. We typically focus on saying, hey, we need to care about data residency — we care where the data resides, because where the data is at rest or in processing, it typically abides by the regulations of the jurisdiction where it resides. Others say, hey, let's focus on data protection — we want to ensure the confidentiality, integrity, and availability of the data — and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, it's about what happens to the data when I give you access to it. This reminds me of security two decades ago, even a decade ago, where we started the security movement by putting up firewall protections and login access controls — but once you were in, you were able to do everything you wanted with the data; an insider had access to all the infrastructure, the data, and the code. It's similar here: with data sovereignty we care about where the data resides and who is operating on it, but the moment the data is being processed, I need to trust that the processing will abide by user control — by the policies I put in place about how my data is going to be used. And if you look at a lot of the regulation today, and a lot of the initiatives around the International Data Spaces Association, IDSA, and Gaia-X, there is a movement toward saying the two parties — the provider of the data and the receiver of the data — are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, it will be used for the purposes that were intended and specified in the contract. And if you actually bring together — and this is the exciting part — confidential computing with policy enforcement, then the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment.
It can guarantee that the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity guarantees of the confidential computing environment. That's why we believe confidential computing is a necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean, it was a deep dive — brief, but really detailed — so I appreciate that, especially the verification piece of the enforcement. Last question: I met you two because, as part of my year-end predictions post, you sent in some predictions, and I wasn't able to get to them in that post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23, and what does the maturity curve look like this decade, in your opinion? Maybe each of you could give us a brief answer. >> My prediction: in five to seven years, as I said, it'll become a utility. It'll become like TLS. Ten years ago we couldn't believe that websites would all have certificates and we would support encrypted traffic; now we do, and it's become ubiquitous. That's exactly where confidential computing is heading. I don't know if we're there yet — it'll take a few years of maturity — but we'll get there. >> Thank you. And Patricia, what's your prediction? >> I would double down on that and say, hey, in the very near future you will not be able to afford not having it. I believe that as digital sovereignty becomes ever more top of mind with sovereign states, and also for multinational organizations and organizations that want to collaborate with each other, confidential computing will become the norm. It'll become the default mode of operation. I like to compare it this way: today it's inconceivable, if we talk to young technologists, to think that at some point in history — and I happened to be alive then — we had data at rest that was not encrypted and data in transit that was not encrypted. And I think it will be just as inconceivable, at some point in the near future, to have unencrypted data while in use. >> You know, and plus, I think the beauty of this industry is that because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover. I hope you'll come back to share the progress that you're making in this area, and we can double-click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much.
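Patricia's pairing of attestation with policy enforcement can be pictured as a key broker that releases a data key only after the requesting environment proves, via an attestation token, that it is a confidential VM running exactly the workload named in the data-sharing contract. The sketch below is illustrative only: the claim names, policy fields, and verify_attestation_token helper are assumptions made for the example, not any specific Google, IDSA, or Gaia-X API.

```python
# Illustrative sketch of attestation-gated key release: a data owner's key
# broker hands out the decryption key only when the requesting environment
# proves (via an attestation token) that it is a confidential VM running the
# exact workload named in the data-sharing contract. Claim names and the
# verify_attestation_token() helper are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class ContractPolicy:
    allowed_workload_digest: str   # hash of the approved workload image
    require_confidential_vm: bool  # data may only be processed inside a TEE
    allowed_purpose: str           # e.g. "fraud-detection"


def verify_attestation_token(token: str) -> dict:
    """Hypothetical helper: validate the token's signature against the
    attestation service's public keys and return its claims."""
    raise NotImplementedError


def release_data_key(token: str, purpose: str,
                     policy: ContractPolicy, data_key: bytes) -> bytes:
    claims = verify_attestation_token(token)  # rejects invalid signatures/issuers
    if policy.require_confidential_vm and not claims.get("confidential_vm"):
        raise PermissionError("environment is not a confidential VM")
    if claims.get("workload_digest") != policy.allowed_workload_digest:
        raise PermissionError("workload is not the one named in the contract")
    if purpose != policy.allowed_purpose:
        raise PermissionError("requested purpose is outside the contract")
    return data_key  # only now does the owner's key leave the broker
```

The design point is that the data owner's key never leaves the broker unless every contractual condition — verified environment, verified workload, declared purpose — checks out.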

Published Date : Feb 10 2023



Satish Iyer, Dell Technologies | SuperComputing 22


 

>> We're back at SuperComputing 22 in Dallas, winding down the final day here. A big show floor behind me. Lots of excitement out there, wouldn't you say, Dave? >> Oh, it's crazy. I mean, any time you have NASA presentations going on and steampunk iterations of cooling systems, you know it's... >> The greatest. I've been to hundreds of trade shows; I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson is my co-host, I'm Paul Gillin, and with us is Satish Iyer. He is the vice president of emerging services at Dell Technologies. Satish, thanks for joining us on theCUBE. >> Thank you, Paul. >> What are emerging services? >> Emerging services are actually the growth areas for Dell. So it's telecom, it's cloud, it's edge. We especially focus on all the growth vectors for the company. >> And one of the key areas that comes under your jurisdiction is called Apex. Now, I'm sure there are people who don't know what Apex is. Can you just give us a quick definition? >> Absolutely. So Apex is actually Dell's foray into cloud, and I manage the Apex services business. This is our way of bringing the cloud experience to our customers, on-prem and in colo. >> But it's not a cloud. I mean, you don't have a Dell cloud, right? It's infrastructure as... >> A service. It's infrastructure and platform and solutions as a service. Yes, we don't have our own version of a public cloud, but this is a multi-cloud world, so technically customers want to consume where they want to consume, and this is Dell's way of supporting a multi-cloud strategy for our customers. >> You mentioned something just ahead of us going on air — a great way to describe Apex, to contrast Apex with CapEx: there's no cash up front necessary. I thought that was great. Explain that a little more. >> Well, one of the main things about cloud is the consumption model, right? Customers would like to pay for what they consume, they would like to pay on a subscription, and they would like to not prepay CapEx ahead of time. They want that economic option, and I think that's one of the key tenets for anything in cloud. So it's important for us to recognize that, and Apex is basically a way by which customers pay for what they consume. That's absolutely a key tenet of how we want to design Apex. >> And among those services are high performance computing services. Now, I was not familiar with that as an offering in the Apex line. What constitutes a high performance computing Apex service? >> Yeah, this conference is great — like you said, there are so many HPC and high performance computing folks here — but fundamentally, if you look at the high performance computing ecosystem, it is quite complex. When you call it an Apex HPC offer, it brings a lot of the cloud economics and cloud experience to the HPC offer. Fundamentally, it's about the ability for customers to pay for what they consume. It's where Dell takes on a lot of the day-to-day management of the infrastructure ourselves, so that customers don't need to do the grunt work of managing it and can really focus on the actual workload they run on the HPC ecosystem.
So it is a high performance computing offer, but instead of customers buying the infrastructure and running all of that by themselves, we make it super easy for them to consume and manage it, across proven designs which Dell implements across these verticals. >> So what makes it a high performance computing offering, as opposed to a rack of PowerEdge servers? What do you add in to make it... >> HPC? Ah, that's a great question. So this is a platform, right? We are not just selling infrastructure by the drink. We launched two validated designs, one for life sciences and one for manufacturing, so we actually know how these pieces work together; it's a validated, tested solution. And it's a platform, so we integrate the software on top. It's not just the infrastructure: we integrate a cluster manager, we integrate a job scheduler, we integrate a container orchestration layer. A lot of these things customers have to do by themselves if they buy the infrastructure. So basically we are giving a platform, an ecosystem, for our customers to run their workloads on, and making it easy for them to consume. >> Now, is this available on premises for customers? >> Yeah, we make it available both ways. We make it available on-prem for customers who want to take that economic model, and we also make it available in a colo environment if customers want to extend colo as their on-prem environment. So we do both. >> What are the requirements for a customer before you roll that equipment in? How do they have to set the groundwork? >> Well, fundamentally it starts off with what the actual use case is. If you look at the two validated designs we talked about — one for healthcare and life sciences, and the other for manufacturing — they have fundamentally different requirements in terms of what you need from those infrastructure systems. So the customers initially figure out whether they require something with a lot of memory-intensive loads, or something with a lot of compute power. It all depends on what the workloads require, and then we do have T-shirt sizing: small, medium, large, with multiple infrastructure options and CPU core options. Sometimes the customer will also say, you know what, along with the regular CPUs I also want some GPU power on top of that. Those are determinations a customer typically makes as part of the ecosystem, and those are the things they talk to us about — what is my best option for the kinds of workloads I want to run — and then they can make a determination on how to go. >> So this is probably a particularly interesting time to be looking at something like HPC via Apex, with this season of rolling thunder from the various partners that you have. We're all expecting that Intel is going to be rolling out new CPUs.
You have your 16th generation of PowerEdge servers coming out, PCIe Gen 5, and all of the components from partners like NVIDIA and Broadcom, et cetera, plugging into them. What does that look like from your perch, in terms of talking to customers who maybe are doing things traditionally — who are likely not on 15G, not generation-15 servers, but probably more like 14G? You're offering a pretty huge uplift. What do those conversations look like? >> I mean, talking about partners, of course Dell doesn't bring any solutions to market without really working with all of our partners, whether that's at the infrastructure level — like you talked about, Intel, AMD, Broadcom, all the chip vendors — or all the way up to the software layer, where we have cluster managers and Kubernetes orchestrators. What we usually do is bring the best in class, whether it's a software player or a hardware player, and bring it together as a solution. So we give the customers a choice, and customers always want to pick what they know actually works. And one of the main aspects, especially when you bring these things as a service, is that we take a lot of the guesswork away from our customer. One good example in HPC is capacity. These are very intensive, very complex systems. Customers would like to buy a certain amount of capacity, grow, and come back down, so giving them the flexibility to consume more if they want, giving them the buffer, and letting them come back down — all of those things are very important as we design these offers. Customers are given a choice, but they don't need to worry about what happens if they have a spike; there's already buffer capacity built in. Those are awesome things when we talk about delivering as a service. >> When customers are doing their ROI analysis — buying CapEx on-prem versus using Apex — is there a crossover point, typically, at which it's probably a better deal for them to go on-prem? >> Yeah, specifically talking about HPC, we do have a lot of customers who consume high performance compute in the public cloud, and that's not going to go away. But there are certain reasons why they would look at on-prem, or at a colo environment, for example. One of the main reasons has purely to do with cost: these are pretty expensive systems, there is a lot of ingress and egress, a lot of data going back and forth, and in the public cloud it costs money to put data in and to pull it back out. The second one is data residency and security requirements. A lot of this is proprietary information — we talked about life sciences, where there's a lot of research. Manufacturing is often just-in-time decision making: you're on a factory floor, you've got to be able to act now, so there is a latency requirement. So a lot of things play into this beyond just cost — data residency requirements and ingress/egress are big ones — and when you're talking about massive amounts of data you want to push in and pull back, customers would like to keep it close, keep it local, and get a better price point. >> Nevertheless, we were just talking to Ian Coley from AWS, and he was talking about how customers need to move workloads back and forth between the cloud and on-prem; that's something they're addressing with Outposts. You are very much in the on-prem world. Do you have, or will you have, facilities for customers to move workloads back and forth? >> I wouldn't necessarily put it that way — Dell's cloud strategy is multi-cloud, right? Some workloads are always suited for the public cloud; it's easier to consume. Customers also consume on-prem, and customers also consume in colo. And we also have Dell's amazing software IP, like our storage software, and we make some of that available for customers to consume in the public cloud — that's our multi-cloud strategy, and it's why we announced Project Alpine. Basically, customers are saying, I love your Dell IP in this product, in the storage; can you make it available in this public environment, whichever of the hyperscale players it is? If we do all of that, it shows it's not always tied to an infrastructure. Customers want to consume the best of them, and if it needs to be consumed in a hyperscaler, we can make it available. >> Do you support containers? >> Yeah, we do support containers on HPC. We have two container orchestrators we support, and we offer both options to customers. >> What kind of customers are you signing up for the HPC offerings? Are they university research centers, or does it tend to be smaller companies? >> You know, the last three days of this conference have been great — we've probably had many, many customers talking to us, somewhere in the range of 40 or 50. A lot of interest from educational institutions and university research, to your point; a lot of interest from manufacturing and factory-floor automation, where customers want to do dynamic simulations on the factory floor; and quite a bit of interest from life sciences and pharma, because, like I said, we have two designs — one for life sciences, one for manufacturing — both with different dynamics on the infrastructure. We also have a lot of financials — big banks who want to simulate a lot of brokerage and financial data — because we announced some really optimized Dell hardware especially for financial services. So there's quite a bit of interest from financial services as well. >> That's great. We often think of Dell as the organization that eventually democratizes all things in IT. And in that context — this is SuperComputing 22, and HPC is like the little sibling trailing behind the supercomputing trend — we've definitely seen this move out of pure academia into the business world, and Dell is clearly a leader in that space. How has Apex overall been doing since you rolled out that strategy? It's been a couple of years now, hasn't it? >> Yeah, it's been less than two years. >> How are mainstream Dell customers embracing Apex, versus the traditional 18-month-to-three-year CapEx upgrade cycle? >> Look, there is absolutely strong momentum for Apex, and like Paul pointed out earlier, we started by making the infrastructure and the platforms available for customers to consume as a service. We have options where Dell fully manages everything end to end and takes a lot of the pain points away — because, as we talked about, managing a cloud-scale environment for customers is hard — and we also have options where customers say, I actually have a pretty sophisticated IT organization; I want Dell to manage the infrastructure up to this layer, up to the guest operating system, and I'll take care of the rest. So we're seeing customers come to us with various requirements — "I can do it up to here, take the rest of the pain away from me," or "do everything for me." It all depends on the customer.
So we have wide interest, our products and the portfolio in Apex are expanding, and we are also learning: we're getting a lot of feedback from customers about what they would like to see in some of these offers — like the example we just talked about of making some of the software IP available in the public cloud, where they look at Dell as a software player. That's absolutely critical. So we are giving customers a lot of choices, and we are democratizing, like you said, expanding the customer's choices. >> We're almost out of our time, but I do want to be sure we get to Dell Validated Designs, which you've mentioned a couple of times. What's the purpose of these designs? How specific are they? >> Most of these validated designs — again, we look at these industries, and we have a huge installed base of customers utilizing HPC across the Dell ecosystem, a lot of them CapEx customers, so we have an active customer profile. These validated designs take into account a lot of customer feedback and a lot of partner feedback about how they utilize this. When you build solutions that are end to end and integrated, you need to start anchoring on something, and a lot of these workloads have different characteristics. So these validated designs give a very good jumping-off point for customers — that's the way I look at it. A lot of them don't come to the table with a blank sheet of paper; they say, these are the characteristics of what I want, and this is a great point for me to start from. So it gives them that, plus the power of validation: we test, validate, and integrate, so they know it works. All of those things are hyper-critical when you talk to customers. >> And you mentioned healthcare, you mentioned manufacturing — other designs coming? >> We just announced a validated design for financial services as well, I think a couple of days ago at the event. So yes, we are expanding all those DVDs so that we can give our customers a choice. >> We're out of time. Satish Iyer, thank you so much for joining us. >> Thank you. >> At the center of the move to subscription, to everything as a service — everything on a subscription basis — you really are on the leading edge of where your industry is going. Thanks for joining us. >> Thank you, Paul. Thank you, Dave. >> Paul Gillin with Dave Nicholson here from SuperComputing 22 in Dallas, wrapping up the show this afternoon. Stay with us; there'll be more shortly.

Published Date : Nov 17 2022



Anthony Dina, Dell Technologies and Bob Crovella, NVIDIA | SuperComputing 22


 

>>How do y'all, and welcome back to Supercomputing 2022. We're the Cube, and we are live from Dallas, Texas. I'm joined by my co-host, David Nicholson. David, hello. Hello. We are gonna be talking about data and enterprise AI at scale during this segment. And we have the pleasure of being joined by both Dell and Navidia. Anthony and Bob, welcome to the show. How you both doing? Doing good. >>Great. Great show so far. >>Love that. Enthusiasm, especially in the afternoon on day two. I think we all, what, what's in that cup? Is there something exciting in there that maybe we should all be sharing with you? >>Just say it's just still Yeah, water. >>Yeah. Yeah. I love that. So I wanna make sure that, cause we haven't talked about this at all during the show yet, on the cube, I wanna make sure that everyone's on the same page when we're talking about data unstructured versus structured data. I, it's in your title, Anthony, tell me what, what's the difference? >>Well, look, the world has been based in analytics around rows and columns, spreadsheets, data warehouses, and we've made predictions around the forecast of sales maintenance issues. But when we take computers and we give them eyes, ears, and fingers, cameras, microphones, and temperature and vibration sensors, we now translate that into more human experience. But that kind of data, the sensor data, that video camera is unstructured or semi-structured, that's what that >>Means. We live in a world of unstructured data structure is something we add to later after the fact. But the world that we see and the world that we experience is unstructured data. And one of the promises of AI is to be able to take advantage of everything that's going on around us and augment that, improve that, solve problems based on that. And so if we're gonna do that job effectively, we can't just depend on structured data to get the problem done. We have to be able to incorporate everything that we can see here, taste, smell, touch, and use >>That as, >>As part of the problem >>Solving. We want the chaos, bring it. >>Chaos has been a little bit of a theme of our >>Show. It has been, yeah. And chaos is in the eye of the beholder. You, you think about, you think about the reason for structuring data to a degree. We had limited processing horsepower back when everything was being structured as a way to allow us to be able to, to to reason over it and gain insights. So it made sense to put things into rows and tables. How does, I'm curious, diving right into where Nvidia fits into this, into this puzzle, how does NVIDIA accelerate or enhance our ability to glean insight from or reason over unstructured data in particular? >>Yeah, great question. It's really all about, I would say it's all about ai and Invidia is a leader in the AI space. We've been investing and focusing on AI since at least 2012, if not before, accelerated computing that we do it. Invidia is an important part of it, really. We believe that AI is gonna revolutionize nearly every aspect of computing. Really nearly every aspect of problem solving, even nearly every aspect of programming. And one of the reasons is for what we're talking about now is it's a little impact. Being able to incorporate unstructured data into problem solving is really critical to being able to solve the next generation of problems. AI unlocks, tools and methodologies that we can realistically do that with. 
It's not realistic to write procedural code that's gonna look at a picture and solve all the problems that we need to solve if we're talking about a complex problem like autonomous driving. But with AI and its ability to naturally absorb unstructured data and make intelligent reason decisions based on it, it's really a breakthrough. And that's what NVIDIA's been focusing on for at least a decade or more. >>And how does NVIDIA fit into Dell's strategy? >>Well, I mean, look, we've been partners for many, many years delivering beautiful experiences on workstations and laptops. But as we see the transition away from taking something that was designed to make something pretty on screen to being useful in solving problems in life sciences, manufacturing in other places, we work together to provide integrated solutions. So take for example, the dgx a 100 platform, brilliant design, revolutionary bus technologies, but the rocket ship can't go to Mars without the fuel. And so you need a tank that can scale in performance at the same rate as you throw GPUs at it. And so that's where the relationship really comes alive. We enable people to curate the data, organize it, and then feed those algorithms that get the answers that Bob's been talking about. >>So, so as a gamer, I must say you're a little shot at making things pretty on a screen. Come on. That was a low blow. That >>Was a low blow >>Sassy. What I, >>I Now what's in your cup? That's what I wanna know, Dave, >>I apparently have the most boring cup of anyone on you today. I don't know what happened. We're gonna have to talk to the production team. I'm looking at all of you. We're gonna have to make that better. One of the themes that's been on this show, and I love that you all embrace the chaos, we're, we're seeing a lot of trend in the experimentation phase or stage rather. And it's, we're in an academic zone of it with ai, companies are excited to adopt, but most companies haven't really rolled out their strategy. What is necessary for us to move from this kind of science experiment, science fiction in our heads to practical application at scale? Well, >>Let me take this, Bob. So I've noticed there's a pattern of three levels of maturity. The first level is just what you described. It's about having an experience, proof of value, getting stakeholders on board, and then just picking out what technology, what algorithm do I need? What's my data source? That's all fun, but it is chaos over time. People start actually making decisions based on it. This moves us into production. And what's important there is normality, predictability, commonality across, but hidden and embedded in that is a center of excellence. The community of data scientists and business intelligence professionals sharing a common platform in the last stage, we get hungry to replicate those results to other use cases, throwing even more information at it to get better accuracy and precision. But to do this in a budget you can afford. And so how do you figure out all the knobs and dials to turn in order to make, take billions of parameters and process that, that's where casual, what's >>That casual decision matrix there with billions of parameters? >>Yeah. 
Oh, I mean, >>But you're right that >>That's, that's exactly what we're, we're on this continuum, and this is where I think the partnership does really well, is to marry high performant enterprise grade scalability that provides the consistency, the audit trail, all of the things you need to make sure you don't get in trouble, plus all of the horsepower to get to the results. Bob, what would you >>Add there? I think the thing that we've been talking about here is complexity. And there's complexity in the AI problem solving space. There's complexity everywhere you look. And we talked about the idea that NVIDIA can help with some of that complexity from the architecture and the software development side of it. And Dell helps with that in a whole range of ways, not the least of which is the infrastructure and the server design and everything that goes into unlocking the performance of the technology that we have available to us today. So even the center of excellence is an example of how do I take this incredibly complex problem and simplify it down so that the real world can absorb and use this? And that's really what Dell and Vidia are partnering together to do. And that's really what the center of excellence is. It's an idea to help us say, let's take this extremely complex problem and extract some good value out of >>It. So what is Invidia's superpower in this realm? I mean, look, we're we are in, we, we are in the era of Yeah, yeah, yeah. We're, we're in a season of microprocessor manufacturers, one uping, one another with their latest announcements. There's been an ebb and a flow in our industry between doing everything via the CPU versus offloading processes. Invidia comes up and says, Hey, hold on a second, gpu, which again, was focused on graphics processing originally doing something very, very specific. How does that translate today? What's the Nvidia again? What's, what's, what's the superpower? Because people will say, well, hey, I've got a, I've got a cpu, why do I need you? >>I think our superpower is accelerated computing, and that's really a hardware and software thing. I think your question is slanted towards the hardware side, which is, yes, it is very typical and we do make great processors, but the processor, the graphics processor that you talked about from 10 or 20 years ago was designed to solve a very complex task. And it was exquisitely designed to solve that task with the resources that we had available at that time. Time. Now, fast forward 10 or 15 years, we're talking about a new class of problems called ai. And it requires both exquisite, soft, exquisite processor design as well as very complex and exquisite software design sitting on top of it as well. And the systems and infrastructure knowledge, high performance storage and everything that we're talking about in the solution today. So Nvidia superpower is really about that accelerated computing stack at the bottom. You've got hardware above that, you've got systems above that, you have middleware and libraries and above that you have what we call application SDKs that enable the simplification of this really complex problem to this domain or that domain or that domain, while still allowing you to take advantage of that processing horsepower that we put in that exquisitely designed thing called the gpu >>Decreasing complexity and increasing speed to very key themes of the show. Shocking, no one, you all wanna do more faster. 
Speaking of that, and I'm curious because you both serve a lot of different customers, verticals and use cases, is there a specific project that you're allowed to talk about? If you want to give us the scoop, that's totally cool too, we're here for the scoop on theCUBE. Is there a specific project or use case that has you personally excited, Anthony? We'll start with that. >>Look, I've always been a big fan of natural language processing. I don't know why, but deriving intent from word choices is very interesting to me. What complements that is natural language generation. So now we're having AI programs actually discover and describe what's inside of a package. It wouldn't surprise me if, over time, we move from the typical summary of the economics of the day or what happened in football toward more of the creative advertising and marketing arts, where you're no longer needed because the AI is going to spit out the result. I don't think we'll fully get there, but I really love this idea of human language and computational linguistics. >>What a marriage. I agree, I think it's fascinating. What about you, Bob? What's got you >>Pumped? The thing that really excites me is the problem solving, sort of the tip of the spear in problem solving, the stuff you've never seen before, the stuff that, in a geeky way, kind of takes your breath away. And I'm going to pivot off of what Anthony said. Large language models are one of those areas that are just amazing, and they're surprising everyone with what they can do. Here on the show floor I was looking at a demonstration from a large language model startup, and they showed that you could ask a question about some obscure news piece that was reported only in a German newspaper, about a little shipwreck that happened in a harbor. I could type a query into this system and it would immediately know where to find that information, as if it had read the article and summarized it for you, and it could even answer questions you could only answer by looking at the pictures in that article. Just amazing, phenomenal >>Stuff. That's huge for accessibility. >>That's right. And I geek out when I see stuff like that. That's where I feel all this work that Dell and NVIDIA and many others are putting into this space is really starting to show potential in ways we wouldn't have dreamed of even five years ago. >>We see this in media and entertainment too. In broadcasting, you have a sudden event: someone leaves this planet, or discovers something new, or gets a divorce and they're a major quarterback. You want to go back through all of your archives to find that footage, and that's a very laborious project. But if you can use AI to categorize it and provide the metadata tags so it's searchable, then we're off to better productions, more interesting content and a much richer viewer experience >>And a much more dynamic picture of what's really going on, factoring all of that in. I love that. David and I are both nerds and I know we've had our take-your-breath-away moments, so I appreciate that you brought that up. Don't worry, you're in good company in terms of the Geek Squad over >>Here. I think that goes for this entire show. >>Yes, exactly.
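To make the summarize-and-query idea described above a bit more concrete, here is a minimal sketch using the open-source Hugging Face transformers library and its default pipeline models. The article text, the question, and the model choices are illustrative assumptions, not anything from the demo Bob saw.

```python
# Rough sketch of the "read the article, summarize it, answer questions" idea.
# The article text below is invented for illustration; the models are the
# library defaults pulled automatically by each pipeline.
from transformers import pipeline

article = (
    "A small cargo vessel ran aground in a North Sea harbor early Tuesday. "
    "No one was injured, and the harbormaster said the channel reopened "
    "within a few hours after tugs pulled the ship free."
)

# Natural language generation: condense the article into a short summary.
summarizer = pipeline("summarization")
summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])

# Natural language understanding: answer a question as if it had read the article.
qa = pipeline("question-answering")
answer = qa(question="Was anyone hurt when the ship ran aground?", context=article)
print(answer["answer"], round(answer["score"], 3))
```

A production system like the one described on the show floor would add a retrieval layer over an indexed archive, but the summarize-then-answer pattern is the same.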
>>I mean, we were talking about how steampunk some of the liquid cooling stuff is, and this is really the only place on earth, the only show, where you would come and see it at this level and scale, so it's very exciting. How important for the future of innovation in HPC are partnerships like the one that NVIDIA and Dell have? >>You want to start? >>Sure. I'm going to be bold and brash and say they're essential. You do not want to try to roll this on your own. Even if we zoom in on one little piece of the technology, the software stack for modern accelerated deep learning is incredibly complicated. There can easily be 20 or 30 components that all have to be the right version, with the right buttons pushed, built the right way, assembled the right way, and we've got lots of technologies to help with that. But you do not want to be trying to pull that off on your own, and that's just one little piece of the complexity we talked about. As technology providers in this space, we really need to do as much as we do to unlock the potential, and we have to do a lot to make it usable and capable as well. >>I've got a question for Anthony. >>All right. >>In your role, and I'm projecting here, I think your personal superpower is being able to connect the dots between technology and the value that technology holds in a variety of contexts, business or otherwise. Now, it's critical to have people like you to connect those dots. Today, in the era of pervasive AI, how important will it be for AI to explain its answer? In other words, should I trust the information the AI is giving me? If I'm a decision maker, should I just take it at face value? Or am I going to demand of the AI what people demand of you today: no, you need to explain this to me, how did you arrive at that conclusion? How important will that be for people to move forward and trust the results? We can all say, hey, just trust us, it's AI, it's great, it's got NVIDIA acceleration and it's Dell, you can trust us. But come on, there are so many variables in the background. >>It's an interesting one, and explainability is a big function of AI. People want to know how the black box works. Because if you have an AI engine that's looking for potential maladies in an X-ray and it misses one, do you sue the hospital, the doctor, or the software company? That accountability element is huge. I think as we progress and trust it to be part of our everyday decision making, it's really a recommendation engine; it isn't actually making all of the decisions, it's supporting us. Even after decades of advanced technology and proven algorithms, we still can't predict what the market price of any object is going to be tomorrow. And you know why? Because human beings are so unpredictable; how we feel in the moment is radically different. We can extrapolate for a population, but we can't do it for an individual choice. So humans and computers will not be separated, it's a joint partnership. But I want to get back to your point, and I think this is very fundamental to the philosophy of both companies.
Yeah, it's about a community. It's always about people sharing ideas and getting the best out of each other. Anytime you have a center of excellence, an algorithm that works for sales forecasting may actually be really interesting for churn analysis, to make sure employees or students don't leave the institution. So it's that community of interest that I think is unparalleled at other conferences; this is the place where a lot of that happens. >>I totally agree with that, we've felt it on the show, and I think that's a beautiful note to close on. Anthony, Bob, thank you so much for being here. I'm sure everyone feels more educated and perhaps more at peace with the chaos. David, thanks for sitting next to me asking the best questions of any host on theCUBE. And thank you all for being a part of our community. Speaking of community, here on theCUBE we're live from Dallas, Texas, it's Supercomputing all week. My name is Savannah Peterson and I'm grateful you're here.
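The explainability point Anthony raises in this conversation, that AI should support a decision rather than hand down an unexplained answer, can be grounded even with classical tooling. Below is a minimal sketch, assuming scikit-learn and its bundled demo dataset as a stand-in for real decision-support data, that reports which input features most influenced a trained model's predictions.

```python
# Minimal explainability sketch: which features most influence a trained
# classifier's predictions. The bundled demo dataset stands in for real data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much held-out accuracy drops when a feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(
    zip(data.feature_names, result.importances_mean), key=lambda p: p[1], reverse=True
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```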

Published Date : Nov 16 2022


Rajesh Pohani, Dell Technologies | SuperComputing 22


 

>>Good afternoon, friends, and welcome back to Supercomputing. We're live here at theCUBE in Dallas. I'm joined by my co-host, David; my name is Savannah Peterson, and we have a fabulous guest. I feel like this is almost his show, to a degree, given his role at Dell. He is the Vice President of HPC over at Dell. Rajesh Pohani, thank you so much for being on the show with us. How are you doing? >>Thank you, guys. I'm doing okay. Good to be back in person. This is a great show, it's really filled in nicely today, and a lot of great stuff is happening. >>It's great to be around all of our fellow hardware nerds. The Dell portfolio grew by three products, I believe. Can you give us a bit of an intro on >>That? Sure. Yesterday afternoon and yesterday evening we had a series of events announcing our new artificial intelligence portfolio, which will really help scale where I think the world is going with the creation of all this data and what we can do with it. So it was an exciting day for us. Yesterday we had a session over in a ballroom where we did a product announcement, and then in the evening we had an unveiling in our booth here at the Supercomputing conference, which was pretty eventful: cupcakes, champagne, drinks, and most importantly, >>Did you get the invite? >>No. Most importantly, some really cool new servers for our customers. >>Well, tell us about them. What's new? What's in the news? >>Well, as you think about artificial intelligence, what customers need to do, and the way AI is going to change how, frankly, the world works, we have now developed and designed new purpose-built servers for a variety of AI needs. Yesterday we launched our first eight-way NVIDIA H100 and A100 SXM product, a 4U four-way H100 product, and a 2U fully liquid-cooled Intel Data Center GPU Max server as well. So it's a full range of portfolio for a variety of customer needs; depending on their use cases, what they're trying to do, and their infrastructure, we're now able to provide servers and hardware that meet those needs. >>So I want to double-click, you just said something interesting: water cooled. >>Yeah. >>At what point do you need to move in the direction of water cooling? I know you mentioned it's GPU centric, but talk about the balance between density and what you can achieve with the power that's going into the system. >>It all depends on what the customers are trying to accommodate. I think there's a dichotomy now between customers who already have, or are planning, liquid-cooled infrastructure and power distribution to the rack. Take those two together: if you have the power distribution to the rack, you want to take advantage of the density, and to take advantage of the density you need to be able to cool the servers, and that's where liquid cooling comes into play. Then you have other customers that either don't have the power to the rack or aren't ready for liquid cooling, and at that point they can't take advantage of the density.
So there's this dichotomy in products, and that's why we've got our XE9640, which is a 2U dense liquid-cooled server, but we also have our XE8640, which is a 4U air-cooled, or liquid-assisted air-cooled, server. So depending on where you are on your journey, whether it's power infrastructure or liquid cooling infrastructure, we've got the right solution that meets your needs. You don't have to take on the expense of liquid cooling to get the density unless you're ready to do that; otherwise we've got this other option for you. That's really the dichotomy beginning to exist in our customers' infrastructures today. >>I was curious about that. Do you see a category or a vertical that leans more toward liquid cooling because density is a priority, or >>Yeah. You've got your large HPC installations, your large clusters that have not only the power but the liquid cooling density built in; you've got federal government installations, financial tech installations, colos that are built for sustainability, density and space that can also take advantage of it. Then you've got others, more mainstream enterprises, that aren't ready for that. So it depends on the scale of the customer we're talking about, what they're trying to do, and where they're doing it. >>So here at the Supercomputing conference, HPC is sort of the trailing mini version of supercomputing, where maybe someone doesn't need 2 million CPU cores but they need a hundred thousand CPU cores. It's all a matter of scale. Can you identify an HPC sweet spot right now as Dell customers adopt the kinds of things you just announced? How big are these clusters at this >>Point? Well, let me hit something else first. People talk about HPC as something really specific, and what we're seeing now, with the vast amount of data creation, the need for computational analytics and the need for artificial intelligence, is that HPC is morphing into more and more general customer use cases. Where before you used to think about HPC as research and academics and computational dynamics, now there's a significant Venn diagram overlap with mainstream artificial intelligence, and that is beginning to change how we think about HPC. Think about the vast data being created: you've got data-driven HPC, where you're running computational analytics on that data to get insights, outcomes or information. It's not just physics or astronomy calculations anymore. It's expanding in ways that democratize it to customers who wouldn't describe themselves as HPC customers, and when you meet with them it's, well, your compute needs actually look like an HPC customer's, so let's talk to you about these products and these solutions, whether it's software, hardware, or even purpose-built hardware like we talked about. That now becomes the new norm.
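A rough back-of-the-envelope calculation helps illustrate the power-and-density dichotomy described above. All of the wattages, rack budgets, and form factors below are assumed placeholder figures for illustration, not Dell specifications for the XE9640 or XE8640.

```python
# Sketch of the density/cooling trade-off: with a modest power feed to the
# rack, the rack is power-limited and density buys little; with a large feed,
# dense liquid-cooled servers pack far more compute into the same space.
RACK_UNITS = 42
server_profiles = {
    "2U dense, liquid cooled (assumed ~10 kW)": {"u": 2, "kw": 10.0},
    "4U air cooled (assumed ~7 kW)":            {"u": 4, "kw": 7.0},
}

for rack_power_kw in (17, 40, 80):   # assumed power delivered to the rack
    print(f"\nRack budget: {rack_power_kw} kW")
    for name, p in server_profiles.items():
        fit_by_space = RACK_UNITS // p["u"]
        fit_by_power = int(rack_power_kw // p["kw"])
        servers = min(fit_by_space, fit_by_power)
        limit = "power" if fit_by_power < fit_by_space else "space"
        print(f"  {name}: {servers} servers per rack (limited by {limit})")
```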
>>Customer feedback and community engagement is big for you. I know this portfolio of products was developed based on customer feedback, correct? >>Yep. Everything we do at Dell is customer driven. We want to drive customer-driven innovation and customer-driven value to meet our customers' needs. So we spent a while researching these products and these needs, understanding whether this is one product, two products or three products, talking to our partners, driving our own innovation and IP along with where they're going with their roadmaps, to deliver a harmonized solution to customers. It was a good amount of customer engagement; I was on the road quite a bit talking to customers. One of our products we almost named after one of our customers. I told him, we've talked about this, this is what you said you wanted. He was representative of a group of customers, we validated it with other customers, and it's also a way of making sure he buys it. >>That's some salesmanship there. >>That was good. But it's heavily customer driven, and understanding those use cases and where they fit is what drove the various products: capability, size, liquid versus air cooling, things like the number of PCIe lanes, what the networking infrastructure was going to look like. All customer driven, all designed to meet customers where they're going on their AI journey. >>It feels really collaborative. You've got both Intel and NVIDIA GPUs in your new products, and there's a lot of collaboration between academia and the private sector. What has you most excited today about supercomputing? >>What it's going to enable. If you think about what artificial intelligence is going to enable, it's faster medical research, genomics, the next pandemic, hopefully not anytime soon. We'll be able to diagnose and track it so much faster through artificial intelligence, and the data created during this last one is going to be an amazing source of research to address things like that in the future and get to the heart of the problem faster. Think about manufacturing and process improvement: you can now simulate your entire manufacturing process. You don't have to run physical pilots; you can simulate it all and get 90% of the way there, which means either your factory process gets reinvented faster or a new factory gets up and running faster. Think about retail and how products are laid out. You can use media analytics to track how customers move through the store and what they're buying, and lay things out differently. In the future you're not going to have people walking around testing cell phone reception, "Can you hear me now?" You can simulate customer patterns to ensure the 5G infrastructure is set up to maximum advantage. All of that through digital simulation, digital twins, media analytics and natural language processing. Customer experience is going to be better, communication is going to be better.
All of this, using the data, training on it, and then applying it, is probably what excites me the most about supercomputing and really about compute in the future. >>So on the hardware front, digging down below the covers a little more: Dell has been well known for democratizing things in IT, making them available at a variety of levels, never a one-size-fits-all company. These latest announcements, it's fair to say, represent the tip of the spear in terms of high performance. What about, to use my term, RPC, regular performance computing? Where's the overlap? Because we're in this season where AMD and Intel are leapfrogging one another, there are new bus architectures, and the connectivity plugged into these things is getting faster and faster. So from a Dell perspective, where do regular performance computing and HPC begin? Are you seeing people build this stuff on general-purpose clusters as well? >>Well, sure. You can run a good amount of AI on high-core-count CPUs without acceleration, you can do it with PCIe accelerators, and then you can do it with the very specific high-performance accelerators like the Intel Data Center GPU Max or NVIDIA's A100 or H100. So there are these scale-up opportunities. Our mission to democratize compute, not just HPC but general compute, is about making it easier for customers to implement and get the value out of what they're trying to do. We focus on that with reference designs, or validated designs, that take out a good amount of the time customers would otherwise spend doing it on their own. I'll use an HPC example and then come back to your regular performance compute: by us doing the work, determining the configuration, determining the software packages, testing it and tuning it, we can cut six to 12 months, so that by the time it gets to the customer they can take advantage of the expertise of Dell engineers and Dell scale and be ready to go much faster. The challenge with AI, when you talk to customers, is that they all know what it can lead to and what the benefits are; sometimes they just don't know how to start. We're trying to make it easier for customers to start, whether that's using regular, non-optimized, non-specialized compute, or as they move up the value stack into more compute capability. Our goal is to make it easier for customers to get on their journey and to get to what they're trying to do faster. So where do I see regular performance compute? They go hand in hand. A lot of customers, like we talked about, don't actually think of what they're trying to do as high performance computing; they don't think of themselves as one of those specialized HPC institutions. But they're on a glide path to greater and greater compute needs and compute attributes that merge regular performance computing and high performance computing to where it's hard to draw the line, especially when you get to data-driven HPC. Data's everywhere.
And it sounds like a lot of people are very early in this journey. From our conversation with Travis, it's five AI programs or fewer per very large company at this point for 75% of customers, which is pretty wild. You're an educator, a coach, a teacher, you're innovating on the hardware front, you're doing everything at Dell. Last question for you: you've been there 24 years, >>25 this coming March. >>What has a company like that done to retain talent like you for more than two and a half decades? >>You know, I'd like to say I had an atypical journey, but I don't think I have. There has always been opportunity for me. I started off as a quality engineer; a couple of years later I'm living in Singapore running services for Enterprise in APJ. I come back for a couple of years in Austin, then I'm in our Bangalore development center helping set that up, then I come back, then I'm in our Taiwan development center helping with some of the work out there, and then I come back. There has always been the next opportunity before I could even think about whether I was ready for the next opportunity. So for me, why would I leave? Why would I do anything different, given there has always been the next opportunity? The other thing is that jobs are what you make of them, and Dell embraces that. If there's something that needs to be done or there's an opportunity, even in the case of our AI and ML portfolio, we saw an opportunity, we reviewed it, we talked about it, and then we went all in. So that innovation, that opportunity, and most of all the people at Dell: I couldn't ask to work with a better set of folks, from the top on down. >>That's fantastic. So it's culture. >>It is culture, really, at the end of the day. >>That's fantastic. Rajesh, thank you so much for being here with us. >>Thank you, guys. Really appreciate it. >>This was such a pleasure. And thank you for tuning in to theCUBE, live from Dallas here at Supercomputing. My name is Savannah Peterson, and we'll see y'all in just a little bit.

Published Date : Nov 16 2022


Travis Vigil, Dell Technologies | SuperComputing 22


 

>>Howdy y'all, and welcome to Dallas, where we're proud to be live from Supercomputing 2022. My name is Savannah Peterson, joined here by my cohost David on theCUBE, and our first guest today is a very exciting visionary. He's a leader at Dell. Please welcome Travis Vigil. Travis, thank you so much for being here. >>Thank you so much for having me. >>How are you feeling? >>Okay, I'm feeling like an exciting visionary. >>You are. That's the idea, that's why we teed you up like that. So tell us, Dell had some huge announcements last night, and you get to break them to theCUBE audience. Give us the rundown. >>Yeah, it's a really big show for Dell. We announced a brand new suite of GPU-enabled servers: eight-way, four-way, direct liquid cooling. It's really the first time in the history of the portfolio that we've had this much coverage across Intel, AMD and NVIDIA, and it's getting great reviews from the show floor. I had the chance earlier to be in the whisper suite to actually look at the gear, and customers are buzzing over it. That's one thing I love about this show: the gear is here. >>Yes, it is. It is a haven for hardware nerds, and I'll include you in that group, it sounds like. >>Oh yeah, absolutely. >>And I know David is as well. >>Oh, big time. Big-time hardware nerd. And just to be clear, for the kids that will be watching these videos, we're not talking about Alienware gaming systems. >>No, right. >>These are yay big, yay tall, 200 pounds. >>Give us a price point on one of these things, suggested retail price. >>More than 10 grand? >>Oh, yeah. Try another order of magnitude. >>So this is the most exciting stuff you can imagine from an infrastructure perspective. But what is it driving? Talk to us about where you see the world of high performance computing with your customers. What are they doing with this? What do they expect to do with it in the future? >>It's a really interesting time, and I know the provenance of this show is HPC focused, but what we're seeing and hearing from our customers is that AI workloads and traditional HPC workloads are becoming almost indistinguishable. You need the right mix of compute, you need GPU acceleration, and you need the ability to take the vast quantities of data being generated and actually gather insight from them. If you look at what customers are trying to do with enterprise-level AI, it's really: how do I classify and categorize my data, but more importantly, how do I make sense of it, how do I derive insights from it? At the end of the day, customers are trying to take all the various streams of data, structured and unstructured, bring them together and make business decisions. >>And it's a really exciting time, because customers are saying the same things that research scientists and universities have been trying to do forever with HPC: I want to do it at industrial scale, but I want to do it in a way that's more open, more flexible. I call it AI for the rest of us. Customers are here and they want those systems, but they want the ecosystem to support ease of deployment, ease of use and ease of scale.
And that's what we're providing in addition to the systems. Dell is one of the only providers in the industry that can provide not only the compute, but the networking and the storage, and more importantly, the solutions that bring it all together. I'll give you one example. We have what we call a validated design for AI. In that validated design we put together all of the pieces and provide the recipe for customers, so what used to take two months to build and run a model, we provide 18 times faster. We're talking about hours versus months. >>That's a lot. 18 times faster. I just want to emphasize that, because we're talking about orders of magnitude up here, and that makes a huge difference in what people are able to do. >>Absolutely. And we've been talking about the deluge of data forever, but it's gotten to the point, with the disparity of the data and the fact that much of it remains siloed, that customers are demanding we provide solutions that allow them to bring that data together, process it, and make decisions with it. >>Where are we in the adoption cycle? We've been talking about AI and ML for a while, and you mentioned the leading edge of academia and supercomputing and what HPC conjures up in people's minds. Do you have any numbers or thoughts about where we are in this cycle? How many people are actually doing this in production versus experimenting at this point? >>I think that's the reason there's so much interest in what we're doing, and so much demand not only for the systems but for the solutions and the ecosystem that bring the systems together. We did a study recently and asked customers where they felt they were in terms of deploying best practices for AI at scale. Only 31% of customers self-reported that they felt they were deploying best practices for their AI deployments, so almost 70% are saying we're not doing it right yet. Another good stat: three quarters of customers have fewer than five AI applications deployed at scale in their IT environments today. So if you think about it as a traditional S-curve, I think we're at the first inflection point, and customers are asking, can I do it end to end? >>Can I do it with best-of-breed systems, but Dell, can you also give me an ecosystem that I know and understand? Another great example of something Dell is doing is that we have focused on Ethernet as connectivity for many of the solutions we put together. Again, the provenance of HPC is InfiniBand, and InfiniBand is a great connectivity option, but there's a lot of care and feeding that goes along with it. The fact that you can do it both ways, with InfiniBand for those government-scale or university-scale clusters, while more of our enterprise customers can do it with Ethernet on premises, is a great option. >>You've got so many things going on. I got to actually check out the million-dollar hardware that you have just casually
sitting in your booth. I feel like an event like this is probably one of the only times you can let something like that out, where people will actually know what it is you're working >>With. We actually unveiled it. There was a sheet on it, and we unveiled it last night. >>Did you get a lot of oohs and aahs? >>You know, you said this was a show for hardware nerds. It's been a long time since I've been at a show where people cheer and ooh and aah when you take the sheet off the hardware. >>Yes, it has. And you had your reveal >>Moment. Exactly. Our three new systems. >>Speaking of oohs and aahs, I love that everyone was as excited as we all are about it. It's nice to be home with our nerds. Speaking of applications and excitement, you get to see a lot of different customers across verticals. Is there a sector or space that has you personally most excited? >>Personally most excited? For credibility at home, it's when the sector is media and entertainment and the movie is one your children have actually seen; that one gives me credibility, and you can talk about it at dinner parties. I'm like, we're curing cancer, but the at-home cred goes to the Marvel movie. But on a serious note, what really excites me is the variety of applications AI is being used in. Healthcare and genomics are a huge and growing application area that excites me. Doing good in the world is something that's very important to Dell, and sustainability is very important to Dell, so any application related to that is exciting to me. And then, pragmatically speaking, anything that helps our customers make better business decisions excites me. >>So we are just at the beginning of what I refer to as this rolling thunder of next-generation CPU releases, recently from AMD, and in the near future Intel joining the party, going back and forth, along with Gen 5 PCIe at the motherboard level. It's very easy to look at it and say, wow, versus the previous generation it's double, double, double. >>It is. >>However, a fair number of your customers looking at an upgrade might be not just N minus one but N minus two. So for a lot of people the upgrade season ahead of us is going to be not a doubling but a 4x or 8x in a lot of cases. The quantity of compute from these new systems is going to be a massive increase from where we've been in the recent past, as in last Tuesday. So, and this is sort of a philosophical question, we talked a little earlier about the quantitative versus qualitative difference in computing horsepower: do we feel we're at a point where there's going to be an inflection in what AI can actually deliver based on current technology just doing it more, better, faster, cheaper? Or do we need the leap to quantum computing to get there? >>Yeah, look, I was having some really interesting conversations with customers whose job it is to run very large, very complex clusters, and we were talking a little bit about quantum computing.
The interesting thing about quantum computing is that I think we're still a ways off, and in order to make quantum computing work you still need classical computing surrounding it. That's number one. Number two, with the advances we're seeing generation on generation, what used to be a two-to-three-year upgrade cycle is becoming, because of all the technology being deployed into the industry, almost a continuous upgrade cycle. I'm personally optimistic that we are on the cusp of a new level of infrastructure modernization. And it's not just the computing power, not just the increases in GPUs; those things are important, but it's also things like power consumption. One of the ways customers can do better on power consumption and sustainability is by modernizing infrastructure. To your point, a lot of people are running N minus one or N minus two, and the stuff coming out now is much more energy efficient. So I think there are a lot of vectors in the market, whether it's technology innovation, the drive for energy efficiency, the rise of AI and ML, or all of the new silicon coming into the portfolio, where customers are going to have a continuous reason to upgrade. That's my thought. What do you think? >>Yeah, I think the objective numbers that are starting to roll out now, and will keep rolling out in the near future, are going to support your point, and that's why it's really an exciting time. Because people will look and say, wait a minute, it used to be a dollar but now it's $2, that's more expensive. But you're getting 10 times as much for half the power. >>Exactly. >>The TCO is a no-brainer. It gets to the point where you look at this rack of amazing stuff that you have a personal relationship with and you say, I can't afford to keep you plugged in anymore. >>Right. >>The power is such a huge component of this. >>It's always a huge issue, but our customers, especially in EMEA with what's going on over there, are saying, I need to upgrade because I need to be more energy efficient. >>We were talking about 20 years from now, and you've been at Dell over 18 years. >>Yeah, it'll be 19 in May. >>Congratulations. So 19 years from now, in your second Dell career, what are we going to be able to say then that perhaps we can't say now? >>Oh my gosh. Wow, 19 years from now. >>I love this as an arbitrary number too. >>A 38-year Dell career. >>That might be a record. >>And if you'd like to share the winners of the Super Bowls and World Series in advance, like the sports almanac from Back to the Future, we can place some bets. >>Powerball. >>But at any point, what do you think AI is going to deliver in the next decade? >>Look, there are global issues that advances in computing power will help us solve.
And the models that are being built, the ability to generate a digital copy of the analog world and run models and simulations on it, is truly amazing. It's a very simple and pragmatic example of what could be: we were with one of our technology providers and they were showing us a digital simulation, a digital twin, of a factory for a car manufacturer. They were saying it used to be that you had to build the factory, put the people in the factory, and run cars through the factory to figure out how to optimize it and where everything should be placed. They don't have to do that anymore. They can do it all via simulation, all via a digital copy of analog reality. So I think the possibilities are endless. Nineteen years ago I had no idea I'd be sitting here this excited about hardware, but here we are. I think 19 years from now, hardware still matters. I know software eats the world, but hardware still matters; you've got to run it on something. And we'll be talking about that same type of example, but at a broader and more global scale. >>Well, I'm the knucklehead who keeps waving his phone around going, there's one terabyte in here, can you believe that? Because when you've been around long enough, it's insane. I live in Texas, I've been to NASA a couple of times, and they talk about how they sent people to the moon on far less than what's in our pocket computers. It's amazing. >>I am an optimist on where we're going, clearly. >>And you're clearly an exciting visionary, like we said out of the gate. It's no surprise that people are using Dell's tech to realize their AI ecosystem dreams. Travis, thank you so much for being here with us. David, always a pleasure. And thank you for tuning in to theCUBE, live from Dallas, Texas. My name is Savannah Peterson. We'll be back with more supercomputing soon.
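The "it used to be a dollar, now it's two dollars, but you get ten times as much for half the power" exchange earlier in this conversation can be written out as a small calculation. The figures below are the illustrative ones from that exchange, not measured benchmark data.

```python
# Cost and power per unit of work for an older versus newer system, using the
# conversational figures above (price doubles, throughput is 10x, power halves).
systems = {
    "previous generation": {"price": 1.0, "throughput": 1.0, "power": 1.0},
    "new generation":      {"price": 2.0, "throughput": 10.0, "power": 0.5},
}

for name, s in systems.items():
    cost_per_unit = s["price"] / s["throughput"]
    power_per_unit = s["power"] / s["throughput"]
    print(f"{name}: {cost_per_unit:.2f} cost per unit of work, "
          f"{power_per_unit:.3f} power per unit of work")
```

Even at twice the sticker price, the newer system in this illustration does each unit of work for a fifth of the cost and a twentieth of the energy, which is the TCO point being made.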

Published Date : Nov 15 2022


The Truth About MySQL HeatWave


 

>>When Oracle acquired MySQL via the Sun acquisition, nobody really thought the company would put much effort into the platform, preferring to put all the wood behind the arrow of its flagship Oracle Database, pun intended. But two years ago Oracle surprised many folks by announcing MySQL HeatWave, a new database as a service with a massively parallel, hybrid columnar, in-memory architecture that brings together transactional and analytic data in a single platform. Welcome to our latest database power panel on theCUBE. My name is Dave Vellante, and today we're going to discuss Oracle's MySQL HeatWave with a who's who of cloud database industry analysts. Holger Mueller is with Constellation Research, Marc Staimer is the Dragon Slayer and a Wikibon contributor, and Ron Westfall is with Futurum Research. Gentlemen, welcome back to theCUBE. Always a pleasure to have you on. >>Thanks for having us. >>Great to be here. >>So we've had a number of deep-dive interviews on theCUBE with Nipun Agarwal. You guys know him; he's the senior vice president of MySQL HeatWave development at Oracle. I think you just saw him at Oracle CloudWorld, and he's come on to describe what I'll call shock-and-awe feature additions to HeatWave. The company is clearly putting R&D into the platform, and at CloudWorld we saw something like the fifth major release since 2020, when they first announced MySQL HeatWave. Just listing a few: they've brought in analytics and machine learning, they've got Autopilot, which adds automation on top of the basic OLTP functionality of the database. And it's been interesting to watch Oracle's converged database strategy; we've contrasted that amongst ourselves. I'd love to get your thoughts on Amazon's right-tool-for-the-right-job approach. >>Are they going to have to change that? Amazon has the specialized databases, and both companies are doing well; it just shows there are a lot of ways to skin a cat, because you see traction in the market for both approaches. So today we're going to focus on the latest HeatWave announcements. We're going to talk about multi-cloud, with a native MySQL HeatWave implementation available on AWS and MySQL HeatWave for Azure via the Oracle-Microsoft interconnect, this kind of cool hybrid action they've got going, sometimes we call it supercloud. And then we're going to dive into MySQL HeatWave Lakehouse, which allows users to process and query data across MySQL HeatWave databases as well as object stores. HeatWave has been announced on AWS and Azure and is available now, and Lakehouse, I believe, is in beta and coming out in the second half of next year. All of our guests are fresh off Oracle CloudWorld in Las Vegas, so they've got the latest scoop. Guys, I'm done talking, let's get into it. Marc, maybe you could start us off: what's your opinion of MySQL HeatWave's competitive position? When you think about what AWS is doing, Google, we heard about all their data innovations at Google Cloud Next recently, Azure obviously has a big portfolio, and Snowflake is doing well in the market. What's your take? >>Well, first let's look at it from the point of view that AWS is the market leader in cloud and cloud services. They own somewhere between 30 and 50% of the market, depending on who you read. Then you have Azure as number two, and after that it falls off.
There's GCP, Google Cloud Platform, which is much further down the list, and then Oracle and IBM and Alibaba. So when you look at AWS and Azure and say, hey, these are the market leaders in the cloud, then you start saying, if I'm going to provide a service that competes with the service they have, and I can make it available in their cloud, it means I can be more competitive. And if I'm compelling, and compelling means at least twice the performance or functionality, or both, at half the price, I should be able to gain market share. >>And that's what Oracle's done. They've taken a superior product in MySQL HeatWave, which is faster, lower cost, and does more for a lot less at the end of the day, and they've made it available to the users of those clouds. You avoid this little thing called egress fees, you avoid the issue of having to migrate from one cloud to another, and suddenly you have a very compelling offer. So I look at what Oracle is doing with MySQL HeatWave and it feels like, I'm going to use a war term, a flanking maneuver on their competition: they're offering a better service on the competitors' own platforms. >>All right, thank you for that. Holger, we've seen this sort of cadence, I referenced it up front a little bit: they sat on MySQL for a decade, and then all of a sudden we see this rush of announcements. Why did it take so long? And more importantly, is Oracle developing the right features that cloud database customers are looking for, in your view? >>Yeah, great question. But first of all, in your intro you said they added analytics; analytics is kind of a marketing buzzword, reports can be analytics, right? The interesting thing is what they did first: they crossed the chasm between OLTP and OLAP in the same database. That's a major engineering feat and very much what customers want, and it's all about creating value for customers, which I think is why they go into multi-cloud and why they add these capabilities. And certainly with the AI capabilities it's getting into an autonomous, self-driving field, and now with the Lakehouse capabilities they're meeting customers where they are; like Marc talked about, the egress costs in the cloud. So that's a significant advantage, creating value for customers, and that's what matters at the end of the day. >>And I believe strongly that, long term, it's going to be the ones who create better value for customers who will get more of their money. From that perspective, why did it take them so long? I think it's a great question. I think it's largely about who leads a product; you mentioned the gentleman, Nipun. I used to build products too, so maybe I'm fooling myself a little here, but that made the difference in my view. Since he's been in charge, he's been building things faster than the rest of the competition in the MySQL space, which in hindsight we thought was a hot, smoking innovation space but was actually a little self-complacent when it comes to the traditional borders of where people think things are separated: between OLTP and OLAP, or, as an adjacent example, structured documents versus unstructured documents, or databases. All of that has been collapsed and brought together to build a more powerful database for customers.
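As a rough illustration of the OLTP-plus-analytics convergence Holger describes, the sketch below runs a transactional write and an analytic aggregate against the same MySQL endpoint using the MySQL Connector/Python driver. The host, credentials, and orders table are invented placeholders; on a HeatWave-enabled MySQL DB system, an aggregate like this can be offloaded to the HeatWave cluster once the table has been loaded into it.

```python
# Minimal sketch: one database handles both the transactional write and the
# analytic read, with no ETL into a separate warehouse. All connection details
# and the schema are placeholders for illustration.
import mysql.connector

conn = mysql.connector.connect(
    host="example-heatwave-endpoint",   # placeholder endpoint
    user="app_user", password="********", database="shop"
)
cur = conn.cursor()

# OLTP-style write: record an order.
cur.execute(
    "INSERT INTO orders (customer_id, amount, ordered_at) VALUES (%s, %s, NOW())",
    (1042, 129.99),
)
conn.commit()

# Analytic-style read against the same data.
cur.execute(
    "SELECT customer_id, SUM(amount) AS lifetime_value "
    "FROM orders GROUP BY customer_id ORDER BY lifetime_value DESC LIMIT 10"
)
for customer_id, lifetime_value in cur.fetchall():
    print(customer_id, lifetime_value)

cur.close()
conn.close()
```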
>>So certainly, when Oracle talks about the competitors (and I always say that if Oracle talks about you, it knows you're doing well), they talk a lot about AWS, a little about Snowflake and sort of Google, and they have partnerships with Azure. So I'm presuming MySQL HeatWave was really a response to what they were seeing from those big competitors. But then you had MariaDB coming out the day Oracle acquired Sun, launching and going after the MySQL base. So I'm interested, and we'll talk about this later, in what you guys think AWS, Google, Azure, and Snowflake will do and how they're going to respond. But before I do that, Ron, I want to ask you: you can get pretty technical, and you've probably seen the benchmarks.
>>I know you have. Oracle makes a big deal out of it, publishes its benchmarks, makes some of them transparent on GitHub. Larry Ellison talked about this in his keynote at CloudWorld. What do the benchmarks show, in general? When you're new to the market, you've got to have a story, like Mark was saying: you've got to be 2x the performance at half the cost, or you'd better be, or you're not going to get any market share. And oftentimes companies don't publish benchmarks when they're leading; they do it when they need to gain share. So what do you make of the benchmarks? Were any results surprising to you? Have they been challenged by the competitors? Is it just a bunch of desperate bench-marketing to make some noise in the market, or are they real? What's your view?
>>Well, from my perspective, I think they have validity. And to your point, when it comes to competitor responses, that has not really happened. Nobody has pulled down the information that's on GitHub and said, oh, here are our price-performance results, and countered Oracle's. In fact, I think part of the reason that hasn't happened is the risk: if Oracle comes out and says, hey, we can deliver 17 times better query performance versus, say, Snowflake when it comes to the Lakehouse platform, and Snowflake turns around and says it's actually only 15 times better query performance, that's not exactly an effective maneuver. So I think this is really to Oracle's credit, and I think it's refreshing, because these differentiators are significant. We're not talking about 1.2% differences; we're talking about 17-fold differences, six-fold differences, depending on where the spotlight is being shined and so forth.
>>And so I think this is something that is almost too good to believe at first blush. If I'm a cloud database decision maker, I really have to prioritize this; I really would pay a lot more attention to it. And that's why I posed the question to Oracle and others: if these differentiators are so significant, why isn't the needle moving a bit more? And it's for some of the usual reasons. One is really deep discounting coming from the other players. That's marketing 101; it's something you need to do when there's a real competitive threat, to keep a customer in your own customer base.
Plus there's the usual fear and uncertainty about moving from one platform to another. But I think the traction, the momentum, is shifting in Oracle's favor. I think we saw that in the Q1 results, for example, where Oracle Cloud grew 44% and generated 4.8 billion in revenue, if I recall correctly. All of this demonstrates that Oracle is making many of the right moves, and publishing these figures for anybody to examine from their own perspective is, I think, good for the market, and it's going to continue to pay dividends for Oracle over the horizon as competition intensifies. So if I were in...
>>Dave, can I interject something on what Ron just said there?
>>Yeah, please go ahead.
>>A couple of things here. One: discounting, which is a common practice when you have a real threat, as Ron pointed out, isn't going to help much in this situation, simply because you can't discount to the point where you improve your performance, and the performance is a huge differentiator. You may be able to get your price down, but the problem most of them have is they don't have an integrated product and service. They don't have integrated OLTP, OLAP, ML, and data lake. Even if you cut out two of those, they don't have any of them integrated; they have multiple services that require separate integration, and that can't be overcome with discounting. And you have to pay for each one of these. And, by the way, as you grow, the discounts go away. So that's a minor but important detail.
>>So that's a TCO question, Mark, right? And I know you look at this a lot. If I had that kind of price-performance advantage, I would be pounding TCO, especially if I need two separate databases to do the job that one can do. The TCO numbers are going to be off the chart, or maybe down the chart, which is what you want. Have you looked at this, and how does it compare with the big cloud guys, for example?
>>I've looked at it in depth. In fact, I'm working on another TCO study in this arena, but you can find my analysis on Wikibon, in which I compared TCO for MySQL HeatWave versus Aurora plus Redshift plus ML plus Glue. I've compared it against GCP's services, Azure's services, and Snowflake with other services. And there's just no comparison: the TCO differences are huge. More importantly, the TCO per unit of performance is huge. We're talking in some cases multiple orders of magnitude, but at least an order of magnitude of difference. So discounting isn't going to help you much at the end of the day. It only lowers your cost a little, but it doesn't improve the automation, it doesn't improve the performance, it doesn't improve the time to insight, it doesn't improve all those things you want out of a database, or multiple databases, because you
>>Can't discount yourself to a higher value proposition.
>>So what about the developer angle? I wonder, Holger, if you could chime in; you follow that market. How do these innovations from HeatWave play there? I think you've used the term developer velocity before. Look, Oracle owns Java, the most popular programming language in the world, and so on. But does it have the minds and hearts of developers, and where does HeatWave fit into that equation?
>>I think HeatWave is quickly gaining mindshare on the developer side, right?
It's not the traditional MySQL database it grew up as. There's a traditional mistrust of Oracle among developers over what happens to open source when it gets acquired, as in the case of Oracle and Java, and MySQL, right? But we know it's not a good competitive strategy to bank on Oracle screwing up, because that hasn't worked, not with Java and not with MySQL. And for developers, once you get to know a technology product and you can do more with it, it becomes kind of a Swiss Army knife: you can build more use cases, you can build more powerful applications. That's super important, because you don't have to get certified in multiple databases. You're fast at getting things done, you achieve high developer velocity, and the managers are happy because they don't have to license more things, send you to more trainings, or take on more risk of something not being delivered, right?
>>So we really see the suite versus best-of-breed play happening here, which was already happening before with Oracle's flagship database versus, say, Amazon as an example. And the interesting thing is, Oracle was always a one-database company (there can be only one), and now they're genuinely talking about HeatWave too. They're a two-database company, with different market spaces but the same value proposition: integrating more things very quickly into a universal database, what they call the converged database, for all the needs of an enterprise to run certain application use cases. And that's what's attractive to developers.
>>It's ironic, isn't it? The rumor was that TK, Thomas Kurian, left Oracle because he wanted to put the Oracle database on other clouds and other places. Maybe that was the rift; I'm sure there were other things. But Oracle clearly is now trying to expand its TAM, Ron, with HeatWave into AWS and into Azure. How do you think Oracle's going to do? You were at CloudWorld; what was the sentiment from customers and the independent analysts? Is this just Oracle trying to mess with the competition and create a little diversion, or is this serious business for Oracle? What do you think?
>>No, I think it has legs. I think it's a testament to Oracle's overall ability to differentiate not only MySQL HeatWave but its whole portfolio. The fact that they have the alliance with Azure in place demonstrates their commitment to meeting the multi-cloud needs of their customers, as does the fact that they're now offering MySQL capabilities natively within AWS, and that it can outperform AWS's own offering. I think this all demonstrates that Oracle is not letting up; they're not resting on their laurels. We are clearly living in a multi-cloud world, so why not make it easier for customers to use cloud databases according to their own specific needs? And to Holger's point, I think that definitely aligns with being able to bring on more application developers to leverage these capabilities.
>>I think one important announcement related to all this was the JSON Relational Duality capability, where now it's a lot easier for application developers to use a format they're very familiar with, JSON, and not have to worry about mapping into relational tables to store their JSON application data. This is, I think, an example of the innovation that's enhancing the overall Oracle portfolio, and certainly all the work with machine learning is paying dividends as well. As a result, I see Oracle continuing to make the inroads we pointed to. But I agree with Mark: the short-term discounting is just a stall tactic. It doesn't change the fact that Oracle is able not only to deliver price-performance differentiators that are dramatic, but also to meet a wide range of customer needs that aren't limited to price-performance considerations.
>>Being able to support multi-cloud according to customer needs; being able to reach out to the application developer community and address a very specific challenge that has plagued them for years: bring it all together, and I see this as enabling Oracle's story to ring true with customers. The customers who were there weren't all saying the same things, but they were all basically giving positive feedback, and likewise, I think the analyst community is seeing this. It's always refreshing to be able to talk to customers directly, and at Oracle CloudWorld there was a litany of them, so that's a difference maker, as is being able to talk to strategic partners. The NVIDIA partnership, I think, is also a testament to Oracle's ongoing ability to make the ecosystem more user-friendly for customers.
>>Yeah, it's interesting: when you get these all-in-one tools, the Swiss Army knife, you expect them not to be best of breed. That's the kind of surprising thing I'm hearing about HeatWave. I want to talk about Lakehouse, because when I think of lakehouse, I think Databricks, and to my knowledge Databricks hasn't been in Oracle's sights yet; maybe they're next. But Oracle claims that MySQL HeatWave Lakehouse is a breakthrough in terms of capacity and performance. Mark, what are your thoughts on that? Can you double-click on Lakehouse and Oracle's claims for things like query performance and data loading? What does it mean for the market? Is Oracle really leading in the lakehouse competitive landscape?
>>Well, the name of the game is: what problems are you solving for the customer? More importantly, are those problems urgent or important? If they're urgent, customers want to solve them now; if they're important, they might get around to them. So look at what they're doing with Lakehouse, or before that machine learning, or before that automation, or before that OLAP with OLTP: they're merging all of this capability together. If you look at Snowflake or Databricks, they're tackling one problem. You look at MySQL HeatWave, and they're tackling multiple problems. So when they say their queries are much better against the lakehouse, it's in combination with other analytics, in combination with OLTP, and with the fact that there are no ETLs. You're getting all of this done in real time; it's doing the query across everything in real time.
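A rough sketch of the "no ETL" idea Mark describes, where one SQL surface covers both operational tables and data sitting in object storage. All names here are hypothetical, and the step that maps object-store files into the external table is deliberately left as an assumption, since that DDL is product-specific; see the MySQL HeatWave Lakehouse documentation for the real syntax.

```python
# Rough sketch (hypothetical names; the external-table DDL is intentionally omitted).
import mysql.connector

conn = mysql.connector.connect(host="heatwave-db.example.com",
                               user="analyst", password="***", database="sales")
cur = conn.cursor()

# Assumption: `clickstream_ext` has already been defined as an external table over
# Parquet/CSV files in an object-store bucket and loaded into HeatWave, so it can be
# joined with the operational `orders` table without an ETL copy step.
cur.execute("""
    SELECT o.sku,
           COUNT(DISTINCT c.session_id) AS sessions,
           SUM(o.amount)                AS revenue
    FROM   orders o
    JOIN   clickstream_ext c ON c.sku = o.sku
    WHERE  c.event_date >= CURDATE() - INTERVAL 7 DAY
    GROUP  BY o.sku
    ORDER  BY revenue DESC
    LIMIT  20
""")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```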
>>You're solving multiple user and developer problems, you're increasing their ability to get insight faster, you're delivering shorter response times. So yeah, they really are solving urgent problems for customers. And by putting it where the customer lives (this is the brilliance of actually being multi-cloud, and I know I'm backing up here a second), by making it work in AWS and Azure, where people already live, where they already have applications, what they're saying is: we're bringing it to you. You don't have to come to us to get these benefits, this value. Overall, I think it's a brilliant strategy, and I give Nipun Agarwal huge kudos for what he's doing there. So yes, what they're doing with the lakehouse is going to put Databricks and Snowflake, and everyone else for that matter, on notice.
>>Those are the guys, Holger, that you and I have talked about, the ones doing sort of the best-of-breed approach. They're really focused, and they tend to do well, at least out of the gate. Now you've got Oracle's converged philosophy; we've seen it with the Oracle database, and now it's kicking into gear with HeatWave. This whole thing of suites versus best of breed: in the long term customers tend to migrate toward suites, but the new shiny toy tends to get the growth. How do you think this is going to play out in cloud databases?
>>Well, it's the forever, never-ending story in software, right? Suites versus best of breed, and so far, in the long run, suites have always won. Sometimes they struggle, because the inherent problem of suites is that you're building something larger: it has more complexity, and that means your cycles to get everything working together, to integrate, test, roll out, and certify, take longer. But that's not the case here. The fascinating part of the effort around MySQL HeatWave is that the team is out-executing the best-of-breed players while bringing it all together. Whether they can maintain that pace remains to be seen. But the strategy, like Mark was saying, of bringing the software to the data is of course interesting and unique, and it was totally an Oracle issue in the past, right?
>>Yeah, but it had to be your database on OCI. And that's an interesting part. The interesting thing on the lakehouse side is that there are three key benefits of a lakehouse. The first is better reporting and analytics, bringing richer information together. Take the case of SiliconANGLE, right? We want to see engagement for this video, we want to know what's happening. That's a mixed transactional and media analytics use case, a typical lakehouse use case. The next is building richer applications, transactional applications that have video and these engaging elements in them. And the third, and that's where I'm a little critical and concerned, is that it's really the base platform for artificial intelligence: to run deep learning, to run things automatically, because you have all the data in one place and can train in one place.
>>And that's where, I know Ron talked about NVIDIA for a moment, Oracle doesn't have the strongest story. Nonetheless, the two other main use cases of the lakehouse are very strong. My only concern is the 400-terabyte limit; it sounds like an arbitrary limitation. Yeah, it sounds big
for a start, and it's a first release, so they can make that bigger. You don't want your lakehouse to be limited at terabyte sizes, or even petabyte sizes, because you want the certainty that you can put everything in there that you think might be relevant, without knowing in advance what questions to ask, and then query against it.
>>Yeah. And in the early days of schema-on-read it just became a mess, the data lake became the data swamp, but now the technology has evolved to let us actually get more value out of that data. I want to come back in a moment to how you think the competitors are going to respond, whether they'll have to take more of a converged approach, AWS in particular. But before I do, Ron, I want to ask you about Autopilot, because I heard Larry Ellison's keynote, and he was talking about how most security issues are human errors, and with autonomy, the autonomous database, and things like Autopilot, we take care of that: it's like autonomous vehicles, they're going to be safer. And I went, well, maybe someday. Oracle really emphasizes this; every announcement from Oracle talks about new autonomous capabilities. How legit is it? Do people care? What's new for HeatWave Lakehouse? How much of a differentiator, Ron, do you really think Autopilot is in this cloud database space?
>>Yeah, I think it will definitely enhance the overall proposition. I don't think people are going to buy Lakehouse exclusively because of Autopilot capabilities, but when they look at the overall picture, I think it will be an added bonus to Oracle's benefit. It's one of these age-old questions: how much do you automate, and what is the balance to strike? And I think we all understand, with the autonomous car analogy, that there are limitations to that. However, I think it's a tool that basically every organization needs to at least have, or at least evaluate, because it helps with ease of use, and it helps make automation more balanced: you can test, all right, let's automate this process and see if it works well, and then switch on autopilot for other processes.
>>And that allows, for example, the specialists to spend more time on business use cases versus manual maintenance of the cloud database and so forth. So I think that is a legitimate value proposition. It's just going to be a case-by-case basis: some organizations will be more aggressive about putting automation throughout their processes and their organization; others will be more cautious. But it's going to be something that helps the overall Oracle proposition, something used with caution by many organizations, while others will say, great, this is really answering a real problem: easing the use of these databases and letting us take on the automation capabilities and benefits without a major screwup happening in the process of transitioning to more automated operations.
>>Now, I didn't attend CloudWorld; just too many red-eyes recently, so I passed.
But one of the things I like to do at those events is talk to customers in the spirit of the hallway track: you talk to customers and they tell you the good, the bad, and the ugly. So did you guys talk to any MySQL HeatWave customers at CloudWorld, and what did you learn? Mark, did you have any luck having some private conversations?
>>Yeah, I had quite a few private conversations. One thing before I get to that: I want to disagree with one point Ron made. I do believe there are customers out there buying the MySQL HeatWave service because of Autopilot. Autopilot is really revolutionary in many ways for the MySQL developer, in that it auto-provisions, it does auto parallel load, auto data placement, and auto shape prediction, and it can tell you which machine learning models are going to give you your best results. And candidly, I've yet to meet a DBA who didn't want to give up the pedantic tasks that are a pain in the kahoo, the ones they'd rather not do, as long as they get done right for them. So yes, I do think people are buying it because of Autopilot, and that's based on some of the conversations I had with customers at Oracle CloudWorld.
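A hedged sketch of the Autopilot-style automation Mark lists, assuming a HeatWave deployment. The stored procedures shown (sys.heatwave_load for Auto Parallel Load, sys.ML_TRAIN for AutoML) follow the pattern in Oracle's documentation, but argument details and availability vary by release, and the schema, table, and column names are hypothetical.

```python
# Hedged sketch; procedure names follow Oracle's documented pattern, but check the
# current HeatWave docs: arguments and availability vary by release, and the
# schema, table, and column names here are made up.
import mysql.connector

conn = mysql.connector.connect(host="heatwave-db.example.com",
                               user="dba", password="***", database="sales")
cur = conn.cursor()

# Auto Parallel Load (typically run once per schema, e.g. from a SQL client):
#   CALL sys.heatwave_load(JSON_ARRAY('sales'), NULL);
# It estimates memory, picks parallelism, and loads the tables with no manual sizing.

# AutoML: train a classification model without hand-picking algorithms or
# hyperparameters; Autopilot chooses them and records the model in the catalog.
cur.execute(
    "CALL sys.ML_TRAIN('sales.churn_history', 'churned', "
    "JSON_OBJECT('task', 'classification'), @churn_model)")

# Scoring is then done with the ML_PREDICT_* routines against @churn_model.
cur.close()
conn.close()
```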
In fact, it was like: yeah, that's great, we get fantastic performance, but this really makes my life easier, and I've yet to meet a DBA who didn't want to make their life easier. And it does. So yeah, I've talked to a few of them. They were excited. I asked them if they ran into any bugs, whether there were any difficulties in moving to it, and the answer was no in both cases. It's interesting to note that MySQL is the most popular database on the planet. Some will argue it's neck and neck with SQL Server, but if you add in MariaDB and Percona, which are forks of MySQL, then by far and away it's the most popular. As a result, just about everybody has a MySQL database somewhere in their organization. So this is a brilliant situation for anybody going after MySQL, but especially for HeatWave. And the customers I talked to love it; I didn't find anybody complaining about it.
>>What about the migration? We talked about TCO earlier. Does your TCO analysis include the migration cost, or do you kind of conveniently leave that out, or what?
>>Well, when you look at migration costs, there are different kinds of migration costs. By the way, the worst job in the data center is data migration manager; no other job is as bad as that one. You get no attaboys for doing it right, and when you screw up, oh boy. So in real terms, anything that can limit data migration is a good thing, and the lakehouse approach limits data migration. If you're already a MySQL user, this is pure MySQL as far as you're concerned; it's just a simple transition from one to the other. You may want to make sure nothing broke, that all your tables are correct and your schemas are okay, but it's all the same. So it's a simple migration, pretty much a non-event, right? When you migrate data from an OLTP system to an OLAP system, that's an ETL, and that's going to take time.
>>But you don't have to do that with MySQL HeatWave, so that's gone. When you start talking about machine learning, again, you may have an ETL or you may not, depending on the circumstances, but with MySQL HeatWave you don't, and you don't have duplicate storage. You don't have to copy data from one storage container to another to be able to use it in a different database, which, by the way, ultimately adds much more cost than just the other service. So yeah, I looked at the migration, and again, the users I talked to said it was a non-event. It was literally moving from one physical machine to another. If they had a new version of MySQL running on something else and just wanted to migrate it over, or just hook it up, or just connect it to the data, it worked just fine.
>>Okay. So it sounds like you guys feel, and we've certainly heard this (my colleague David Floyer, the semi-retired David Floyer, was always very high on HeatWave), that it's got some real legitimacy coming from a standing start. But I want to talk about the competition and how they're likely to respond. If you're AWS, HeatWave is now in your cloud, and there are some good aspects of that; the database guys might not like it, but the infrastructure guys probably love it: more ways to sell EC2 and Graviton. But the database guys at AWS are going to respond. They're going to say, hey, we've got Redshift, we've got AQUA. What are your thoughts on how that's going to resonate with customers? And I'm interested in what you guys think: I never say never about AWS, so are they going to try to build, in your view, a converged OLAP and OLTP database? Snowflake is taking an ecosystem approach, and they've added transactional capabilities to the portfolio, so they're not standing still. What do you see in the competitive landscape going forward? Maybe Holger, you could start us off, and anybody else who wants to can chime in.
>>Happy to. You mentioned Snowflake last, so we'll start there. I think Snowflake is imitating that strategy, right? Building out from the original data warehouse and positioning to have other data available there, because AI is relevant for everybody; ultimately, people keep data in the cloud to eventually run AI on it. So you see the same suite-level strategy. It's going to be a little harder for them because of their original positioning: how many people even know they're doing other stuff? And as a former manager of developers, I just don't see the speed happening at Snowflake right now to become really competitive with Oracle. On the flip side, putting my Oracle hat on for a moment, back to you, Mark and Ron: what could Oracle still add? Because the big things, the traditional chasms in the database world, they've built everything, right?
>>So I really scratched my head and gave Nipun a hard time at CloudWorld: what could you still be building? He was very conservative: let's get the Lakehouse done; it's going to ship next year, right? And AWS is a hard case, because AWS's value proposition is built on small innovation teams, two-pizza teams that can be fed by two pizzas, not large teams. And you need large teams to build these suites with lots of functionality, to make sure they work together.
They're consistent, they have the same UX on the administration side, they can be consumed the same way, they have the same API registry; that's where the synergy comes into play with a suite. So it's going to be really, really hard for AWS to change that. But AWS is super pragmatic. They pride themselves on listening to customers, and if they learn from customers that the suite is the proposition, I would not be surprised to see AWS trying to bring things closer together and integrate them more tightly.
>>Yeah. Well, can we talk about multi-cloud? Again, Oracle is very Oracle-on-Oracle, as you said before, but let's look forward half a year or a year. What do you think about Oracle's moves in multi-cloud, in terms of the penetration they're going to get in the marketplace? You saw a lot of presentations at CloudWorld; we've looked pretty closely at the Microsoft Azure deal, which I think is really interesting, and I've called it a little bit of the early days of a supercloud. What impact do you think this is going to have on the marketplace? Think about it both within Oracle's customer base, where I have no doubt they'll do great, and beyond its existing install base. What do you guys think?
>>Ron, do you want to jump on that? Go ahead.
>>That's an excellent point, and I think it aligns with what we've been talking about in terms of Lakehouse. I think Lakehouse will enable Oracle to pull more customers, more MySQL customers, onto the Oracle platforms. We're seeing all the signs pointing toward Oracle being able to make more inroads into the overall market, and that includes garnering customers from the leaders. In other words, because they're coming in as an innovator, an alternative to the AWS proposition or the Google Cloud proposition, they have less to lose, and as a result they can really drive the multi-cloud messaging to resonate not only with their existing customers but also, to the question Dave's posing, to bring new customers onto their platform. That naturally includes MySQL, but also OCI and so forth. That's how I see this playing out. Oracle's reporting is indicating it, and what we saw at Oracle CloudWorld definitely validates the idea that Oracle can make more waves in the overall market in this regard.
>>You know, I've floated this idea of supercloud; it's kind of tongue in cheek, but I think there's some merit to it in terms of building on top of hyperscale infrastructure and abstracting some of that complexity. And one of the things I'm most interested in is industry clouds, and Oracle's acquisition of Cerner. I was struck by Larry Ellison's keynote: it was, I don't know, an hour and a half, and an hour and 15 minutes of it was focused on healthcare transformation.
>>So, vertical.
>>Right. And so you've got Oracle with some industry chops, and then you think about what they're building with not only OCI; you've got MySQL, which you can now run in dedicated regions, you've got ADB on Exadata Cloud@Customer, which you can put on-prem in your data center, and you look at what the other hyperscalers are doing. I say "other hyperscalers"; I've always said Oracle's not really a hyperscaler, but they've got a cloud, so they're in the game.
But you can't get BigQuery on-prem. You look at Outposts: it's very limited in terms of database support, and again, that will evolve. But now Oracle has announced Alloy, so partners can white-label their cloud. So I'm interested in what you guys think about these moves, especially the industry cloud. We see Walmart doing sort of its own cloud, you've got Goldman Sachs doing a cloud. What do you think about that, and what role does Oracle play? Any thoughts?
>>Yeah, let me jump on that for a moment. With MySQL HeatWave, by making it available in multiple clouds, what they're doing follows the philosophy they've had in the past with Cloud@Customer: take the application and the data and put it where the customer lives. If it's on premises, it's on premises; if it's in the cloud, it's in the cloud. By making MySQL HeatWave essentially plug-compatible with any other MySQL as far as your database is concerned, and then giving you that integration with OLAP and ML and the data lake and everything else, you've got a compelling offering, and you're making it easier for the customer to use. So the way I look at the difference between MySQL and the Oracle database: MySQL is going to capture more market share for them.
>>You're not going to find a lot of new users for the Oracle database. Yes, there will always be new users, don't get me wrong, but it's not going to be huge growth. Whereas MySQL HeatWave is probably going to be a major growth engine for Oracle going forward, not just in their own cloud but in AWS and in Azure, and on premises over time; it's not there now, but it will get there eventually. They're doing the right thing on that basis: they're taking the services, and this is what multi-cloud is about, making them available where the customer wants them rather than forcing the customer to go where you want them, if that makes sense. As far as where they go in the future, I think they'll take a page out of what they've done with the Oracle database: they'll add things like JSON and XML and time series and spatial, and over time make it a complete converged database, like they did with the Oracle database. The difference being that the Oracle database will scale bigger, handle more transactions, and be somewhat faster, and MySQL will be for anyone who's not on the Oracle database. They're not stupid, that's for sure.
>>They've done JSON already, right? But I'll give you that they could add graph and time series. Yeah, that's absolutely right.
>>A sort of logical move, right?
>>Right. But let's not kid ourselves: time has worked in Oracle's favor, right? Ten times, twenty times the amount of R&D that's in the MySQL space has been poured into trying to snatch workloads away from Oracle, starting with IBM 30 years ago and Microsoft 20 years ago, and it didn't work. Database applications are extremely sticky: when they run, you don't want to touch them, and they grow. So that doesn't mean HeatWave isn't an attractive offering, but it will be net-new things, right?
And what works a little bit in MySQL HeatWave's favor is that these aren't the massive enterprise applications, where you might be running only 30% on Oracle but the connections and the interfaces into it are 70, 80% of your enterprise.
>>You take that out and it's like the spaghetti ball, where you say, ah, no, I really don't want to do all that, right? You don't have that massive pull with the MySQL HeatWave class of databases, which are smaller and more tactical in comparison. Still, I don't see them taking that much share. They will grow because of an attractive value proposition, quickly, on the multi-cloud, though I'd argue it's not really multi-cloud yet. If you give people the chance to run your offering on different clouds, fine, you can run it there. The multi-cloud advantage comes when the uber offering arrives, the one that lets you do things across those installations: I can migrate data, I can create data across clouds, something like Google has done with BigQuery Omni, I can run predictive models, or even train models, in different places and distribute them. And Oracle is paving the road for that by being available on these clouds. But the multi-cloud capability of a database that knows it's running on different clouds, that has yet to be built.
>>Yeah. And that's the problem with that.
>>That's the supercloud concept that I floated, and I've always said Snowflake, with a single global instance, is sort of headed in that direction and maybe has a lead. What's the issue with that, Mark?
>>Yeah, the problem with that version of multi-cloud is that clouds charge egress fees. As long as they charge egress fees to move data between clouds, it's going to be very difficult to do a real multi-cloud implementation. Even Snowflake, which runs multi-cloud, has to pass the egress fees on to their customers when data moves between clouds, and that's really expensive. There is one customer I talked to who is beta testing MySQL HeatWave on AWS for them; the only reason they didn't want to do it until it was running on AWS is that the egress fees to move the data to OCI were so great they couldn't afford it. Egress fees are the big issue, but...
>>But Mark, the point might be that you want to route the query and only get the result set back, which is much smaller; that's been the answer before for the low-latency-between-clouds problem, which we sometimes still have but mostly don't, right? And in general, with egress fees coming down, see Oracle's general egress-fee move, it's very hard to justify them. But it's not about moving data as the multi-cloud high-value use case; it's about doing intelligent things with that data: putting it into other places, replicating it (the same thing you said before), running remote queries on it, analyzing it, running AI on it, running AI models on it. That's the interesting thing: administering it across clouds in the same way, taking things out, making sure compliance happens, making sure that when Ron says, "I don't want to be in an American cloud anymore, I want to be in a European cloud," the data gets migrated. Those are the interesting high-value use cases, which are really, really hard for an enterprise to program by hand, developer by developer, and which they would love to have out of the box. That's the innovation yet to come; we'll have to wait and see.
But the first step to get there is that your software runs in multiple clouds, and that's what Oracle's doing so well with MySQL HeatWave.
>>Guys, an amazing amount of database knowledge and brain power in this market. I really want to thank you for coming on theCUBE. Ron, Holger, Mark, always a pleasure to have you on. Really appreciate your time.
>>Thanks, Dave, for moderating us.
>>All right, we'll see you guys around. Safe travels to all, and thank you for watching this power panel, The Truth About MySQL HeatWave, on theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Nov 1 2022


Chris Wolf, VMware | VMware Explore 2022


 

>>Hey guys, good morning, and welcome back to theCUBE. Lisa Martin here with John Furrier. This is theCUBE's third day of wall-to-wall coverage of VMware Explore. We're very pleased to welcome one of our alumni back to the program: Chris Wolf joins us, chief research and innovation officer at VMware. Chris, welcome back to theCUBE.
>>Yeah, thanks Lisa. It's always a pleasure.
>>This has been a great event. The keynote was standing room only on Tuesday morning, and we've had great conversations with VMware's ecosystem and with VMware, of course. What are some of the hot things going on from an R&D perspective?
>>Yeah, there's a lot. We have about four or five different priorities, looking at sovereign clouds, multi-cloud, edge computing, modern applications and data services. We're doing quite a bit of work in machine learning as well as in security. We're a relatively large organization, but at the same time we really look to pick our bets. So when we're doing something in ML or security, we want to make sure it's high quality, it's differentiated, and it adds value for VMware, our partners, and our customers.
>>Where are customers in the mix, in terms of being influential in the roadmap?
>>Very much in the mix. In early-stage R&D, we like to have five to ten customers as design partners, and that really helps. In addition, as we get closer to go-to-market, we look to line up between one and three of our SI partners as well, to really help us. In a large company, sometimes your organic innovations can get lost in the shuffle, and when we have passionate SIs that say, yes, we want to take this forward with you together, that's just awesome. It also helps us understand, at a very early stage, what the integration requirements are, so we're not just thinking about the core product itself but how it would play in the ecosystem, which is equally important.
>>We had Kit Colbert on, the CTO; great work he's doing with the white paper and cross-cloud, and obviously vSphere, a big release, a lot of this stuff. Dave Vellante mentioned that in the analyst session you had a lot of good stuff you were talking about that's coming around the corner, that's coming out of the oven. A big theme this year is multi-cloud and cloud native, and the relationship between them: which one's ahead, which is the lead dog? You kind of get a feel that multi-cloud is out front right now, but cloud native's got the most history. What's coming out of the oven right now in terms of hitting the market, the stuff that's not yet in the numbers in terms of sales? There's some key cloud native stuff coming out. Where's the action? Can you share what you shared at the analyst meeting?
>>Yeah. At the analyst meeting, what I went through was a number of our new innovation projects, and these are things that are typically close to being a product or service at VMware, somewhere in the year-out timeframe; some of them are just a few months out. So let me go through some of them. I'll start with Project Keek. Keek is super exciting because, when you think about edge, what we're hearing from customers is the notion of a single platform, a single piece of hardware, that can run their cloud services, their containers, their VMs, and their network and security functions. Doing all of this on one platform gives them the flexibility that, as changes happen, it's a software update; they don't have to buy another piece of hardware. But if we step back, what's the management experience you want, right? Simple, GitOps-oriented, simple lifecycle and configuration management, very low touch; I don't need technical skills to deploy these types of devices. So this is where Keek comes in. What Keek is doing is exposing a Kubernetes API above the ESXi hypervisor and taking a complete GitOps style of management. So imagine, when you need to do an update for infrastructure, you're logging into GitHub, you're editing a YAML file, and you're pushing the update. We're doing the same thing for the applications that reside there; I can do all of this through GitHub. So this is, I would say, even internally disruptive to VMware, but super exciting for our customers and the partners we've shared it with.
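A minimal, generic sketch of the GitOps loop Chris is describing: desired state is declared as YAML in a Git repository, and a small agent on the edge host keeps converging on it. This is illustration only; the repository URL, file layout, and apply step are hypothetical stand-ins, not VMware's actual agent or API.

```python
# Generic GitOps reconcile loop (hypothetical repo and apply step, illustration only).
import subprocess
import time
import pathlib

REPO = "https://github.com/example-org/edge-site-config.git"  # hypothetical repo
CLONE_DIR = pathlib.Path("/var/lib/edge-gitops/repo")

def sync_repo() -> None:
    """Clone on first run, otherwise fast-forward to the latest commit."""
    if CLONE_DIR.exists():
        subprocess.run(["git", "-C", str(CLONE_DIR), "pull", "--ff-only"], check=True)
    else:
        subprocess.run(["git", "clone", REPO, str(CLONE_DIR)], check=True)

def apply_manifest(path: pathlib.Path) -> None:
    """Placeholder for handing a manifest to the local Kubernetes-style API."""
    print(f"applying desired state from {path.name}")

def reconcile() -> None:
    """Converge the host on whatever is currently declared in Git."""
    sync_repo()
    for manifest in sorted(CLONE_DIR.glob("*.yaml")):
        apply_manifest(manifest)

if __name__ == "__main__":
    while True:            # an operator never logs in; they just push to Git
        reconcile()
        time.sleep(60)     # poll interval; a webhook could replace polling
```

The design point is the one in the passage above: pushing a commit to the repository is the management interface, and the host converges on its own.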
>>What else is happening on the cloud native side: Tanzu, Monterey, those areas?
>>Oh, there's so much. If we look at Project Monterey, I gave a presentation with NVIDIA yesterday where we really talked through this. What I'm seeing now is a couple of really interesting inflection points with DPUs. The first is that the performance you're getting, and the number of cores you can save on an x86 host, now provides a very strong business case to bring DPUs into the servers in the data center; so now you have a positive ROI. Number two, you start to decouple core services from the x86 host itself. Think about a distributed firewall that I can run on a PCI adapter: it's now physically decoupled from the server it's protecting, and it really allows me to scale out east-west security in a way I could not do before. So again, I think that's really exciting, and it's where we're seeing a lot of buzz from customers.
>>So the DPU got a lot of buzz. By the way, Lisa, I've done a number of interviews on this, including with the Dell folks; VxRail is taking advantage of it, and I see the performance angle. DPUs are hot. Can you talk about that east-west security point? Because Tom Gillis was on yesterday saying that's a killer advantage for the security side. Can you touch on that real quick?
The other thing I wanna get your thoughts on relative to the next question is that takes to the next level is the super cloud world we're living in is about cloud native developers, which is DevOps dev security ops and data ops are now big parts of the, the challenges that the people are reigning in the chaos that that's being reigned in. How does VMware look at the relationship to the cloud providers? Cause we heard cloud universal. We had the cloud. If you believe in multi-cloud, which you guys are saying, people are agreeing with, then you gotta have good tight couple coupled relationships with the cloud services, >>A hundred percent. >>We can be decoupled, but highly cohesive, but you gotta connect in via APIs. What's the vision for the VMware customers who want to connect say AWS, for instance, is that seamless? What makes that happen? What's that roadmap look like for taking that VMware on premises hybrid and making it like turbo charging it to be like public cloud hybrid together? >>Yeah, I think there's some lessons that can be learned here. You know, an analogy I've been using lately is look at the early days of virtualization when VMware had vCenter, right? What was happening was you saw the enterprise management vendors try to do this overlay above virtualization management and say, we can manage all hypervisors. And at the end of the day, these multi hypervisor managers, no one bought 'em because they can do 20% of the functionality of a tool from VMware or Microsoft. And that's the lesson that we have to take to multi-cloud. We don't have to overlay every functionality. There's really good capabilities that the cloud providers are offering through their own tooling and APIs. Right? But you, you, if you step back, you say, well, what do I wanna centralize? I wanna have a centralized, secure software supply chain and I can get that through VMware tan zoo and, and where we're going with Kubernetes. When you're going with native cloud services, you might say, you know what, I wanna have a central view of, of visibility for compliance. So that's what we're doing with secure state or a central view of cost management. And we're doing that with cloud health. So you can have some brokering and governance, but then you also have to look from a surgical perspective as to what are the things that I really need to centralize versus what do I not need to centralize? >>One of the themes that we heard on the keynote on Tuesday was the, the different phases and that a lot of customers are still in the cloud chaos phase. We talked a lot about that in the last couple days with VMware, with its partner ecosystem. And, but the goal of getting to cloud smart, how does the R and D organization, how do, how are you helping customers really navigate that journey from the chaos that they're in, maybe they've inherited multi-cloud environment to getting to cloud smart. And what does cloud smart mean from your perspective >>Cloud? Smartt from my perspective means pragmatism. It means really thinking about what should I do here first, right? I don't want to just go somewhere because I can, right. I want to be really mindful of the steps I'm going to take. So one ex one example of this is I've met with a customer this morning and we were talking about using our vRealize network insight tool, because what that allows 'em to do is get a map of all of their application dependencies in their data center. 
And they can learn: well, I can move this to the cloud, or maybe I can't move this because it has all these other dependencies and it would be really difficult. So that's one example. It also means really thinking through issues around data sovereignty: what do I want to hold onto? A customer I met with yesterday was talking about how valuable their data is, and about the services they want to consume via SaaS in the cloud, but also the services that are their core research, which they want to keep in their data centers and maintain full control over, because researchers will leave, and all of a sudden that intellectual property has gone with the person, so they need better accountability there.
>>Yeah. One of the things we discovered at our Supercloud event is that we didn't really put too much structure on it beyond our vision: it's not just SaaS on cloud, and it's not just multi-cloud. It's a new kind of application end state, or reality: if you believe in digital transformation, then technology is everywhere. In the old days, IT powered the back office, then terminals and PCs and whatnot, but it obviously wasn't powering the boardroom or the rest of the business. If digital transformation happens fully, the company is the app and the app is the company; you're all digital. That means operating expenses have to drive the income statement, and the CapEx being handled by the cloud provides a lot of goodness. So I think everyone's going to realize that AWS and the hyperscalers are providing great CapEx gifts: they do all the work, and you only pay once you've made your success. That's a great business model.
>>Absolutely.
>>Then combine that with open source, which is now growing so fast and going to the next level. The software industry is open source; that's not even a debate anymore, except maybe in some circles like telco. So cloud's got the CapEx, the new operating model is this cloud layer, and that's going to finally transform companies, a hundred percent. That's supercloud. If that's the case, does it really matter who provides the electricity, the power? It's the coders who are in charge. It's the developers who have to make the calls, because if the application is the core, the developers aren't just the front lines, they are the company. This is really where the sea change is. If we believe that, and I'm sure you agree with that generally?
>>Yeah, of course.
>>Okay. So then what's the VMware customer roadmap here? To me, that's the big story at the show: we're at this point in time where the VMware customers have to go there.
>>A hundred percent.
>>What's that path? What is the path for the VMware customer to go from here to there? What's the order of operations, or is there a roadmap? Can you share your thoughts on that?
>>Yeah. I think part of it is that with these disruptive technologies you have to start small, whether it's in your data center or in the cloud, and you have to build the institutional knowledge of your own team members in the organization; it's much easier than trying to attract outside talent, at least for many of our customers. So I think that's important. The other part of this is the developer, and control: in my organization, I want my innovators to innovate without other noise around them.
I don't want them to have to worry about it. And it's the same thing with our customers. So if your developers are building the technologies that are really differentiating your company, then things like security and cryptography shouldn't have to be things they worry about. So we've been doing a lot of work. Like one of the projects we announced this week was around being able to decouple cryptography from the applications themselves, and we can expose that through a proxy, through service mesh. And that's really exciting because now IT ops can make these changes, our SecOps teams can make these changes, without having to impact the application. So that's really key, is focusing the developers on innovation and then really being mindful about how you can build the right automation around everything else. And certainly open source is key to all >>That. So then if that's happening, which I'm not gonna debate, then in essence what's really going on here is that companies are decomposing their entire businesses down to levels that are manageable, completely different than the way they did them 20, 30 years ago. >>Absolutely. You could take a modular approach to how you're solving business problems. And we do the same thing with technology, where there might be ML algorithms that we've developed that we're exposing as a service, but then all of the interconnects around that service are open source and very flexible, so that the businesses and the customers and the VMware partners can decide what's the right way to build the puzzle for a given problem. >>We were talking on day one, I was riffing with the executives, it was Raghu and Vittorio, and the concept around cross cloud was if you get to this Nirvana state, which is where people want to get to, this composability mode, you're not coding, you're composing, cuz coding's kinda happening in open source, and not the old classic, write some code and write that app. It's more compose and orchestrate. What's your thoughts on >>That? Yeah, yeah. I agree. And I would add one more part to it too, which is scope. You know, I think sometimes we see projects fail because the initial scope is just too big. You know, what is the problem that you need to solve, scope it properly and then continuously calibrate. So even our customers have to listen to their customers, and we have to be thinking about our customers' customers, right? Because that's really how we innovate, because then we can really be mindful of a holistic solution for them. >>You know, Lisa, when we had a super cloud event, you know, one of the panels was called the innovator's dilemma, with a question mark. And of course everyone kind of quotes that book, the innovator's dilemma, but one of the panelists, Chris Hoff, Beaker on Twitter, said, let's change the name from the innovator's dilemma to the integrator's dilemma. And we all kind of chuckled, we all kind of paused and said, hey, that's actually a good point. Yeah. If you're now in a cloud and you're seeing some of the ecosystem floor vendors out there talking in this game too, they're all kind of fitting in, snapping in, almost like modular, like you said, so this is a Lego game now. It feels like, you know, let's compose, let's orchestrate, let's integrate. Now integration's API driven.
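The crypto-decoupling project Chris mentions isn't named or detailed here beyond "a proxy, through service mesh," so the following is only a rough sketch of the general sidecar pattern it implies, with the upstream host, port, and trust setup as placeholder assumptions: the application keeps speaking plain TCP to a local port while the sidecar owns all of the TLS handling, so a security team can change certificates or cipher policy without touching application code.

```python
import socket
import ssl
import threading

# Illustrative sidecar: the app connects in plaintext to 127.0.0.1:15001,
# and this process forwards the bytes over TLS to the real service.
# Host and port below are invented for the example.
UPSTREAM_HOST = "orders.internal.example"
UPSTREAM_PORT = 8443
LISTEN_PORT = 15001

def pump(src, dst):
    # Copy bytes one way until the source closes, then close the peer.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def handle(client):
    ctx = ssl.create_default_context()
    # A real mesh sidecar would present a rotating workload certificate
    # (mTLS); this sketch only does server-authenticated TLS.
    upstream = ctx.wrap_socket(
        socket.create_connection((UPSTREAM_HOST, UPSTREAM_PORT)),
        server_hostname=UPSTREAM_HOST,
    )
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
    pump(client, upstream)

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", LISTEN_PORT))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```

With that split, swapping ciphers, rotating certificates, or moving to mutual TLS becomes a sidecar change rather than an application release, which is the operational point being made above.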
Now you're seeing a lot more about API security in the news, and we've been covering it, at least I've probably interviewed six companies in the past, you know, six months that are doing API security. Who would've thought? APIs, that's the link, frankly, with the web, and now that's a target area for hackers. >>Oh, and that's such an innovation area for VMware, John. Okay. >>There it is. So, I mean, this is, again, this means the connective tissue is being attacked. We need it to grow, no one's debating that, but it's under siege. >>Yes. Yes. So something else we introduced this week was a project we call Project Trinidad. And the way you can think about it is a lot of the anomaly detection software today is looking at point based anomalies, like this API header looks funny. Where we've gone further is we can look at full sequence based anomalies, so we can learn the sequences of transactions that an application takes and really understand what is expected behavior within those API calls, within the headers, within the payloads. And we can model legitimate application behavior based on what those expectations are. So a common sequence might be doing an e-commerce checkout, right? There's lots of operations that happen: logging into the site, searching, finding a product, going through the cart. Right. All of those things. So if something's out of sequence, like all of a sudden somebody's just trying to do a checkout but they haven't actually added to the cart, that just seems odd. Right. And that's a simplistic example, but we're able now to use our algorithms to model legitimate application behavior through the entire sequence of how applications behave, and then we can start to trap on anomalies. That's very differentiating IP, and we think it's gonna be really important for the industry. Yeah. >>Because a lot of the hacks, sometimes on the API side, even as an example, are not necessarily on the API, it's the business logic in them. That's what you're getting at here. Yes. The APIs are hardened, oh, our APIs are secure. Right. Well, yeah, but you're not actually securing the business logic internally. That's what you're getting at, if I read that right? >>Exactly. Exactly. Yeah. It's great that you can look at a header, but what's the payload, right? What's the actual data flow that's associated with the call, and that's what we want to really home in on. And that's just a far different level of sophistication in being able to understand east-west vulnerabilities, you know, Log4j exploits and these kinds of things. So we have some real, it's interesting technology >>There. Security conversations now are not about security, they're about defensibility, because security is a state in time. You're secure here, you're not secure there, or someone might be in the network or in the app, but can you defend yourself? >>That's it, you know, our malware software, right, that we're building to prevent and respond has to be more dynamic than the threats we face. Right. And this is why machine learning is so essential in these types of applications. >>Let me ask you a question. So just now zooming out, riffing here, since day three's our conversational day where we debate and just riff, more like a podcast style.
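Project Trinidad's actual models aren't shown in this conversation; purely as a much simplified illustration of the sequence-versus-point distinction Chris describes, the toy sketch below learns which API call transitions occur in legitimate sessions and flags a checkout that never passed through the cart. All endpoint names and sessions are invented for the example.

```python
from collections import defaultdict

# Learn the set of call-to-call transitions seen in legitimate sessions,
# then report any transition in a new session that was never observed.
START, END = "<start>", "<end>"

def train(sessions):
    seen = defaultdict(set)
    for calls in sessions:
        seq = [START] + list(calls) + [END]
        for a, b in zip(seq, seq[1:]):
            seen[a].add(b)
    return seen

def out_of_sequence(session, seen):
    seq = [START] + list(session) + [END]
    return [(a, b) for a, b in zip(seq, seq[1:]) if b not in seen.get(a, set())]

if __name__ == "__main__":
    legitimate = [
        ["login", "search", "view_product", "add_to_cart", "checkout"],
        ["login", "search", "view_product", "view_product", "add_to_cart", "checkout"],
        ["login", "view_product", "add_to_cart", "checkout"],
    ]
    model = train(legitimate)
    # A checkout that skipped the cart is flagged even though every
    # individual call, on its own, looks perfectly normal.
    print(out_of_sequence(["login", "checkout"], model))  # [('login', 'checkout')]
```

A per-header rule would see nothing wrong with the lone checkout call; only the sequence view catches it, which is the business-logic point being drawn out above.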
If you had to do a super cloud or build a NextGen cloud multi-cloud with abstraction layer, that's, you know, all singing and dancing and open everyone's happy hardware below it's working ISAs and then apps are killed. Can ass what's in that. What does it look like to you if you had to architect the, the ultimate super cloud enabler, that something that would disrupt the next 10 years, what would it look like and how does, and assuming, and trying to do where everybody wins go, you have 10 seconds. No, >>Yeah, yeah. So the, you know, first of all, there has to be open source at all of the intersections. I think that's really important. And, and this is, this goes from networking constructs to our database, as a service layers, you know, everything in between, you know, the, the, the participants should be able to win on merit there. The other part of super cloud though, that hasn't happened that I probably is the most important area of innovation is going to be decoupled control planes. We have a number of organizations building sovereign cloud initiatives. They wanna have flexibility in where their services physically run. And you're not going to have that with a limited number of control planes that live in very specific public cloud data centers. So that's an area, give >>An example of what a, a, a, a narrowly defined control plane is. >>Yeah, sure. So my database as a service layer, so the, the, the actual portal that the customer is going into to provision databases, right. Rep managed replication, et cetera. Right. I should be able to run that in a colo. I should be able to run that somewhere in region that is guaranteed, that I'm going to have data stay physically in region. You know, we still have some of these challenges in networking in terms of being able to constrain traffic flows and be able to predict and audit them within a particular region as well. >>It's interesting. You bring up region again, more complexity. You know, you got catalogs here, catalogs different. I mean, this is where the chaos really comes down. I mean, it's, it's advancing, but it's advancing the state of functionality, but making it hella complex, I mean, come on. Don't you think it's like pretty amazingly hard to reign in that? Well, or is it maybe you guys making it easier? I just think I just, my mind just went, oh my God, I gotta, I gotta provision to that region, but then it's gotta be the same over there. And >>When you go back to modular architecture constructs, it gets far easier. This has been really key for how VMware is even building our own clouds internally is so that we have a, a shared services platform for the different apps and services that we're building, so that you do have that modularized approach. Like I said, the, the examples of innovation projects I've shared have been really driven by the fact that, you know, what, I don't know how customers are gonna consume it, and I don't have to know. And if you have the right modular architecture, the right APIs around it, you don't have to limit a particular project or technology's future at the time you build >>It. Okay. So your super would have multiple control planes that you can move, manage with that within one place. I get that. What about the data control plane? That seems to be something that used to be the land grab in, in conversations from vendors. But that seems to be much more of a customer side, cuz if I'm a customer, I want my control plane data plane to be, you know, mine. 
Like I don't want to have anyone cuz data's gotta move around, gotta be secure. >>Oh exactly. >>And that's gonna be complicated. How does, how do you see the data planes emerging? >>Yeah. Yeah. We, we see an opportunity really around having a, a centralized view that can give me consistent indexing and consistent awareness of data, no matter where it resides. And then being able to have that level of integration now between my data services and my applications, because you're right, you know, right now we have data in different places, but we could have a future where data's more perpetually in motion. You know, we're already looking at time sensitive fabrics where we're expecting microservices to sometimes run in different cell towers depending on the SLA that they need to achieve. So then you have data parts that's going to follow, right? That may not always be in the same cloud data center. So there's, this is enormously complicated, not just in terms of meeting application SLAs, but auditing and security. Right. That makes it even further. So having these types of data layers that can give me a consistent purview of data, regardless of where it is, allow me to manage and life cycle data globally, that's going to be super important, I believe going forward. >>Yeah. Awesome. Well, my one last question, Lisa, gonna get a question in here. It's hard. Went for her. I'm getting all the, all the questions in, sorry, Lisa that's okay. What's your favorite, most exciting thing that you think's going on right now that people should pay attention to of all the things you're looking at, the most important thing that that's happening and maybe something that's super important that people aren't talking about or it could be the same thing. So the, the most important thing that you think that's happening in the industry for cloud next today and, and maybe something that you think people should look at and pay more attention to. >>Okay. Yeah, those are good questions. And that's hard to answer because there's, there's probably so much happening. I I've been on here before I've talked about edge. I still think that's really important. I think the value of edge soft of edge velocity being defined by software updates, I think is quite powerful. And that's, that's what we're building towards. And I would say the industry is as well. If you look at AWS and Azure, when they're packaging a service to go out to the edge it's package as a container. So it's already quite flexible and being able to think about how can I have a single platform that can give me all of this flexibility, I think is really, really essential. We're building these capabilities into cars. We have a version of our Velo cloud edge device. That's able to run on a ruggedized hardware in a police car today. We're piloting that with a customer. So there is a shift happening where you can have a core platform that can now allow you to layer on applications that you're not thinking about in the future. So I think that's probably obvious. A lot of people are like, yeah. Okay. Yes. Let's talk about edge, big deal. >>Oh it's, it's, it's big. Yes. It's >>Exploding, but >>It's complicated too. It's not easy. It's not obvious. Right. And it's merging >>There's new things coming every day. Yeah. Yeah. And related to that though, there is this kind of tension that's existing between machine learning and privacy and that's really important. 
So an area of investment that I don't think enough people are paying attention to today is federated machine learning. There are really good projects in open source that are having tangible impact in a lot of industries. In VMware, we're investing in a couple of those projects, namely FATE in the Linux Foundation and OpenFL. And in these use cases, like the security product I mentioned to you that is looking at analyzing API call sequences, we architected that originally so that it can run in public cloud, but we're also leveraging federated machine learning now so that we can ensure that those API calls and the metadata associated with them stay on premises for the customers, to ensure privacy. So I think those intersections are really important. Federated learning, I think, is an area not getting enough attention. >>All right, Chris, thanks so much for coming on. Unfortunately we are out of time. I know you guys could keep going. Yeah. Good stuff. But thank you for sharing what's going on in R and D, the customer impact, the outcomes that you're enabling customers to achieve. We appreciate your >>Insights. We're just getting started. >>In early innings, right? Yeah. Awesome. Good stuff. For our guest and John Furrier, I'm Lisa Martin. You're watching theCUBE live from VMware Explore 2022. Our next guest joins us momentarily. >>Okay.
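FATE and OpenFL each have their own training APIs, and none of that detail appears in the conversation, so the sketch below only illustrates the underlying federated-averaging idea on synthetic data: each site fits a model on records that never leave the site, and only the fitted weights travel to a coordinator that averages them.

```python
import numpy as np

# Toy federated averaging: raw data stays at each site; only weights move.

def local_fit(X, y):
    # Ordinary least squares on one site's private data.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_round(sites):
    weights = [local_fit(X, y) for X, y in sites]            # runs at each site
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(weights, axis=0, weights=sizes)        # coordinator step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])

    def make_site(n):
        X = rng.normal(size=(n, 2))
        return X, X @ true_w + rng.normal(scale=0.1, size=n)

    sites = [make_site(200), make_site(500), make_site(120)]
    print(federated_round(sites))  # close to [ 2. -1.]
```

The privacy property Chris points to comes from the fact that local_fit is the only step that ever touches raw records, and it can run entirely inside each customer's own environment.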

Published Date : Sep 1 2022

SUMMARY :

This is theCUBE's third day of wall-to-wall coverage of VMware Explore 2022. Lisa Martin and John Furrier talk with VMware's Chris Wolf about what R&D is focused on: multi-cloud and the lesson of the old multi-hypervisor managers, centralizing only what matters (a secure software supply chain with Tanzu, compliance visibility with Secure State, cost management with CloudHealth); helping customers move from cloud chaos to cloud smart through pragmatism, application dependency mapping and data sovereignty; decoupling cryptography from applications through a service mesh proxy; Project Trinidad's sequence-based API anomaly detection; decoupled control planes for sovereign cloud initiatives and data planes that follow workloads; edge platforms defined by software updates; and federated machine learning projects such as FATE and OpenFL.

Keith Norbie, NetApp | VMware Explore 2022


 

>>Okay, welcome back everyone to theCUBE's live coverage of VMware Explore 2022. I'm John Furrier, host of theCUBE, with Dave Vellante, Lisa Martin and Dave Nicholson, two sets for three days. We're on day three, we're here breaking down all the action of what's going on around VMware. It's our 12th year covering VMware's user conference, formerly known as VMworld, now Explore, as it explores new territory: its future, multi-cloud, vSphere 8 and a variety of new next generation cloud. We're here on day three, breaking it out. Day three is more intimate, much deeper conversations. And we have coming back on theCUBE Keith Norbie with NetApp, the worldwide product partner solutions executive at NetApp. Keith, great to see you, industry veteran, CUBE alumni. Thanks for coming back. It's >>Good to see you >>Again. Yeah. I wanted to bring you back for a couple reasons. One is I want to talk about the NetApp story and also where that's going with VMware as that's evolving and changing with Broadcom and the new next generation, but also analyzing kind of the customer impact piece of it. You're like an analyst who's been in the industry for a long time, been commentating on theCUBE. VMware's in an interesting spot right now because I, I mean, I love the story. I mean, we can debate the messaging, some people are very critical of it, a little bit too multicloud, not enough cloud native, but I see the waves, right? I get it. Virtualization kicked ass. Now it moves to hybrid cloud, and now this next gen is a, you know, clear cloud native multi-cloud environment. I get that, I can see it, I can get there, but is it ready? And the timing. Right. And do they have all the piece parts? What's the role of the ecosystem? These are all open questions. >>Yeah. And the reality is no one has a single answer. And that's part of the fun of this, is that it's not just NetApp, but the rest of the ecosystem. Nvidia is here, as an example. Who was thinking, you know, the kings of AI are gonna be sitting at a VMware show? And yet it's absolutely relevant. So you have a very complex set of things that emerge, but yet also it's not overcomplicated. There is a set of primary principles that, you know, organizations I think are all looking to get to. And I think the reality is that this is maturing in different spurts. So whether it's ecosystem or it's, you know, operations modes and several other factors that kind of come into it, you know, that's part of the landscape, >>You know, I gotta ask you, you know, you and I are both kind of historians. We always talk about what's happened and happening and gonna happen. You know, it's interesting, 12 years covering VMworld and now Explore. NetApp has always been such a great company. I've been following that company, you know, since, you know, 1997, you know, days. And certainly with the past decade of the cloud or so, the moves you guys made have been really good, but NetApp's never really had the kind of positioning in the VMware story going back in the past 12 years. And this keynote, you guys were mentioned in the keynote. Yeah. Has there ever been a time where NetApp was actually mentioned in a keynote at VMworld or now Explore?
>>Well, you know, when we started this relationship back when I was a partner, I really monetized and took advantage of some of the advantages that NetApp had with VMware back in the early days, we're talking to ESX three days and they were dominant to the point where the rest of, you know, the ecosystem was trying to catch up. And of course, you know, a lot of competition from there, but yeah, it, it, it was great seeing a day, one VMware keynote with NetApp mentioned in the same relevance as AWS and VMware, which is exactly where we've been. You know, one thing that NetApp has done really well is not just being AWS, but be in all the hyperscalers as first party services and having a, a portfolio of other ways that we deal with things like, you know, data governance and cloud data management and cloud cloud backup, and overall dealing with cyber resiliency and, and ransomware protection and list goes on and on. So we've done our job to really make ourself both relevant and easy for people to consume. And it was great to see VMware and AWS come together. And the funny part was that, you know, we had on, on the previous cube session, you have VMware and AWS in between NetApp, all talking about, we have this whole thing running at all three of our booths. And that's fantastic. You >>Know, I, I can say because I actually was there and documented it and actually wrote about it in the early 20 11, 20 12, the then CEO Georgian's and I had an interview. He actually was the first storage company to actually engage with AWS back then. Yeah. I mean, that's a long time ago. That's that's 10 years ago. And then everyone else kind of followed EMC kind of was deer in the headlights at that point. They were poo pooing, AWS. Oh yeah, no, it'll never work either of which will never work. It's just a, a fluke. Yeah. For developers. NetApp was on the Amazon web services partnership train for a long time. >>Yeah. It, it, it's really amazing how early we got on this thing, which you can see the reason why that matters now is because it's not only in first party service, but that's also very robust and scalable. And this is one of the reasons why we think this opens it up. And, you know, as much as you wanna talk about the technology capabilities in, in this offering, the funny part is, is the intro conversation is how much money you save. So it unlocks all the, the use cases that you weren't able to do before. And when you, when you look at use case after use case on these workloads, they were hell held back. The number one conversation we had at this show was partner after partner, organization, after organization that came into our booth and talked to us about, yeah, I've got a bunch of these scenarios that I've been holding back on because I heard whispers about this. Now we're gonna go in >>Unleash those. All right. So what are, what's the top stories for you guys now at NetApp? What's the update it's been a while, since we had a cube update with you guys, what are you guys showing of the show? What's your agenda? What are your talking points? What's the main story? >>Well, for us, it's, it's, it's, it's always, you know, a cloud and on-prem combination of priorities within our partner ecosystem. The way we kind of communicate that out is really through three lenses. You know, one is on the hybrid cloud opportunity, people taking data center and modernizing the data center with the apps and getting the cloud, just like we're delivering here at this VMware world show. 
Also the AI and modern data analytics opportunity, and then public cloud, because really in a lot of these situations and apps, you know, the buyer, the consumer, the people that are interested in transforming are looking at it from different lenses. And these all start with really the customer journeys; the data ops buyer is different than the data center ops buyer. And that's exactly who we target. NetApp, I think, focuses relentlessly on how we reach them. And by the way, not just on storage products. If you look at like our Instaclustr acquisition and all these other things, we're trying to be as relevant as we can in data management, and you know, whether that's pipelining data management or storing data management, that's where we are. >>You know, I was talking with David Nicholson, cuz we have, you know, we joked together. I say the holy trinity, he goes with the devil's triangle. I'm Catholic, gotta know what his denomination is, but storage, networking, and compute. Obviously the three majors, it never changes. And I think it was interesting now, and I wanna get your reaction to this and what NetApp's doing around it, is that if you look at the DevOps movement, it's clearly cloud native, but IT ops is not IT anymore. It's basically security and data, I'm oversimplifying, but DevOps, the developers, now do a lot of that. I call it work in the CI/CD pipeline. But the real challenge is data and ops. That's a storage conversation. Compute is beautiful. You got containers, Kubernetes, all kinds of stuff going on with compute: move compute around, move the data to compute. But storage is where the action is for cyber and data ops. Yeah. And AI. So like storage is back. It never left, but it's transformed to be even more important, because the role of hyper-convergence shows that compute and storage go well together. What's your take on this, and how has NetApp modernized to solve the data ops challenge and take that to the next level, and obviously enable great security and defensibility? >>Yeah. And that's why no one architecture is gonna solve every problem. That's why, when we look at the data ops buyer, there's adjacencies to the apps buyer, to the other cloud ops buyer, and there's also the finops buyer, because all of 'em have to work together. What we're focusing on isn't just storing data, but it's also things around how you discover and govern data, you know, how you protect data, even things like in the EDA workspace with the chip manufacturers, how we use cloud bursting to be able to accelerate performance on chip design. So whether you're translating this for the industry vernacular about how we help, say, in the financial sector for AI and what we do with Nvidia, or it's something translated to this VMware opportunity on AWS, you know, what we've put together is something that has as much meaningful relevance for storing data, but also for all the other adjacencies that kind of extend off there.
What's what's getting, I guess what's being put together, not selling, I'm obviously selling gear and what, but like solutions, but what's being packaged to the customer. Where does, what does and video fit in? What are you guys? And what's the winning formula. Take us through the highlights. >>Yeah. And so the VMware highlights here are obviously that we're trying to get infrastructure foundations to just not have, be, be trapped in one cloud or anyone OnPrem. So having a little more E elasticity, but if you extend that out, like you, like you mentioned with a partner that's trying to, to go drive AI within Nvidia, you know, NetApp doesn't create any AI deals cuz no one starts an AI journey with storage. They always start it with the, a with the data model. So the data scientists will actually start these things in cloud and they'll bring 'em on prem. Once the data sets get to a, a big enough scenario and then they wanna build it into a multi-cloud over time. And that's where Nvidia has really led the charge. So someone like an insight or other partners could be Kindra or, or Accenture, or even small boutique partners that are in the data analytics space. They'll go drive that. And we provide not just data storage, but are really complimentary infrastructure. In fact, I always say it like on the AI story alone, we have an integration for the data scientists. So when they go pull the data sets in, you can either do that as a manual copy that takes hours sometimes days, or you can do it instantaneously with our integration to their Jupyter notebook. So I say for AI, as an example, NetApp creates time for data scientists. Got >>It. And where's the, the cloud transformation with you guys right now? How is the hybrid working? Obviously you got the public and hybrids, a steady state right now multi-cloud is still a little fantasy in terms of actual multi-cloud that's coming next, but hybrid and cloud, what's the key key configuration for NetApp what's the hot products? >>Well, I think the key is that you can't just be trapped in one location. So we started this whole thing back with data fabric, as you know, and it's built from there up into, into more of the ops layer and some of the technology layers that have to compliment to come with it. In fact, one of the things that we do that isn't always seen as adjacency to us is our spot product on cloud, which allows you to play in the finops space to be able to look at the analyzed spend and sort of optimized environments for a DevOps environment cloud, to be able to give back a big percentage of what you probably misallocate in those operating models. Once you're working with NetApp and allow it to re re redeploy it in the place that you wanna spend it, you know, so it's, it's both the upper and lower stories coming together. >>Yeah. I was on the walking around the hallway yesterday and I was kind of going through the main event last night, overheard people talking about ransomware. I mean, still ransomware is such a big problem. Security's huge. How are you guys doing there? What's the story with security? Obviously ransomware is a big storage aspect and, and backup recovery and whatnot. All that's kind of tied together. How does NetApp enable better security? What's the story >>There? Yeah, it's funny because that's, that's where a lot of the headlines are at this show at every other show is security for us. It's really about cyber resilience. It is one of the key foundational parts of our hybrid cloud offerings. 
So as we go out to the partners, you mentioned, you know, insight and there's others, you know, CDW ahead here, and the GSI hosting providers, they're all trying to figure out the security opportunity because that is live. So we have a cyber resiliency solution that isn't just our snapshot technologies, but it's also some of the discovery data governance. But also, you know, you gotta work this with ecosystem, as we said, you know, you have all the other ISVs out there that have several solutions, not just the traditional data protection ones, but also the security players. Because if you look at the full perimeter and you look at how you have to secure that and be able to both block remediate and bring back a site, you know, those are complex sets of things that no one person owns. But what we've tried to do is really be as, as meaningful and pervasive and integrated to that package as possible. That's why it's a lead story in the hybrid clouds. >>Can you share for a minute, just give the NetApp commercial plug cuz you guys have continued to stay relevant. What's the story this year for the folks watching that our customers or potential customers, what's the NetApp story for this year? >>Well, the net, the nets right for this year is kind of what I mentioned, which is, you know, we're in this multi-cloud world. So whether you're coming at this from any perspective, we have relevancy for, for the, the on-prem place that you've always enjoyed us, but at the opposite of the spectrum, if you're coming at us from an AWS show or the cloud op the cloud ops buyer, we have a complete portfolio that if you never knew net from the on-prem, you're gonna see us massively relevant in that, in that environment. And you just go to an AWS show or a Microsoft Azure, so, or a Google show, you'll see us there. You'll see exactly why we were relevant there. You'll see them mention why we're relevant there. So our message is really that we have a full portfolio across the hybrid multi-cloud from anyone buyer perspective, to be able to solve those problems, but by the way, do it with partners cuz the partners are the ones that complete all this. None of us on our own, AWS, Microsoft, VMware, NetApp, none of us have the singular solution ourselves. And we can't deliver ourselves. You have to have those partners that have those skills, those competencies. And that's why we, we leverage it that way. >>Great, great stuff. Now I gotta ask you what what's going on in your world with partners. How's it going? What's the vibe what's that just share some insight into what's happening inside the partners? Are they happy with the margins? Are they shifting behavior? What are some of the, the high order bit news items or, or trends going on at the, on the front lines with your partners? >>Well, I think listen, the, the, the challenges pitfalls, the, the objections, the, all the problems that have been there in the past are even more multiplied with today's economy and all the situations we've gone through with COVID. But the reality is what's emerged is an interesting kind of tapestry of a lot of different partner types. So for us, we recognize that across the traditional GSIs, you see these cloud native partners emerging, which is an exciting realm, you know, to look at folks that really built their business in the cloud with no on-prem and being relevant with them, just consulting partners alone. Like the SAP ecosystem has a very condensed set of partners that really drive a lot of the transformation of SAP. 
And a lot of them don't, you know, don't do product business. So how does someone like NetApp be relevant with them? You gotta put together an offering that says we do X, Y, and Z for SAP. And so it's a combination of these partners across the different >>Ecosystems. Yeah. And I wanna get your reaction to something, and you don't have to go out on a limb and put NetApp in an official position. But I've been saying on theCUBE that no matter what happens with VMware's situation with Broadcom, this is not a dying market, right? I mean, you'd think when someone gets bought out, or there's an intention to be bought out, that'd be like this dark cloud that would hang over the company, and this is their user conference, so this is a good barometer to get a feel for it. And I gotta tell you, Sunday night here at VMware Explore, the expo floor was not dead. It was buzzing. It was packed, the ecosystem, and even the conversations and the positionings, it's all growth. So I think VMware's in a really interesting spot here with Broadcom, because no matter what happens, that ecosystem's going to settle somewhere. Yeah. It's not going away, cuz they have such a great customer base. So, you know, assume that Broadcom is gonna do the right thing and they keep most of the jewels, they'll keep all the customers. But still that wave is coming. Yeah. It's independent of VMware. Yeah. That's the whole point. So what happens next? >>Well, I think, you know, we, >>We, you guys are gonna get mop-up business. Amazon's gonna get some business, Microsoft, HPE, you name it, all gonna, >>Yeah. I think, you know, we've been in business with Broadcom for a long time, whether it be the switch business, the chip business, everything in between. And so we've got a very mature relationship with them, and we have a great relationship with VMware. It's the best it's almost ever been now, and together I think that will all just rationalize and settle over time as this kind of goes through both the next Barcelona show and when it comes back here next year. And I think, you know, what you'll see is probably, you know, some of the stuff settle into the new things they announced here at the show and the things that maybe you haven't heard of yet, but ultimately these solutions that they have to come forward with, you know, have to land on things that go forward. And so today you just saw that with VMware Cloud on AWS: they realized that there was a gap in terms of people adopting and wanting to do a storage expansion without adding compute, so they made a move with us that made total sense. I think you're gonna see more of those things that are very common sense ways to solve the barriers to, you know, modernization, adoption and maturity. That's just gonna be a natural part of the vetting, and I think there'll probably come a lot more. >>It's gonna be very interesting. We interviewed AJ Patel yesterday. He heads up, he's the SVP and GM on the modern apps side. He's a middleware guy. So you can almost connect the dots, kind of where we're going with this. Yeah. So I assume there's a nice middleware layer of development, everybody wins, yeah, in this, if done properly. So it's clear that VMware, no matter what happens at Broadcom, from this show my assessment's all steam ahead. No one's holding back at this point.
The, the most mature partners we talk to have this interesting sort of upper and lower story and the upper story is all about that, that application data and middleware kind of layer. What are you doing there to be relevant about the different issues they run into versus some of the stuff that we've grown up with on the infrastructure side, they wanna make that as, as nascent as possible, like infrastructure's code and all this stuff that the automation platforms do. But you're right. If you don't get up into that application, middleware space, you know, and work on that, on that side of the house, you know, you're not gonna be >>Relevant. Yeah. I mean, it's interesting, you know, most people, people take it literally. It doesn't mean middleware. We don't mean middleware. We mean that what middleware was yeah. In the old metaphor just still has to happen. That's where complexity solved. You got hardware, essentially cloud and you got applications, right. So it's all, all kind of the same, but not >>Yeah. In a lot of cases, it could be conceived as even like pipelining, you know, it's it's, you have data and apps going through a transformation from the old style and the old application structures to cloud native apps and a, a much different architecture. The, the whole deal is how you're relevant there. How you solving real problems about simplifying, improving performance, improving securities, you mentioned all those things are relevant and that's where, that's where you have to place >>Your bets. I love that storage is continuing to be at the center of the value proposition. Again, storage compute, networking never goes away. It's just being kind of flexed in new ways just to continue to say, deliver better value. Keith, thanks for coming on the queue. Great to see you for the, see you again, man, day three for coming back on and give us some commentary. Really appreciate it. And congratulations on all the success with the partners and having the cloud story. Right. Thanks. Cheers. Okay. More cube coverage. After this short break day three, Walter Wall coverage. I'm John furier host Dave ante, Lisa Martin, Dave Nicholson, all here covering VMware. We'll be back with more after this short break.

Published Date : Sep 1 2022

SUMMARY :

Day three of theCUBE's coverage of VMware Explore 2022. John Furrier talks with Keith Norbie of NetApp about the deepening NetApp-VMware-AWS relationship: being named in the day one keynote, first-party NetApp services across the hyperscalers, and the new ability to expand storage on VMware Cloud on AWS without adding compute. They also cover NetApp's hybrid cloud, AI and public cloud buyer journeys, work with Nvidia on AI pipelines, the Instaclustr and Spot pieces of the portfolio, cyber resiliency and ransomware protection, the partner ecosystem, and why NetApp's long relationship with Broadcom leaves Norbie expecting full steam ahead no matter how the acquisition plays out.

Mark Nickerson & Paul Turner | VMware Explore 2022


 

(soft joyful music) >> Welcome back everyone to the live CUBE coverage here in San Francisco for VMware Explore '22. I'm John Furrier with my host Dave Vellante. Three days of wall to wall live coverage. Two sets here at the CUBE, here on the ground floor in Moscone, and we got VMware and HPE back on the CUBE. Paul Turner, VP of products at vSphere and cloud infrastructure at VMware. Great to see you. And Mark Nickerson, Director of Go to Mark for Compute Solutions at Hewlett-Packard Enterprise. Great to see you guys. Thanks for coming on. >> Yeah. >> Thank you for having us. >> So we, we are seeing a lot of traction with GreenLake, congratulations over there at HPE. The customers changing their business model consumption, starting to see that accelerate. You guys have the deep partnership, we've had you guys on earlier yesterday. Talked about the technology partnership. Now, on the business side, where's the action at with the HP and you guys with the customer? Because, now as they go cloud native, third phase of the inflection point, >> Yep. >> Multi-cloud, hybrid-cloud, steady state. Where's the action at? >> So I think the action comes in a couple of places. Um, one, we see increased scrutiny around, kind of not only the cost model and the reasons for moving to GreenLake that we've all talked about there, but it's really the operational efficiencies as well. And, this is an area where the long term partnership with VMware has really been a huge benefit. We've actually done a lot of joint engineering over the years, continuing to do that co-development as we bring products like Project Monterey, or next generations of VCF solutions, to live in a GreenLake environment. That's an area where customers not only see the benefits of GreenLake from a business standpoint, um, on a consumption model, but also around the efficiency operationally as well. >> Paul, I want to, I want to bring up something that we always talk about on the CUBE, which is experience in the enterprise. Usually it's around, you know, technology strategy, making the right product market fit, but HPE and VMware, I mean, have exceptional depth and experience in the enterprise. You guys have a huge customer base, doesn't churn much, steady state there, you got vSphere, killer product, with a new release coming out, HP, unprecedented, great sales force. Everyone knows that you guys have great experience serving customers. And, it seems like now the fog is clearing, we're seeing clear line of sight into value proposition, you know, what it's worth, how do you make money with it, how do partners make money? So, it seems like the puzzle's coming together right now with consumption, self-service, developer focus. It just seems to be clicking. What's your take on all this because... >> Oh, absolutely. >> you got that engine there at VMware. >> Yeah. I think what customers are looking for, customers want that cloud kind of experience, but they want it on their terms. So, the work that we're actually doing with the GreenLake offerings that we've done, we've released, of course, our subscription offerings that go along with that. But, so, customers can now get cloud on their terms. They can get systems services. They know that they've got the confidence that we have integrated those services really well. We look at something like vSphere 8, we just released it, right? Well, immediately, day zero, we come out, we've got trusted integrated servers from HPE, Mark and his team have done a phenomenal job. 
We make sure that it's not just the vSphere releases but VSAN and we get VSAN ready nodes available. So, the customers get that trusted side of things. And, you know, just think about it. We've... 200,000 joined customers. >> Yeah, that's a lot. >> We've a hundred thousand kind of enabled partners out there. We've an enormous kind of install base of customers. But also, those customers want us to modernize. And, you know, the fact that we can do that with GreenLake, and then of course with our new features, and our new releases. >> Yeah. And it's nice that the products market fits going well on both sides. But can you guys share, both of you share, the cadence of the relationship? I mean, we're talking about vSphere, every two years, a major release. Now since 6, vSphere 6, you guys are doing three months' releases, which is amazing. So you guys got your act together there, doing great. But, you guys, so many joint customers, what's the cadence? As stuff comes out, how do you guys put that together? How tightly integrated? Can you share a quick... insight into that dynamic? >> Yeah, sure. So, I mean Mark can and add to this too, but the teams actually work very closely, where it's every release that we do is jointly qualified. So that's a really, really important thing. But it's more interesting is this... the innovation side of things. Right? If you just think about it, 'cause it's no use to just qualify. That's not that interesting. But, like I said, we've released with vSphere 8 you know... the new enhanced storage architecture. All right? The new, next generation of vSphere. We've got that immediately qualified, ready on HPE equipment. We built out new AI servers, actually with Invidia and with HPE. And, we're able to actually push the extremes of... AI and intelligence... on systems. So that's kind of work. And then, of course, our Project Monterey work. Project Monterey Distributed Services Engine. That's something we're really excited about, because we're not just building a new server anymore, we're actually going to change the way servers are built. Monterey gives us a new platform to build from that we're actually jointly working. >> So double click on that, and then to explain how HPE is taking advantage of it. I mean, obvious you have more diversity of XPU's, you've got isolation, you've got now better security, and confidential computing, all that stuff. Explain that in some detail, and how does HPE take advantage of that? >> Yeah, definitely. So, if you think about vSphere 8, vSphere 8 I can now virtualize anything. I can virtualize your CPU's, your GPU's, and now what we call DPU's, or data processing units. A data processing unit, it's... think of it as we're running, actually, effectively another version of ESX, sitting down on this processor. But, that gives us an ability to run applications, and some of the virtualization services, actually down on that DPU. It's separated away from where you run your application. So, all your applications get to consume all your CPU. It's all available to you. Your DPU is used for that virtualization and virtualization services. And that's what we've done. We've been working with HPE and HPE and Pensando. Maybe you can talk some of the new systems that we've built around this too. >> Yeah. So, I mean, that's one of the... you talked about the cadence and that... back to the cadence question real briefly. Paul hit on it. Yeah, there's a certain element of, "Let's make sure that we're certified, we're qualified, we're there day zero." 
But, that cadence goes a lot beyond it. And, I think Project Monterey is a great example of where that cadence expands into really understanding the solutioning that goes into what the customer's expecting from us. So, to Paul's point, yeah, we could have just qualified the ESX version to go run on a DPU and put that in the market and said, "Okay, great. Customers, we know that it works." We've actually worked very tightly with VMware to really understand the use case, what the customer needs out of that operating environment, and then provide, in the first instantiation, three very discrete product solutions aimed at different use cases, whether that's a more robust use case for customers who are looking at data intensive, analytic intensive environments, or other customers who might be looking at VDI or even edge applications. And so, we've worked really closely with VMware to engineer solutions specific to those use cases, not just a qualification of an operating environment, not just a qualification of a certain software stack, but really an understanding of the use case, the customer solution, and how we take that to market with a very distinct point of view alongside our partners. >> And you can configure the processors based on that workload. Is that right? And match the workload characteristics with the infrastructure, is that what I'm getting? >> You do, and actually, well, you've got the same flexibility that we've actually built in, why you love virtualization, why people love it, right? You've got the ability to harness hardware towards your application needs in a very dynamic way. Right? So if you even think about what we built in vSphere 8 from an AI point of view, we're able to scale. We built the ability to actually take network device cards and GPU cards, and you're able to build those into a kind of composed device. And, you're able to provision those as you're provisioning out VMs. And, the cool thing about that is you want to be able to get extreme IO performance when you're doing deep learning applications, and you can now do that, and you can do it very dynamically, as part of the provisioning. So, that's the kind of stuff. You've got to really think, like, what's the use case? What's the application? How do we build it? And, for the DPU side of things, yes, we've looked at how do we take some of our security services, some of our networking services, and we push those services down onto the SmartNIC. It frees up processors. I think the most interesting thing, that you probably saw in the keynote, was we did benchmarks with Redis databases. We were seeing 20-plus, I'm not sure of the exact number, I think it was 27%, I have to get the exact number, but a 27% latency improvement. To me... I came from the database background, latency's everything. Latency's king. It's not just... >> Well it's... it's the number one conversation. >> I mean, we talk about multi-cloud, and as you start getting into hybrid. >> Right. >> Latency, data movement, efficiency, I mean, this is all in the workload mindset, and the workhorses that you guys have been working on at HPE with the compute, vSphere, this is the heart center of the discussion. I mean, it is under the hood, and we're talking about the engine here, right? >> Sure. >> And people care about this stuff, Mark. This is like... Kubernetes only helps this better with containers. I mean, it's all kind of coming together. Where's that developer piece? 'Cause remember, infrastructure as code, what everybody wants. That's the reality.
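The 27% number comes from VMware's own benchmarking and nothing about their harness is described here; purely as an illustration of how a latency comparison like the Redis one is typically measured, a minimal sketch (assuming a reachable Redis endpoint and the redis-py client) might look like the following, run once against a baseline host and once against a host whose network and security services are offloaded to the DPU.

```python
import statistics
import time

import redis  # pip install redis

# Measure round-trip latency of SET/GET pairs against a Redis endpoint,
# then compare median and tail latency between two configurations.

def measure(host, port=6379, samples=5000):
    r = redis.Redis(host=host, port=port)
    latencies = []
    for i in range(samples):
        start = time.perf_counter()
        r.set(f"key:{i}", "value")
        r.get(f"key:{i}")
        latencies.append(time.perf_counter() - start)
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "p99_ms": statistics.quantiles(latencies, n=100)[98] * 1000,
    }

if __name__ == "__main__":
    print(measure("127.0.0.1"))  # swap in the host under test
```

Because the offload happens below the guest, the application and this harness stay identical across runs; only where the virtualization and networking services execute changes.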
>> Right. Well, I think if you take a look at... at where the Genesis of the desire to have this capability came from, it came directly out of the fact that you take a look at the big cloud providers, and sure, the ability to have a part of that operating environment, separated out of the CPU, free up as much processing as you possibly can, but it was all in this very lockdown proprietary, can't touch it, can't develop on it. The big cloud guys owned it. VMware has come along and said, "Okay, we're going to democratize that. We're going to make this available for the masses. We're opening this up so that developers can optimize workloads, can optimize applications to run in this kind of environment." And so, really it's about bringing that cloud experience, that demand that customers have for that simplicity, that flexibility, that efficiency, and then marrying it with the agility and security of having your on premises or hybrid cloud environment. And VMware is kind of helping with that... >> That's resonating with the customer, I got to imagine. >> Yeah. >> What's the feedback you're hearing? When you talk to customers about that, the like, "Wait a minute, we'd have to like... How long is that going to take? 'Cause that sounds like a one off." >> Yeah. I'll tell you what... >> Everything is a one off now. You could do a one off. It scales. >> What I hear is give me more. We love where we're going in the first instantiation of what we can do with the Distributed Services Engine. We love what we're seeing. How do we do more? How do we drive more workloads in here? How do we get more efficiency? How can we take more of the overhead out of the CPU, free up more cores. And so, it's a tremendously positive response. And then, it's a response that's resonating with, "Love it. Give me more." >> Oh, if you're democratizing, I love that word because it means democratization, but someone's being democratized. Who's... What's... Something when... that means good things are happening, which means someone's not going to be winning out. Who's that? What... >> Well it, it's not necessarily that someone's not winning out. (laughs) What you read, it comes down to... Democratizing means you've got to look at it, making it widely available. It's available to all. And these things... >> No silos. No gatekeepers. Kind of that kind of thing. >> It's a little operationally difficult to use. You've got... Think about the DPU market. It was a divergent market with different vendors going into that market with different kind of operating systems, and that doesn't work. Right? You've got to actually go and virtualize those DPU's. So then, we can actually bring application innovation onto those DPU's. We can actually start using them in smart ways. We did the same thing with GPU's. We made them incredibly easy to use. We virtualized those GPU's, we're able to, you know, you can provision them in a very simple way. And, we did the same thing with Kubernetes. You mentioned about container based applications and modern apps in the one platform now, you can just set a cluster and you can just say, "Hey I want that as a modern apps enabled cluster." And boom. It's done. And, all of the configurations, set up, Kubernetes, it's done for you. >> But the thing that just GreenLake too, the democratization aspect of how that changed the business model unleashes... >> Right. >> ...efficiency and just simplicity. >> Oh yeah, absolutely. 
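The exchange above leans on the idea that once a cluster is set up as "modern apps enabled," everything downstream is just the standard Kubernetes API. Below is a minimal sketch of what that looks like in practice, using the official Kubernetes Python client; the kubeconfig location and the namespace name are illustrative assumptions for the example, not anything specific that VMware or HPE ships.

```python
# Minimal sketch: once the cluster exists, ordinary Kubernetes tooling works
# against it. Assumes `pip install kubernetes` and a kubeconfig at the default
# location (~/.kube/config) pointing at the provisioned cluster.
from kubernetes import client, config

def inspect_and_prepare(namespace: str = "modern-apps-demo") -> None:
    config.load_kube_config()          # read the local kubeconfig (assumed to exist)
    core = client.CoreV1Api()

    # Worker nodes show up as plain Kubernetes nodes; any DPU offload of
    # networking or security services underneath is invisible to the workload.
    for node in core.list_node().items:
        print(node.metadata.name, node.status.node_info.kubelet_version)

    # Create a namespace for a containerized app, exactly as on any other cluster.
    body = client.V1Namespace(metadata=client.V1ObjectMeta(name=namespace))
    core.create_namespace(body)
    print(f"created namespace {namespace}")

if __name__ == "__main__":
    inspect_and_prepare()
```

The point of the sketch is the one made in the conversation: the platform work, such as qualification, DPU offload, and composed devices, stays below the line, while the developer-facing surface is plain Kubernetes.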
>> But the other thing was the 20% savings on the Redis benchmark, with no change required at the application level, correct? >> No change at the application level. In vCenter, you have to set a little flag. >> Okay. You've got to tick a box. >> You've got to tick a little box... >> So I can live with that. But the point I'm making is that traditionally, we've had... we have an increasing amount of wasted capacity doing offloads, and now you're doing them much more efficiently, right? >> Yes. >> Instead of using the traditional x86 way of doing stuff, you're now doing it purpose-built, applying that to be much more efficient. >> Totally agree. And I think it's becoming, it's going to become even more important. Look at where we are... the runtimes for our applications. We've got to move to a world where we're building completely confidential applications at all times. And that means that they are secured, encrypted, all traffic is encrypted, whether it's storage traffic, whether it's IO traffic; we've got to make sure we've got a complete root of trust for the applications. And so, to do all of that is actually... compute intensive. It just is. And so, I think as we move forward and people build much more complete, confidential, secured compute environments, you're going to be encrypting all traffic all the time. You're going to be doing micro-zoning and firewalling down at the VM level so that you've got the protection. You can take a VM, you can move it up to the cloud, and it will inherit all of its policies; they will move with it. All of that will take compute capacity. >> Yup. >> The great thing is that the DPUs give us this ability to offload and to use some of that spare compute capacity. >> And isolate, so applications can't just tunnel in and get access to that. >> You guys got so much going on. You could have your own CUBE show, just on the updates, what's going on between the two companies, and then the innovation. We've got one minute left. Just quickly, what's the goal in the partnership? What's next? You guys going to be in the field together, doing joint customer work? Are there bigger plans? Are there events out there? What are some of your plans together in the marketplace? >> That's you. >> Yup. So, I think Paul kind of alluded to it. Talk about the fact that you've got a hundred thousand partners in common. The Venn diagram of looking at the HPE channel and the VMware channel, clearly there's an opportunity there to continue to drive a joint go-to-market message, through both of our sales organizations and through our shared channel. We have a 25,000-strong... solution architect... force that we can leverage. So as we get these exciting things to talk about, I mean, you talk about Project Monterey, the Distributed Services Engine. That's big news. There's big news around vSphere 8. And so, having those great things to go talk about with that strong sales team, with that strong channel organization, I think you're going to see a lot stronger partnership between VMware and HPE as we continue to do this joint development and joint selling. >> Lots to get enthused about, pretty much there. >> Oh yeah! >> Yeah, I would just add that we're actually at a very interesting point as well, where Intel's just coming out with next-rev systems; we're building the next gen of these systems. I think this is a great time for customers to look at that aging infrastructure that they have in place. 
Now is the time we can look at upgrading it, but when they're moving, they can also move to a cloud subscription-based model. You know, you can modernize not just what you have in terms of the capabilities, and densify and get much better efficiency, but you can also modernize the way you buy from us and actually move to... >> Real positive change, transformation. Checks the boxes there. And puts you in position for... >> You got it. >> ...cloud native development. >> Absolutely. >> Guys, thanks for coming on the CUBE. Really appreciate you coming out of that busy schedule and coming on and giving us the update... But again, we could do a whole show sometime on all the moving parts and innovation going on with you guys. So thanks for coming on. Appreciate it. Thank you. I'm John Furrier with Dave Vellante; we're back with more live coverage, day two, two sets, three days of wall-to-wall coverage. This is the CUBE at VMware Explorer. We'll be right back.
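As a rough illustration of the figures quoted in this segment, the arithmetic below shows what a roughly 27% latency improvement and a freed-up share of host cores would mean in practice. The 27% comes from the conversation; every other number is an assumed, illustrative value rather than a published benchmark result.

```python
# Illustrative arithmetic only; baseline latency, core count, and overhead share
# are assumptions chosen for the example, not measured values.

quoted_improvement = 0.27            # "roughly 27%" latency improvement mentioned above
baseline_p99_us = 850.0              # assumed p99 latency of the Redis-style test, in microseconds
offloaded_p99_us = baseline_p99_us * (1.0 - quoted_improvement)

host_cores = 64                      # assumed cores per server
services_overhead = 0.15             # assumed share of cores spent on virtualization/network services
freed_cores = host_cores * services_overhead   # capacity returned to applications after DPU offload

print(f"p99 latency: {baseline_p99_us:.0f} us -> {offloaded_p99_us:.0f} us")
print(f"cores freed for workloads: {freed_cores:.1f} of {host_cores}")
```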

Published Date : Aug 31 2022


Raghu Raghuram, VMware | VMware Explore 2022


 

>>Okay, welcome back everyone. There's the cubes coverage of VMware Explorer, 22 formerly world. We've been here since 2010 and world 2010 to now it's 2022. And it's VMware Explorer. We're here at the CEO, regular writer. Welcome back to the cube. Great to see you in person. >>Yeah. Great to be here in person, >>Dave and I are, are proud to say that we've been to 12 straight years of covering VMware's annual conference. And thank you. We've seen the change in the growth over time and you know, it's kind of, I won't say pinch me moment, but it's more of a moment of there's the VMware that's grown into the cloud after your famous deal with Andy jazzy in 2016, we've been watching what has been a real sea change and VMware since taking that legacy core business and straightening out the cloud strategy in 2016, and then since then an acceleration of, of cloud native, like direction under your leadership at VMware. Now you're the CEO take us through that because this is where we are right now. We are here at the pinnacle of VMware 2.0 or cloud native VMware, as you point out on your keynote, take us through that history real quick. Cuz I think it's important to know that you've been the architect of a lot of this change and it's it's working. >>Yeah, definitely. We are super excited because like I said, it's working, the history is pretty simple. I mean we tried running our own cloud cloud air. We cloud air didn't work so well. Right. And then at that time, customers really gave us strong feedback that the hybrid they wanted was a Amazon together. Right. And so that's what we went back and did and the andjay announcement, et cetera. And then subsequently as we were continue to build it out, I mean, once that happened, we were able to go work with the Satia and Microsoft and others to get the thing built out all over. Then the next question was okay, Hey, that's great for the workloads that are running on vSphere. What's the story for workloads that are gonna be cloud native and benefit a lot from being cloud native. So that's when we went the Tansu route and the Kubernetes route, we did a couple of acquisitions and then we started that started paying off now with the Tansu portfolio. And last but not the least is once customers have this distributed portfolio now, right. Increasingly everything is becoming multi-cloud. How do you manage and connect and secure. So that's what you start seeing that you saw the management announcement, networking and security and everything else is cooking. And you'll see more stuff there. >>Yeah know, we've been talking about super cloud. It's kinda like a multi-cloud on steroids kind a little bit different pivot of it. And we're seeing some use cases. >>No, no, it's, it's a very great, it's a, it's pretty close to what we talk about. >>Awesome. I mean, and we're seeing this kind of alignment in the industry. It's kind of open, but I have to ask you, when did you, you have the moment where you said multicloud is the game changer moment. When did you have, because you guys had hybrid, which is really early as well. When was the Raghu? When did you have the moment where you said, Hey, multicloud is what's happening. That's we're doubling down on that go. >>I mean, if you think about the evolution of the cloud players, right. Microsoft really started picking up around the 2018 timeframe. I mean, I'm talking about Azure, right? >>In a big way. >>Yeah. In a big way. Right. 
When that happened and then Google got really serious, it became pretty clear that this was gonna be looking more like the old database market than it looked like a single player cloud market. Right. Equally sticky, but very strong players all with lots of IP creation capability. So that's when we said, okay, from a supplier side, this is gonna become multi. And from a customer side that has always been their desire. Right. Which is, Hey, I don't want to get locked into anybody. I want to do multiple things. And the cloud vendors also started leveraging that OnPrem. Microsoft said, Hey, if you're a windows customer, your licensing is gonna be better off if you go to Azure. Right. Oracle did the same thing. So it just became very clear. >>I am, I have gone make you laugh. I always go back to the software mainframe because I, I think you were here. Right. I mean, you're, you're almost 20 years in. Yeah. And I, the reason I appreciate that is because, well, that's technically very challenging. How do you make virtualization overhead virtually non-existent how do you run any workload? Yeah. How do you recover from, I mean, that's was not trivial. Yeah. Okay. So what's the technical, you know, analog today, the real technical challenge. When you think about cross cloud services. >>Yeah. I mean, I think it's different for each of these layers, right? So as I was alluding to for management, I mean, you can go each one of them by themselves, there is one way of Mo doing multi-cloud, which is multiple clouds. Right. You could say, look, I'm gonna build a great product for AWS. And then I'm gonna build a great product for Azure. I'm gonna build a great product for Google. That's not what aria is. Aria is a true multi-cloud, which means it pulls data in from multiple places. Right? So there are two or three, there are three things that aria has done. That's I think is super interesting. One is they're not trying to take all the data and bring it in. They're trying to federate the data sources. And secondly, they're doing it in real time and they're able to construct this graph of a customer's cloud resources. >>Right. So to keep the graph constructed and pulling data, federating data, I think that's a very interesting concept. The second thing that, like I said is it's a real time because in the cloud, a container might come and go like that. Like that is a second technical challenge. The third it's not as much a technical challenge, but I really like what they have done for the interface they've used GraphQL. Right? So it's not about if you remember in the old world, people talk about single pan or glass, et cetera. No, this is nothing to do with pan or glass. This is a data model. That's a graph and a query language that's suited for that. So you can literally think of whatever you wanna write. You can write and express it in GraphQL and pull all sorts of management applications. You can say, Hey, I can look at cost. I can look at metrics. I can look at whatever it is. It's not five different types of applications. It's one, that's what I think had to do it at scale is the other problem. And, and >>The, the technical enable there is just it's good software. It's a protocol. It's >>No, no, it's, it's, it's it's software. It's a data model. And it's the Federation architecture that they've got, which is open. Right. You can pull in data from Datadog, just as well as from >>Pretty >>Much anything data from VR op we don't care. Right? >>Yeah. Yeah. 
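To ground the GraphQL point made above, here is a small sketch of what a federated, graph-shaped management query can look like when sent from Python. The endpoint URL, field names, and token handling are hypothetical stand-ins invented for the example; they are not the actual Aria schema or API.

```python
# Hypothetical example: one GraphQL query pulls cost, utilization, and security
# findings for workloads across clouds in a single round trip. Only the use of
# the `requests` library is real; the endpoint and schema are assumptions.
import requests

GRAPHQL_ENDPOINT = "https://mgmt.example.com/graphql"   # hypothetical endpoint
API_TOKEN = "REPLACE_WITH_TOKEN"                        # hypothetical bearer token

QUERY = """
{
  workloads(cloud: ANY, tag: "payments") {
    name
    cloud
    monthlyCost
    cpuUtilizationP95
    openSecurityFindings { severity title }
  }
}
"""

resp = requests.post(
    GRAPHQL_ENDPOINT,
    json={"query": QUERY},
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for workload in resp.json()["data"]["workloads"]:
    print(workload["name"], workload["cloud"], workload["monthlyCost"])
```

The design point is the one described in the conversation: the consumer writes whatever query fits the task against a single graph-shaped data model, instead of stitching together five different management applications.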
So rego, I have to ask you, I'm glad you like the Supercloud cuz you know, we, we think multi-cloud still early, but coming fast. I mean, everyone has multiple clouds, but spanning this idea of spanning across has interesting sequences. Do you data, do you do computer both and a lot of good things happening. Kubernetes been containers, all that good stuff. Okay. How do you see the first rev of multi-cloud evolving? Like is it what happens? What's the sequence, what's the order of operations for a client standpoint? Customer standpoint of, of multicloud or Supercloud because we think we're seeing it as a refactoring of something like snowflake, they're a data base, they're a data warehouse on the cloud. They, they say data cloud they'd they like they'll tell us no, you, we're not a data. We're not a data warehouse. We're data cloud. Okay. You're a data warehouse refactored for the CapEx from Amazon and cooler, newer things. Yeah, yeah, yeah. That's a behavior change. Yeah. But it's still a data warehouse. Yeah. How do you see this multi-cloud environment? Refactoring? Is there something that you see that might be different? That's the same if you know what I'm saying? Like what's what, what's the ne the new thing that's happening with multi-cloud, that's different than just saying I'm I'm doing SAS on the cloud. >>Yeah. So I would say, I would point to a, a couple of things that are different. Firstly, my, the answer depends on which category you are in. Like the category that snowflake is in is very different than Kubernetes or >>Something or Mongo DB, right? >>Yeah. Or Mongo DB. So, so it is not appropriate to talk about one multi-cloud approach across data and compute and so, so on and so forth. So I'll talk about the spaces that we play. Right. So step one, for most customers is two application architectures, right? The cloud native architecture and an enterprise native architecture and tying that together either through data or through networks or through et cetera. So that's where most of the customers are. Right. And then I would say step two is to bring these things together in a more, in a closer fashion and that's where we are going. And that is why you saw the cloud universal announcement and that's already, you've seen the Tansu announcement, et cetera. So it's really, the step one was two distinct clouds. That is just two separate islands. >>So the other thing that we did, that's really what my, the other thing that I'd like to get to your reaction on, cause this is great. You're like a masterclass in the cube here. Yeah, totally is. We see customers becoming super clouds because they're getting the benefit of, of VMware, AWS. And so if I'm like a media company or insurance company, if I have scale, if I continue to invest in, in cloud native development, I do all these things. I'm gonna have a da data scale advantage, possibly agile, which means I can build apps and functionality very quick for customers. I might become my own cloud within the vertical. Exactly. And so I could then service other people in the insurance vertical if I'm the insurance company with my technology and create a separate power curve that never existed before. Cause the CapEx is off the table, it's operating expense. Yep. That runs into the income statement. Yep. This is a fundamental business model shift and an advantage of this kind of scenario. >>And that's why I don't think snowflakes, >>What's your reaction to that? 
Cuz that's something that, that is not really, talk's highly nuanced and situational. But if Goldman Sachs builds the biggest cloud on the planet for financial service for their own benefit, why wouldn't they >>Exactly. >>And they're >>Gonna build it. They sort of hinted at it that when they were up on stage on AWS, right. That is just their first big step. I'm pretty sure over time they would be using other clouds. Think >>They already are on >>Prem. Yeah. On prem. Exactly. They're using VMware technology there. Right? I mean think about it, AWS. I don't know how many billions of dollars they're spending on AWS R and D Microsoft is doing the same thing. Google's doing the same thing we are doing. Not as much as them that you're doing oral chair. Yeah. If you are a CIO, you would be insane not to take advantage of all of this IP that's getting created and say, look, I'm just gonna bet on one. Doesn't make any sense. Right. So that's what you're seeing. And then >>I think >>The really smart companies, like you talked about would say, look, I will do something for my industry that uses these underlying clouds as the substrate, but encapsulates my IP and my operating model that I then offer to other >>Partners. Yeah. And their incentive for differentiation is scale. Yeah. And capability. And that's a super cloud. That's a, or would be say it environment. >>Yeah. But this is why this, >>It seems like the same >>Game, but >>This, I mean, I think it environment is different than >>Well, I mean it advantage to help the business, the old day service, you >>Said snowflake guys out the marketing guys. So you, >>You said snowflake data warehouse. See, I don't think it's in data warehouse. It's not, that's like saying, you >>Know, I, over >>VMware is a virtualization company or service now is a help desk tool. I, this is the change. Yes. That's occurring. Yes. And that you're enabling. So take the Goldman Sachs example. They're gonna run OnPrem. They're gonna use your infrastructure to do selfer. They're gonna build on AWS CapEx. They're gonna go across clouds and they're gonna need some multi-cloud services. And that's your opportunity. >>Exactly. That's that's really, when you, in the keynote, I talked about cloud universal. Right? So think of a future where we can go to a customer and say, Mr. Customer buy thousand scores, a hundred thousand cores, whatever capacity you can use it, any which way you want on any application platform. Right. And it could be OnPrem. It could be in the cloud, in the cloud of their choice in multiple clouds. And this thing can be fungible and they can tie it to the right services. If they like SageMaker they could tie it to Sage or Aurora. They could tie it to Aurora, cetera, et cetera. So I think that's really the foundation that we are setting. Well, I think, I >>Mean, you're building a cloud across clouds. I mean, that's the way I look at it. And, and that's why it's, to me, the, the DPU announcement, the project Monterey coming to fruition is so important. Yeah. Because if you don't have that, if you're not on that new Silicon curve yep. You're gonna be left behind. Oh, >>Absolutely. It allows us to build things that you would not otherwise be able to do, >>Not to pat ourselves on the back Ragu. But we, in what, 2013 day we said, feel >>Free. >>We, we said with Lou Tucker when OpenStack was crashing. Yeah. Yeah. And then Kubernetes was just a paper. We said, this could be the interoperability layer. Yeah. You got it. 
And you could have inter clouding cuz there was no clouding. I was gonna riff on inter networking. But if you remember inter networking during the OSI model, TCP and IP were hardened after the physical data link layer was taken care of. So that enabled an entire new industry that was open, open interconnect. Right. So we were saying inter clouding. So what you're kind of getting at with cross cloud is you're kind of creating this routing model if you will. Not necessarily routing, but like connection inter clouding, we called it. I think it's kinda a terrible name. >>What you said about Kubernetes is super critical. It is turning out to be the infrastructure API so long. It has been an infrastructure API for a certain cluster. Right. But if you think about what we said about VSE eight with VSE eight Kubernetes becomes the data center API. Now we sort of glossed over the point of the keynote, but you could do operations storage, anything that you can do on vSphere, you can do using a Kubernetes API. Yeah. And of course you can do all the containers in the Kubernetes clusters and et cetera, is what you could always do. Now you could do that on a VMware environment. OnPrem, you could do that on EKS. Now Kubernetes has become the standard programming model for infrastructure across. It >>Was the great equalizer. Yeah. You, we used to say Amazon turned the data center through an API. It turns, turns of like a lot of APIs and a lot of complexity. Right. And Kubernetes changed. >>Well, the role, the role of defacto standards played a lot into the T C P I P revolution before it became a standard standard. What the question Raghu, as you look at, we had submit on earlier, we had tutorial on as well. What's the disruptive enabler from a defacto. What in your mind, what should, because Kubernetes became kind of defacto, even though it was in the CNCF and in an open source open, it wasn't really standard standard. There's no like standards, body, but what de facto thing has to happen in your mind's eye around making inter clouding or connecting clouds in a, in a way that's gonna create extensibility and growth. What do you see as a de facto thing that the industry should rally around? Obviously Kubernetes is one, is there something else that you see that's important for in an open way that the industry can discuss and, and get behind? >>Yeah. I mean, there are things like identity, right? Which are pretty critical. There is connectivity and networking. So these are all things that the industry can rally around. Right. And that goes along with any modern application infrastructure. So I would say those are the building blocks that need to happen on the data side. Of course there are so many choices as well. So >>How about, you know, security? I think about, you know, when after stuck net, the, the whole industry said, Hey, we have to do a better job of collaborating. And then when you said identity, it just sort of struck me. But then a lot of people tried to sort of monetize private reporting and things like that. So you do you see a movement within the technology industry to do a better job of collaborating to, to solve the acute, you know, security problems? >>Yeah. I think the customer pressure and government pressure right. Causes that way. Yeah. Even now, even in our current universe, you see, there is a lot of behind the scenes collaboration amongst the security teams of all of the tech companies that is not widely seen or known. Right. 
For example, my CISO knows the AWS CSO or the Microsoft CSO and they all talk and they share the right information about vulnerability attacks and so on and so forth. So there's already a certain amount of collaboration that's happening and that'll only increase. Do, >>Do you, you know, I was somewhat surprised. I didn't hear more in your face about security would, is that just because you had such a strong multi-cloud message that you wanted to get, get across, cuz your security story is very strong and deep. When you get into the DPU side of things, the, you know, the separation of resources and the encryption and I'll end to end >>I'm well, we have a phenomenal security story. Yeah. Yeah. Tell security story and yes. I mean I'll need guilty to the fact that in the keynote you have yeah, yeah, sure time. But what we are doing with NSX and you will hear about some NSX projects as you, if you have time to go to some of the, the sessions. Yeah. There's one called project, not star. Another is called project Watchman or watch, I think it's called, we're all dealing with this. That is gonna strengthen the security story even more. Yeah. >>We think security and data is gonna be a big part of it. Right. As CEO, I have to ask you now that you're the CEO, first of all, I'd love to talk about product with you cuz you're yeah. Yeah. We just great conversation. We want to kind of read thet leaves and ask pointed questions cuz we're putting the puzzle together in real time here with the audience. But as CEO, now you have a lot of discussions around the business. You, the Broadcom thing happening, you got the rename here, you got multi-cloud all good stuff happening. Dave and I were chatting before we came on this morning around the marketplace, around financial valuations and EBIDA numbers. When you have so much strategic Goodwill and investment in the oven right now with the, with the investments in cloud native multi-year investments on a trajectory, you got economies of scale there. >>It's just now coming out to be harvest and more behind it. Yeah. As you come into the Broadcom and or the new world wave that's coming, how do you talk about that value? Cuz you can't really put a number on it yet because there's no customers on it. I mean some customers, but you can't probably some for form. It's not like sales numbers. Yeah. Yeah. How do you make the argument to the PE type folks out there? Like EBIDA and then all the strategic value. What's the, what's the conversation like if you can share any, I know it's obviously public company, all the things going down, but like how do you talk about strategic value to numbers folks? >>Yeah. I mean, we are not talking to PE guys at all. Right. I mean the only conversation we have is helping Broadcom with >>Yeah. But, but number people who are looking at the number, EBIDA kind of, >>Yeah. I mean, you'd be surprised if, for, for example, even with Broadcom, they look at the business holistically as what are the prospects of this business becoming a franchise that is durable and could drive a lot of value. Right. So that's how they look at it holistically. It's not a number driven. >>They do. They look at that. >>Yeah. Yeah, absolutely. So I think it's a misperception to say, Hey, it's a numbers driven conversation. It's a business driven conversation where, I mean, and Hawk's been public about it. He says, look, I look at businesses. Can they be leaders in their market? Yeah. 
Because leaders get, as we all know a disproportionate share of the economic value, is it a durable franchise that's gonna last 10 years or more, right. Obviously with technology changes in between, but 10 years or more >>Or 10, you got your internal, VMware talent customers and >>Partners. Yeah. Significant competitive advantage. So that's, that's really where the conversation starts and the numbers fall out of it. Got it. >>Okay. So I think >>There's a track record too. >>That culture >>That VMware has, you've always had an engineering culture. That's turned, you know, ideas and problems into products that, that have been very successful. >>Well, they had different engineering cultures. They're chips. You guys are software. Right. You guys know >>Software. Yeah. Mean they've been very successful with Broadcom, the standalone networking company since they took it over. Right. I mean, it's, there's a lot of amazing innovation going on there. >>Yeah. Not, not that I'm smiling. I want to kind of poke at this question question. I'll see if I get an answer out of you, when you talk to Hawk tan, does he feel like he bought a lot more than he thought or does he, did he, does he know it's all here? So >>The last two months, I mean, they've been going through a very deliberate process of digging into each business and certainly feels like he got a phenomenal asset base. Yeah. He said that to me even today after the keynote, right. Is the amazing amount of product capability that he's seeing in every one of our businesses. And that's been the constant frame. >>But congratulations on that. >>I've heard, I've heard Hawk talk about the shift to, to Mer merchant Silicon. Yeah. From custom Silicon. But I wanted to ask you when you look at things like AWS nitro yeah. And graviton and train and the advantage that AWS has with custom Silicon, you see Google and Microsoft sort of Alibaba following suit. Would it benefit you to have custom Silicon for, for DPU? I mean, I guess you, you know, to have a tighter integration or do you feel like with the relationships that you have that doesn't buy you anything? >>Yeah. I mean we have pretty strong relationships with in fact fantastic relationships with the Invidia and Intel and AMD >>Benon and AMD now. >>Yeah. Yeah. I mean, we've been working with the Pendo team in their previous incarnations for years. Right, right. When they were at Cisco and then same thing with the, we know the Melanox team as well as the invi original teams and Intel is the collaboration right. From the get go of the company. So we don't feel a need for any of that. We think, I mean, it's clear for those cloud folks, right. They're going towards a vertical integration model and select portions of their stack, like you talked about, but there is always a room for horizontal integration model. Right. And that's what we are a part of. Right. So there'll be a number of DPU pro vendors. There'll be a number of CPU vendors. There'll be a number of other storage, et cetera, et cetera. And we think that is goodness in an alternative model compared to a vertically integr >>And yeah. What this trade offs, right. It's not one or the other, I mean I used to tell, talk to Al Shugar about this all the time. Right. I mean, if vertically integrated, there may be some cost advantages, but then you've got flexibility advantages. If you're using, you know, what the industry is building. Right. And those are the tradeoffs, so yeah. Yeah. >>Greg, what are you excited about right now? 
You got a lot going on obviously great event. Branding's good. Love the graphics. I was kind of nervous about the name changed. I likem world, but you know, that's, I'm kind of like it >>Doesn't readily roll off your phone. Yeah. >>I know. We, I had everyone miscue this morning already and said VMware Explorer. So >>You pay Laura fine. Yeah. >>Now, I >>Mean a quarter >>Curse jar, whatever I did wrong. I don't believe it. Only small mistake that's because the thing wasn't on. Okay. Anyway, what's on your plate. What's your, what's some of the milestones. Do you share for your employees, your customers and your partners out there that are watching that might wanna know what's next in the whole Broadcom VMware situation. Is there a timeline? Can you talk publicly about what? To what people can expect? >>Yeah, no, we, we talk all the time in the company about that. Right? Because even if there is no news, you need to talk about what is where we are. Right. Because this is such a big transaction and employees need to know where we are at every minute of the day. Right? Yeah. So, so we definitely talk about that. We definitely talk about that with customers too. And where we are is that the, all the processes are on track, right? There is a regulatory track going on. And like I alluded to a few minutes ago, Broadcom is doing what they call the discovery phase of the integration planning, where they learn about the business. And then once that is done, they'll figure out what the operating model is. What Broadcom is said publicly is that the acquisition will close in their fiscal 23, which starts in November of this year, runs through October of next year. >>So >>Anywhere window, okay. As to where it is in that window. >>All right, Raghu, thank you so much for taking valuable time out of your conference time here for the queue. I really appreciate Dave and I both appreciate your friendship. Congratulations on the success as CEO, cuz we've been following your trials and tribulations and endeavors for many years and it's been great to chat with you. >>Yeah. Yeah. It's been great to chat with you, not just today, but yeah. Over a period of time and you guys do great work with this, so >>Yeah. And you guys making, making all the right calls at VMware. All right. More coverage. I'm shot. Dave ante cube coverage day one of three days of world war cup here in Moscone west, the cube coverage of VMware Explorer, 22 be right back.

Published Date : Aug 30 2022

SUMMARY :

Great to see you in person. Cuz I think it's important to know that you've been the architect of a lot of this change and it's So that's what you start seeing that you saw the management And we're seeing some use cases. When did you have the moment where I mean, if you think about the evolution of the cloud players, And the cloud vendors also started leveraging that OnPrem. I think you were here. to for management, I mean, you can go each one of them by themselves, there is one way of So it's not about if you remember in the old world, people talk about single pan The, the technical enable there is just it's good software. And it's the Federation Much anything data from VR op we don't care. That's the same if you know what I'm saying? Firstly, my, the answer depends on which category you are in. And that is why you saw the cloud universal announcement and that's already, you've seen the Tansu announcement, et cetera. So the other thing that we did, that's really what my, the other thing that I'd like to get to your reaction on, cause this is great. But if Goldman Sachs builds the biggest cloud on the planet for financial service for their own benefit, They sort of hinted at it that when they were up on stage on AWS, right. Google's doing the same thing we are doing. And that's a super cloud. Said snowflake guys out the marketing guys. you So take the Goldman Sachs example. And this thing can be fungible and they can tie it to the right services. I mean, that's the way I look at it. It allows us to build things that you would not otherwise be able to do, Not to pat ourselves on the back Ragu. And you could have inter clouding cuz there was no clouding. And of course you can do all the containers in the Kubernetes clusters and et cetera, is what you could always do. Was the great equalizer. What the question Raghu, as you look at, we had submit on earlier, we had tutorial on as well. And that goes along with any I think about, you know, when after stuck net, the, the whole industry Even now, even in our current universe, you see, is that just because you had such a strong multi-cloud message that you wanted to get, get across, cuz your security story I mean I'll need guilty to the fact that in the keynote you have yeah, As CEO, I have to ask you now that you're the CEO, I know it's obviously public company, all the things going down, but like how do you talk about strategic value to I mean the only conversation we have is helping Broadcom So that's how they look at it holistically. They look at that. So I think it's a misperception to say, Hey, it's a numbers driven conversation. the numbers fall out of it. That's turned, you know, ideas and problems into Right. I mean, it's, there's a lot of amazing innovation going on there. I want to kind of poke at this question question. He said that to me even today after the keynote, right. But I wanted to ask you when you look at things like AWS nitro Invidia and Intel and AMD a vertical integration model and select portions of their stack, like you talked about, It's not one or the other, I mean I used to tell, talk to Al Shugar about this all the time. Greg, what are you excited about right now? Yeah. I know. Yeah. Do you share for your employees, your customers and your partners out there that are watching that might wanna know what's What Broadcom is said publicly is that the acquisition will close As to where it is in that window. All right, Raghu, thank you so much for taking valuable time out of your conference time here for the queue. 
Over a period of time and you guys do great day one of three days of world war cup here in Moscone west, the cube coverage of VMware Explorer,


Ricky Cooper, VMware & Rocco Lavista, HPE | HPE Discover 2022


 

>>The cube presents HPE discover 2022 brought to you by HPE, >>Where back you watching the Cube's coverage. HPE discover 2022. This is day three, Dave Valante with John furrier. Ricky Cooper is here. He is the vice president slash newly minted SP we're gonna talk about that of global and transformational partners at VMware and rock LA Vista. Who's the vice president of worldwide GreenLake cloud services at the transformation, the transformational partner of Hewlett Packard enterprise guys. Welcome to the program. Thanks for coming >>On. Thank you. Thank you very much. Thank >>You. So really interesting title and you've got a new role. Yeah. Right. Explain that. >>Well, I'm the interim SVP for the channel and for the commercial business at VMware, I also have the global, my existing role is global and transformational partners. So that's our, you know, our largest OEMs and also the transformational partners, which is more the, you know, the, the reseller stroke, um, services element of our business. >>I remember in, uh, John and I started the cube in 2010. Yeah. And the second show we did, third show actually was wasm world 2010. >>And Ritz was the CEO at the time, huge >>Booth. It was amazing. And, and HP at the time was all over, you know, of, of the cube and of course, world, and you guys have been partners for a long, long time Roco. So maybe give us a little bit >>Of the history AB absolutely. So for 20 years, H P H P has been partnered with VMware in delivering virtualization technology and solutions to our customer base. And while that partnership is strong, and I remember some of the market share numbers were like 45% of VMware software stack is running on HPE servers and technology. I think about how that's evolved, right? Like strong history, strong partnership. And when I say strong, I'm not talking about marketing fluff, I'm not talking about slideware. I'm talking about at a ground level that the account teams get together and talk about what those customers that they're working with. They get together and figure out what outcome they're trying to solve for. And we bring that technology together. Now, layering GreenLake GreenLake is taking at the heart of what VMware does with their software stack, combining it with our infrastructure solutions and providing IAS, PAs and CAS capabilities to our customers at the edge in their core, whether it's a data center or, um, colo, as well as providing the common operating model into public cloud. And so we embrace, and the partnership is only getting stronger because of what VMware does with us now with GreenLake, which is everything, what HPE is >>About that is well, well said, I gotta say, I gotta say that was purposely. That was really crisp and, and not to kind of go back and look at the history of the cube, but we've been covering both of you guys. Mm-hmm, <affirmative> deeply been watching the transformation of both companies. It's so clear that VMware is so deep in the operational side of it. Yeah. It's been one of the hallmarks of VMware mm-hmm <affirmative>, uh, vSphere, um, all that technology. You guys have been powering with the hardware now, GreenLake, we had a demo yesterday with the storage team, they're provisioning, storage, Amazon storage, and on premise and edge. So we see VMware as a massive service layer in this new model. Very key. How deep, uh, is it going now with the GreenLake? 
Can you share what's different with the relationship, I get the account deep account partner sharing, but now that green Lake's out there, you have an ecosystem. VMware has an ecosystem. Absolutely. A big one. Yeah. You know, so take >>This and is really where we're looking to improve things. So let me, let me start by saying, we've just been voted the 20, 20, uh, partner of the year, uh, here with HPE this week. And that news is out there and, uh, was issued a couple of days ago, which is fantastic for the two companies and shows the direction where we are now and where we're looking to go forward. I think there's a lot of work to be done behind the scenes. As we emerge as an independent company, there's a lot of work to be done behind the scenes on how we look at our broader ecosystem and certainly our largest OEMs of which, you know, HPE, as Roco said, 20 years of great partnership there, the next stage is how do we really get the teams equipped and plug into GreenLake? Um, you know, we've had a relationship very well known with Dell for the last, you know, for the last five years, we've grown that business at an amazing rate. We've got a whole bunch of personnel still working on, on those areas. We're in a position now where we can sort of redeploy some of those, um, over some of the headcount to really drive our mission here with our other partners. And certainly with HPE, >>Well, the integration piece that you guys have co co-engineering on that's well documented. Yeah. But with the ecosystem specifically, this is a net new thing for GreenLake and frankly, us analysts. And we had IDC on yesterday. We're looking at that as a benchmark, we're gonna be measuring GreenLake success by how well the ecosystem is so correct. Welcome to the party, VMware and HPE. That is it. You didn't have to have that big ecosystem cuz you had the channel, your HP had a strong channel mm-hmm <affirmative> but now it's an ecosystem game. Talk about that. >>Customers have that expectation, right? And if you think about what we've built, we've got an ecosystem we re we, um, announced Mar the marketplace for GreenLake right now, VMware has their own marketplace, but by standardizing on their technology in our private cloud enterprise, which was also announced here at discover, which is deeply rooted with VMware technology in it, we now are able to take advantage of their marketplace. Plus all the others that we're bringing into GreenLake and effectively solve for the customer's most complex business problems. Because if you want to be successful, you have to think that the world is open and hybrid. And that means partnerships with everybody mm-hmm <affirmative> you can't think I won't partner because they're a competitor or they may have a product that competes with me. It starts and ends with what the customer wants and needs and solving for that business objective. That means partnering well. >>Well you guys have, you know, they're they own the operator it ops. Yeah. I would say ops op side, clearly mm-hmm <affirmative>. And with the cloud native momentum that VMware has and what you guys have been doing, I just see a nice fit there. What are some of the customers say? I mean, what's some of the, what's the, what's the market telling you with GreenLake and VMware? What's the number one thing people love? Well, >>Just, just look at GreenLake at its core. And the very simplistic pays your grow model, right? The hardware doesn't grow without software. 
You don't scale the hardware or scale it back without software. And so what are we doing in within GreenLake? We're taking the VMware stack and we're scaling it with the hardware up and down for customers. They no longer have to worry about the balancing act between how much infrastructure I have to buy. How much software do I have to marry up to it? Are they outta sync? Right? We're solving that together for our customers. That's what they want at, at a very simplistic view, right? Then they say, Hey, give me the life cycle management of this platform, right? I don't wanna have to spend it cost operations, have employees dealing with very rudimentary life cycle management and the toil that it comes with. That's a big cost element when customers are creating snowflakes, mm-hmm <affirmative> in their it operations, they're adding cost. And what we're doing through this partnership, what we're doing with private cloud enterprise is eliminating that toil and, and helping optimize that operating mind >>You're simplifying. Oh, absolutely. >>So I wanna standardizing there a little bit as well. Right? So that, that's a, a great point and BRCA has made several there, but the next stage for us and what we've been talking about a lot this week is how do we sort of standardize what are the three or four things that customers are gonna recognize this partnership for? You know, be that, um, anywhere workspace be that multi-cloud, what are the three or four things that we can say, Hey, these two companies together are fantastic. And how do you then security get up and yeah. Security, security. Yeah. How do you then get that up and running in a green lake environment, but also on the back end, ensure that your operations are seamless and it's a great customer experience. >>So Ricky, that and Roco, I want to, uh, rewind two clicks back in the context of standards in the partner conversation, the ecosystem conversation, are you at a point where you can cuz you're basically saying you can cross pollinate the ecosystems and the partnerships. Yeah. But you got different, you know, business practices, different legal contracts and so forth. Are you able to create standardization at that layer within the partners beyond just YouTube within your respective ecosystems? Is that it sounds like that's a really difficult challenge, but it could deliver customer benefit in terms of reducing >>Friction. Absolutely. It does. And that's what we've gotta work towards. So right now operation wise, contract wise, that's exactly what we're here working through. It's not easy, but the teams are all fully behind it and that's the Nirvana for us is to be in that >>Position. Well, and, and what I really like where we are in this partnership at, in a point in time, VMware is spun off from Dell. If there's any confusion by our customer base, that VMware is going to not only work with us as they've done traditionally, but maybe get closer and not worry about this standardization, this approach, this ecosystem of players. I mean, you know, Ricky and I talked about this, like this only gets better. Yeah. Because of that. >>Yeah. The market dynamics are your friend right now. I think, yeah. That's definitely the case and the history is key, but the technical trends that we had an earlier panel on here, uh, with the technologists coming together, mm-hmm, <affirmative>, there's big changes happening. The edge is exploding rapidly accelerating with machine learning. 
You're seeing it ops turn into ML ops mm-hmm <affirmative>, you're starting to see the edged industrial edge explode, um, even into space. So like you have technology shifts. Yeah. And IDC pointed out that the B2B growth trends, even it spend, you want even call it, it spend or cloud spend or cloud ops is still up to the right. Yeah. Even during recession. >>And that is where all the opportunity is. So, you know, not just focusing on what we do today, let's think outside the box, we're doing some great things together, you know, in the, in the AI space and we've Invidia and between the two teams, some amazing things are happening and we've just gotta continue that. But focus is gonna be essential in the early stages to make sure you've got two or three things built out very well. And then the rest of the business that's already happening out there between the two companies is a bit more programmatic. >>Yeah. It's interesting. The V the VMware relationship with the hyperscale. I know we've covered, uh, the AWS announcement like six years ago. I forget what it was, Dave, four 60 years. Ragoo was there with Andy Ja, pat Gelsinger and, and, uh, all the top dogs there, but that's just Amazon. It's still the VMware instances on the cloud there. Yeah. The customers we're hearing here at GreenLake is that they want the single pain in cloud hate. They use that term. It's kind of an old term. That's kind of what we're seeing. They >>Still want it because nobody's giving it to 'em. >>So this, and then outpost, which is launched four years ago, kind of not working well for Amazon because EKS and open standards and, and other hardware platforms, which is essentially hardware mm-hmm <affirmative>, which is not Amazon's game. And they're, although they do great hardware in the cloud, but they're not, they're not hardware people >>Wait. So you're talking about like the public cloud guys trying to get into the edge, but look, the world is hybrid in no point, in instance, in time, do I ever believe that Azure will be able to control AWS nor GCP versus place versa? Right? And then this idea that you can go from the outside in is interesting, but where data's created, where the applications are, where the digital and the analog world meet as at the edge and for our customers, they're creating transactions and data at the edge. Mm-hmm, <affirmative>, that's where the control plane should start not in the public. And so, given that, and working with VMware, we're able to say where the data lives, where the application is sitting, where the digital transformation is happening. It's from the inside out that you provide a standard operating model across all your clouds, right? They're never gonna be able to give that to you unless you're a hundred percent in their cloud, including what they do at the edge. What we're doing with GreenLake is saying, we're giving you that edge to colo, to core data center, to public cloud operating model, that you're not having multiple snowflakes of an operating model for each one of those clouds. And VMware is at the core of that. >>And it's a global model. And Ricky, I'm guessing from your, what I would call an accent that you weren't born in America. Correct. I know where this Yankee fan was. >>Yeah. >>That's a >>Don't pin Yankee fan on the >>That's fan. Yeah. Okay. So despite 1986 we'll >>So >>I wanted to ask if, how you're able to take these standards overseas. 
Um, and because of course, you know, you know, well, John, as do I, different countries of different, different projects, governance issues, are you able to take this to make this a global? >>Absolutely. And, and the work I was talking about within Nvidia and HP is a great example because we've gone the other way. It's coming from Asia, where we've set up some best practice in the work that they're doing there, and it's coming across into Europe and coming across into the us. So it's all about finding, you know, finding the right solutions that we were talking about earlier. What's going to work, building out, investing that's something. I think that we we've missed a trick on, you know, through, through the past sort of four or five years, VMware really leaning in and really holding a hand here of HPE. The team were a huge team, turned up to the, to, to this event from all over the world. They're here demonstrating exactly what you're talking about, the standards with Nvidia, that message. And then you take that and make sure that it's not a snowflake just happening in Asia. You're bringing it across the world and, and you're getting the, you know, the impetus and the, uh, push behind that. >>You say, snowflake, I think of snowflake. We just covered their event too. Yeah. Yeah. Not snowflake and snowflake. Um, um, but final question as we wrap up, um, we got world converted to now called VMware Explorer. Yeah. So we're gonna be there again on the floor, two sets with the cube, um, that's changing. What can we expect to see from the relationship? What's the scorecard gonna look like? What, what's the metrics you guys are measuring yourselves on and what can customers expect from the HPE, um, VMware next level relationship partnership? >>Uh, for me, it's very simple. We measure our success based on the customer response. Are we solving for what they want us to be solving for? And that will prove itself out in how we're solutioning for them, the feedback that they give us and this discover event in terms of what we've released, the announcements between private cloud enterprise, the marketplace, um, what we're doing with this relationship since the Dell spinoff, the feedback has been amazing. Amazing, great. And I am thankful, thankful for the partnership. >>Awesome. Wrap way to bring us home Rocko. Thank you for that. And thank you, Ricky, for coming on the great, great >>Job you guys been great. Thank you. Thank you. >>Thanks very much. All right. And thank you for watching this, Dave Valante for John furrier day three of HPE, discover 2022. You're watching the cube, the leader in live enterprise and emerging tech coverage. We'll be right back.

Published Date : Jun 30 2022


Ryan King & Laurie Fontaine, Red Hat | HPE Discover 2022


 

>>The cube presents HPE discover 2022 brought to you by HPE. >>Hey everyone. Welcome back to the Cube's day one coverage of HPE. Discover 22 live from Las Vegas. Lisa Martin, here with Dave Velante of a couple of guests from red hat. You may have seen some news yesterday. We're gonna be talking about that. Please. Welcome Ryan King, the senior director of hardware partner ecosystem, and Lori Fontine joins us as well. The senior director of global commercial partner ecosystem. Welcome to the program guys. >>Thanks for having us. Yeah, >>Thank you so great to be back in person and nobody word has summit was just last month or so. That's right. Ryan. Talk about hybrid cloud. It's all the buzz. We've been talking a lot about it in the last hour and a half alone. What are some of the trends that, that red hat is seen with respect to hybrid cloud? >>Well, I, I mean, hybrid cloud of red hat has been a trend for quite some time. In fact, we were very early in setting our course towards hybrid cloud with our products and platforms. And that's been a key part of our strategy in terms of the number of transformations have been happening in the enterprise. And with HPE, we're super excited about, you know, we're hitting our stride with OpenShift. I've been working with OpenShift for the better part of my 10 years here at 12 years at red hat, 10 years with OpenShift. And we're very excited about seeing the pattern of going where customers want to build their cloud. It's very important that where, where the market is going. So we're seeing trends from the public cloud now go into edge and telco and 5g and really exceed, see them expanding their infrastructure footprint out to those use cases. And again, we see REL everywhere. So re has continued to expand as well. And then Ansible automation platform has also been a great means of kind of bringing together community for that last mile of automating your entire infrastructure. >>Well, the Lin, the functionality of Linux continues to improve OpenShift is everywhere. I mean, I remember at the red hat summit, I mean, well, we, we, we coined this term super cloud, which is this layer that floats, you know, on-prem took across clouds out to the edge we had Verizon on. They were talking about, you know, 5g developers and how they're developing using, you know, a combination of, of, of OpenShift. So guys have been really crushing it with, with OpenShift. I remember, gosh, I mean, we've been covering, you know, red hat summits for a long time now. And just to see that evolution is actually quite amazing. >>Yeah. It's actually really neat to see our CEOs align too. Right. So the messaging that we've had around hybrid cloud from red hat, like you said, we were kind of the pioneers, honestly, this we've been talking about hybrid cloud from the very beginning. We always knew that it wasn't gonna be public cloud or private cloud. We had to have, you know, hybrid. And, and it's interesting to see that Antonio, you know, took that on and wanted to say, we're gonna do everything as a service right. A few years ago. And, and the whole theme was around hybrid cloud and giving customers that choice. Right? So it's exciting for us to see all of that come together. And I actually worked for HP for like 17 and a half years. So it's really fun for me to be on this side now with red hat and see the messaging come together, the vision come together and just really being able to align and move forward on >>This tremendous amount of transformation in the last few years >>Alone. 
Oh my gosh, we >>Talk about, you know, customers need choice. They want choice, but you also talked about, we have to meet customers where they are. That seems the last few years to have accelerated, there is no more option for companies. You've gotta meet the customers where they are. >>Exactly. Yeah. And it's all about choice, like you said, and it, everybody's got, you know, their own way to do everything as far as consumption goes and we have to be available and spot on with it, you know, and be able to move quickly with these trends that we're seeing. And so it's great to be aligned. And >>From a partnership standpoint, I mean, you, you mentioned H HP 17 years. I mean, it was, it was a hard to follow company. You had, you had PCs over here, you had services, the kind of the old EDS business. Now there's such a focus absolutely. On this mission, absolutely. Of as a service. And, you know, obviously a key part of that is having optionality and bringing open source tooling into that. I mean, we heard about this in, in spades, at, at red hat summit, which is really interesting this year. It was a smaller VIP event in Boston. And I, and I loved it, you know, cuz it was really manageable. We had all the execs on and customers and partners. It was awesome. What's new since red hat summit. >>Well, I mean, I would say that obviously GreenLake and what we've announced this week is a big new thing for us, but really like we're just continuing on our pattern. We are. Now, if you look at the Q1 report from IBM, you'll see that the growth of the customer base for OpenShift that they reported just continues to go up into the right. You'll see that now, like AMIA is saying that we're like 47.8% of the containers market for the enterprise. You'll see that like we're now in 65% of the fortune 500 with OpenShift, 90% with red hat in general. So we've established our footprint. And when you establish your footprint and customers start taking you out to the edge, we're going into these 5g use cases, we're, we've got an incredible amount happening in the AI space, all these emerging areas of where people are building their cloud, like we're now going to that next level of saying, how do they want to consume it? >>So what's really important to me about that is, is so Omni data around 50% of the market is, is open shift. A people may not realize a lot of people use, you know, do Kubernetes for free, you know, Hey, we're doing Kubernetes, but they don't have that application development framework and all the recovery and all the, the tooling around it. And the reason why I think that's so important, Laurie is ecosystems wanna monetize. So people are paying for things that becomes more interesting and it actually starts to attract people just naturally. >>Yeah, absolutely. And speaking of ecosystem, I mean, that's the beauty of what we're doing with GreenLake too. We're taking on a building block approach. So we're really, it's kind of ISV as a service if you will. And you know, personally, I, this was my baby for the past couple years, trying to make sure that we took into consideration every partner use case, every customer use case. So we created an agreement that would make sense to be able to scale, but also to meet all the demands of our customers. And so the, the what's really exciting about this is now we have a chance to take this building block approach, scale it out to all types of partner types, right throughout the entire ecosystem and build offerings together. That is really exciting for us. 
And that's where we see the real potential here with GreenLake and with red hat, >>What's actually available inside a GreenLake. >>So we are starting with OpenShift. So OpenShift will be available in Q3 that will follow in Q4 with re and then after that Ansible. So we're, we're moving very quickly to bring our platforms into it and it's really our strategic platforms, but it's all based on customer demand. We know we're seeing amazing transformation of customers moving to Kubernetes. You said, you know, OpenShift is Kubernetes with useful additions to it and an ecosystem around it, right? So that transformation is also happening at the bare metal layer. So we're seeing people move into Kubernetes bare metal, which is an amazing growth market for us. >>Explain those useful additions if you would. So why shouldn't I just go out and, and get the free version of Kubernete? Why should I engage red hat and, and OpenShift? What do I get? >>So you get all the day, two management stuff, you get, we have a whole set of additional stuff you can purchase around it, OpenShift platform. Plus you can get our ACM, our advanced cluster management. So you wanna manage multiple clusters, right? You get the ACS, the security side of it. You can also get ODF. So you get storage built into it as well. And we've done all these integrations. You can manage the whole thing as a cluster or as multiple clusters with the whole enterprise support and the long term support that we provide for these things up to 10 years. So >>When you look at the early days lease of, of Kubernetes, it was really, the focus was on simplicity. You had other platforms that were actually doing more sophisticated cluster management. And the, the committers that in Kubernetes said, you know, we're not gonna do that. We're gonna keep it simple. And so that leave some holes and gaps and you know, they're starting to fill those, but what if, if correct me if I'm wrong, but what red hat has done is said, okay, we're gonna accelerate, you know, the, the, the closing of those gaps and stay ahead and actually offer incremental value. And that's why you're winning in the marketplace. >>Well, we're an open company, so we're still doing everything upstream and open source as we do, of course, sticking with, you know, the APIs and APIs to make this all work, both, you know, in terms of what the community's trying to drive, what we're trying to drive for our customers on their behalf. And then just where things are going from a technology basis, make it a lot of investment, >>But you have to, you have to make a red hat, has to make a choice as to where it puts its commitments. You can't spread yourself too thin, so you gotta pick your spots. And you've, you've proven that you're pretty adept at doing that. >>That just comes back to customer centricity, right. And just knowing where our customers need to take the platform. That's, >>That's easy to say, but it's, it's an art form. And a little bit of science. >>Remember these customers have experts that are deep in this space. So it's like, you know, those experts trust us with where they needed to go. And they trust us to help shepherd that and deliver that as a platform to them. So it's not like anybody tell us what you want, right? Like it's really about like, knowing what's the best way to do it. And working with the people that can help you understand how to apply that to their use case >>And within the customer environment, who are you working with? 
Who is that key constituent, or constituents, guiding Red Hat in this direction? >>Well, it's certainly infrastructure folks. It's your standard folks looking at how do we lay down our infrastructure, how do we manage it, how do we grow it. It goes out to the application developers, who are trying to deliver this in a cloud native way. And we have new personas coming in with the AI practitioners, right? So we announced, before Summit at NVIDIA's event, their new offering called NVIDIA AI Enterprise. That's them bringing in enterprise support for GPUs, for CUDA, and for a software stack above that, to start offering more support there. So they're certifying OpenShift, we're both certifying the servers that run underneath it, and then they're offering support for their stuff on top of it. And that's a whole new use case for us. >>And I should also mention that even though this pay-per-use with GreenLake is new for us, and we just had this big announcement, we have done GreenLake deals. We've done numerous GreenLake deals with our annual subscriptions, right? So even though this is new to us as far as monthly utilization and being able to do this cloud consumption, it isn't new to us as two companies coming together; we've been doing GreenLake deals for the past couple years. It's just that now we have this cloud consumption availability, which is really gonna make this thing launch. >>So what have been some of the customer benefits so far? You've been doing it for a couple years; the announcement was yesterday, but there's obviously feet on the street. What are some of the big outcomes that you're seeing customers actually bring to reality? >>I think speed and agility, right? That's the biggest thing with our products, being able to have everything predictable and consumed one way, instead of the fragmented customer experience we've seen in the past. So I think that's the biggest thing: speed, agility, and just a really good customer experience at this point. >>Go ahead, please. >>I would say the customer experience is critical. That's one of the things that we know; with patience wearing thin the last couple of years, people expect a really strong consumer experience regardless of what you're doing and regardless of industry, and so attention and mind on that is a differentiator, in my opinion. >>Absolutely, and we've gotta constantly keep our eye on that. That's our north star, if you will. >>Laurie, I know you're saying you've done GreenLake deals in the past, but what feels different to me now is that it's actually coalescing some of the things that Fidelma Russo announced this morning, the platform on which, you know, ISV as a service, I think you called it, sits. It now seems like, look, a couple years ago HPE said, okay, this is the direction we're going. They weren't there at that time, and they're still not all the way there; there's a lot of work to be done. But now it's starting to form. You're seeing the pieces come together, the puzzle pieces, that substrate being laid out. And now you're hoping that we see the steep part of the S-curve, and that's what customers, I think, are expecting. >>Right.
And it's bringing that operating model, moving to a monthly model so they can do pay as you go, right? And that pairs up nicely with the cloud native capabilities we're bringing to OpenShift and hybrid cloud in general. It just shows we're already getting demand from customers saying, this is part of our model. We know a certain amount of infrastructure we wanna own, and we just wanna own it outright, but there's a lot they want flexibility on. And so being able to add that portion is just gonna help us both. >>And you think about the critical aspects of the cloud operating model: it's obviously pay as you go, it's massive scale, it's ecosystem enablement, and also automation. That is a key, so what's your point of view on that? You guys with Ansible, you go back a couple years and there was a lot of other tooling, but now Ansible has really taken off. >>It's a Cinderella story, right? It's really an amazing community-driven thing, where we just knew, we all know this, right, when you get to the very last mile of doing infrastructure management, there's a variety of devices, a variety of vendors, and then you have the variety of skills of the people that have to figure out how to automate all of this. What Ansible did is provide a common language across all of that. And what we do with Ansible Automation Platform is make it so teams can manage all of this together, they can share their playbooks, and they can host that privately for all the enterprise stuff they need to do. So it fits our DNA so well to have something so community driven, now with a really nice enterprise message wrapped around it. And it's playing out very well for hybrid cloud, right, because there's additional variety you need to be able to manage across all of your different footprints. It's not just about flexibility and scale up, scale down; it's where do you need it to run, at what time? And on that last leg, Ansible plays a key role. >>And actually, Ansible will be coming further down the path. I know we're gonna talk a little bit about what's available today versus what's available down the road, but yeah, we have that on the radar. So right out of the gate we're working on OpenShift, obviously bare metal, and we see that happening in Q3, then behind that as well in Q4, and then Ansible is gonna be right behind that. So that's kind of the order, and there are other pieces, right? Our whole portfolio is basically available to HPE right now; it's just making sure we can operationalize everything and have the best experience. >>All inside of GreenLake. >>All inside GreenLake, yeah. Pretty neat. >>Laurie, question for you. You were with HP for a very long time, and this is obviously the first Discover in three years in person. Exactly. You know, three years ago Antonio Neri stood on stage and said, we are going, by 2022, to deliver everything as a service. And here we are. As a partner, and as a former HPE person, what are you seeing at this Discover '22? >>It's so interesting, it's such a sea change, if you will, right?
And having come from HPE, I actually led the software-as-a-service organization for a while on the software side of things, and we thought that was state of the art and cutting edge; that was 10, 11, 12 years ago, right? So to actually see this come to life, because we were all thinking, really, everything as a service? How are you gonna do that? Your entire portfolio is gonna be available? That is lofty, right? And having worked at HP, I thought, wow, I know things take time. But actually just being around the showcase here and watching everything come to life is amazing, because I was very positive about it, but at the same time, that was a big goal three years ago, right? And I'm seeing it happen. >>A big goal, with two of those years during a pandemic, right? Talk about lofty. Oh my gosh, quite a bit of accomplishment, guys. Thank you so much for joining Dave and me on the program, talking about what Red Hat and HPE are doing, your power partnership, powership, is that a word? It is now, your power >>I like that >>with GreenLake. We appreciate that. We'll look forward to having you guys back on. >>Thank you so much, guys. >>All right, for our guests, I'm Lisa Martin, he's Dave Vellante. We are at HPE Discover '22, live from the show floor in Las Vegas. This is just day one of our coverage, so stick around. We'll be right back with our next guest.
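To make the "playbooks as a common language" idea from the Ansible discussion above a bit more concrete, here is a minimal sketch. It is purely illustrative and not drawn from the interview: the inventory group, package name, and output path are assumptions, and the playbook body is emitted from Python only so the play, task, and module structure is explicit; in practice a playbook is simply a YAML file run with ansible-playbook.

```python
# Illustrative sketch only: a tiny Ansible playbook written out from Python so the
# structure (play -> hosts -> tasks -> modules) is easy to see. The inventory group
# "web_servers", the chrony package, and the output path are assumed examples.
from pathlib import Path

PLAYBOOK = """\
---
- name: Baseline configuration shared across teams
  hosts: web_servers        # inventory group defined elsewhere in the project
  become: true
  tasks:
    - name: Ensure chrony is installed
      ansible.builtin.package:
        name: chrony
        state: present

    - name: Ensure the time service is running and enabled
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
"""

def write_playbook(path: str = "baseline.yml") -> Path:
    """Write the playbook so it can be shared and run with: ansible-playbook baseline.yml"""
    target = Path(path)
    target.write_text(PLAYBOOK)
    return target

if __name__ == "__main__":
    print(f"Wrote {write_playbook()}")
```

The point is less the specific modules than the shape: the same declarative file can be shared, reviewed, and hosted privately, which is the common-language role described in the conversation.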

Published Date : Jun 28 2022


Matt Burr, Pure Storage


 

(Intro Music) >> Hello everyone and welcome to this special cube conversation with Matt Burr who is the general manager of FlashBlade at Pure Storage. Matt, how you doing? Good to see you. >> I'm doing great. Nice to see you again, Dave. >> Yeah. You know, welcome back. We're going to be broadcasting this is at accelerate. You guys get big news. Of course, FlashBlade S we're going to dig into it. The famous FlashBlade now has new letter attached to it. Tell us what it is, what it's all about. >> (laughing) >> You know, it's easy to say. It's just the latest and greatest version of the FlashBlade, but obviously it's a lot more than that. We've had a lot of success with FlashBlade kind of across the board in particular with Meta and their research super cluster, which is one of the largest AI super clusters in the world. But, it's not enough to just build on the thing that you had, right? So, with the FlashBlade S, we've increased modularity, we've done things like, building co-design software and hardware and leveraging that into something that increases, or it actually doubles density, performance, power efficiency. On top of that, you can scale independently, storage, networking, and compute, which is pretty big deal because it gives you more flexibility, gives you a little more granularity around performance or capacity, depending on which direction you want to go. And we believe that, kind of the end of this is fundamentally the, I guess, the way to put it is sort of the highest performance and capacity optimization, unstructured data platform on the market today without the need for, kind of, an expensive data tier of cash or expected data cash and tier. So we're pretty excited about, what we've ended up with here. >> Yeah. So I think sometimes people forget, about how much core engineering Meta does. Facebook, you go on Facebook and play around and post things, but yeah, their backend cloud is just amazing. So talk a little bit more about the problem targets for FlashBlade. I mean, it's pretty wide scope and we're going to get into that, but what's the core of that. >> Yeah. We've talked about that extensively in the past, the use cases kind of generally remain the same. I know, we'll probably explore this a little bit more deeply, but you know, really what we're talking about here is performance and scalability. We have written essentially an unlimited Metadata software level, which gives us the ability to expand, we're already starting to think about computing an exabyte scale. Okay. So, the problem that the customer has of, Hey, I've got a Greenfield, object environment, or I've got a file environment and my 10 K and 7,500 RPM disc is just spiraling out of control in my environment. It's an environmental problem. It's a management problem, we have effectively, simplified the process of bringing together highly performant, very large multi petabyte to eventually exabyte scale unstructured data systems. >> So people are obviously trying to inject machine intelligence, AI, ML into applications, bring data into applications, bringing those worlds closer together. Analytics is obviously exploding. You see some other things happening in the news, read somewhere, protection and the like, where does FlashBlade fit in terms of FlashBlade S in some terms of some of these new use cases. >> All those things, we're only going wider and broader. So, we've talked in the past about having a having a horizontal approach to this market. 
The unstructured data market has often had vertical specificity. You see successful infrastructure companies in oil and gas that may not play in media and entertainment, and successful companies that play in media and entertainment but don't play well in financial services, for example. We're playing the long game here, and we're focused on bringing an all-QLC architecture that combines our traditional Pure DFMs with software that is now, I guess, seven years hardened from the original FlashBlade system. And when we look at customers, we look at them in three categories, right? It's more than three, but to bucketize it this way: customers that fit a very traditional profile, customers that fit into this EDA and HPC space, and then that data protection space, which I believe ransomware falls under as well. The world has changed, right? Customers want their data back faster. Rapid restore is a real thing. We have customers that come to us and say, anybody can back up my data, but if I want to get something back fast, and I mean in less than a week or a couple of days, what do I do? We can solve that problem. And then, as you accurately pointed out where you started, there is the AI and ML side of things, where there's the NVIDIA relationship that we have. DGXs are a pretty powerful weapon in that market and in solving those problems, but they're not cheap, and keeping those DGXs running all the time requires an extremely efficient underpinning of a flash system. We believe we have that market as well. >>It's interesting, when Pure was first coming out as a startup you obviously had some cool new tech, but your stack wasn't as hardened. And now you've got seven years under your belt. The last time you were on theCUBE we talked about some of the things you guys were doing differently. We talked about UFFO, unified fast file and object. How does this new product, FlashBlade S, compare to previous generations of FlashBlade in terms of solving unstructured data and some of these other trends we've been talking about?
So, that then, and of itself sort of starts to play into customers that have concerns around ESG. Right? Last time I checked power space and cooling, still mattered in data center. So although I have people that tell me all the time, power space clearly doesn't matter anymore, but I know at the end of the day, most customers seem to say that it does, you're not throwing away refrigerator size pieces of equipment that once held spinning disc, something that's a size of a microwave that's populated with DFMs with all LC flash that you can actually upgrade over time. So if you want to scale more performance, we can do that through adding CPU. If you want to scale more capacity, we can do that through adding more And we're in control of those parameters because we're building our own DFM, our direct fabric modules on our own storage notes, if you will. So instead of relying on the consumer packaging of an SSD, we're upgrading our own stuff and growing it as we can. So again, on the ESG side, I think for many customers going into the next decade, it's going to be a huge deal. >> Yeah. Interesting comments, Matt. I mean, I don't know if you guys invented it, but you certainly popularize the idea of, no Fort lift upgrades and sort of set the industry on its head when you guys really drove that evergreen strategy and kind of on that note, you guys talk about simplicity. I remember last accelerate went deep with cause on your philosophy of keeping things simple, keeping things uncomplicated, you guys talk about using better science to do that. And you a lot of talk these days about outcomes. How does FlashBlade S support those claims and what do you guys mean by better science? >> Yeah. You know, better science is kind of a funny term. It was an internal term that I was on a sales call actually. And the customer said, well, I understand the difference between these two, but could you tell me how we got there and I was a little stumped on the answer. And I just said, well, I think we have better scientists and that kind of morphed into better science, a good example of that is our Metadata architecture, right? So our scalable Metadata allows us to avoid having that cashing tier, that other architectures have to rely on in order to anticipate, which files are going to need to be in read cash and read misses become very expensive. Now, a good follow up question there, not to do your job, but it's the question that I always get is, well, when you're designing your own hardware and your own software, what's the real material advantage of that? Well, the real material advantage of that is that you are in control of the combination and the interaction of those two things you don't give up the sort of the general purpose nature, if you will, of the performance characteristics that come along with things like commodity, you get a very specific performance profile. That's tailored to the software that's being married to it. Now in some instances you could say, well, okay, does that really matter? Well, when you start to talking about 20, 40, 50, 100, 500, petabyte data sets, every percentage matters. And so those individual percentages equate to space savings. They equate to power and cooling savings. We believe that we're going to have industry best dollars per lot. We're going to have industry best, kind of dollar PRU. So really the whole kind of game here is a round scale. >> Yeah. I mean, look, there's clearly places for the pure software defined. 
And then when cloud first came out, everybody said, oh, build the cloud on commodity, they don't build custom hardware. Now you see all the hyperscalers building custom software, custom hardware and software integration, custom silicon. So co-innovation between hardware and software seems as important, if not more important, than ever, especially for some of these new workloads, and who knows what the edge is going to bring. What's the downside of not having that philosophy, in your view? Is it just that you can't scale to the degree that you want, you can't support the new workloads or performance? What should customers be thinking about there? >>I think the downside plays out in two ways. First is kind of the future and at-scale piece, as I alluded to earlier, around cost and just savings over time, right? If you're using a commodity SSD, there's packaging around that SSD that is wasteful, both in the environmental sense and in the computing performance sense. So that's one thing. On the second side, it's easier for us to control the controllables around reliability when you can eliminate the number of things that actually sit in that workflow, and by workflow I mean when a write is acknowledged from a host and it gets down to the media. The more control you have over that, the more reliability you have over that piece. >>Yeah, and we talked about ESG earlier. I want to talk a little bit more about the news from Accelerate with NVIDIA. You've certainly heard Jensen talk about the wasted CPU cycles in the data center; I think he's forecasted 25 to 30% of the cycles are wasted on doing things like storage offload, or certainly networking and security. So that sort of confirms your ESG thought, that we can do things more efficiently. But as it relates to NVIDIA and some of the news around AIRI, what is the AIRI? What does it stand for? What's the high-level overview of AIRI? >>So the AIRI has been really successful for both us and NVIDIA. It's a really great partnership and we're appreciative of the partnership. In fact, Tony Paikeday will be speaking here at Accelerate, so I'm really looking forward to that. Look, there's a couple of ways to look at this, and I take the macro view. I know there's an equally good micro example, but I think the macro is really where it's at. We don't have data center space anymore, right? There are only so many data centers we can build, there's only so much power we can create. We are going to reach a point in time where municipalities are going to struggle against the businesses in their municipalities for power, and now you're essentially bidding big corporations against people who have an electric bill. That's only going to last so long, and you know who doesn't win in that? The big corporation doesn't win, because elected officials will have to find a way to serve the people so they can get power, no matter how skewed we think that may be. That is the reality. And so, as we look at this transition, that first decade of the disk-to-flash transition was really in the block world. The second decade, and it's really fortunate to have a multi-decade company, of course, but the second decade of riding that wave from disk to flash is about improving space, power efficiency, and density. And we've sort of reached that. It's a long way of getting to the point about NVIDIA, where these AI clusters are extremely powerful things.
And they're only going to get bigger, right? They're not going to get smaller. It's not like anybody out there is saying, oh, it's a fad, or this isn't going to yield any results or outcomes. They yield tremendous outcomes in healthcare. They yield tremendous outcomes in financial services. They yield tremendous outcomes in cancer research, right? These are not things that we as a society are going to give up. In fact, we're going to want to invest more in them, but they come at a cost, and one of the resources required is power. And so when you look at what we've done, in particular with NVIDIA, you've found something that is extremely power efficient, that meets the needs, going back to that macro view, of both the community and the business. It's a win-win. >>You know, and you're right, it's not going to get smaller, it's just going to continue to gain momentum, but it could get increasingly distributed. You think about, I talked about the edge earlier, you think about AI inferencing at the edge. I think about Bitcoin mining, which is very distributed but consumes a lot of power, and so we're not exactly sure what the next-level architecture is, but we do know that science is going to be behind it. Talk a little bit more about your NVIDIA relationship, because I think you guys were the first, I might be wrong about this, but I think you were the first storage company to announce a partnership with NVIDIA several years ago, probably four years ago. How is this new solution with AIRI//S building on that partnership? What can we expect with NVIDIA going forward? >>Yeah, I think what you can expect to see is putting the foot on the gas on where we've been with NVIDIA. As I mentioned earlier, Meta is, by some measurements, the world's largest research super cluster, they're a huge NVIDIA customer, and it's built on Pure infrastructure. So we see those types of reference architectures, not that everyone's going to have a Meta-scale reference architecture, but the base principles of what they're solving for are the base principles of what we're going to begin to see in the enterprise. I know 'begin' sounds like a strange word because there's already a big business in DGX, there's already a sizable business in performance unstructured data, but those are only going to get exponentially bigger from here. So what we see is a deepening and a strengthening of the relationship, and an opportunity for us to talk jointly to customers that are going to be building these big facilities and big data centers for these types of compute-related problems, and to talk about efficiency, right? DGXs are much more efficient and FlashBlades are much more efficient. It's a great pairing. >>Yeah, I mean, a lot of AI today is modeling in the cloud, and we're seeing HPC and data just slam together into all kinds of new use cases. These types of partnerships are the only way we're going to solve the future problems and go after these future opportunities. I'll give you the last word. You've got to be excited with Accelerate; what should people be looking for at Accelerate and beyond?
So it's a great time of the year, maybe take a couple off because of because of COVID, but I love reconnecting in particular with partners and customers and just hearing kind of what they have to say. And this is kind of a nice one. This is four years or five years worth of work for my team who candidly I'm extremely proud of for choosing to take on some of the solutions that they, or excuse me, some of the problems that they chose to take on and find solutions for. So as accelerate roles around, I think we have some pretty interesting evolutions of the evergreen program coming to be announced. We have some exciting announcements in the other product arenas as well, but the big one for this event is FlashBlade. And I think that we will see. Look, no one's going to completely control this transition from disc to flash, right? That's a that's a macro trend. But there are these points in time where individual companies can sort of accelerate the pace at which it's happening. And that happens through cost, it happens through performance. My personal belief is this will be one of the largest points of those types of acceleration in this transformation from disc to flash and unstructured data. This is such a leap. This is essentially the equivalent of us going from the 400 series on the block side to the X, for those that you're familiar with the flash array lines. So it's a huge, huge leap for us. I think it's a huge leap for the market. And look, I think you should be proud of the company you work for. And I am immensely proud of what we've created here. And I think one of the things that is a good joy in life is to be able to talk to customers about things you care about. I've always told people my whole life, inefficiency is the bane of my existence. And I think we've rooted out ton of inefficiency with this product and looking forward to going and reclaiming a bunch of data center space and power without sacrificing any performance. >> Well congratulations on making it into the second decade. And I'm looking forward to the orange and the third decade, Matt Burr, thanks so much for coming back in the cubes. It's good to see you. >> Thanks, Dave. Nice to see you as well. We appreciate it. >> All right. And thank you for watching. This is Dave Vellante for the Cube. And we'll see you next time. (outro music)

Published Date : May 24 2022


Tushar Katarki & Justin Boitano | Red Hat Summit 2022


 

(upbeat music) >> We're back. You're watching theCUBE's coverage of Red Hat Summit 2022 here in the Seaport in Boston. I'm Dave Vellante with my co-host, Paul Gillin. Justin Boitano is here. He's the Vice President of Enterprise and Edge Computing at NVIDIA. Maybe you've heard of him. And Tushar Katarki who's the Director of Product Management at Red Hat. Gentlemen, welcome to theCUBE, good to see you. >> Thank you. >> Great to be here, thanks >> Justin, you are a keynote this morning. You got interviewed and shared your thoughts on AI. You encourage people to got to think bigger on AI. I know it's kind of self-serving but why? Why should we think bigger? >> When you think of AI, I mean, it's a monumental change. It's going to affect every industry. And so when we think of AI, you step back, you're challenging companies to build intelligence and AI factories, and factories that can produce intelligence. And so it, you know, forces you to rethink how you build data centers, how you build applications. It's a very data centric process where you're bringing in, you know, an exponential amount of data. You have to label that data. You got to train a model. You got to test the model to make sure that it's accurate and delivers business value. Then you push it into production, it's going to generate more data, and you kind of work through that cycle over and over and over. So, you know, just as Red Hat talks about, you know, CI/CD of applications, we're talking about CI/CD of the AI model itself, right? So it becomes a continuous improvement of AI models in production which is a big, big business transformation. >> Yeah, Chris Wright was talking about basically take your typical application development, you know, pipeline, and life cycle, and apply that type of thinking to AI. I was saying those two worlds have to come together. Actually, you know, the application stack and the data stack including AI need to come together. What's the role of Red Hat? What's your sort of posture on AI? Where do you fit with OpenShift? >> Yeah, so we're really excited about AI. I mean, a lot of our customers obviously are looking to take that data and make meaning out of it using AI is definitely a big important tool. And OpenShift, and our approach to Open Hybrid Cloud really forms a successful platform to base all your AI journey on with the partners such as NVIDIA whom we are working very closely with. And so the idea really is as Justin was saying, you know, the end to end, when you think about life of a model, you've got data, you mine that data, you create models, you deploy it into production. That whole thing, what we call CI/CD, as he was saying DevOps, DevSecOps, and the hybrid cloud that Red Hat has been talking about, although with OpenShift as the center forms a good basis for that. >> So somebody said the other day, I'm going to ask you, is INVIDIA a hardware company or a software company? >> We are a company that people know for our hardware but, you know, predominantly now we're a software company. And that's what we were on stage talking about. I mean, ultimately, a lot of these customers know that they've got to embark on this journey to apply AI, to transform their business with it. It's such a big competitive advantage going into, you know, the next decade. And so the faster they get ahead of it, the more they're going to win, right? But some of them, they're just not really sure how to get going. And so a lot of this is we want to lower the barrier to entry. 
We built this program, we call it Launchpad to basically make it so they get instant access to the servers, the AI servers, with OpenShift, with the MLOps tooling, with example applications. And then we walk them through examples like how do you build a chatbot? How do you build a vision system for quality control? How do you build a price recommendation model? And they can do hands on labs and walk out of, you know, Launchpad with all the software they need, I'll say the blueprint for building their application. They've got a way to have the software and containers supported in production, and they know the blueprint for the infrastructure and operating that a scale with OpenShift. So more and more, you know, to come back to your question is we're focused on the software layers and making that easy to help, you know, either enterprises build their apps or work with our ecosystem and developers to buy, you know, solutions off the shelf. >> On the harbor side though, I mean, clearly NVIDIA has prospered on the backs of GPUs, as the engines of AI development. Is that how it's going to be for the foreseeable future? Will GPUs continue to be core to building and training AI models or do you see something more specific to AI workloads? >> Yeah, I mean, it's a good question. So I think for the next decade, well, plus, I mean not forever, we're going to always monetize hardware. It's a big, you know, market opportunity. I mean, Jensen talks about a $100 billion, you know, market opportunity for NVIDIA just on hardware. It's probably another a $100 billion opportunity on the software. So the reality is we're getting going on the software side, so it's still kind of early days, but that's, you know, a big area of growth for us in the future and we're making big investments in that area. On the hardware side, and in the data center, you know, the reality is since Moore's law has ended, acceleration is really the thing that's going to advance all data centers. So I think in the future, every server will have GPUs, every server will have DPUs, and we can talk a bit about what DPUs are. And so there's really kind of three primary processors that have to be there to form the foundation of the enterprise data center in the future. >> Did you bring up an interesting point about DPUs and MPUs, and sort of the variations of GPUs that are coming about? Do you see those different PU types continuing to proliferate? >> Oh, absolutely. I mean, we've done a bunch of work with Red Hat, and we've got a, I'll say a beta of OpenShift 4.10 that now supports DPUs as the, I'll call it the control plane like software defined networking offload in the data center. So it takes all the software defined networking off of CPUs. When everybody talks about, I'll call it software defined, you know, networking and core data centers, you can think of that as just a CPU tax up to this point. So what's nice is it's all moving over to DPU to, you know, offload and isolate it from the x86 cores. It increases security of data center. It improves the throughput of your data center. And so, yeah, DPUs, we see everybody copying that model. And, you know to give credit where credit is due, I think, you know, companies like AWS, you know, they bought Annapurna, they turned it into Nitro which is the foundation of their data centers. And everybody wants the, I'll call it democratized version of that to run their data centers. 
And so every financial institution and bank around the world sees the value of this technology, but running in their data centers. >> Hey, everybody needs a Nitro. I've written about it. It's Annapurna acquisition, 350 million. I mean, peanuts in the grand scheme of things. It's interesting, you said Moore's law is dead. You know, we have that conversation all the time. Pat Gelsinger promised that Moore's law is alive and well. But the interesting thing is when you look at the numbers, that's, you know, Moore's law, we all know it, doubling of the transistor densities every 18 to 24 months. Let's say that, that promise that he made is true. What I think the industry maybe doesn't appreciate, I'm sure you do, being in NVIDIA, when you combine what you were just saying, the CPU, the GPU, Paul, the MPU, accelerators, all the XPUs, you're talking about, I mean, look at Apple with the M1, I mean 6X in 15 months versus doubling every 18 to 24. The A15 is probably averaging over the last five years, a 110% performance improvement each year versus the historical Moore's law which is 40%. It's probably down to the low 30s now. So it's a completely different world that we're entering now. And the new applications are going to be developed on these capabilities. It's just not your general purpose market anymore. From an application development standpoint, what does that mean to the world? >> Yeah, I mean, yeah, it is a great point. I mean, from an application, I mean first of all, I mean, just talk about AI. I mean, they are all very compute intensive. They're data intensive. And I mean to move data focus so much in to compute and crunch those numbers. I mean, I'd say you need all the PUs that you mentioned in the world. And also there are other concerns that will augment that, right? Like we want to, you know, security is so important so we want to secure everything. Cryptography is going to take off to new levels, you know, that we are talking about, for example, in the case of DPUs, we are talking about, you know, can that be used to offload your encryption and firewalling, and so on and so forth. So I think there are a lot of opportunities even from an application point of view to take of this capacity. So I'd say we've never run out of the need for PUs if you will. >> So is OpenShift the layer that's going to simplify all that for the developer. >> That's right. You know, so one of the things that we worked with NVIDIA, and in fact was we developed this concept of an operator for GPUs, but you can use that pattern for any of the PUs. And so the idea really is that, how do you, yeah-- (all giggle) >> That's a new term. >> Yeah, it's a new term. (all giggle) >> XPUs. >> XPUs, yeah. And so that pattern becomes very easy for GPUs or any other such accelerators to be easily added as a capacity. And for the Kubernetes scaler to understand that there is that capacity so that an application which says that I want to run on a GPU then it becomes very easy for it to run on that GPU. And so that's the abstraction to your point about how we are making that happen. >> And to add to this. So the operator model, it's this, you know, open source model that does the orchestration. So Kubernetes will say, oh, there's a GPU in that node, let me run the operator, and it installs our entire run time. And our run time now, you know, it's got a MIG configuration utility. It's got the driver. 
It's got, you know, telemetry and metering of the actual GPU and the workload, you know, along with a bunch of other components, right? They get installed in that Kubernetes cluster. So instead of somebody trying to chase down all the little pieces and parts, it just happens automatically in seconds. We've extended the operator model to DPUs and networking cards as well, and we have all of those in the operator hub. So for somebody that's running OpenShift in their data centers, it's really simple to, you know, turn on Node Feature Discovery, you point to the operators, and when you see new accelerated nodes, the entire run time is automatically installed for you. So it really makes, you know, GPUs and our networking, our advanced networking capabilities, really first class citizens in the data center. >> So you can kind of connect the dots and see how the NVIDIA and Red Hat partnership is sort of aiming at the enterprise. I mean, NVIDIA, obviously, they've got the AI piece. I always thought maybe 25% of the compute cycles in the data center were wasted doing storage offloads or networking offload, security. I think Jensen says it's 30%, probably a better number than I have. But so now you're seeing a lot of new innovation in new hardware devices that are attacking that with alternative processors. And then my question is, what about the edge? Is that a BlueField out at the edge? What does that look like to NVIDIA and where does OpenShift play? >> Yeah, so when we talk about the edge, we always start by talking about which edge we're talking about, 'cause it's everything outside the core data center. I mean, some of the trends that we see with regard to the edge is, you know, when you get to the far edge, it's single nodes. You don't have the guards, gates, and guns protection of the data center. So you start having to worry about physical security of the hardware. So you can imagine there's really stringent requirements on protecting the intellectual property of the AI model itself. You spend millions of dollars to build it. If I push that out to an edge data center, how do I make sure that that's fully protected? And that's the area where we just announced a new processor that we call Hopper H100. It supports confidential computing so that you can basically ensure that model is always encrypted in system memory, across the bus, the PCIe bus, to the GPU, and it's run in a confidential way on the GPU. So you're protecting your data, which is your model, plus the data flowing through it, you know, in transit, while it's stored, and then in use. So that really adds to that edge security model. >> I wanted to ask you about the cloud, correct me if I'm wrong, but it seems to me that AI workloads have been slower than most to make their way to the cloud. There are a lot of concerns about data transfer capacity and even cost. Do you see that? First of all, do you agree with that? And secondly, is that going to change in the short-term? >> Yeah, so I think there's different classes of problems. So we'll take, there's some companies where their data's generated in the cloud and we see a ton of, I'll say, adoption of AI by cloud service providers, right? Recommendation engines, translation engines, conversational AI services, that all the clouds are building. That's all, you know, on our processors.
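To make the operator and scheduling pattern described above concrete, here is a minimal sketch in Python using the official Kubernetes client. It assumes a cluster where a GPU operator or device plugin already advertises the standard nvidia.com/gpu extended resource on accelerated nodes; the image name, namespace, and pod name are placeholders for illustration, not the exact OpenShift or NVIDIA configuration discussed in the conversation.

```python
# Minimal sketch: ask Kubernetes to place a pod on a node with a free GPU.
# Assumes the NVIDIA device plugin / GPU operator already exposes the
# "nvidia.com/gpu" extended resource on accelerated nodes.
from kubernetes import client, config

def launch_gpu_pod():
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster

    container = client.V1Container(
        name="trainer",
        image="example.com/my-training-image:latest",  # placeholder image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            # Declaring the extended resource is all the application does;
            # the scheduler only binds the pod to a node that can satisfy it.
            limits={"nvidia.com/gpu": "1"},
        ),
    )

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-example"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    launch_gpu_pod()
```

The same declaration works regardless of how the node's GPU runtime was installed, which is the point of the operator pattern: once Node Feature Discovery labels an accelerated node and the operator installs the driver and runtime, the application only has to say it wants a GPU.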
There's also problems that enterprises have where now I'm trying to take some of these automation capabilities but I'm trying to create an intelligent factory where I want to, you know, merge kind of AI with the physical world. And that really has to run at the edge 'cause there's too much data being generated by cameras to bring that all the way back into the cloud. So, you know, I think we're seeing mass adoption in the cloud today. I think at the edge a lot of businesses are trying to understand how do I deploy that reliably and securely and scale it. So I do think, you know, there's different problems that are going to run in different places, and ultimately we want to help anybody apply AI where the business is generating the data. >> So obviously very memory intensive applications as well. We've seen you, NVIDIA, architecturally kind of move away from the traditional, you know, x86 approach to take better advantage of memory, where obviously you have relationships with Arm. So you've got a very diverse set of capabilities. It used to just be a kind of x86-centric world, and now it's all these other supporting components to support these new applications, and it's... How should we think about the future? >> Yeah, I mean, it's very exciting for sure, right? Like, you know, the future, the data is out there at the edge, the data can be in the data center. And so we are trying to weave a hybrid cloud footprint that spans that. I mean, you heard Paul Cormier here talk about it. But, you know, we've talked about it for some time now. And so the paradigm really is that, be it an application — and when I say application, it could even be an AI model as a service, you can think about that as an application — how does an application span that entire paradigm from the core to the edge and beyond, that's where the future is. And, of course, there's a lot of technical challenges, you know, for us to get there. And I think partnerships like this are going to help us and our customers to get there. So the world is very exciting. You know, I'm very bullish on how this will play out, right? >> Justin, we'll give you the last word, closing thoughts. >> Well, you know, I think a lot of this is, like I said, it's how do we reduce the complexity for enterprises to get started, which is why Launchpad is so fundamental. It gives, you know, access to the entire stack instantly with like hands on curated labs for both IT and data scientists. So they can, again, walk out with the blueprints they need to set this up and, you know, start on a successful AI journey. >> Just to position it, is Launchpad more of a sandbox, more of a school, or more of an actual development environment? >> Yeah, think of it as, again, it's really for trial, like hands on labs to help people learn all the foundational skills they need to like build an AI practice and get it into production. And again, it's like, you don't need to go champion to your executive team that you need access to expensive infrastructure and, you know, bring in Red Hat to set up OpenShift. Everything's there for you so you can instantly get started, do kind of a pilot project, and then use that to explain to your executive team everything that you need to then go do to get this into production and drive business value for the company. >> All right, great stuff, guys. Thanks so much for coming to theCUBE. >> Yeah, thanks. >> Thank you for having us. >> All right, thank you for watching.
Keep it right there, Dave Vellante and Paul Gillin. We'll be back right after this short break at the Red Hat Summit 2022. (upbeat music)

Published Date : May 11 2022



Breaking Analysis: Governments Should Heed the History of Tech Antitrust Policy


 

>> From "theCUBE" studios in Palo Alto and Boston, bringing you data driven insights from "theCUBE" and ETR. This is "Breaking Analysis" with Dave Vellante. >> There are very few political issues that get bipartisan support these days, never mind consensus spanning geopolitical boundaries. But whether we're talking across the aisle or over the pond, there seems to be common agreement that the power of big tech firms should be regulated. But the government's track record when it comes to antitrust aimed at big tech is actually really mixed, mixed at best. History has shown that market forces rather than public policy have been much more effective at curbing monopoly power in the technology industry. Hello, and welcome to this week's "Wikibon CUBE" insights powered by ETR. In this "Breaking Analysis" we welcome in frequent "CUBE" contributor Dave Moschella, author and senior fellow at the Information Technology and Innovation Foundation. Dave, welcome, good to see you again. >> Hey, thanks Dave, good to be here. >> So you just recently published an article, we're going to bring it up here and I'll read the title, "Theory Aside, Antitrust Advocates Should Keep Their "Big Tech" Ambitions Narrow". And in this post you argue that big sweeping changes like breaking apart companies to moderate monopoly power in the tech industry have been ineffective compared to market forces, but you're not saying government shouldn't be involved, rather you're suggesting that more targeted measures combined with market forces are the right answer. Can you maybe explain a little bit more the premise behind your research and some of your conclusions? >> Sure, and first let's go back to that title. When I said theory aside, that is referring to a huge debate that's going on in global antitrust circles these days about whether antitrust should follow the traditional path of being invoked when there's real harm, demonstrable harm to consumers, or a new theory that says that any sort of vast monopoly power inevitably will be bad for competition and consumers at some point, so you're best to intervene now to avoid harms later. And that school, which was a very minor part of the antitrust world for many, many years, is now quite ascendant, and the debate goes on. No matter which side of that you're on, the question is sort of there: well, all right, if you're going to do something to take on big tech, and clearly many politicians and regulators are itching to do something, what would you actually do? And what are the odds that that'll do more good than harm? And that was really the origins of the piece, trying to take a historical view of that. >> Yeah, I learned a new word, thank you. Neo-Brandeisian, had to look it up. But basically you're saying that traditionally it was proving consumer harm versus being proactive about the possibility or likelihood of consumer harm. >> Correct, and that's a really big shift that a lot of traditional antitrust people strongly object to, but it is now sort of the trendy and more ascendant view. >> Got it, okay, let's look a little deeper into the history of tech monopolies and government action and see what we can learn from that. We put together this slide that we can reference. It shows the three historical targets in the tech business and now the new ones. In 1969, the DOJ went after IBM, Big Blue, and 13 years later dropped its suit. And then in 1984 the government broke Ma Bell apart, and in the late 1990s went after Microsoft, I think it was 1998, and the Wintel monopoly.
And recently in an interview with tech journalist Kara Swisher, the FTC chair Lina Khan claimed that the government played a major role in moderating the power of tech giants historically. And I think she even specifically referenced Microsoft, or maybe Kara did, and basically said the government protected the industry and consumers from the dominance of companies like Microsoft. And Kara, by the way, didn't really challenge that, she kind of let it slide. So Dave, let's briefly talk about each of these and test this concept a bit. Were the government actions in these instances necessary? What were the outcomes and the consequences? Maybe you could start with IBM and AT&T. >> Yeah, it's a big topic and there's a lot there and a lot of history, but I might just sort of introduce by saying, for whatever reasons, antitrust has been part of the entire information technology industry history, from the mainframe to the current period, and that slide sort of gives you that. And the reasons for that are, I think, ones that we sort of know: the economies of scale, network effects, lock-in, safe choices, a lot of things that explain it. But the good bit about that is we actually have so much history of this and we can at least see what's happened in the past. And when you look at IBM and AT&T, they both were massive antitrust cases. The one against IBM was dropped, as you say, 13 years later, in 1982. Well, what was going on at that time? IBM was sort of considered invincible and unbeatable, but it was 1981 that the personal computer came around, and within just a couple of years the world could see that the computing paradigm had changed from mainframes and minis to PCs, LANs, client server and what have you. So IBM in just a couple of years went from being unbeatable, you can't compete with them, we have to break them up, to being incredibly vulnerable and in trouble, and never fully recovered and is sort of a shell of what it once was. And so the market took care of that, and no action was really necessary, just by everybody thinking there was. In the case of AT&T, they did act and they broke up the company, and I would say the first question is, was that necessary? Well, lots of countries didn't do that, and the reality is in 1984 breaking it up into long distance and regional may have made some sense, but by 1990 it was pretty clear that the telecom world was going to change dramatically from long distance and fixed wire services to internet services, data services, wireless services, and all of these things that were going to restructure the industry anyway. But the AT&T one to me is very interesting because of the unintended consequences. And I would say that the main unintended consequence of that was America's competitiveness in telecommunications took a huge hit. And today, to this day, telecommunications is dominated by European, Chinese and other firms. And the big American players of the time: AT&T's Western Electric became Lucent, Lucent is now owned by Nokia and is really out of it completely, and most notably and compellingly Bell Labs, once the world's most prominent research institution, is now also a shell of itself and, as it was part of Lucent, is also now owned by the Finnish company Nokia. So that restructuring greatly damaged America's core strength in telecommunications hardware and research, and one can argue we've never recovered, right through to 5G today.
So it's a very good example of the market taking care of the big problem, but meddling leading to some unintended consequences that have hurt American competitiveness. And as we'll talk about probably later, you can see some of that going on again today, and in the past with Microsoft and Intel. >> Right, yeah, Bell Labs was an American gem, kind of like Xerox PARC, and basically gone now. You mentioned Intel and Microsoft, Microsoft and Intel. As many people know, some young people don't, IBM unwittingly handed its monopoly to Intel and Microsoft by outsourcing the microprocessor and operating system, respectively. Those two companies ended up on top, with IBM, ironically, agreeing to take OS/2, which was its proprietary operating system, and ceding Windows to Microsoft, not realizing that its ability to dominate a new disruptive market like PCs and operating systems had been vaporized, to your earlier point, by the new Wintel ecosystem. Now Dave, the government wanted to break Microsoft apart and split its OS business from its application software; in the case of Intel, Intel only had one business, as you pointed out, microprocessors, so it couldn't be busted up. But take us through the history here and the consequences of each. >> Well, the Microsoft one is sort of a classic, because the antitrust case, which was raging in the sort of mid-nineties and 1998, when it finally ended — those were the very years when, once again, everybody said Bill Gates was unstoppable, no one could compete with Microsoft, they'd buy them, destroy them, predatory pricing, whatever they were accused of, the attacks on Netscape, all these sorts of things. But those were the very years where it was becoming clear, first, that Microsoft basically missed the early big years of the internet, and then again, later, missed all the early years of the mobile phone business, going back to BlackBerrys and Palm Pilots and all those sorts of things. So here we are, the government making the case that this company is unstoppable and you can't compete with them, at the very moment they're entirely on the defensive. And therefore it wasn't surprising that that suit eventually was dropped with some minor concessions about Microsoft making it a little bit easier for third parties to work with them and treating people a little bit more even-handedly, perfectly good things that they did. But again, the market took care of the problem far more than the antitrust activities did. The Intel one is also interesting 'cause it's sort of like the AT&T one. On the one hand, antitrust actions made Intel much more likely, and in fact required, to work with AMD enough to keep that company in business, and having AMD lowered prices for consumers and certainly probably sped up innovation in the personal computer business, and appeared to have a lot of benefits for those early years. But when you look at it from a longer point of view, and particularly when you look at it again from a global point of view, you see that, wow, it's not so clear, because that very presence of AMD meant that there's a lot more pressure on Intel in terms of its pricing, its profitability, its flexibility and its volumes. All the things that have made it harder for them to, A, compete with chips made in Taiwan, let alone build them in the United States, and therefore that long term effect of essentially requiring Intel to allow AMD to exist has undermined Intel's position globally and arguably has undermined America's position in the long run.
And certainly Intel today is far more vulnerable to Arm and NVIDIA, to other specialized chips, to China, to Taiwan — all of these things are going on out there, and they're less capable of resisting that than they would've been otherwise. So, you thought we had some real benefits with AMD and lower prices for consumers, but the long term unintended consequences are arguably pretty bad. >> Yeah, that's why we recently wrote that Intel is too "strategic to fail," we'll see. Okay, now we come to 2022 and there are five companies with antitrust targets on their backs. Although Microsoft, ironically, seems to be the least susceptible to US government intervention at this point, but maybe not. And we show the "Cinco Comas Club," in an homage to Russ Hanneman of the show "Silicon Valley": Apple, Microsoft, Google, and Amazon all with trillion dollar plus valuations. But Meta briefly crossed that threshold, then, like Mr. Hanneman, lost a comma and is now well under that market cap, probably around five or 600 billion. But it's under serious fire nonetheless. Dave, people often don't realize the immense monopoly power that IBM had, which, relatively speaking, when measured as its percent of industry revenue or profit, dwarfed that of any company in tech ever, but the industry was much smaller then, no internet, no cloud. Does it call for a different approach this time around? How should we think about these five companies, their market power, the implications of government action, and maybe what you suggested, more narrow action versus broad sweeping changes? >> Yeah, and there's a lot there. I mean, if you go back to the old days, IBM had what, 70% of the computer business globally, and AT&T had 90% or so of the American telecom market. So market shares that today's players can only dream of. Intel and Microsoft had 90% of the personal computer market. And then you look at today, the big five, and as wealthy and as incredibly successful as they've been, you sort of have an argument that's almost wrong on the face of it. How can five companies, all of which compete with each other to at least some degree, how can they all be monopolies? And the reality is they're not monopolies, they're all oligopolies that are very powerful firms, but none of them have an outright monopoly on anything. There are competitors in all the spaces that they're in, and probably increasingly so. And so, yeah, I think people conflate the extraordinary success of the companies with this belief that therefore they are monopolists, and I think they're far less so than those in the past. >> Great, all right, I want to do a quick drill down to cloud computing, it's a key component of digital business infrastructure. In his book, "Seeing Digital", Dave Moschella coined a term, the matrix, which really referred to the key technology platforms on which people are going to build digital businesses. Dave, we joke you should have called it the metaverse, you were way ahead of your time. But I want to look at this ETR chart, we show spending momentum or net score on the vertical axis and market share or pervasiveness in the dataset on the horizontal axis. We show this view a lot, we put a dotted line at the 40% mark, which indicates highly elevated spending. And you can sort of see Microsoft in the upper right, it's so far up to the right it's hidden behind the January '22 label, and AWS is right there. Those two dominate the cloud, far ahead of the pack including Google Cloud.
Microsoft, and to a lesser extent AWS, dominate in a lot of other businesses: productivity, collaboration, database, security, video conferencing, MarTech with LinkedIn, PC software, et cetera, et cetera. Google's, or Alphabet's, business of course is ads, and we don't have similar spending data on Apple and Facebook, but we know these companies dominate their respective businesses. But just to give you a sense of the magnitude of these companies, here's some financial data that's worth looking at briefly. The table ranks companies by market cap in trillions, that's the second column, and everyone in the club but Meta is above a trillion, and each has revenue well over a hundred billion dollars, Amazon approaching half a trillion dollars in revenue. The operating income and cash positions are just mind boggling, and the cash and equivalents are comparable to or well above the revenues of highly successful tech companies like Cisco, Dell, HPE, Oracle, and Salesforce. They're extremely profitable from an operating income standpoint, with the clear exception of Amazon, and we'll come back to that in a moment, and we show the revenue multiples in the last column: Apple, Microsoft, and Google, just insane. Dave, there are other equally important metrics, CapEx is one which kind of sets the stage for future scale, and there are other measures. >> Yeah, including their research and development, where those companies are spending hundreds of billions of dollars over the years. And I think it's easy to look at those numbers and just say, this doesn't seem right, how can any companies have so much and spend so much? But if you think of what they're actually doing, those companies are building out the digital infrastructure of essentially the entire world. And I remember once meeting some folks at Google, and they said, beyond AI, beyond Search, beyond Android, beyond all the specific things we do, the biggest thing we're actually doing is building a physical infrastructure that can deliver search results on any topic in microseconds, and the physical capacity they've built costs that sort of money. And when people start saying, well, we should have lots and lots of smaller companies, well, that sounds good, yeah, it's all right, but where are those companies going to get the money to build out what needs to be built out? And every country in the world is trying to build out its digital infrastructure, and some are going to do it much better than others. >> I want to just come back to that chart on Amazon for a bit. Notice their comparatively tiny operating profit as a percentage of revenue; Amazon is like Bezos' giant lifestyle business, it's really never been that profitable, like most retail. However, there's one other financial data point around Amazon's business that we want to share, and this chart here shows Amazon's operating profit in the blue bars and AWS's in the orange. And the gray line is the percentage of Amazon's overall operating profit that comes from AWS. That's the right-most axis, and last quarter it was well over a hundred percent, underscoring the power of AWS and the horrendous margins in retail. But AWS is essentially funding Amazon's entrance into new markets, whether it's grocery or movies, or Bezos' moves into space. Dave, a while back you collaborated with us and we asked our audience, what could disrupt Amazon? And we came up, with your detailed help, with a number of scenarios as shown here.
And we asked the audience to rate each scenario in terms of its likelihood of disrupting Amazon, with a 10 being highly likely. On average the score was six, with complacency, arrogance, blindness, you know, self-inflicted wounds really taking the top spot with 6.5. So Dave, is breaking up Amazon the right formula in your view? Why or why not? >> Yeah, there's a couple of things there. The first is sort of the irony that when people in the sort of regulatory world talk about the power of Amazon, they almost always talk about their power in consumer markets, whether it's books or retail or impact on malls or main street shops or whatever, and as you say, they make very little money doing that. The interesting thing is people almost never look at the big cloud battle between Amazon, Microsoft and, to a lesser extent, Google, Alibaba and others, even though that's where their by far highest market share and pricing power and all those things are. So the regulatory focus is sort of weird, but you know, the consumer stuff obviously has more appeal to the general public. But that survey you referred to was interesting, because one of the challenges I sort of set myself was, okay, well, if I'm going to say that in the IBM case, the AT&T case, Microsoft's case, in all those situations the market was the one that actually minimized the power of those firms and therefore the antitrust stuff wasn't really necessary, well, how true is that going to be again? Just 'cause it's been true in the past doesn't mean it's true now. So what are the possible scenarios over the 2020s that might make it all happen again? And so each of those were sort of questions that we put out to others, but the ones that to me are by far the most likely — I mean, you have the traditional one of company cultures sort of getting fat and happy and all, that's always the case — but the more specific one, first of all, by far, I think is China. You know, Amazon retail is a low margin business. It would be vulnerable if it didn't have the cloud profits behind it, but imagine a year from now, two years from now, trade tensions with China get worse and Christmas comes along and China just says, well, you know, American consumers, if you want that new exercise bike or those new shoes or clothing, well, anything that we make, well, actually that's not available on Amazon right now, but you can get that from Alibaba. And maybe in America that's a little more farfetched, but in many countries all over the world it's not farfetched at all. And so the retail division's vulnerability to China just seems pretty obvious. Another possible disruption: Amazon has spent billions and billions on their warehouses and their robots and their automated inventory systems and all the efficiencies that they've built there, but you could argue that maybe someday that's not really necessary, that you have search which finds where a good is made and a logistics system that picks that up and delivers it to customers, and why do you need all those warehouses anyway? So those are probably the top two, but there are others. I mean, a lot of retailers, as they get stronger online, maybe they start pulling back some of the premium products from Amazon — Amazon takes their cut of whatever, 30% or so — and people might want to keep more of that in house. You see some of that going on today.
So the idea that Amazon is invulnerable to disruption is probably wrong, and part of the work that I'm doing, part of the stuff that I do with Dave and SiliconANGLE, is asking how that's true for the others too. What are the scenarios for Google or Apple or Microsoft? And the scenarios are all there. And so, will these companies be disrupted as others have been in the past? Well, you can't say for sure, but the scenarios are certainly plausible, and I certainly wouldn't bet against it, and that's what history tells us. And it could easily happen once again, and therefore the antitrust folks should at least be cautious and humble and realize that maybe they don't need to act as much as they think. >> Yeah, now, one of the things that you mentioned in your piece was that narrow remedies felt more logical. So you're not arguing for totally laissez-faire, you're pushing for remedies that are more targeted in scope. And the EU just yesterday announced new rules to limit the power of tech companies, and we showed the article and some comments here; the regulators took to social media to announce a victory and they had a press conference. I know you watched that, it was sort of a back-slapping fest. The comments, however, that we've sort of listed here are mixed: some people applauded, but we saw many comments that were, hey, this is a horrible idea, this was rushed together. And these are going to result, as you say, in unintended consequences. But this is serious stuff. They're talking about applying what would appear to be, to your point or your prescription, more narrowly defined restrictions, although a lot of them, to any company with a market cap of more than 75 billion euros or turnover of more than 7.5 billion euros, which is a lot of companies, and imposing huge penalties for violations, up to 20% of annual revenue for repeat offenders, wow. So again, you've taken a brief look at these developments, you watched the press conference, what do you make of this? This is an application of more narrow restrictions, but in your quick assessment, did they get it right? >> Yeah, let's break that down a little bit, start with a little bit of history again and then get to Europe, because although big sweeping breakups of the type that were proposed for IBM, Microsoft and all weren't necessary, that doesn't mean that the government didn't do some useful things, because they did. In the case of IBM, government forces in Europe and America basically required IBM to make it easier for companies to make peripherals — tape drives, disk drives, printers — that worked with IBM mainframes. They made them unbundle their software pricing, which made it easier for database companies and others to sell their products. With AT&T, it was the government that required AT&T to actually allow other phones to connect to the network, something they argued at the time would destroy security or whatever, and it was the government that required them to allow MCI, the long distance carrier, to connect to the AT&T network for local delivery. And with Microsoft and Intel, the government required them to at least treat their suppliers more even-handedly in terms of pricing and policies and support and such things.
So the lesson out there is the big stuff wasn't really necessary, but the little stuff actually helped a lot, and I think you can see the scenarios, and I argue in the piece that there's little stuff that can be done today in all the cases for the big five. There are things that you might want to consider; the companies aren't saints, they take advantage of their power, they use it in ways that sometimes can be reined in to make things better overall. And so that brings us to the European piece of it. And to me, the European piece is much more the bad scenario of doing too much than the wiser course of trying to be narrow and specific. What they've basically done is they have a whole long list of narrow things that they're all trying to do at once. So they want Amazon not to be able to share data about its selling partners, and they want Apple to open up their app store, and they don't want Google to be able to share data across its different services, Android, Search, Mail or whatever. And they don't want Facebook to be able to, they want to force Facebook to open up to other messaging services. And they want to do all these things for all the big companies, all of which are American, and they want to do all that starting next year. And to me that looks like a scenario of a lot of difficult problems done quickly, all of which might have some value if done really, really well, but all of which have all kinds of risks for the unintended consequences we've talked about before, and therefore they seem to me to be too much too soon, and the sort of problems we've seen in the past. And frankly, to really say it, I mean, the Europeans would never have done this to the companies if they were European firms; they're doing this because they're all American firms, and the sort of frustration with American dominance of the European tech industry has always been there, going back to IBM, Microsoft, Intel, and all of them. But it's particularly strong now because the tech business is so big. And so I think the politics of this, at a time when there's supposedly all this great unity of America and NATO and Europe in regard to Ukraine, having the Europeans essentially go after the most important American industry brings the geopolitics in, I think, in an unavoidable way. And I would think the story is going to get pretty tense over the next year or so, and as you say, the Europeans think that they're taking massive actions, they think they're doing the right thing. They think this is the natural follow on to the GDPR stuff, and even a bigger version of that, and they think they have more to come, and they see themselves as the people taming big tech, not just within Europe, but for the world, and absent any other rules, they may pull that off. I mean, GDPR has indeed spread despite all of its flaws. So the European thing, which doesn't necessarily get huge attention here in America, is certainly getting attention around the world, and I would think it would get more, even more, going forward. >> And the caution there is for US public policy makers; maybe they'll provide a tailwind, maybe it's a blind spot for them, and it could become a template, like you say, just like GDPR. Okay, Dave, we got to leave it there. Thanks for coming on the program today, always appreciate your insight and your views, thank you. >> Hey, thanks a lot, Dave. >> All right, don't forget these episodes are all available as podcasts, wherever you listen. All you got to do is search "Breaking Analysis Podcast".
Check out ETR's website, etr.ai. We publish every week on wikibon.com and siliconangle.com. And you can email me at david.vellante@siliconangle.com or DM me @davevellante, or comment on my LinkedIn posts. This is Dave Vellante for Dave Moschella for "theCUBE Insights" powered by ETR. Have a great week, stay safe, be well, and we'll see you next time. (slow tempo music)

Published Date : Mar 27 2022



Does Intel need a Miracle?


 

(upbeat music) >> Welcome everyone, this is Stephanie Chan with theCUBE. Recently analyst Dave Vellante wrote a Breaking Analysis entitled "Pat Gelsinger has a vision, it just needs time, cash and a miracle," where he highlights why he thinks Intel is years away from reversing its position in the semiconductor industry. Welcome Dave. >> Hey thanks, Stephanie. Good to see you. >> So, Dave, you've been following the company closely over the years. If you look at the Wall Street Journal, most analysts are saying to hold onto Intel. Can you tell us why you're so negative on it? >> Well, you know, I'm not a stock picker, Stephanie, but I've seen the data, there are a lot of... some buys, some sells, but most of the analysts are on a hold. I think, who knows, maybe they're just hedging their bets, they don't want to make a strong controversial call, they're kind of sitting on the fence. But look, Intel is still an amazing company, they've got tremendous resources. They're an icon and they pay a dividend. So, there's definitely an investment case to be made to hold onto the stock. But I would generally say that investors better be ready to hold on to Intel for a long, long time. I mean, Intel's just not the dominant player that it used to be. And the challenges have been mounting for a decade, and look, competitively Intel's fighting a five-front war. They've got AMD in both PCs and the data center, the entire Arm ecosystem, and NVIDIA coming after them with the whole move toward AI and GPUs, they're dominating there. Taiwan Semiconductor is by far the leading fab in the world in terms of output. And I would say even China is kind of the fifth leg of that stool, long term. So, a lot of hurdles to jump competitively. >> So what are the other sources of Intel's trouble, besides what you just mentioned? >> Well, I think they started when PC volumes peaked, which was when David Floyer of Wikibon wrote back in 2011, 2012 that Intel, if it doesn't make some moves, is going to face some trouble. So, even though PC volumes have bumped up with the pandemic recently, they pale in comparison to the wafer volumes that are coming out of the Arm ecosystem and the TSM and Samsung factories. The volumes of the Arm ecosystem, Stephanie, they dwarf the output of Intel by probably 10X in semiconductors. I mean, the volume in semiconductors is everything. And because that's what drives costs down, and Intel is just not the low cost manufacturer anymore. And in my view, they may never be again, not without a major change in the volume strategy, which of course Gelsinger is doing everything he can to effect that change, but they're years away, and they're going to have to spend north of a 100 billion dollars trying to get there, but it's all about volume in the semiconductor game. And Intel just doesn't have it right now. >> So you mentioned Pat Gelsinger, he became the new CEO last January. He's a highly respected CEO, in tech for more than four decades, and I think he has the knowledge and experience, including 30 years at Intel where he began his career. What's your opinion on his performance thus far, besides the volume and semiconductor industry position of Intel? >> Well, I think Gelsinger is an amazing executive. He's a technical visionary, he's an execution machine, he's doing all the right things. I mean, he's working it, he was at the State of the Union address and looking good in a suit, he's saying all the right things. He's spending time with EU leaders. And he's just a very clear thinker and a super strong strategist, but you can't change physics.
The thing about Pat is he's known all along what's going on with Intel. I'm sure he's watched it from not so far away, because I think it's always been his dream to run the company. So, the fact is that he's made a lot of moves. He's bringing in new management, he's clearing out some of the dead wood at Intel. He's launched, kind of relaunched if you will, the foundry business, but I think they're serious about that this time around. You know, they're spinning out Mobileye to throw off some cash — Mobileye was an acquisition they made years ago — to throw off some more cash to pay for the fabs. They have announced things like fabs in Ohio, in the Heartland, which strikes all the right chords with the various politicians. And so again, he's doing all the right things. He's channeling his best Andrew Grove, I like to say, who's of course the iconic CEO of Intel for many, many years, but again, you can't change physics. He can't compress the cycle any faster than the cycle wants to go. And so he's doing all the right things. It's just going to take a long, long time. >> And you said that competition is better positioned. Could you elaborate on why you think that, and who are the main competitors at this moment? >> Well, it's this five-front war that I talked about. I mean, you see what's happened: Arm changed everything. Intel, remember, they passed on the iPhone, didn't think it could make enough money on smartphones. And that opened the door for Arm. It was eager to take Apple's business. And because of the consumer volumes, the semiconductor industry changed permanently, just like the PC volume changed the whole minicomputer business. Well, the smartphone changed the economics of semiconductors as well. Very few companies can afford the capital expense of building semiconductor fabrication facilities. And even fewer can make cutting edge chips, like five nanometer, three nanometer and beyond. So companies like AMD and NVIDIA, they don't make chips, they design them and then they ship them to foundries like TSM and Samsung to manufacture them. And because TSM has such huge volumes, thanks in large part to Apple, it's further down, or up I guess, the experience curve, and experience means everything in terms of cost. And they're leaving Intel behind. I mean, the best example I can give you is Apple. Look at the A series chips, and now the M1 and the M1 Ultra. Think about the traditional Moore's law curve that we all talk about, two X transistor density every two years. Intel's lucky today if it can keep that pace up, but let's assume it can. Meanwhile, look at Apple's Arm based M1 to M1 Ultra transition. It occurred in less than two years. It was more like 15 or 18 months. And it went from 16 billion transistors on a package to over a 100 billion. And so we're talking about the competition, Apple in this case using Arm standards, improving six to seven X inside of a two year period while Intel's running at two X. And that says it all. So Intel is on a curve that's more expensive and slower than the competition. >> Well recently Intel made a 5.4 billion dollar acquisition so it can make more chips for other companies, last February, I think the middle of February. What do you think of that strategic move? >> Well, it was designed to help with foundry. And again, I left that out of my list of things that Intel's doing, that Pat's doing — there's a long list actually, and many more.
Again, I think it's an Israeli-based company, but they're a global company, which is important. One of the things that Pat stresses is having a presence in Western countries, and I think that's super important. He'd like to get the percentage of semiconductors coming out of Western countries back up, maybe not to where it was previously, but by the end of the decade much more competitive. And so that's what that acquisition was designed to do. And it's a good move, but again, it doesn't change physics. >> So Dave, you've been putting a lot of content out there and been following Intel for years. What can Intel do to get back on track? >> Well, I think first it needs great leadership, and Pat Gelsinger is providing that. As we talked about, he's doing all the right things. He's manifesting his best Andrew Grove, as I said earlier. Splitting out the Foundry business is critical, because we all know Moore's law, but it's Wright's Law that talks about volume in any business, not just semiconductors, and it's crucial in semiconductors. So, splitting out a separate Foundry business to make chips is important. He's going to do that. Of course, he's going to ask Intel's competitors to allow Intel to manufacture their chips, which they may very well want to do, because there's such a shortage right now of supply and they need those types of manufacturers. So, the hope is that that's going to drive the volume necessary for Intel to compete cost effectively. And there's the CHIPS Act, and its EU cousin, where governments are going to possibly put some money into semiconductor manufacturing to make the West more competitive. It's a key initiative that Pat has put forth, and a challenge, and it's a good one. And he's making a lot of moves on the design side and committing tons of CapEx in these new fabs, as we talked about. But maybe his best chance is, again, the fact that, well, first of all, the market's enormous, it's a trillion dollar market, but secondly there's a very long term shortage in play here in semiconductors. I don't think it's going to be cleared up in 2022 or 2023. Demand is just going to keep exploding, whether it's automobiles and factory devices and cameras. I mean, virtually every consumer device and edge device is going to use huge numbers of semiconductor chips. So, I think that's in Pat's favor, but honestly Intel is so far behind, in my opinion, that I hope by the end of this decade it's going to be in a position, maybe a stronger number two position in volume behind TSM, maybe number three behind Samsung. Maybe Apple is going to throw Intel some foundry business over time, maybe under pressure from the US government, and they can maybe win that account back, but that's still years away from a design cycle standpoint. And so again, maybe in the 2030s Intel can compete for top dog status, but that in my view is the best we can hope for this national treasure called Intel. >> Got it. So we got to leave it right there. Thank you so much for your time, Dave. >> You're welcome, Stephanie. Good to talk to you. >> So you can check out Dave's Breaking Analysis on theCUBE.net each Friday. This is Stephanie Chan for theCUBE. We'll see you next time. (upbeat music)
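As a rough companion to the volume and transistor-growth points made in this segment, the arithmetic can be written out explicitly. This is a back-of-the-envelope sketch using only the figures quoted in the conversation; no particular learning rate for semiconductors is asserted here.

```latex
% Wright's Law (experience curve): the cost of the x-th cumulative unit
% falls by a fixed fraction with every doubling of cumulative volume.
C(x) = C_1 \, x^{-b}, \qquad \text{each doubling multiplies unit cost by } 2^{-b}

% Annualizing the quoted growth figures:
% traditional Moore's law, 2x transistor density every ~24 months:
2^{12/24} - 1 \approx 41\% \ \text{per year}

% Apple M1 (~16B transistors) to M1 Ultra (>100B) in roughly two years:
\left(\tfrac{100}{16}\right)^{1/2} = 2.5\times \ \text{per year}
```

That gap, roughly 2.5x per year versus about 1.4x per year, is the sense in which the competition is on a faster curve, and Wright's Law is why the volume leaders also tend to be the cost leaders.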

Published Date : Mar 22 2022



Krish Prasad and Manuvir Das | VMworld 2020


 

>> Narrator: From around the globe, it's theCUBE, with digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Hello, and welcome back to theCUBE's virtual coverage of VMworld 2020. I'm John Furrier, host of theCUBE. VMworld's not in person this year, it's on the virtual internet. A lot of content, check it out, vmworld.com, a lot of great stuff, online demos, and a lot of great keynotes. Here we've got a great conversation to unpack: NVIDIA, AI and all things cloud native, with Krish Prasad, who's the SVP and GM of the Cloud Platform Business Unit at VMware, and Manuvir Das, head of enterprise computing at NVIDIA. Gentlemen, great to see you virtually. Thanks for joining me on the virtual Cube, for the virtual VMworld 2020. >> Thank you John. >> Pleasure to be here. >> Quite a world. And I think one of the things that obviously we've been talking about all year since COVID is the acceleration of this virtualized environment, with media and everyone working at home, remote. It really puts the pressure on digital transformation, which has been well discussed and documented. You guys have some big news, obviously on the main stage NVIDIA's CEO, Jensen, a legend. And of course, you know, big momentum with AI and GPUs and all things, you know, computing. Krish, what are your announcements today? You got some big news. Could you take a minute to explain the big announcements today? >> Yeah, John. So today we want to make two major announcements regarding our partnership with NVIDIA. So let's take the first one and talk through it, and then we can get to the second announcement later. In the first one, as you well know, NVIDIA is the leader in AI and VMware is the leader in virtualization and cloud. This announcement is about us teaming up to deliver a jointly engineered solution to the market to bring AI to every enterprise. So as you well know, VMware has more than 300,000 customers worldwide. And we believe that this solution would enable our customers to transform their data centers for AI applications running on top of the virtualized VMware infrastructure that they already have. And we think that this is going to vastly accelerate the adoption of AI and essentially democratize AI in the enterprise. >> Why AI? Why now, Manuvir? Obviously we know the GPUs have set the table for many cool things, from mining Bitcoin to really providing a great user experience. But AI has been a big driver. Why now? Why VMware now? >> Yes. Yeah. And I think it's important to understand this is about AI more than even about GPUs, you know. This is a great moment in time where AI has finally come to life, because the hardware and software has come together to make it possible. And if you just look at industries and different parts of life, how is AI impacting? So for example, if you're a company on the internet doing business, everything you do revolves around making recommendations to your customers about what they should do next. This is based on AI. Think about the world we live in today, with the importance of healthcare, drug discovery, finding vaccines for something like COVID. That work is dramatically accelerated if you use AI. And what we've been doing in NVIDIA over the years is, we started with the hardware technology with the GPU, the parallel processor, if you will, that could really make these algorithms real. And then we worked very hard on building up the ecosystem. You know, we have 2 million developers today who work with NVIDIA AI.
That's thousands of companies that are using AI today. But then if you think about what Krish said, you know, about the number of customers that VMware has, which is in the hundreds of thousands, the opportunity before us really now is, how do we democratize this? How do we take this power of AI that makes every customer and every person better and put it in the hands of every enterprise customer? And we need a great vehicle for that, and that vehicle is VMware. >> Guys, before we get to the next question, I just want to get your personal take on this, because again, we've talked many times, both of you've been on theCube on this topic. But now I want to highlight: you mentioned the GPU, that's hardware. This is software. VMware has hardware partners, and still software's driving it. Software's driving everything. Whether it's something in space, an IoT device, or anything at the edge of the network, software is the value. This has become so obvious. Just share your personal take on this for folks who are now seeing this for the first time. >> Yeah. I mean, I'll give you my take first. I'm a software guy by background, I learned a few years ago for the first time that an array is a storage device and not a data structure in programming. And that was a shock to my system. Definitely the world is based on algorithms. Algorithms are implemented in software. Great hardware enables those algorithms. >> Krish, your thoughts? We're living in the future right now. >> Yeah, yeah. I would say that, I mean, the developers are becoming the center. They are actually driving the transformation in this industry, right? It's all about the application development, it's all about software, and the infrastructure itself is becoming software defined. And the reason for that is you want the developers to be able to craft the infrastructure the way they need for the applications to run on top of. So it's all about software, like I said. >> Software defined. Yeah, just want to get that quick self-congratulatory high five amongst ourselves virtually. (laughs) Congratulations. >> Exactly. >> Krish, last time we spoke at VMworld, we were obviously in person, but we talked about Tanzu and vSphere. Okay, you had Project Pacific. Does this expand? Does this announcement expand on that offering? >> Absolutely. As you know, John, for the past several years VMware has been on this journey to define the hybrid cloud infrastructure, right? Essentially it's the software stack that we have, which will enable our customers to provide a cloud operating model to their developers, irrespective of where they want to land their workloads. Whether they want to land their workloads on-premise, or if they want it to be on top of AWS, Google, Azure, the VMware stack is already running across all of them, as you well know. And in addition to that, we have around, you know, 4,000, 5,000 service providers who are also running our platform to deliver cloud services to their customers. So as part of that journey, last year we took the platform and we added one further element to it. Traditionally, our platform has been used by customers for running VMs. Last year, we natively integrated Kubernetes into our platform. This was the big re-architecture of vSphere, as we talked about. That was delivered to the market. And essentially now customers can use the same platform to run Kubernetes, containers and VM workloads. The exact same platform, it is operationally the same.
So the same skillsets, tools and processes can be used to run Kubernetes as well as VM applications. And the same platform runs, whether you want to run it On-Premise or in any of the clouds, as we talked about before. So that vastly simplifies the operational complexity that our customers have to deal with. And this is the next chapter in that journey, by doing the same thing for AI workload. >> You guys had great success with these Co-Engineering joined efforts. VMware and now with NVIDIA is interesting. It's very relevant and is very cool. So it's cool and relevant, so check, check. Manuvir, talk about this, because how do you bring that vision to the enterprises? >> Yeah, John, I think, you know, it's important to understand there is some real deep Computer Science here between the Engineers at VMware and NVIDIA. Just to lay that out, you can think of this as a three layer stack, right? The first thing that you need is, clearly you need the hardware that is capable of running these algorithms, that's what the GPU enable. Then you need a great software stack for AI, all the right Algorithmics that take advantage of that hardware. This is actually where NVIDIA spends most of its effort today. People may sometimes think of NVIDIA as a GPU company, but we have much more a software company now, where we have over the years created a body of work of all of the software that it actually takes to do good AI. But then how do you marry the software stack with the hardware? You need a platform in the middle that supports the applications and consumes the hardware and exposes it properly. And that's where vSphere, you know, as Krish described with either VMs or Containers comes into the picture. So the Computer Science here is, to wire all these things up together with the right algorithmics so that you get real acceleration. So as examples of early work that the two teams have done together, we have workloads in healthcare, for example. In cancer detection, where the acceleration we get with this new stack is 30X, right? The workload is running 30 times faster than it was running before this integration just on CPUs. >> Great performance increase again. You guys are hiring a lot of software developers. I can attest to knowing folks in Silicon Valley and around the world. So I know you guys are bringing the software jobs to the table on a great product by the way, so congratulations. Krish, Democratization of AI for the enterprise. This is a liberating opportunity, because one of the things we've heard from your customers and also from VMware, but mostly from the customer's successes, is that there's two types of extremes. There's the, I'm going to modernize my business, certainly COVID forcing companies, whether they're airlines or whatever, not a lot going on, they have an opportunity to modernize, to essentially modern apps that are getting a tailwind from these new digital transformation accelerated. How does AI democratize this? Cause you got people and you've got technology. (laughs) Right? So share your thoughts on how you see this democratizing. >> That's a very good question. I think if you look at how people are running AI applications today, like you go to an enterprise, you would see that there is a silo of bare metal sun works on the side, where the AI stack is run. And you have people with specialized skills and different tools and utilities that manage that environment. And that is what is standing in the way of AI taking off in the enterprise, right? It is not the use case. 
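As a rough illustration of the kind of acceleration Manuvir describes above, the sketch below times the same dense matrix multiply on a CPU and then on a GPU with PyTorch. It is illustrative only; the measured speedup depends entirely on the hardware and workload and is not meant to reproduce the 30X figure quoted for the healthcare pipeline.

```python
# Illustrative only: timing the same matrix multiply on CPU vs GPU with PyTorch.
import time
import torch

x = torch.randn(4096, 4096)

t0 = time.time()
_ = x @ x                      # same math on the CPU
cpu_secs = time.time() - t0

if torch.cuda.is_available():
    xg = x.to("cuda")          # one-time copy of the data to the GPU
    torch.cuda.synchronize()   # make sure the copy has finished
    t0 = time.time()
    _ = xg @ xg                # same math on the GPU
    torch.cuda.synchronize()   # wait for the kernel before stopping the clock
    gpu_secs = time.time() - t0
    print(f"CPU {cpu_secs:.3f}s, GPU {gpu_secs:.3f}s, ~{cpu_secs / gpu_secs:.0f}x faster")
```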
There are all these use cases which are mission critical that all companies want to do, right? Worldwide, that has been the case. It is about the complexity of IT that is standing in the way. So what we are doing with this is we are saying, "hey, that whole solution stack that Manuvir talked about is integrated into the VMware Virtualized Infrastructure," whether it's On-Prem or in the cloud. And you can manage that environment with the exact same tools and processes and skills that you traditionally had for running any other application on VMware infrastructure. So, you don't need to have anything special to run this. And that's what is going to give us the acceleration that we talked about and essentially drive the democratization of AI. >> That's a great point. I just want to highlight that and call that out, because AI's in every use case. You could almost say theCube could have AI, and we do actually have a little bit of AI in some of our transcriptions and work. But it's not so much just use cases, it's actually not just saying you got to do it. So taking down that blocker, the complexity, certainly is the key. And that's a great point. We're going to call that out after. Alright, let's move on to the second part of the announcement. Krish, Project Monterey. This is a big deal. And it looks like, you know, kind of this elusive, architectural thing, but it's directionally really strategic for VMware. Could you take a minute to explain this announcement? Frame this for us. >> Absolutely. I think John, you remember Pat got on stage last year at VMworld and said, you know, "we are undertaking the biggest rearchitecture of the vSphere platform in the last 10 years." And he was talking about natively embedding Kubernetes in vSphere, right? Remember Tanzu and Project Pacific. This year we are announcing Project Monterey. It's a significant project with several partners in the industry, and NVIDIA was one of the key partners. And what we are doing is we are reimagining the data center for the next generation of applications. And at the center of it, what we are going to do is rearchitect vSphere and ESX. So that ESX can not only run on the CPU, but it'll also run on the SmartNIC. And what this gives us is the whole set of, let's say, data center infrastructure-type services to be offloaded from running on the CPU onto the SmartNIC. So what does this provide the applications? The applications then will perform better. And secondly, it provides an extra layer of security for the next generation applications. Now we are not going to stop there. We are going to use this architecture and extend it so that we can finally eliminate one of the big silos that exist in the enterprise, which is the bare metal silo. Right? Today we have virtualized environments and bare metal, and what this architecture will do is bring those bare metal environments also under ESX management. So ESX will manage environments which are virtualized and environments which are running a bare metal OS. And so that's one big breakthrough and simplification for the elimination of that silo, or the elimination of, you know, specialized skills to keep it running. And lastly, but most importantly, where we are going with this. This goes back to the question you asked us earlier about software defined and developers being in control. Where we want to go with this is give developers, the application developers, the ability to really define and create their runtime on the fly, dynamically. So think about it.
If, dynamically, they're able to describe how the application should run, the infrastructure essentially kind of attaches compute resources on the fly, whether they are sitting in the same server or somewhere in the network as pools of resources. Bring it all together and compose the runtime environment for them. That's going to be huge. And they won't be constrained anymore by the resources that are tied to the physical server that they are running on. And that's the vision of where we are taking it. It is going to be the next big change in the industry in terms of enterprise computing. >> Sounds like an Operating System to me. Yeah. Runtime, assembly, orchestration, all these things coming together, exciting stuff. Looking forward to digging in more after VMworld. Manuvir, how does this connect to NVIDIA and AI? Tie that together for us. >> Yeah, it's an interesting question, because you would think, you know, okay, so NVIDIA is this GPU company or this AI company. But you have to remember that NVIDIA is also a networking company, because our friends at Mellanox joined us not that long ago. And the interesting thing is that there's a Yin and Yang here, because Krish described the software vision, which is brilliant. And what this does is it imposes a lot on the host CPU of the server to do. And so what we've been doing in parallel is developing hardware. A new kind of "NIC," if you will, we call it a DPU, or Data Processing Unit, or a SmartNIC, that is capable of hosting all this stuff. So, amusingly, when Krish and I started talking, we exchanged slides and we basically had the same diagram for our vision of where things go with that software, the infrastructure software being offloaded, data center infrastructure on a chip, if you will. Right? And so it's a very natural confluence. We are very excited to be part of this, >> Yeah. >> Monterey program with Krish and his team. And we think our DPU, which is called the NVIDIA BlueField-2, is a pretty good device to empower the work that Krish's team is doing. >> Guys, it's awesome stuff. And I got to say, you know, I've been covering VMworld now 11 years with theCube, and I've known VMware since its founding, just the evolution. And just recently before VMworld, you know, you saw the biggest IPO in the history of Wall Street, Snowflake, an enterprise data cloud company. The number one IPO ever. Enterprise tech is so exciting. This is really awesome. And NVIDIA obviously well known, great brand. You own some chip companies as well, and you've got processors and data and software. Guys, customers are going to be very interested in this, so what should customers do to find out more? Obviously you've got Project Monterey, strategic direction, right? Framed perfectly. You got this announcement. If I'm a customer, how do I get involved? How do I learn more? And what's in it for me? >> Yeah, John, I would say, sorry, go ahead, Krish. >> No, I was just going to say, sorry Manuvir. I was just going to say, a lot of these discussions are going to be happening, there are going to be panel discussions, there are going to be presentations at VMworld. So I would encourage customers to really look at these topics around Project Monterey and also the AI work we are doing with NVIDIA, and attend those sessions and be active, and we will have ways for them to connect with us in terms of our early access programs and whatnot. And then, as Manuvir was about to say, I think Manuvir, I will give it to you about GTC.
>> Yeah, I think right after that, we have the NVIDIA conference, which is GTC, where we'll also go over this. And I think some of this work is a lot closer to hand than people might imagine. So I would encourage watching all the sessions and learning more about how to get started. >> Yeah, great stuff. And just for the folks @vmworld.com watching, Cloud City's got 60 solution demos, go look for the sessions. You got the EX, the expert sessions, Raghu, Joe Beda amongst other people from VMware are going to be there. And of course, a lot of action on the content. Guys, thanks so much for coming on. Congratulations on the news, big news. NVIDIA on the Bay in Virtual stage here at VMworld. And of course you're in theCube. Thanks for coming. Appreciate it. >> Thank you for having us. Okay. >> Thank you very much. >> This is Cube's coverage of VMworld 2020 virtual. I'm John Furrier, host of theCube virtual, here in Palo Alto, California for VMworld 2020. Thanks for watching. (upbeat music)

Published Date : Sep 18 2020


John Curran & Jim Benedetto, Core Scientific | Pure Accelerate 2019


 

>> Announcer: From Austin, Texas, it's theCUBE Covering Pure Storage Accelerate 2019. Brought to you by Pure Storage. >> Welcome back to theCUBE, Lisa Martin live on the Pure Accelerate floor in Austin, Texas. Dave Vellante is joining me and we're pleased to welcome a couple of guests from Core Scientific for the first time to theCUBE. We have Jim Benedetto, Chief Data Officer and John Curran, the SVP of Business Development. Gentlemen, welcome to theCUBE. >> Both: Thank you. >> Pleasure to be here. >> So John, we're going to start with you. Give our audience an overview of who Core Scientific is, what you guys do, what you deliver. >> Sure, well, we're a two year old start up. Headquartered out of Bellevue, Washington and we really focus on two primary businesses. We have a blockchain business and we have an AI business. In blockchain, we are one of the largest blockchain cryptocurrency hosting companies in North America. We've got facilities, four facilities in North Carolina, South Carolina, Georgia, and Kentucky. And really the business there is helping companies to be able to take advantage of blockchain and then position them for the future, you know. And then on the AI side of our business, really we operate that in two ways. One is we can also co-locate and host people, just like we do on the blockchain side. But primarily, we're focused on creating a public cloud focused on GPU centric computing and artificial intelligence and we're there to help really usher in the new age of AI. >> So you guys you founded, you said two years ago. >> Yes. >> From what I can tell you haven't raised a ton of dough. Is that true or are you guys quiet about that? >> John: We're very well capitalized. >> Okay, so it hasn't hit crunch base yet. >> Yeah, no. So we're a very well capitalized company. We've got, you know, to give you-- >> 'Cause what you do is not cheap. >> No, no, we've got about 675 megawatts of power under contract so each one of our facilities is about 50 megawatts plus in size. So no, it's not cheap. They're large installations and large build outs. >> And to even give you a comparison, a standard data center is about five to 10 megawatts. We won't even look at a facility or a plot of land unless we can supply at least 50 megawatts of power. >> So I was going to ask you kind of describe what's different between sort of blockchain hosting at conventional data bases or data centers. You kind of just did, but are there other sort of technical factors that you guys consider? >> Absolutely. We custom build our own data centers from the ground up. We've got patent pending technology, and if you look at virtually every data center in the world today, it's built with one thing at it's core and that's the CPU. The CPU is fundamentally different than the GPU and if you try to retrofit CPU based data centers for GPUs you're not going to fully maximize the performance and the capabilities of the GPU. So we build from the ground up data centers focused with the GPU at the center and not the CPU at the center. >> And is center in quotes because I mean, you have all this alternative processing, GPUs in particular that are popping up all over the place. As opposed to traditional CPU, which is, okay, just jam as much as I can on the real estate as possible, is that a factor? >> Well there's also a lot, the GPU at the center but there's also a lot of supporting infrastructure. So you got to look at first off the power density is very, very different. 
GPUs require significantly more power than CPUs do, and then also, just from a fluid dynamics perspective, the heating and cooling of them is again fundamentally different. You're not looking at standard hot and cold aisles and raised floors. But the overall goal also is to be able to provide a supporting infrastructure which, from an AI-ready design, is the interconnected networking and also the incredibly fast storage behind it. Because the name of the game with GPUs is different than with CPUs. With GPUs, the one thing you want to do is you want to get as much data into the GPU as fast as possible, because compute will very rarely be your limiting factor with the GPU, so the supporting infrastructure is significantly more important than it is when you're dealing with CPUs. >> So the standard narrative is, well, I don't know about cryptocurrency but the underlying technology of blockchain has a lot of potential. I personally think they're very much related and I wonder if you guys can comment on that. You started during, sort of, the latest, most recent big uptick, and I know it's bounced back in cryptocurrency, so you must've had a lot of activity in your early days. And then maybe the crypto winter affected you, maybe it didn't. Some of those companies were so well capitalized, it was kind of their time to innovate, right? And yeah, there were some bad actors but that's really not the core of it. So I wonder what you guys have seen in the blockchain market. We'll get to AI and Pure and all that other stuff but this is a great topic, so I wonder if you could comment. >> So you know, yes, there's certainly cyclicality in the blockchain market, right? I think one of the key things is being well capitalized allows you to invest through the downturns to position to come out stronger as the market comes back, and you know, we've certainly seen that. Our growth in blockchain continues to really be substantial. And you know, we're making all the right strategic investments, right? Whether it's blockchain or AI, because you have such significant power requirements, you know, you got to be very strategic about where you put the facilities. You're looking for facilities that have large sustained power capabilities, green. You know we've seen carbon taxes come in, that'll adversely affect folks. We want to make sure we're positioned for the long term in terms of the capabilities. And then some geopolitical uncertainty has certainly affected, you know, the blockchain side of the business, and it's driven more business to North America, which has been fantastic for us. >> To me you're hosting innovation, you're talking blockchain and AI and like you're saying include crypto in there, you have some cryptocurrency guys, right? >> We do blockchain or cryptocurrency mining for ourselves as well. >> For yourselves, okay. But so my take on it is a whole new internet is being built and the crypto craze actually has funded a lot of that innovation. New protocols, when's the last time that happened? The protocols of the internet, SMTP, HTTP, they're all government funded or education funded, academic institutions, and the big internet companies sort of co-opted them. So you had a dearth of innovation, and that's now come back. And you guys are hosting that innovation, that's kind of how I look at it. And I feel like we've seeded the base and there's going to be this massive explosion of innovation, both in blockchain, crypto, AI, automation, and you're in the heart of it.
>> Yeah I agree, I think cryptocurrencies or digital currencies are really just the first successful experiment of the blockchain and I agree with you, I think that is is as revolutionary and is going to change as many industries as the internet did and we're still very in a nascent stage of the technology but at Core, we're working to position ourselves to really be the underlying platform, almost like the alchemy of the early days of the internet. The underlying platform and the plumbing for both blockchain and AI applications. >> Right, whether it's smart contracts, like I say, new innovation, AI, it's all powering next generation of distributed apps. Really okay, so, sorry, I love this topic. >> I know you do. (laughs) >> Okay so where do these guys fit in? >> John: So do we. >> I mean, it's just so exciting. I think it's misunderstood. I mean the people who are into it are believers. I mean like myself, I really believe in a value store, I believe in smart contracts, immutability, you know, and I believe in responsibility too and that other good stuff but so. >> Innovation in private blockchain is just starting. If you look at it, I think there's going to be multiple waves in the blockchain side and we want to be there to make sure that we're helping power and position folks from both an infrastructure as well as a software perspective. >> Every financial institution, you got VMware doing stuff, Libra, I love Libra even though it's getting a lot of criticism, it just shined a light on the whole topic but bring us back to sort of commercial mainstream, what are you guys doing here, what's going on with Pure? >> So we have built, we're the first AI ready certified data center and we've actually partnered very closely with Pure and INVIDIA. As we went through the selection process of what type of storage we're going to be using to back our GPUs, we went through a variety of different evaluation criteria and Pure came out ahead and we've decided that we're going with Pure and we, again, for me it boils down to one thing as a Chief Data Officer is how much data can I get into those GPUs as fast as possible? And what you see is if you look at a existing, current Cloud providers, you'll see that their retro fitting CPU based centers for GPUs and you see a lot of problems with that where the storage that they provide is not fast enough to drive quote unquote warm or cold data into the GPUs so people end up adding more and more GPUs, it's actually just increased GPU memory when they're usually running around a couple percents, like one or two percent, five percent compute but you have to add more just for the memory because the storage is so slow. >> So you, how Jim you were saying before when we were chatting earlier, that you have had 20 years of experience looking at different storage vendors, working with them, what were some of the criteria, you talked about the speed and the performance, but in terms of, you also mentioned John that green was, is an important component of the way that you build data centers, where was Pure's vision on sustainability, ever green, where was that a factor in the decision to go with Pure? >> If you look at Pure's power density requirements and things like that, I think it's important. One thing that also, and this does apply from the sustainability perspective, where a lot of other storage vendors say that they're horizontally scalable forever but they're actually running different heads and in a variety of different ways. 
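A minimal sketch of the "keep the GPU fed" point Jim keeps returning to, assuming PyTorch on an NVIDIA GPU: CPU workers prepare batches in parallel and pinned host memory allows asynchronous copies, so the GPU is not left waiting on storage or the host. The dataset and model below are placeholders, not anything Core Scientific actually runs.

```python
# A minimal sketch of feeding a GPU: parallel CPU loading plus pinned-memory,
# non-blocking host-to-device copies. Dataset, sizes, and model are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset

data = TensorDataset(torch.randn(10_000, 3, 32, 32), torch.randint(0, 10, (10_000,)))
loader = DataLoader(
    data,
    batch_size=256,
    num_workers=8,       # parallel CPU workers so batch preparation is not the bottleneck
    pin_memory=True,     # page-locked memory enables asynchronous copies to the GPU
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10)).to(device)

with torch.no_grad():
    for images, labels in loader:
        images = images.to(device, non_blocking=True)  # copy can overlap prior GPU work
        _ = model(images)
```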
Pure is the only storage vendor that I've ever come across that is truly horizontally scalable. And when you start to try to build stuff like that you get into all the different things of super computing where you got, you know, split brain scenarios and fencing and it's very complex but their ability to scale horizontally with just, not even disc, but just the storage is something that was really important to us. >> I think the other thing that's certainly interesting for our customers is you're looking at important workloads that they're driving out and so the ability to do in place upgrades, business continuity, right, to make sure that we're able to deliver them technology that doesn't disrupt their business when their business needs the results, it's critically important so Pure is a great choice for us from that perspective and the innovations they're driving on that side of the business has really been helpful. >> I read a stat on the Pure website where users of Core Scientific infrastructure are seeing performance improvements of up to 800%. Are you delighting the heck out of data scientists now? >> Yeah, I mean. >> Are those the primary users? >> That is, it again references what we see with people using GPUs in the public Cloud. Again, going back to the thing that I keep hammering on, driving data into that GPU. We had one customer that had somewhere 14 or 15 GPUs running an analytics application in the public Cloud and we told them keep all your CPU compute in one of the largest Cloud providers but move just your GPU compute to us and they went from 14 or 15 GPUs down to two. GV-100 and a DGX-1 and backed by Pure Storage with Arista and from 14 GPUs to two GPUs, they saw an 800% in performance. >> Wow. >> And there's a really important additional part to that, let's say if I'm running a dashboard or running a query and a .5 second query gets an 800% increase in performance, how much do I really care? Now if I'm the guy running a 100 queries every single day, I probably do but it's not just that, it's the fact that it allows, it doesn't just speed up things, it allows you to look at data you were never able to look at before. So it's not just that they have an 800% performance increase, it's that instead of having tables with 100s of millions of rows, they now can have tables with billions of rows. So data that was previously not looked at before, data that was previously not turned into the actionable information to help drive their business, is now, they're now getting visibility into data they didn't have access to before. >> So you're a CDO that, it sounds like you have technical chops. >> Yeah, I'm a tech nerd at heart. >> It's kind rare actually for a CDO, I've interviewed a lot of CDOs and most of them are kind of come from a data quality background or a governance and compliance world, they don't dress like you (laughs) They dress like I do. (laughs) Even quite a bit better. But the reason I ask that, it sounds like you're a different type of CDO, like even a business like yours, I almost think you're a data scientist. So describe your role. >> I've actually held, I was with the company from the beginning so I've held quite a few roles actually. I think this might be my third title at this point. >> Okay. >> But in general, I'm a very technical person. I'm hands on, I love technology. I've held CTO titles in the past as well. >> Dave: Right. 
>> But I kind of, I've always been very interested in data and interested in storage because that's where data lives and it's a great fit for me. >> So I've always been interested in this because you know the narrative is that CDOs shouldn't be technical, they should be business and I get all that but the flip side of that is when you talk to CDOs about AI projects, which is you know, not digital transformation but specifically AI projects, they're not, most CDOs in healthcare, financial services, even government, they're not intimately involved, they're kind of like yeah, Chief Data Officer, we'll let you know when we have a data quality problem and I don't think that's right. I mean the CDO should be intimately involved. >> I agree. >> In those AI projects. >> I think a lot of times if you ask them, you ask, a lot of people, they'll say are you interested in deploying AI in your organization? And the answer is 100% yes and then the next follow up question is what would you like to do with it? And most of the time the answer is we don't know. I don't know. So what I have found is I go into organizations, I don't ask if people want to use AI, I ask what are your problems and I think what problems are you facing, what KPIs are you trying to optimize for and there are some of those problems, there are some problems on that list that might not be able to be helped by AI but usually there are problems on that list that can be helped by AI with the right data and the right place. >> So my translation of what you're asking is how can you make more money? (laughs) >> That what it comes down to. >> That's what you're asking, how can you cut costs or raise revenue, that's really ultimately what you're getting to. >> Data. >> Find new customers. I think the other interesting thing about our partnership with Pure and especially with regards to AIRE, AIRE's is an exciting technology but for a lot of companies is they're looking to get started in AI, there's almost this moment of pause, of how do I get started and then if I look at some of the greatest technology out there, it's like, okay, well now I have to retrofit my data center to get it in there, right. There's a bunch of technical barriers that slow down the progression and what we've been able to do with AIRE and the Cloud is really to be able to help people jumpstart, to get started right away. So rather than you know, let me think for six months or 12 months or 18 months on what would I analyze, start analyzing, get started and you can do it on a very cost effective outback's model as opposed to a capital intensive CAMP-X model. >> Alright, so I got to ask you. >> Yeah. >> And Pure will be pissed off I'm asking this question because you're talking about AIRE as a, it's real and I want some color on that but I felt like when the first announcement came out with Invida, it was rushed so that Pure could have another first. (laughs) Ink was drying, like we beat the competition but the way you're talking is AIRE is real, you're using it, it's a tangible solution. It's a value to your business. >> It's a core solution in our facility. >> Dave: It's a year ago. >> It's a core thing that we go to market with and it's something that you know, we're seeing customer demand to go out and really start to drive some business value. So you know, absolutely. >> A core component of helping them jumpstart that AI. Well you guys just, I think an hour or so ago, announced your new partnership level with Pure. 
John, take us away as we wrap here with the news please. >> Yeah, so well we're really excited. We're one of a handful of elite level MSP partners for Pure. I think there's only a few of us in the world so that's something and we're really the one who is focused on bringing ARIE to the Cloud and so it's a unique partnership. It's a deep partnership and it allows us to really coordinate our technical teams, our sales teams, you know, and be able to bring this technology across the industry and so we're excited, it's just the start but it's a great start and we're looking forward to nothing but upside from here. >> Fantastic, you'll have to come back guys and talk to us about a customer's who's done a jumpstart with ARIE and just taking the world by storm. So we thank you both for stopping by theCUBE. >> Absolutely, we'll love to do that. >> Lisa: Alright John, Jim, thank you so much for your time. >> Thank you. >> Absolutely. >> John: Really appreciate it. >> For Dave Vellante, I'm Lisa Martin, you're watching theCUBE from Pure Accelerate 2019. (upbeat techno music)

Published Date : Sep 18 2019


Jamie Thomas, IBM | IBM Think 2019


 

>> Live from San Francisco. It's theCube covering IBM Think 2019. Brought to you by IBM. >> Welcome back to Moscone Center everybody. The new, improved Moscone Center. We're at Moscone North, stop by and see us. I'm Dave Vellante, he's Stu Miniman and Lisa Martin is here as well, John Furrier will be up tomorrow. You're watching theCube, the leader in live tech coverage. This is day zero essentially, Stu, of IBM Think. Day one, the big keynotes, start tomorrow. Chairman's keynote in the afternoon. Jamie Thomas is here. She's the general manager of IBM's Systems Strategy and Development at IBM. Great to see you again Jamie, thanks for coming on. >> Great to see you guys as usual and thanks for coming back to Think this year. >> You're very welcome. So, I love your new role. You get to put on the binoculars sometimes the telescope. Look at the road map. You have your fingers in a lot of different areas and you get some advanced visibility on some of the things that are coming down the road. So we're really excited about that. But give us the update from a year ago. You guys have been busy. >> We have been busy, and it was a phenomenal year, Dave and Stu. Last year, I guess one of the pinnacles we reached is that we were named with our technology, our technology received the number one and two supercomputer ratings in the world and this was a significant accomplishment. Rolling out the number one supercomputer in Oakridge National Laboratory and the number two supercomputer in Lawrence Livermore Laboratory. And Summit as it's called in Oakridge is really a cool system. Over 9000 CPUs about 27,000 GPUs. It does 200 petaflops at peak capacity. It has about 250 petabytes of storage attached to it at scale and to cool this guy, Summit, I guess it's a guy. I'm not sure of the denomination actually it takes about 4,000 gallons of water per minute to cool the supercomputer. So we're really pleased with the engineering that we worked on for so many years and achieving these World records, if you will, for both Summit and Sierra. >> Well it's not just bragging rights either, right, Jamie? I mean, it underscores the technical competency and the challenge that you guys face I mean, you're number one and number two, that's not easy. Not easy to sustain of course, you got to do it again. >> Right, right, it's not easy. But the good thing is the design point of these systems is that we're able to take what we created here from a technology perspective around POWER9 and of course the patnership we did with Invidia in this case and the software storage. And we're able to downsize that significantly for commercial clients. So this is the world's largest artificial intlligence supercomputer and basically we are able to take that technology that we invented in this case 'cause they ended up being one of our first clients albeit a very large client, and use that across industries to serve the needs of artificial intelligence work loads. So I think that was one of the most significant elements of what we actually did here. >> And IBM has maintained, despite you guys selling off your microelectronics division years ago, you've maintained a lot of IP in the core processing and the design. You've also reached out certainly with open power, for example, to folks. You mentioned Invidia. But having that, sort of embracing that alternative processor mode as opposed to trying to jam everything in the die. Different philosophy that IBM is taking. 
>> Yeah we think that the workload specific processing is still very much in demand. Workloads are going to have different dimensions and that's what we really have focused on here. I don't think that this has really changed over the last decades of computing and so we're really focused on specialized computing purpose-built computing, if you will. Obviously using that on premise and also using that in our hybrid cloud strategies for clients that want to do that as well. >> What are some of the other cool things that you guys are working on that you can talk about. >> Well I would say last year was quite an interesting year in that from a mainframe perspective we delivered our first 19 inch form factor which allows us to fit nicely on a floor tile. Obviously allows clients to scale more effectively from a data center planning perspective. Allows us to have a cloud footprint, but with all the characteristics of security that you would normally expect in a mainframe system. But really tailored toward new workloads once again. So Linux form factor and going after the new workloads that a lot of these cloud data centers really need. One of our first and foremost focus areas continues to be security around that system and tomorrow there will be some announcements that will happen around Z security. I can't say what they are right now but you'll see that we are extending security in new ways to support more of these hybrid cloud scenarios. >> It's so funny. We were talking in one of our earlier segments talking about how the path of virtualization and trying to get lots of workloads into something and goes back to the device that could manage all workloads which was the Mainframe. So we've watched for many years system Z lots of Linux on there if you want to do some cool container, you know global Z that's an option, so it's interesting to watch while the pendulum swings in IT have happened the Z system has kept up with a lot of these innovations that have been going on in the industry. >> And you're right, one of our big focuses for the platform for Z and power of course is a container-based strategy. So we've created, you know last year we talked about secure container technology and we continue to evolve secure container technology but the idea is we want to eliminate any kind of friction from a developer's perspective. So if you want to design in a container-based environment then you're more easily able to port that technology or your applications, if you will to a Z mainframe environment if that's really what your target environment is. So that's been a huge focus. The other of course major invention that we announced at the Consumer Electronics show is our Quantum System One. And this represented an evolution of our Quantum system over the last year where we now have the world's really first self-contained universal quantum computer in a single form factor where we were able to combine the Quantum processor which is living in the dilution refrigerator. You guys remember the beautiful chandelier from last year. I think it's back this year. But this is all self-contained with it's electronics in a single form factor. And that really represents the evolution of the electronics in particular over the last year where we were able to miniaturize those electronics and get them into this differentiated form factor. >> What should people know about Quantum? 
When you see the demos, they explain it's not a binary one or zero, it could be either, a virtually infinite set of possibilities, but what should the lay person know about Quantum and try to understand? >> Well I think really the fundamental aspect of it is in today's world with traditional computers they're very powerful but they cannot solve certain problems. So when you look at areas like material science, areas like chemistry even some financial trading scenarios, the problems can either not be solved at all or they cannot be completed in the right amount of time. Particularly in the world of financial services. But in the area of chemistry for instance molecular modeling. Today we can model simple molecules but we cannot model something even as complex as caffeine. We simply don't have the traditional compute capacity to do that. A quantum computer will allow us once it comes to maturity allow us to solve these problems that are not solvable today and you can think about all the things that we could do if were able to have more sophisticated molecular modeling. All the kinds of problems we could solve probably in the world of pharmacology, material science which affects many, many industries right? People that are developing automobiles, people that are exploring for oil. All kinds of opportunities here in this space. The technology is a little bit spooky, I guess, that's what Einstein said when he first solved some of this, right? But it really represents the state of the universe, right? How the universe behaves today. It really is happening around us but that's what quantum mechanics helps us capture and when combined with IT technology the quantum computer can bring this to life over time. >> So one of the things that people point to is potentially a new security paradigm because Quantum can flip the way in which we do security on it's head so you got to be thinking around that as well. I know security is something that is very important to IBM's Systems division. >> Right, absolutely. So the first thing that happens when someone hears about quantum computing is they ask about quantum security. And as you can imagine there's a lot of clients here that are concerned about security. So in IBM research we're also working on quantum-safe encryption. So you got one team working on a quantum computer, you got another team ensuring that the data will be protected from the quantum computer. So we do believe we can construct quantum-safe encryption algorithms based on lattice-based technology that will allow us to encrypt data today and in the future when the quantum computer does reach that kind of capacity the data will be protected. So the idea is that we would start using these new algorithms far earlier than the computer could actually achieve this result but it would mean that data created today would be quantum safe in the future. >> You're kind of in your own arm's race internally. >> But it's very important. Both aspects are very important. To be able to solve these problems that we can't solve today, which is really amazing, right? And to also be able to protect our data should it be used in inappropriate ways, right? >> Now we had Ed Bausch on earlier today. Used to run the storage division. What's going on in that world? I know you've got your hands in that pie as well. What can you tell us about what's going on there? 
>> Well I believe that Ed and the team have made some phenomenal innovations in the past year around flash NVMe technology and infusing that state-of-the-art across the product lines. The other area that I think is particularly interesting of course is their data management strategy around things like Spectrum Discover. So, today we all know that many of our clients have just huge amounts of data. I visited a client last year that, interestingly enough, had 1 million tapes, and of course we sell tapes so that's a good thing, but then how do you deal with and manage all the data that is on 1 million tapes? So one of the inventions that the team has worked on is a metadata tagging capability that they've now shipped in a product called Spectrum Discover. And that allows a client to have a better way to have a profile of their data and data governance, and to understand, for different use cases like data governance or compliance, how they pull back the right data and what this data really means to them. So they have a better lexicon of their data, if you will, than what they can do in today's world. So I think that's very important technology.
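A rough sketch of the metadata-tagging idea Jamie describes, and not Spectrum Discover's actual API: keep a small, searchable catalog of tags per object so the right data can be found without rescanning a tape- or petabyte-scale archive. The paths and tag names are made up for illustration.

```python
# Hypothetical illustration of a metadata catalog: tags per object, queried
# without touching the underlying tape or object storage.
from collections import defaultdict

catalog = defaultdict(dict)   # object path -> {tag: value}

def tag(path, **tags):
    """Attach or update metadata tags for one stored object."""
    catalog[path].update(tags)

def find(**criteria):
    """Return every object whose tags match all of the given criteria."""
    return [p for p, t in catalog.items()
            if all(t.get(k) == v for k, v in criteria.items())]

tag("tape://archive/genomics/run-0425.bam", project="genomics", retention="7y", pii=False)
tag("s3://lake/claims/2018/q3.parquet", project="claims", retention="10y", pii=True)

print(find(pii=True, retention="10y"))   # -> ['s3://lake/claims/2018/q3.parquet']
```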
Bringing the skills to bear that each of us have on this case with them having the quantum physics experts and us having the electronics experts and of course the software stacks spanning both organizations is really a great partnership. >> Is there anything you could tell us about what's going on at the edge. The edge computing you hear a lot about that today. IBM's got some activities going on there? You haven't made huge splashes there but anything going on in research that you can share with us, or any directions. >> Well I believe the edge is going to be a practical endeavor for us and what I mean by that is there are certain use cases that I think we can serve very well. So if we look at the edge as perhaps a factory environment, we are seeing opportunities for our storaging compute solutions around the data management out in some of these areas. If you look at the self-driving automobile for instance, just design something like that can easily take over a hundred petabytes of data. So being able to manage the data at the edge, being able to then to provide insight appropriately using AI technologies is something we think we can do and we see that. I own factories based on what I do and I'm starting to use AI technology. I use Power AI technology in my factories for visual inspection. Think about a lot of the challenges around provenance of parts as well as making sure that they're finally put together in the right way. Using these kind of technologies in factories is just really an easy use case that we can see. And so what we anticipate is we will work with the other parts of IBM that are focused on edge as well and understand which areas we think our technology can best serve. >> That's interesting you mention visual inspection. That's an analog use case which now you're transforming into digital. >> Yeah well Power AI vision has been very successful in the last year . So we had this power AI package of open source software that we pulled together but we drastically simplified the use of this software, if you will the ability to use it deploy it and we've added vision capability to it in the last year. And there's many use cases for this vision capability. If you think about even the case where you have a patient that is in an MRI. If you're able to decrease the amount of time they stay in the MRI in some cases by less fidelity of the picture but then you've got to be able to interpret it. So this kind of AI and then extensions of AI to vision is really important. Another example for Power AI vision is we're actually seeing use cases in advertising so the use case of maybe you're at a sporting event or even a busy place like this where you're able to use visual inspection techniques to understand the use of certain products. In the case of a sporting event it's how many times did my logo show up in this sporting event, right? Particularly our favorite one is Formula One which we usually feature the Formula One folks here a little bit at the events. So you can see how that kind of technology can be used to help advertisers understand the benefits in these cases. >> Got it. Well Jamie we always love having you on because you have visibility into so many different areas. Really thank you for coming and sharing a little taste of what's to come. Appreciate it. >> Well thank you. It's always good to see you and I know it will be an exciting week here. >> Yeah, we're very excited. Day zero here, day one and we're kicking off four days of coverage with theCube. Jamie Thomas of IBM. 
I'm Dave Vellante, he's Stu Miniman. We'll be right back right after this short break from IBM Think in Moscone. (upbeat music)

Published Date : Feb 12 2019


Day Three AWS re:Invent 2018 Analysis | AWS re:Invent 2018


 

>> Live from Las Vegas, it's theCUBE covering AWS re:Invent 2018, brought to you by Amazon Web Services, Intel and their ecosystem partners. >> Okay, welcome back everyone. Day three, we're live in Las Vegas for AWS re:Invent 2018. It's our sixth year covering Amazon re:Invent and AWS, Amazon Web Services' meteoric rise in value, profitability, market share, just a rising tide floating all boats. I'm here with Dave Vellante. We're kicking off day three analyzing, you know, Werner's keynote. Things start to wind down. Yesterday was kind of the big day with Andy Jassy. Dave, after yesterday it's pretty clear that there's a couple big mega trends that people are talking about. One, AWS Outposts, okay, that is going to be a one year conversation about what that means, what the implications are. I mean basically if you're a Cloud-native company you order a data center and Amazon Prime will deliver it in two days, why would anyone want to buy hardware again from HPE or other companies? This is a huge risk, huge challenge, a huge shot across the bow to the industry, because this is essentially the thing. This is essentially Cloud in a box. Put it in, plug it in, we'll service it, turn it on and it works and developers can just do their thing, that's amazing. So I think that's going to be a very hotly-contested topic throughout, at least one year until they ship that, and all the posturing and jockeying's going to go on there. And then the other thing that was interesting was there was a lot of coolness, the F1 racing car with analytics. You had Lockheed Martin with space satellite provisioning, that was pretty cool. And, you know, you got robots and IoT. That's cool, you got space, you got robots, you got, you know, sports cars all using analytics, all using AI, all using large-scale compute, storage and networking, very elastic, all with all kinds of new tools and reference engines, but Jerry Chen laid it out from Greylock yesterday around the strategy. Amazon drives the cost down on the infrastructure side and brings the API concept up to AI and brings the marketplace together. So, a lot of action. Today we're going to see the impact and the fallout of that. What are your thoughts? >> Well, first of all John, there's so much to talk about. I want to say, so Werner Vogels this morning gave the keynote. When, when I first joined, you know, this industry, IBM was everything, IBM was the dominant player. So we used to pore over IBM system and technology guides, and IBM white papers, because they set the technical standard for the industry and they shared that knowledge obviously with their customers to inspire them to buy more stuff, but they were giving back to the community as well to help people understand architectures and core computer science. Listening to Werner Vogels today, Amazon is now the beacon of technology in the industry. He went through the worst day of his life, which was December 12, 2004, when their Oracle Database went down for 12 hours because of a bug in the code and because they were pushing it beyond its limits. And so he described how they solved that problem over a multi-year effort and really got heavy into the technology of database, and recovery, and it was actually quite fascinating. But my takeaway was Amazon is now the company that is setting the technical direction of the industry for the next wave of Cloud-based applications. So that was actually really fascinating.
We heard similar things on S3 and S3 recovery, even though they're still using some Oracle stuff it was really, really fascinating to see and very, very impressive. So that's one. As you say, there's so much to talk about. The IoT pieces, John, I really like what Amazon is doing with IoT. They're coming at it from a bottoms up approach, what do I mean by that? Do you remember when mobile first came out Microsoft basically said, hey we're going to put Windows on a phone, top down. We're going to take our existing IT Desktop standards, we're going to push 'em down to mobile, didn't work. And I see a lot of IT companies trying to do that with IoT today, not Amazon. Amazon's saying, look we're going to go bottoms up and serve the operation's technology people with a software development platform that's secure, that allows it, that's fully managed and allows them to build applications for IoT. I think it's the right approach. >> I think the other thing that's coming out is a Tweet here from Bobby Allen who we know from theCUBE days. I, you know, when we, I shared a Tweet about, you know, the future of the converged infrastructure on the outpost he says, software should be where tech companies differentiation value lies. This is back to our beating of the drum about software, software, software, you know. Andy Bechtolsheim, the Rembrandt of motherboards, Pat Gelsinger calls him, said, he's the founder of Arista, hardware's easy, software's hard. Software's where the action is. What Amazon's doing is essentially pushing large-scale platform capabilities and trying to make that as cheap and affordable as possible, the range of services, while creating a new shim layer around API concepts and microservices up the stack to enable people to write software faster, more compelling, more meaningful, and to iterate, and this is resonating with customers, Dave, because if I'm a business I got to write software, okay. I don't want to be in the running data center business because the data center powers the business. So the end doesn't justify the means in that regard. You say, hey, I need a data center to power my top-line revenue which is either going to be software-based or some sort of Edge network scenario, or even a human interface wearable or whatever. Software is the key. So if Amazon can continue to push the cost structure the lock-in spec is locked in because the better value so if it's going to be 80% less cost, and you call that a lock-in spec? A lot of lock-in spec, it's not like a technical lock-in spec, that's just called value. >> I'm locked in to Google Search. I mean, you know, I don't know what to tell ya. I'm not going to use any alternative search I'm just familiar with it, I like it, it's better. >> But software's the key, your thoughts. >> Okay, so, my thoughts on lock-in are, lock-in is one of the most overstated concepts in the business. I'm not saying that lock-in doesn't happen, it does happen, it happens everywhere. It happens across open-source. You do open-source you're locked-in to your developers. I've done research on this John and my research shows that 15% of the buyers really make primary decisions based on whether or not they're going to be locked-in. 85% look at the business value and they trade that off against lock-in so, you know, yeah, buyer beware, blah, blah, blah, but I think it's just really overstated. Yes, it's a Cloud, mother of all lock-ins, but what's the value that you get out of it? Speaking about another lock-in. 
I want to talk about Intel a little bit because the press has been like chirping about, about Intel and alternative processors, and the arm-based stuff that Amazon is doing. >> Well hold on, let's just set the table on this conversation. Intel announced a series of proprietary processors, their own silicon, you know the-- >> Amazon you mean. >> I mean Amazon, yeah, proprietary processors that are specific to certain workloads, inference engines, and other things around network-- >> Based on the Annapurna acquisition of 2015, a small Israeli-based company that they acquired. >> Yeah, so the press, I've been sharing on, oh, chips must be confronting Intel, your thoughts. >> Yeah, so here's the deal. Look it, Intel is massive and they do a huge amount of business with the Cloud players. Now, here's the thing about Intel, it's really, I've observed Intel for decades. Intel wants a level playing field amongst its customer base and so it wants a lot of different Cloud suppliers even though there's three, four, five, you know, worldwide, there's, there's many dozens, and hundreds of Cloud players out there. Intel wants to support them all. They're an arms dealer, right? They love all their customers and so, so what they do is they sprinkle around the innovation in the industry, they try to open up their architecture such that people can, you know, write software to their architecture and they try to support all their customers. We see it at all the shows. You see it at Lenovo, you see it at Dell, you see it here at AWS, you see at Google, Intel is everywhere and they are by far the biggest supplier. Now, Amazon, of course, has to have alternatives, right? They care about data-center power, you know, they do buy some stuff AMD, why not, why wouldn't you second-source some of this stuff? They do a lot of work with Invidia, ARM has its place, and so, but it's a rounding error in the grand scheme in the market. Now why people get excited is they say, okay, ARM now has a foot in the door, oh, Intel's in trouble. Intel obviously still a dominant player. I think it's, you know-- >> Is Intel in trouble? >> The press likes to glom onto that. Intel's like the dominant player in the microprocessor business and it has to move, and it has to move fast. I would not say Intel is in trouble, I'd say it continues to be the dominant player in the data center. It's got opportunities for alternative processors like Invidia. Intel strategy is to put as much function on the DI as possible and to grab that function, it's always the way it's behaved. You see people like Invidia trying to create opportunities, and doing a very good job of it, and so, there's white space there. It's competition, we love competition, right? >> Here's my, here's my. >> Intel needs some competition frankly. >> Here's my take. One, Intel pays, Amazon pays Intel a lot of money. >> Huge amount of money. >> So it's not like Intel's hurting, Intel's not in trouble. Here's why Intel's not in trouble. One, the Cloud service provider business that Raejeanne runs, she was on theCube yesterday, is growing significantly. A new total adjustable market, they call TAM expansion, is happening. >> So, you know, if you're looking at microprocessors it's not a one, or few, suppliers, it's a total TAM expansion and of course with that expansion of the market Intel's going to take a big chunk of the shares, so they are not in trouble, Amazon pays them a lot of money, they're a big-time supplier to AWS, check. 
Two, Intel is on a cadence on processor design that spans years. And Raejeanne and other Intel executives have spoken to us off the record, and here on theCUBE that hey, you know, sometimes there's use cases where they're not responding fast enough that are outside they're operating cycles, but as Raejeanne said, Amazon makes them get better, okay? So, they have to manage that, but there's no way Intel's in trouble. I think the press are using this as, to create link bait, for news that is sensational. But, yeah, I mean, on the surface you go, oh, chip, Intel, oh that's Intel's business, it must be bad for Intel. So, yeah, Amazon made their own processor. They got some specific things they want to build specialized processors for like GP Alternative, or inference engines that are tied to the stack, why wouldn't they? Why wouldn't they? >> What Intel will do, what Intel will do is they'll learn from that and they'll respond with functionality for maybe others, or maybe they'll earn Amazon's business, we'll see, but yield to your point, you know, Intel's exposure to the desktop and the laptop, a lot of people wrote about that, that Intel is, the (mumbles) entry is Intel's business, they're so huge, the cost of doing what they do, Intel's such a strategic supplier to so many companies and as we talked to Raejeanne about yesterday the Cloud has completely changed that dynamic and actually brought more suppliers. The data center consolidation that you've seen has been offset by the Cloud explosions, that's a good trend for Intel. And of course the mobile dynamic, you know more about that then I do, but, everybody said mobile's going to kill Intel, it obviously didn't happen. >> Look it, Intel, Intel's smart, they've been around, they're going to not miss the ball. They got a big team that services a lot of these big players. >> Are they still paranoid in your opinion? >> I think they are. >> I do too. >> I do too, I mean I, look it, Intel is, have a cadence of Moore's law. They have a execution style that's somewhat similar to AWS, they've very strict about how they execute and they have a great execution engine. So I would bet the farm that Intel's talking to Amazon and saying, what do you need for us to be better? And if Amazon does what they do best, which is tell them what they need, Intel will deliver. So I'm kind of not worried about Intel on that front. I think in the short term maybe this processor doesn't fit for that, but, that's why GPUs became popular, floating point was a unique thing that CPUs didn't do well on so a GPU comes out, there it is. And we're going to see processors like data-processing units, Pradeep Sindu, former founder of Juniper's, got a venture called, Fungible, that's building a data-processing unit. It's a dedicated chip to serve analytic workloads. These are specialized silicon chips that are going to come on faster, and, to the marketplace. So , just because there's more chips doesn't mean Intel dies 'cause if the TAM expands it's a, it's an overall bigger market so their share might not be as dominant on a smaller market, but it's-- >> You know, I got a, I got to come back to your John Chambers interview. I've watched it a couple times now and I would recommend people would go to, thecube.net, and see John Furrier's interview with John Chambers. The great companies of this industry have survived, you know, I talked about paranoia, Andy Grove, they've survived because they were not dogmatic about the past. 
So for the past several decades this industry has marched to the cadence of Moore's law and that was obviously very favorable to Intel. Well, that's changing, and it's changed, the innovation engine now, you've called it the innovation sandwich, which is data, machine intelligence applied to that data, in the scale of Cloud. So Intel has to pivot to that to take advantage of that and that's exactly what they're doing. So the great companies of the future, the Microsoft's, the Intel's, the AWS's, they survive because they can evolve. It's the Wang's that didn't, they denied, it was the PC-- >> They were entitled. >> The digital, right. They thought they were entitled and the point that John Chambers made is there's no entitlement and he kept referring to Boston 128, it used to be the Silicon Valley. And the leading executives today, of companies like, like Cisco, like Intel, like Microsoft, can see a vision to the future and they change when they have to change. >> So companies that are entitled, who are they (chuckling)? >> Wow, that' a really-- >> Is Oracle entitled? >> A good question. >> HPE, Dell? >> I think Oracle absolutely acts as though they're entitled and they're bunkering down into their red stack. Now, you know I've often said, don't bet against Larry Ellison, and I wouldn't make that bet against Larry Ellison, but his TAM is confined to Oracle customers. He's not currently going after, non-Oracle customers in my opinion at least not with a strategy that's obvious to me. And I think that's part of the reason why Thomas Kurian left the company is I think they had a battle about that, at least that's what my sources tell me. I haven't talked to him directly, I actually don't know him, but I know people who know him and have worked with him. HPE, I think HPE is more confused as to what the next step is. When they split the company apart they kind of gave up on software, they gave up on an integrated-supply chain. Mike Odell took the other approach, and thanks to VMware he's got a wining strategy. So, I think today's leading executives realize that they have to change. Look at Ginni Rometty, remember IBM was in trouble in my opinion because Watson failed, and their Cloud strategy essentially failed. So they just made a 34 billion dollar acquisition, a Red Hat, which is a bold move. And that, again, demonstrated a company who said, okay, hey it's not working, we have to pivot and we have to invest and go forward. >> Alright Dave, great kickoff day three. Andy Jassy coming up at the end of the day and he's going to do his annual, kind of, end of the last day roundup on theCUBE, kind of lean back, talk about what's going on and how he feels from the quotes, what people missed, what people got, and do a full review of re:Invent 2018. Day three kicks off here, CUBE, two sets on the floor gettin' all the content. We already have over a hundred videos. We'll have 500 total video assets, go to siliconangle.com and check out the blog there. A lot of stories flowing, a lot of flow, a lot of demand for the content. Stay with us for more after this short break.

Published Date : Nov 30 2018

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Microsoft | ORGANIZATION | 0.99+
Raejeanne | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Andy Bechtolsheim | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Mike Odell | PERSON | 0.99+
Jerry Chen | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Pat Gelsinger | PERSON | 0.99+
December 12, 2004 | DATE | 0.99+
Andy Jassy | PERSON | 0.99+
Bobby Allen | PERSON | 0.99+
Dave | PERSON | 0.99+
Andy Grove | PERSON | 0.99+
John | PERSON | 0.99+
12 hours | QUANTITY | 0.99+
2015 | DATE | 0.99+
AWS | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
Thomas Kurian | PERSON | 0.99+
Larry Ellison | PERSON | 0.99+
80% | QUANTITY | 0.99+
one year | QUANTITY | 0.99+
Pradeep Sindu | PERSON | 0.99+
Intel | ORGANIZATION | 0.99+
John Chambers | PERSON | 0.99+
Invidia | ORGANIZATION | 0.99+
hundreds | QUANTITY | 0.99+
Lenovo | ORGANIZATION | 0.99+
15% | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
AMD | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Las Vegas | LOCATION | 0.99+
Ginni Rometty | PERSON | 0.99+
three | QUANTITY | 0.99+
four | QUANTITY | 0.99+
five | QUANTITY | 0.99+
Red Hat | ORGANIZATION | 0.99+
Silicon Valley | LOCATION | 0.99+
Vernon | PERSON | 0.99+
Juniper | ORGANIZATION | 0.99+
Today | DATE | 0.99+
two sets | QUANTITY | 0.99+
ARM | ORGANIZATION | 0.99+
sixth year | QUANTITY | 0.99+
yesterday | DATE | 0.99+

Derek Kerton, Autotech Council | Autotech Council 2018


 

>> Announcer: From Milpitas, California, at the edge of Silicon Valley, it's The Cube. Covering autonomous vehicles. Brought to you by Western Digital. >> Hey, welcome back everybody, Jeff Frick here with the Cube. We're at Western Digital in Milpitas, California at the Auto Tech Council, Autonomous vehicle meetup, get-together, I'm exactly sure. There's 300 people, they get together every year around a lot of topics. Today is all about autonomous vehicles, and really, this whole ecosystem of startups and large companies trying to solve, as I was just corrected, not the thousands of problems but the millions and billions of problems that are going to have to be solved to really get autonomous vehicles to their ultimate destination, which is, what we're all hoping for, is just going to save a lot of lives, and that's really serious business. We're excited to have the guy that's kind of running the whole thing, Derek Curtain. He's the chairman of the Auto Tech Council. Derek, saw you last year, great to be back, thanks for having us. >> Well, thanks for having me back here to chat. >> So, what's really changed in the last year, kind of contextually, since we were here before? I think last year it was just about, like, mapping for autonomous vehicles. >> Yes. >> Which is an amazing little subset. >> There's been a tremendous amount of change in one year. One thing I can say right off the top that's critically important is, we've had fatalities. And that really shifts the conversation and refocuses everybody on the issue of safety. So, there's real vehicles out there driving real miles and we've had some problems crop up that the industry now has to re-double down in their efforts and really focus on stopping those, and reducing those. What's been really amazing about those fatalities is, everybody in the industry anticipated, 'oh' when somebody dies from these cars, there's going to be the governments, the people, there's going to be a backlash with pitchforks, and they'll throw the breaks on the whole effort. And so we're kind of hoping nobody goes out there and trips up to mess it up for the whole industry because we believe, as a whole, this'll actually bring safety to the market. But a few missteps can create a backlash. What's surprising is, we've had those fatalities, there's absolutely some issues revealed there that are critically important to address. But the backlash hasn't happened, so that's been a very interesting social aspect for the industry to try and digest and say, 'wow, we're pretty lucky.' and 'Why did that happen?' and 'Great!' to a certain extent. >> And, obviously, horrible for the poor people that passed away, but a little bit of a silver lining is that these are giant data collection machines. And so the ability to go back after the fact, to do a postmortem, you know, we've all seen the video of the poor gal going across the street in the dark and they got the data off the one, 101 87. So luckily, you know, we can learn from it, we can see what happened and try to move forward. >> Yeah, it is, obviously, a learning moment, which is absolutely not worth the price we pay. So, essentially, these learning moments have to happen without the human fatalities and the human cost. They have to happen in software and simulations in a variety of ways that don't put people in the public at risk. People outside the vehicle, who haven't even chosen to adopt those risks. So it's a terrible cost and one too high to pay. And that's the sad reality of the whole situation. 
On the other hand, if you want to say silver lining, well, there is no fatalities in a silver lining but the upside about a fatality in the self-driving world is that in the human world we're used to, when somebody crashes a car they learn a valuable lesson, and maybe the people around them learned a valuable lesson. 'I'm going to be more careful, I'm not going to have that drink.' When an autonomous car gets involved in any kind of an accident, a tremendous number of cars learn the lesson. So it's a fleet learning and that lesson is not just shared among one car, it might be all Teslas or all Ubers. But something this serious and this magnitude, those lessons are shared throughout the industry. And so this extremely terrible event is something that actually will drive an improvement in performance throughout the industry. >> That's a really good, that's a super good point. Because it is not a good thing. But again, it's nice that we can at least see the video, we could call kind of make our judgment, we could see what the real conditions were, and it was a tough situation. What's striking to me, and it came up in one of the other keynotes is, on one hand is this whole trust issue of autonomous vehicles and Uber's a great example. Would you trust an autonomous vehicle? Or will you trust some guy you don't know to drive your daughter to the prom? I mean, it's a really interesting question. But now we're seeing, at least in the Tesla cases that have been highlighted, people are all in. They got a 100% trust. >> A little too much trust. >> They think level five, we're not even close to level five and they're reading or, you know, doing all sorts of interesting things in the car rather than using it as a driver assist technology. >> What you see there is that there's a wide range of customers, a wide range users and some of them are cautious, some of them will avoid the technology completely and some of them will abuse it and be over confident in the technology. In the case of Tesla, they've been able to point out in almost every one of their accidents where their autopilot is involved, they've been able to go through the logs and they've been able to exonerate themselves and say, 'listen, this was customer misbehavior. Not our problem. This was customer misbehavior.' And I'm a big fan, so I go, 'great!' They're right. But the problem is after a certain point, it doesn't matter who's fault it is if your tool can be used in a bad way that causes fatalities to the person in the car and, once again, to people outside the car who are innocent bystanders in this, if your car is a tool in that, you have reconsider the design of that tool and you have to reconsider how you can make this idiot proof or fail safe. And whether you can exonerate yourself by saying, 'the driver was doing something bad, the pedestrian was doing something bad,' is largely irrelevant. People should be able to make mistakes and the systems need to correct those mistakes. >> But, not to make excuses, but it's just ridiculous that people think they're driving a level five car. It's like, oh my goodness! Really. >> Yeah when growing up there was that story or the joke of somebody that had cruise control in the R.V. so they went in the back to fry up some bacon. And it was a running joke when I was a kid but you see now that people with level two autonomous cars are kind of taking that joke a little too far and making it real and we're not ready for that. >> They're not ready. 
One thing that did strike that is here today that Patty talked about, Patty Rob from Intel, is just with the lane detection and the forward-looking, what's the technical term? >> There's forward-looking radar for braking. >> For braking, the forward-looking radar. And the crazy high positive impact on fatalities just those two technologies are having today. >> Yeah and you see the Insurance Institute for Highway Safety and the entire insurance industry, is willing to lower your rates if you have some of these technologies built into your car because these forward-looking radars and lidars that are able to apply brakes in emergency situations, not only can they completely avoid an accident and save the insurer a lot of money and the driver's life and limb, but even if they don't prevent the accident, if they apply a brake where a human driver might not have or they put the break on one second before you, it could have a tremendous affect on the velocity of the impact and since the energy that's imparted in a collision is a function of the square of the velocity, if you have a small reduction of velocity, you could have a measurable impact on the energy that's delivered in that collision. And so just making it a little slower can really deliver a lot of safety improvements. >> Right, so want to give you a chance to give a little plug in terms of, kind of, what the Auto Tech Council does. 'Cause I think what's great with the automotive industry right, is clearly, you know, is born in the U.S. and in Detroit and obviously Japan and Europe those are big automotive presences. But there's so much innovation here and we're seeing them all set up these kind of innovation centers here in the Bay area, where there's Volkswagen or Ford and the list goes on and on. How is the, kind of, your mission of bringing those two worlds together? Working, what are some of the big hurdles you still have to go over? Any surprises, either positive or negative as this race towards autonomous vehicles seems to be just rolling down the track? >> Yeah, I think, you know, Silicone Valley historically a source of great innovation for technologies. And what's happened is that the technologies that Silicone Valley is famous for inventing, cloud-based technology and network technology, processing, artificial intelligence, which is machine learning, this all Silicone Valley stuff. Not to say that it isn't done anywhere else in the world, but we're really strong in it. And, historically, those may not have been important to a car maker in Detroit. And say, 'well that's great, but we had to worry about our transmission, and make these ratios better. And it's a softer transmission shift is what we're working on right now.' Well that era is still with us but they've layered on this extremely important software-based and technology-based innovation that now is extremely important. The car makers are looking at self-driving technologies, you know, the evolution of aid as technologies as extremely disruptive to their world. They're going to need to adopt like other competitors will. It'll shift the way people buy cars, the number of cars they buy and the way those cars are used. So they don't want to be laggards. No car maker in the world wants to come late to that party. So they want to either be extremely fast followers or be the leaders in this space. So to that they feel like well, 'we need to get a shoulder to shoulder with a lot of these innovation companies. 
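Derek's point that collision energy scales with the square of velocity is easy to make concrete. The short sketch below is a rough illustration with assumed numbers (a 1,500 kg car and a 3 m/s reduction in impact speed), not figures from the interview.

    # Illustrative only: kinetic energy grows with the square of speed,
    # so shaving even a little speed before impact removes a lot of energy.
    def kinetic_energy_joules(mass_kg, speed_ms):
        return 0.5 * mass_kg * speed_ms ** 2

    car_mass = 1500.0                                # kg, assumed typical sedan
    before = kinetic_energy_joules(car_mass, 20.0)   # impact at 20 m/s (~72 km/h)
    after = kinetic_energy_joules(car_mass, 17.0)    # braking shaved off 3 m/s

    print(f"Energy at 20 m/s: {before:,.0f} J")      # 300,000 J
    print(f"Energy at 17 m/s: {after:,.0f} J")       # 216,750 J
    print(f"Reduction: {100 * (before - after) / before:.0f}%")  # about 28%

A 15 percent cut in speed removes roughly 28 percent of the impact energy in this example, which is the effect Derek is describing.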
Some of them are pre-existing, so you mentioned Patti Smith from Intel. Okay we want to get side by side with Intel who's based here in Silicone Valley. The ones that are just startups, you know? Outside I see a car right now from a company called Iris, they make driver monitoring software that monitors the state of the driver. This stuff's pretty important if your car is trading off control between the automated system and the driver, you need to know what the driver's state is. So that's startup is here in Silicone Valley, they want to be side by side and interacting with startups like that all the time. So as a result, the car companies, as you said, set up here in Silicone Valley. And we've basically formed a club around them and said, 'listen, that's great! We're going to be a club where the innovators can come and show their stuff and the car makers can come and kind of shop those wares. >> It's such crazy times because the innovation is on so many axis for this thing. Somebody used in the keynote care, or Case. So they're connected, they're autonomous, so the operation of them is changing, the ownership now, they're all shared, that's all changing. And then the propulsion in the motors are all going to electric and hybrid, that's all changing. So all of those factors are kind of flipping at the same time. >> Yeah, we just had a panel today and the subject was the changes in supply chain that Case is essentially going to bring. We said autonomy but electrification is a big part of that as well. And we have these historic supply chains that have been very, you know, everyone's going as far GM now, so GM will have these premier suppliers that give them their parts. Brake stores, motors that drive up and down the windows and stuff, and engine parts and such. And they stick year after year with the same suppliers 'cause they have good relationships and reliability and they meet their standards, their factories are co-located in the right places. But because of this Case notion and these new kinds of cars, new range of suppliers are coming into play. So that's great, we have suppliers for our piston rods, for example. Hey, they built a factory outside Detroit and in Lancing real near where we are. But we don't want piston rods anymore we want electric motors. We need rare earth magnets to put in our electric motors and that's a whole new range of suppliers. That supply either motors or the rare earth magnets or different kind of, you know, a switch that can transmit right amperage from your battery to your motor. So new suppliers but one of the things that panel turned up that was really interesting is, specifically, was, it's not just suppliers in these kind of brick and mortar, or mechanical spaces that car makers usually had. It's increasing the partners and suppliers in the technology space. So cloud, we need a cloud vendor or we got to build the cloud data center ourselves. We need a processing partner to sell us powerful processors. We can't use these small dedicated chips anymore, we need to have a central computer. So you see companies like Invidia and Intel going, 'oh, that's an opportunity for us we're keen to provide.' >> Right, exciting times. It looks like you're in the right place at the right time. >> It is exciting. >> Alright Derek, we got to leave it there. Congratulations, again, on another event and inserting yourself in a very disruptive and opportunistic filled industry. >> Yup, thanks a lot. 
>> He's Derek, I'm Jeff, you're watching The Cube from Western Digital Auto Tech Council event in Milpitas, California. Thanks for watching and see you next time. (electronic music)

Published Date : Apr 14 2018

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Derek | PERSON | 0.99+
Ford | ORGANIZATION | 0.99+
Jeff | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Western Digital | ORGANIZATION | 0.99+
Volkswagen | ORGANIZATION | 0.99+
Uber | ORGANIZATION | 0.99+
Derek Curtain | PERSON | 0.99+
Japan | LOCATION | 0.99+
Derek Kerton | PERSON | 0.99+
Invidia | ORGANIZATION | 0.99+
Detroit | LOCATION | 0.99+
Patty | PERSON | 0.99+
Europe | LOCATION | 0.99+
U.S. | LOCATION | 0.99+
Auto Tech Council | ORGANIZATION | 0.99+
last year | DATE | 0.99+
Insurance Institute for Highway Safety | ORGANIZATION | 0.99+
Patti Smith | PERSON | 0.99+
100% | QUANTITY | 0.99+
millions | QUANTITY | 0.99+
Tesla | ORGANIZATION | 0.99+
Silicone Valley | LOCATION | 0.99+
Silicon Valley | LOCATION | 0.99+
Intel | ORGANIZATION | 0.99+
Patty Rob | PERSON | 0.99+
Autotech Council | ORGANIZATION | 0.99+
GM | ORGANIZATION | 0.99+
one car | QUANTITY | 0.99+
300 people | QUANTITY | 0.99+
Milpitas, California | LOCATION | 0.99+
one second | QUANTITY | 0.99+
two technologies | QUANTITY | 0.99+
today | DATE | 0.99+
101 87 | OTHER | 0.99+
The Cube | TITLE | 0.99+
Lancing | LOCATION | 0.98+
Today | DATE | 0.98+
Iris | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
two worlds | QUANTITY | 0.98+
one year | QUANTITY | 0.98+
Ubers | ORGANIZATION | 0.97+
Teslas | ORGANIZATION | 0.96+
thousands of problems | QUANTITY | 0.94+
One thing | QUANTITY | 0.93+
level five | QUANTITY | 0.93+
Bay | LOCATION | 0.93+
billions of problems | QUANTITY | 0.91+
Case | ORGANIZATION | 0.82+
Autotech Council 2018 | EVENT | 0.82+
level two | QUANTITY | 0.79+
Cube | ORGANIZATION | 0.77+
Digital Auto Tech Council | EVENT | 0.74+
level five car | QUANTITY | 0.65+
every one | QUANTITY | 0.59+
Western | ORGANIZATION | 0.53+
things | QUANTITY | 0.51+
Cube | COMMERCIAL_ITEM | 0.34+

Cat Graves & Natalia Vassilieva, HPE | HPE Discover Madrid 2017


 

>> (Narrator) Live from Madrid, Spain. It's The Cube covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> We're back at HPE Discover Madrid 2017. This is The Cube, the leader in live tech coverage. My name is Dave Vellante and I'm with my co-host for the week, Peter Burris. Cat Graves is here, she's a research scientist at Hewlett Packard Enterprise. And she's joined by Natalia Vassilieva, Cube alum, senior research manager at HPE. Both are with the labs in Palo Alto. Thanks so much for coming on The Cube. >> Thank you for having us. >> You're welcome. So for decades this industry has marched to the cadence of Moore's Law, bowed down to Moore's Law, been subservient to Moore's Law. But that's changing, isn't it? >> Absolutely. >> What's going on? >> Moore's Law is certainly changing. We can't increase the number of cores on the same chip in the same space anymore; we can't increase the density of compute today. And from the software perspective, we need to analyze more and more data. We are now marching into the era of artificial intelligence, where we need to train larger and larger models, and we need more and more compute for that. And the only possible way today to speed up the training of those models, to actually enable AI, is to scale out, because we can't put more cores on the chip. So we try to use more chips together, but then communication bottlenecks come in, so we can't efficiently use all of those chips. So for us on the software side, for the people who work on how to speed up the training, how to speed up the implementation of the algorithms and the execution of those algorithms, that's a problem. And that's where Cat can help us, because she's working on new hardware which will overcome those troubles. >> Yeah, so in our lab what we do is try and think of new ways of doing computation, but also doing the computations that really matter. You know, what are the bottlenecks for the applications that Natalia is working on that are really preventing the performance from accelerating, again exponentially like Moore's Law, right? We'd like to return to Moore's Law, where we're in that sort of exponential growth in terms of what compute is really capable of. And so what we're doing in labs is leveraging novel devices. So, you've probably heard of the memristor in the past. But instead of using the memristor for computer memory, non-volatile memory for persistent, memory-driven computer systems, we're using these devices instead for doing computation itself in the analog domain. So one of our first target applications, and the target core computation that we're going after, is matrix multiplication. And that is a fundamental mathematical building block for a lot of different machine learning, deep learning, signal processing, you kind of name it, it's pretty broad in terms of where it's used today. >> So Dr. Tom Bradicich was talking about the dot product, and it sounds like it's related. Matrix multiplications, suddenly I start breaking out in hives, but is that kind of related? >> That's exactly what it is. So, if you remember your linear algebra in college, a dot product is exactly a matrix multiplication; it's the dot between the vector and the matrix, between the two of them, so exactly right. Our hardware prototype is called the dot product engine. It's just cranking out those matrix multiplications. >> And can you explain how that addresses the problem that we're trying to solve with respect to Moore's Law? >> Yeah, let me.
You mentioned the problem with Moore's Law. For me as a software person, the end of Moore's Law is a bad thing, because I can't increase the compute power anymore on a single chip. But for Cat it's a good thing, because it forced her to think about what's unconventional. >> (Cat) It's an opportunity. >> It's an opportunity! It forced her to think about what unconventional devices she can come up with. And we also have to mention, and understand, that general purpose computing is not always the solution. Sometimes if you want to speed things up, you need to come up with a device which is designed specifically for the type of computation you care about. And for machine learning specifically, again as I've mentioned, these matrix-matrix multiplications and matrix-vector multiplications are the core of it. Today if you want to do those AI-type applications, you spend roughly 90% of the time doing exactly that computation. So if we can come up with a more power-efficient and more effective way of doing that, that will really help us, and that's what the dot product engine is solving. >> Yes, as an example, some of our colleagues did an architectural study, sort of taking the dot product engine as the core and then saying, okay, if I designed a computer architecture specifically for doing convolutional neural networks, so image classification, these kinds of applications, and I built this architecture, how would it perform? And how would it compare to GPUs? And we're seeing 10 to 100 X speed up over GPUs, and even a 15 X speed up over a custom-built, state of the art, specialized digital ASIC. Even comparing to the best that we can do today, we are seeing this potential for a huge amount of speed up and also energy savings as well. >> So follow up on that, if I may. So you're saying these alternative processors, like GPUs, FPGAs, custom ASICs, can I infer from that that they are a stop-gap architecturally, in your mind? Because you're seeing these alternative processors pop up all over the place. >> (Cat) Yes. >> Is that a fair assertion? >> I think that recent trends are obviously favoring a return to specialized hardware. >> (Dave) Yeah, for sure. Just look at Nvidia, it's exploding. >> I think it really depends on the application, and you have to look at what the requirements are. Especially in terms of where there's a lot of power limitations, right, GPUs have become a little bit tricky. So there's a lot of interest in the automotive industry, space, robotics, for more low-power but still very high-performance, highly efficient computation. >> Many years ago, when I was actually thinking about doing computer science, I realized pretty quickly that I didn't have the brain power to get there. But I remember thinking in terms of there being three ways of improving performance. You can do it architecturally: what do you do with an instruction? You can do it organizationally: how do you fit the various elements together? You can do it with technology: what's the clock speed, what's the underlying substrate? Moore's Law is focused on the technology. RISC, for example, focused on architecture. FPGAs, ARM processors, GPUs focus on architecture. What we're talking about, to get back to that doubling of performance every 18 months from a computing standpoint, not just a chip standpoint, is revealing and liberating, I presume, some of the organizational elements, ways of thinking about how to put these things together.
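As a minimal sketch of the matrix-vector multiplication both guests keep returning to, here is a rough illustration in plain Python. The weights and inputs are made-up values, and this is not HPE's dot product engine code, just the operation it is built to accelerate: a dense neural-network layer reduces to one dot product per output.

    # Minimal sketch: a dense layer is a matrix-vector multiply,
    # one dot product per row of the weight matrix.
    def matvec(weights, x):
        return [sum(w * xj for w, xj in zip(row, x)) for row in weights]

    weights = [[0.5, -0.25, 0.75],
               [1.0,  0.5, -0.5]]   # 2 outputs, 3 inputs (illustrative values)
    x = [2.0, 4.0, 8.0]

    print(matvec(weights, x))       # [6.0, 0.0]

Roughly speaking, an analog crossbar produces the same sums physically: the weights are stored as conductances, the input vector is applied as voltages, and the accumulated current on each output line is the dot product, which is why the multiply can happen in place instead of shuttling data to a processor.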
So even if we can't get the improvements that we've gotten out of technology, we can start getting more performance out of new architectures, by organizing how everything works together, and making it so that the software, or the developer, doesn't have to know everything about the organization. Am I kind of getting there with this? >> Yes, I think you are right. And if we are talking about some of the architectural challenges of today's processors, not only can't we increase the power of a single device today, but even if we could increase the power of a single device, then the challenge would be how do you bring the data fast enough to that device? So we will have problems with feeding that device. And again, what the dot product engine does is computations in memory, inside, so you limit the number of data transfers between different chips and you don't face the problem of feeding the computation. >> So: similar or same technology, different architecture, and using a new organization to take advantage of that architecture, the dot product engine being kind of that combination. >> I would say that even the technology is different. >> Yeah, my view of it is that we're actually thinking about it holistically. We have, in labs, software people working with architects. >> I mean, it's not just a clock speed issue. >> It's not just a clock speed issue. It's thinking about what computations actually matter, which ones you're actually doing, and how to perform them in different ways. And so one of the great things as well, with the dot product engine and these kinds of new computation accelerators, is that with something like the memory-driven computing architecture we now have an ecosystem that is really favoring accelerators and encouraging the development of these specialized hardware pieces that can kind of slot in to the same architecture and that can also scale in size. >> And you invoke that resource in an automated way, presumably. >> Yeah, exactly. >> What's the secret sauce behind that? Is that software that does that, or an algorithm that chooses the algorithm? >> Gen-Z. >> Gen-Z's underlying protocol is to make the device talk to the data. But in the end it's the system software, its algorithms, which will make a decision at every particular point about which compute device I should use to do a particular task. With memory-driven computing, if all my data sits in the shared pool of memory and I have different heterogeneous compute devices able to see that data and to talk to that data, then it's up to the system management software to allocate the execution of a particular task to the device which does it the best, in a more power-efficient way, in the fastest way, and everybody wins. >> So as a software person, you now, with memory-driven computing, have been thinking about developing software in a completely different way. Is that correct? >> (Natalia) Yeah. >> You're not thinking about going through the I/O stack anymore and waiting for a mechanical device and doing other things? >> It's not only the I/O stack. As I mentioned today, the only possibility for us to decrease the time of processing for the algorithms is to scale out. That means that I need to take into account the locality of the data. And it's not only when you distribute the computation across multiple nodes; even within a NUMA-based system we have different sockets in a single system, with local memory and memory which is remote to that socket but which is local to another socket.
Today as a software programmer, as a developer, I need to take into account where my data sits. Because I know in order to accept the data on a local memory it'll take me 100 seconds to accept my data. In the remote socket, it will take me longer. So when I developed the algorithm in order to prevent my computational course to stall and to wait for the data, I need to schedule that very carefully. With memory driven computing, giving an assumption that, again, all memory not only in the single pool, but it's also evenly accessible from every compute device. I don't need to care about that anymore. And you can't even imagine such a relief it is! (laughs) It makes our life so much easier. >> Yeah, because you're spending a lot of time previously trying to optimize your code >> Yes for that factor of the locality of the data. How much of your time was spent doing that menial task? >> Years! In the beginning of Moore's Law and the beginning of the traditional architectures, if you turn to the HPC applications, every HPC application device today needs to take care of data locality. >> And you hear about when a new GPU comes out or even just a slightly new generation. They have to take months to even redesign their algorithm to tune it to that specific hardware, right? And that's the same company, maybe even the same product sort of path lined. But just because that architecture has slightly changed changes exactly what Natalia is talking about. >> I'm interested in switching subjects here. I'd love to spend a minute on women in tech. How you guys got into this role. You're both obviously strong in math, computer backgrounds. But give us a little flavor of your background, Cat, and then, Natalia, you as well. >> Me or you? >> You start. >> Hm, I don't know. I was always interested in a lot of different things. I kind of wanted to study and do everything. And I got to the point in college where physics was something that still fascinated me. I felt like I didn't know nearly enough. I felt like there was still so much to learn and it was constantly challenging me. So I decided to pursue my Ph.D in that, and it's never boring, and you're always learning something new. Yeah, I don't know. >> Okay, and that led to a career in technology development. >> Yeah, and I actually did my Ph.D in kind of something that was pretty different. But towards the end of it, decided I really enjoyed research and was just always inspired by it. But I wanted to do that research on projects that I felt like might have more of an impact. And particularly an impact in my lifetime. My Ph.D work was kind of something that I knew would never actually be implemented in, maybe a couple hundred years or something we might get to that point. So there's not too many places, at least in my field in hardware, where you can be doing what feels like very cutting edge research, but be doing it in a place where you can see your ideas and your work be implemented. That's something that led me to labs. >> And Natalia, what's your passion? How did you arrive here? >> As a kid I always liked different math puzzles. I was into math and pretty soon it became obvious that I like solving those math problems much more than writing about anything. I think in middle school there was the first class on programming, I went right into that. 
And then the teacher told me that I should probably go to a specialized school and that led me to physics and mathematics lyceum and then mathematical department at the university so it was pretty straightforward for me since then. >> You're both obviously very comfortable in this role, extremely knowledgeable. You seem like great leaders. Why do you feel that more women don't pursue a career in technology. Do you have these discussions amongst yourselves? Is this something that you even think about? >> I think it starts very early. For me, both my parents are scientists, and so always had books around the house. Always was encouraged to think and pursue that path, and be curious. I think its something that happens at a very young age. And various academic institutions have done studies and shown when they do certain things, its surmountable. Carnegie Mellon has a very nice program for this, where they went for the percentage of women in their CS program went from 10% to 40% in five years. And there were a couple of strategies that they implemented. I'm not gonna get all of them, but one was peer to peer mentoring, when the freshmen came in, pairing them with a senior, feeling like you're not the only one doing what you're doing, or interested in what you're doing. It's like anything human, you want to feel like you belong and can relate to your group. So I think, yeah. (laughs) >> Let's have a last word. >> On that topic? >> Yeah sure, or any topic. But yes, I'm very interested in this topic because less than 20% of the tech business is women. Its 50W% of the population. >> I think for me its not the percentage which matters Just don't stay in the way of those who's interested in that. And give equal opportunities to everybody. And yes, the environment from the very childhood should be the proper one. >> Do you feel like the industry gives women equal opportunity? >> For me, my feeling would be yes. You also need to understand >> Because of your experience Because of my experience, but I also originally came from Russia, was born in St. Petersburg, and I do believe that ex-Soviet Union countries has much better history in that. Because the Soviet Union, we don't have man and woman. We have comrades. And after the Second World War, there was women who took all hard jobs. And we used to get moms at work. All moms of all my peers have been working. My mom was an engineer, my dad is an engineer. From that, there is no perception that the woman should stay at home, or the woman is taking care of kids. There is less of that. >> Interesting. So for me, yes. Now I think that industry going that direction. And that's right. >> Instructive, great. Well, listen, thanks very much for coming on the Cube. >> Sure. >> Sharing the stories, and good luck in lab, wherever you may end up. >> Thank you. >> Good to see you. >> Thank you very much. >> Alright, keep it right there everybody. We'll be back with our next guest, Dave Vallante for Peter Buress. We're live from Madrid, 2017, HPE Discover. This is the Cube.

Published Date : Nov 29 2017

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Dave Vallante | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Natalia Vassilieva | PERSON | 0.99+
Natalia | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
10% | QUANTITY | 0.99+
10 | QUANTITY | 0.99+
Tom Bradicich | PERSON | 0.99+
100 seconds | QUANTITY | 0.99+
Peter Buress | PERSON | 0.99+
15 X | QUANTITY | 0.99+
St. Petersburg | LOCATION | 0.99+
Russia | LOCATION | 0.99+
HPE | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
Hewlett Packard Enterprises | ORGANIZATION | 0.99+
Hewlett Packard Enterprise | ORGANIZATION | 0.99+
Carnegie Mellon | ORGANIZATION | 0.99+
Today | DATE | 0.99+
50W% | QUANTITY | 0.99+
Madrid | LOCATION | 0.99+
five years | QUANTITY | 0.99+
Madrid, Spain | LOCATION | 0.99+
40% | QUANTITY | 0.99+
today | DATE | 0.99+
Cat Graves | PERSON | 0.99+
Second World War | EVENT | 0.99+
less than 20% | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Moore's Law | TITLE | 0.99+
two | QUANTITY | 0.99+
single chip | QUANTITY | 0.98+
Cat | PERSON | 0.98+
100 X | QUANTITY | 0.98+
single pool | QUANTITY | 0.98+
Asics | ORGANIZATION | 0.97+
2017 | DATE | 0.97+
Both | QUANTITY | 0.97+
INVIDIA | ORGANIZATION | 0.96+
one | QUANTITY | 0.96+
single system | QUANTITY | 0.95+
HP | ORGANIZATION | 0.94+
single device | QUANTITY | 0.94+
first class | QUANTITY | 0.93+
Asic | ORGANIZATION | 0.92+
first target | QUANTITY | 0.92+
Soviet Union | LOCATION | 0.92+
Many years ago | DATE | 0.9+
I/O | TITLE | 0.89+
three ways | QUANTITY | 0.87+
90% | QUANTITY | 0.87+
Moore's | TITLE | 0.86+
Discover Madrid 2017 | EVENT | 0.8+
decades | QUANTITY | 0.76+
Dr. | PERSON | 0.75+
every 18 months | QUANTITY | 0.73+
couple hundred years | QUANTITY | 0.7+
Soviet Union | ORGANIZATION | 0.65+
The Cube | COMMERCIAL_ITEM | 0.53+
Cube | ORGANIZATION | 0.46+
Cube | COMMERCIAL_ITEM | 0.38+

Wikibon Research Meeting | October 20, 2017


 

(electronic music) >> Hi, I'm Peter Burris and welcome once again to Wikibon's weekly research meeting from the CUBE studios in Palo Alto, California. This week we're going to build upon a conversation we had last week about the idea of different data shapes or data tiers. For those of you who watched last week's meeting, we discussed the idea that data across very complex distributed systems featuring significant amounts of work associated with the edge are going to fall into three classifications or tiers. At the primary tier, this is where the sensor data that's providing direct and specific experience about the things that the sensors are indicating, that data will then signal work or expectations or decisions to a secondary tier that aggregates it. So what is the sensor saying? And then the gateways will provide a modeling capacity, a decision making capacity, but also a signal to tertiary tiers that increasingly look across a system wide perspective on how the overall aggregate system's performing. So very, very local to the edge, gateway at the level of multiple edge devices inside a single business event, and then up to a system wide perspective on how all those business events aggregate and come together. Now what we want to do this week is we want to translate that into what it means for some of the new technologies, new analytics technologies that are going to provide much of the intelligence against each of this data. As you can imagine, the characteristics of the data is going to have an impact on the characteristics of the machine intelligence that we can expect to employ. So that's what we want to talk about this week. So Jim Kobielus, with that as a backdrop, why don't you start us off? What are we actually thinking about when we think about machine intelligence at the edge? >> Yeah, Peter, we at the edge, the edge of body, the device be in the primary tier that acquires fresh environmental data through its sensors, what happens at the edge? In the extreme model, we think about autonomous engines, let me just go there just very briefly, basically, it's a number of workloads that take place at the edge, the data workloads. The data is (mumbles) or ingested, it may be persisted locally, and that data then drives local inferences that might be using deep layer machine learning chipsets that are embedded in that device. It might also trigger various tools called actuations. Things, actions are taken at the edge. If it's the self-driving vehicle for example, an action may be to steer the car or brake the car or turn on the air conditioning or whatever it might be. And then last but not least, there might be some degree of adaptive learning or training of those algorithms at the edge, or the training might be handled more often up at the second or tertiary tier. The tertiary tier at the cloud level, which has visibility usually across a broad range of edge devices and is ingesting data that is originated from all of the many different edge devices and is the focus of modeling, of training, of the whole DevOps process, where teams of skilled professionals make sure that the models are trained to a point where they are highly effective for their intended purposes. Then those models are sent right back down to the secondary and the primary tiers, where act out inferences are made, you know, 24 by seven, based on those latest and greatest models. That's the broad framework in terms of the workloads that take place in this fabric. 
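Jim's list of edge workloads (ingest, local persistence, inference, actuation, and escalation to the higher tiers) can be pictured as a loop like the hypothetical sketch below; edge_loop and the callables passed into it are placeholders of mine, not any particular product's API.

    # Hypothetical sketch of the edge workloads described above: ingest sensor data,
    # run a local inference, actuate, and forward a summary to the gateway tier.
    def edge_loop(read_sensor, local_model, actuate, send_to_gateway, steps=3):
        for _ in range(steps):                        # bounded here for illustration
            reading = read_sensor()                   # primary-tier sensor data
            decision = local_model(reading)           # low-latency local inference
            actuate(decision)                         # e.g. brake, steer, adjust HVAC
            send_to_gateway({"reading": reading,      # summary for aggregation and
                             "decision": decision})   # periodic retraining upstream

    # Toy stand-ins, just to show the flow end to end.
    edge_loop(read_sensor=lambda: 42.0,
              local_model=lambda r: "ok" if r < 50 else "alert",
              actuate=print,
              send_to_gateway=print)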
>> So Neil, let me talk to you, because we want to make sure that we don't confuse the nature of the data and the nature of the devices, which may be driven by economics or physics or even preferences inside of business. There is a distinction that we have to always keep track of, that some of this may go up to the Cloud, some of it may stay local. What are some of the elements that are going to indicate what types of actual physical architectures or physical infrastructures will be built out as we start to find ways to take advantage of this very worthwhile and valuable data that's going to be created across all of these different tiers? >> Well first of all, we have a long way to go with sensor technology and capability. So when we talk about sensors, we really have to define classes of sensors and what they do. However, I really believe that we'll begin to think in a way that approximates human intelligence, about the same time as airplanes start to flap their wings. (Peter laughs) So, I think, let's have our expectations and our models reflect that, so that they're useful, instead of being, you know hypothetical. >> That's a great point Neil. In fact, I'm glad you said that, because I strongly agree with you. But having said that, the sensors are going to go a long ways, when we... but there is a distinction that needs to be made. I mean, it may be that that some point in time, a lot of data moves up to a gateway, or a lot of data moves up to the Cloud. It may be that a given application demands it. It may be that the data that's being generated at the edge may have a lot of other useful applications we haven't anticipated. So we don't want to presume that there's going to be some hard wiring of infrastructure today. We do want to presume that we better understand the characteristics of the data that's being created and operated on, today. Does that make sense to you? >> Well, there's a lot of data, and we're just going to have to find a way to not touch it or handle it any more times than we have to. We can't be shifting it around from place to place, because it's too much. But I think the market is going to define a lot of that for us. >> So George, if we think about the natural place where the data may reside, the processes may reside, give us a sense of what kinds of machine learning technologies or machine intelligence technologies are likely to be especially attractive at the edge, dealing with this primary information. Okay, I think that's actually a softball which is, we've talked before about bandwidth and latency limitations, meaning we're going to have to do automated decisioning at the edge, because it's got to be fast, low latency. We can't move all the data up to the Cloud for bandwidth limitations. But, by contrast, so that's data intensive and it's fast, but up in the cloud, where we enhance our models, either continual learning of the existing ones or rethinking them entirely, that's actually augmented decisions, and augmented means it's augmenting a human in the process, where, most likely, a human is adding additional contextual data, performing simulations, and optimizing the model for different outcomes or enriching the model. >> It may in fact be a crucial element or crucial feature of the training by in fact, validating that the action taken by the system was appropriate. 
>> Yes, and I would add to that, actually. You used an analogy: people are going between two extremes, where some people say, "Okay, so all the analytics has to be done in the cloud," and Wikibon, David Floyer, and Jim Kobielus have been pioneering the notion that we have to do a lot more at the client. But you might look back at client server computing, where the client was focused on presentation and the server was focused on data integrity. Similarly, here, the edge or client is going to be focused on fast inferencing, and the server is going to do many of the things that were associated with a DBMS and data integrity, in terms of reproducibility of decisions in the model for auditing, security, versioning, and orchestration in terms of distributing updated models. So we're going to see the roles of the edge and the cloud rhyme with what we saw in client server. Neither one goes away; they augment each other. >> So, Jim Kobielus, one of the key issues there is going to be the gateway, and the role that the gateway plays, and specifically here, we talked about the nature of, again, the machine intelligence that's going to be operating more on the gateway. What are some of the characteristics of the work that's going to be performed at the gateway that kind of has oversight of groupings or collections of sensor and actuator devices? >> Right, good question. So the perfect example that everybody's familiar with now of a gateway in this environment is a smart home hub. A smart home hub, just for the sake of discussion, has visibility across two or more edge devices. It could be a smart speaker, could be an HVAC system that's sensor-equipped, and so forth. What it does, the role it performs as a smart hub of any sort, is that it acquires data from the edge devices. The edge devices might report all of their data directly to the hub, or the sensor devices might also do inferences and then pass the results of those inferences on to the hub. Regardless, what the hub does is, A, it aggregates the data across those different edge devices over which it has visibility and control, and B, it may perform its own inferences based on models that look out across an entire home in terms of patterns of activity. Then the hub might take various actions autonomously, by itself, without consulting an end user or anything else. It might take action in terms of beefing up the security, adjusting the HVAC, adjusting the lights in the house or whatever it might be, based on all that information streaming in real time. Possibly, its algorithms will allow it to determine what of that data shows an anomalous condition that deviates from historical patterns. Those kinds of determinations, whether something is anomalous or a usual pattern, are often made at the hub level, 'cause it's maintaining sort of a homeostatic environment, as it were, within its own domain. And that hub might also communicate upstream to a tertiary tier that has oversight, let's say, of a smart city environment, where everybody in that city might have a connection into some broader system that, say, regulates utility usage across the entire region to avoid brownouts and that kind of thing. So that gives you an idea of what the role of a hub is in this kind of environment. It's really a controller.
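A minimal sketch of that hub role: aggregate readings per device, flag values that deviate from the device's own historical pattern, and decide whether to handle them locally or escalate upstream. The device names, the five-reading warm-up, and the z-score rule are all assumptions made purely for illustration.

```python
# Illustrative sketch of a gateway/hub: aggregate readings from edge devices,
# flag values that deviate from historical patterns, and decide locally whether
# to actuate or escalate upstream. Thresholds and names are invented.
from statistics import mean, stdev

class SmartHub:
    def __init__(self):
        self.history = {}  # device_id -> list of past readings

    def ingest(self, device_id, value):
        past = self.history.setdefault(device_id, [])
        anomalous = False
        if len(past) >= 5 and stdev(past) > 0:
            # Simple z-score test against the device's own historical pattern.
            z = abs(value - mean(past)) / stdev(past)
            anomalous = z > 3.0
        past.append(value)
        return anomalous

    def handle(self, device_id, value):
        if self.ingest(device_id, value):
            print(f"{device_id}: anomalous reading {value}, escalating upstream")
        else:
            print(f"{device_id}: within normal pattern, handled locally")

hub = SmartHub()
for v in (21.0, 21.4, 20.9, 21.2, 21.1, 21.3, 35.0):  # last value is an outlier
    hub.handle("hvac_livingroom", v)
```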
>> So, Neil, if we think about some of the issues that people really have to consider as they start to architect what some of these systems are going to look like, we need to factor in both what the data is doing now, but also ensure that we build into the entire system enough of a buffer so that we can anticipate and take advantage of future ways of using that data. Where do we draw that fine line between "we only need this data for this purpose now" and "geez, let's ensure that we keep our options open so that we can use as much data as we want at some point in time in the future"? >> Well, that's a hard question, Peter, but I would say that it may turn out that for this detailed data coming from sensors, the historical aspect of it isn't really that important. If the things you might be using that data for are more current, then you probably don't need to capture all of it. On the other hand, there have been many, many occasions historically where data has been used for purposes other than its original one. My favorite example was scanners in grocery stores, where it was meant to improve the checkout process, not have to put price stickers on everything, manage inventory and so forth. It turned out that some smart people like IRI and some other companies said, "We'll buy that data from you, and we're going to sell it to advertisers," and all sorts of things. We don't know the value of this data yet, it's too new. So I would err on the side of being conservative and capturing and saving as much as I could. >> So what we need to do is an optimization of some form: how much is it going to cost to transmit the data versus what kind of future value, or what kinds of options on future value, might there be in that data. That is, as you said, a hard problem, but we can start to conceive of an approach to characterizing that ratio, can't we? >> I hope so. I know that, personally, when I download 10 gigabytes of data, I pay for 10 gigabytes of data, and it doesn't matter if it came from a mile away or 10,000 miles away. So there have to be adjustments for that. There are also ways of compressing the data, because this sensor data, I'm sure, is going to be fairly sparse and redundant, so it can be compressed; you can do things like RLL encoding, which takes all the zeroes out, and that sort of thing. There are going to be a million practices that we'll figure out.
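Neil's "RLL encoding, which takes all the zeroes out" is, in spirit, run-length encoding: long runs of identical values, typically zeroes in sparse sensor streams, collapse into value-and-count pairs before transmission. A minimal sketch, with invented sample readings:

```python
# Minimal run-length encoding sketch for sparse sensor streams, in the spirit of
# the point that long runs of zeroes compress away cheaply before transmission.
def rle_encode(samples):
    """Collapse consecutive repeated values into [value, count] pairs."""
    encoded = []
    for s in samples:
        if encoded and encoded[-1][0] == s:
            encoded[-1][1] += 1
        else:
            encoded.append([s, 1])
    return encoded

def rle_decode(encoded):
    return [value for value, count in encoded for _ in range(count)]

readings = [0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 3, 3, 0, 0]
packed = rle_encode(readings)
assert rle_decode(packed) == readings
print(packed)  # [[0, 4], [7, 1], [0, 6], [3, 2], [0, 2]]
```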
>> So as we imagine ourselves in this schemata of edge, hub, and tertiary, or primary, secondary, and tertiary data, and we start to envision the role that data's going to play and how we build these architectures and these infrastructures, it does raise an interesting question, and that is, from an economic standpoint, what do we anticipate are going to be the classes of devices that are going to exploit this data? David Floyer, who's not here today, hope you're feeling better David, has argued pretty forcibly that over the next few years we'll see a lot of advances made in microprocessor technology. Jim, I know you've been thinking about this a fair amount. What types of function >> Jim: Right. >> might we actually see being embedded in some of these chips that software developers are going to utilize to actually build some of these more complex and interesting systems? >> Yeah, first of all, one of the trends we're seeing in the chipset market for deep learning, just to stay there for a moment, is that deep learning chipsets traditionally, and when I say traditionally I mean over the last several years, the market has been dominated by GPUs, graphics processing units. Invidia, of course, is the primary provider of those. Of course, Invidia has been around for a long time as a gaming solution provider. Now, what's happening with GPU technology, and in fact the latest generation of Invidia's architecture shows where it's going, is a move toward more deep-learning-optimized capabilities at the chipset level. They're called tensor cores, and I don't want to bore you with all the technical details, but the whole notion of-- >> Peter: Oh, no, Jim, do bore us. What is it? (Jim laughs) >> Basically, deep learning is based on doing high speed matrix math. So fundamentally, tensor cores do high velocity, fast matrix math, and the industry as a whole is moving toward embedding more tensor cores directly into the chipset, a higher density of tensor cores. Invidia, in its latest generation of chip, has done that. They haven't totally taken out the gaming oriented GPU capabilities, but there are competitors, and it's a growing list, more than a dozen competitors on the chipset side now. We're all going down a road of embedding far more tensor processing units into every chip. Google is well known for something called TPUs, tensor processing units, their chip architecture. But they're one of many vendors that are going down that road. The bottom line is that the chipset itself is being optimized for the core function that CPU and really GPU technology, and even ASICs and FPGAs, were not traditionally geared to do, which is just deep learning at high speed, across many cores, to do things like face recognition and video and voice recognition freakishly fast. And really, that's where the market is going in terms of enabling underlying chipset technology. What we're seeing is that what's likely to happen in the chipsets of the year 2020 and beyond is that they'll be predominantly tensor core processing units, but they'll be systems on a chip, and I'm just talking about the future, not saying it's here now, systems on a chip that include a CPU to manage a real time OS, like a real time Linux or what not, along with highly dense tensor core processing units. And in this capability, these'll be low power chips, and low cost commodity chips, that'll be embedded in everything: everything from your smart phone, to your smart appliances in your home, to your smart cars and so forth. Everything will have these commodity chips. 'Cause suddenly every edge device, everything, will be an edge device, and will be able to provide more than augmentation, automation, all these things we've been talking about, in ways that are not necessarily autonomous, but can operate with a great degree of autonomy to help us human beings live our lives in an environmentally contextual way at all points in time.
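Jim's observation that deep learning boils down to fast matrix math is easy to see in miniature: a single dense layer is a weight matrix times an input vector, plus a bias and a nonlinearity, and tensor cores exist to run that multiply-accumulate pattern at very high density. A toy sketch in plain Python, with made-up weights and inputs:

```python
# Tiny illustration that deep learning inference is, at bottom, matrix math:
# one dense layer is weights @ inputs plus a bias, followed by an activation.
# The weights and the input sample here are invented for illustration.
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def relu(M):
    return [[max(0.0, x) for x in row] for row in M]

weights = [[0.2, -0.5, 1.0],
           [0.7,  0.1, -0.3]]      # 2 output units, 3 input features
x = [[0.5], [1.5], [2.0]]          # one input sample as a column vector
bias = [[0.1], [-0.2]]

hidden = matmul(weights, x)
activated = relu([[hidden[i][0] + bias[i][0]] for i in range(len(bias))])
print(activated)   # the layer's output for this sample
```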
>> Alright, Jim, let me cut you off there, because you said something interesting, a lot more autonomy. George, what does it mean that we're going to dramatically expand the number of devices that we're using, but not expand the number of people that are going to be in place to manage those devices? When we think about applying software technologies to these different classes of data, we also have to figure out how we're going to manage those devices and that data. What are we looking at from an overall IT operations management approach to handling a geometrically greater increase in the number of devices and the amount of data that's being generated? (Jim starts speaking) >> Peter: Hold on, hold on, George? >> There's a couple of dimensions to that. Let me start on the modeling side, which is: we need to make data scientists more productive, or rather, we need to democratize the ability to build models. And again, going back to the notion of simulation, there's this merging of machine learning and simulation, where machine learning tells you the correlations in the factors that influence an answer, whereas the simulation actually lets you play around with those correlations to find the causations. And by merging them, we make it much, much more productive to find models that are both accurate and optimized for different outcomes. >> So that's the modeling issue. >> Yes. >> Which is great. Now as we think about some of the data management elements, what are we looking at from a data management standpoint? >> Well, and this is something Jim has talked about, but, you know, we had DevOps for essentially merging the skills of the developers with the operations folks, so that there's joint responsibility for keeping stuff live. >> Well what about things like digital twins, automated processes, we've talked a little bit about breadth versus depth, ITOM. What do you think? Are we going to build out, are all these devices going to reveal themselves, or are we going to have to put in place a capacity for handling all of these things in some consistent, coherent way? >> Oh, okay, in terms of managing. >> In terms of managing. >> Okay. So, digital twins were interesting because they pioneered, or they made well known, a concept called, essentially, a semantic network, or a knowledge graph, which is just a way of abstracting a whole bunch of data models and machine learning models that represent the structure and behavior of a device. In IIoT terminology, that was an industrial device, like a jet engine. But that same construct, the knowledge graph and the digital twin, can be used to describe the application software and the infrastructure, both middleware and hardware, that make up this increasingly sophisticated network of learning and inferencing applications. And the reason this is important, it sounds arcane, the reason it's important is that we're now building vastly more sophisticated applications over great distances, and the only way we can manage them is to make the administrators far more productive. The state of the art today is alerts on the performance of the applications, and alerts on, essentially, the resource intensity of the infrastructure. By combining that type of monitoring with the digital twin, we can get an essentially much higher fidelity reading on when something goes wrong. We don't get false positives. In other words, if something goes wrong, it's like the fairy tale of the pea underneath the mattress: all the way up, 10 mattresses, you know it's uncomfortable. Here, it'll pinpoint exactly what goes wrong, rather than cascading all sorts of alerts, and that is the key to productivity in managing this new infrastructure.
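George's pairing of a knowledge graph (the digital twin's dependency model) with conventional alerting can be sketched very simply: given a small dependency graph and a set of alerting components, keep only the components whose own dependencies are healthy, and those are the likely root causes rather than cascaded symptoms. The service names and failure scenario below are invented for illustration.

```python
# Sketch: a digital-twin-like dependency graph used to collapse a cascade of
# alerts down to the most likely root cause instead of paging on every symptom.
depends_on = {
    "checkout_app": ["api_gateway"],
    "api_gateway": ["orders_service"],
    "orders_service": ["orders_db"],
    "orders_db": [],
}

def root_causes(alerting_components):
    """Keep only alerting components whose own dependencies are all healthy."""
    roots = []
    for component in alerting_components:
        if not any(dep in alerting_components for dep in depends_on.get(component, [])):
            roots.append(component)
    return roots

# A failure in the database typically cascades alerts up the whole chain.
alerts = {"checkout_app", "api_gateway", "orders_service", "orders_db"}
print(root_causes(alerts))  # ['orders_db'], the pea under the mattresses
```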
>> Alright guys, so let's go into the action item around here. What I'd like to do now is ask each of you for the action item that you think users are going to have to apply or employ to actually get some value, and start down this path of utilizing machine intelligence across these different tiers of data to build more complex, manageable application infrastructures. So, Jim, I'd like to start with you, what's your action item? >> My action item is related to what George just said: machine learning modeled centrally, deployed in a decentralized fashion, and use digital twin technology to do your modeling against device classes in a more coherent way. There's not one model that will fit all of the devices. Use digital twin technology to structure the modeling process so that you're able to tune a model to each class of device out there. >> George, action item. >> Okay, recognize that there's a big difference between edge and cloud, as Jim said. But I would elaborate: edge is automated, low latency decision making, extremely data intensive. Recognize that the cloud is not just where you trickle up a little bit of data; this is where you're going to use simulations, with a human in the loop, to augment-- >> System wide, system wide. >> System wide, with a human in the loop, to augment how you evaluate new models. >> Excellent. Neil, action item. >> I would have people start on the right side of the diagram and start to think about what their strategy is and where they fit into these technologies. Be realistic about what they think they can accomplish, and do the homework. >> Alright, great. So let me summarize our meeting this week. This week we talked about the role that the three tiers of data we've described will play in the use of machine intelligence technologies as we build increasingly complex and sophisticated applications. We've talked about the difference between primary, secondary, and tertiary data. Primary data is the immediate experience of sensors, analog being translated into digital, about a particular thing or set of things. Secondary is the data that is then aggregated off of those sensors for business event purposes, so that we can make a business decision, often automatically, down at an edge scenario, as a consequence of signals that we're getting from multiple sensors. And then finally, tertiary data looks at a range of gateways and a range of systems, and considers things at a system wide level, for modeling, simulation and integration purposes. Now, what's important about this is that it's not just about better understanding the data, and not just about understanding the classes of technologies that we use, although that will remain important. For example, we'll see increasingly powerful, low cost, device-specific ARM-like processors pushed into the edge, and a lot of competition at the gateway, or at the secondary data tier. It's also important, however, to think about the nature of the allocations and where the work is going to be performed across those different classifications, especially as we think about machine learning, machine intelligence, and deep learning. Our expectation is that we will see machine learning being used on all three levels, where machine intelligence is being used against all forms of data to perform a variety of different work, but the work that will be performed will be naturally associated with and related to the characteristics of the data that's being aggregated at that point. In other words, we won't see simulations, which are characteristic of tertiary data, George, at the edge itself.
We will, however, see edge devices often reduce significant amounts of data from, perhaps, a video camera or something else, to make relatively simple decisions that may involve complex technologies, to allow a person into a building, for example. So our expectation is that over the next five years we're going to see significant new approaches to applying increasingly complex machine intelligence technologies across all the different classes of data, but we're going to see them applied in ways that fit the patterns associated with that data, because it's the patterns that drive the applications. So our overall action item: it's absolutely essential that businesses consider and conceptualize what machine intelligence can do, but be careful about drawing huge generalizations about what the future of machine intelligence is. The first step is to parse out the characteristics of the data, driven by the devices that are going to generate it and the applications that are going to use it, and understand the relationship between the characteristics of that data and the types of machine intelligence work that can be performed. What is likely is that an impedance mismatch between data and expectations of machine intelligence will generate a significant number of failures that often will put businesses back years in taking full advantage of some of these rich technologies. So, once again we want to thank you this week for joining us here on the Wikibon weekly research meeting. I want to thank George Gilbert, who is here in the CUBE Studio in Palo Alto, and Jim Kobielus and Neil Raden, who were both on the phone. And we want to thank you very much for joining us here today, and we look forward to talking to you again in the future. So this is Peter Burris, from the CUBE's Palo Alto Studio. Thanks again for watching Wikibon's weekly research meeting. (electronic music)

Published Date : Oct 20 2017


Dean Takahashi, VentureBeat | Samsung Developer Conference 2017


 

>> From San Francisco, it's theCUBE. Covering Samsung Developer Conference 2017. Brought to you by Samsung. (electronic music) >> Welcome back everyone. Here live in San Francisco, Mascone West. This is theCUBE's exclusive coverage live video here at Samsung Developer Conference, #SDC2017. I'm John Furrier, the co-founder of SiliconANGLE Media and co-host of theCUBE. My next guest is Dean Takahashi, who is the lead writer for GamesBeat for VentureBeat big blog covering business and innovation technology. Obviously, been a journalist and writer covering mobile and mobile gaming for a long time. Legend in the Silicon Valley community. Dean, great to see you. >> Yeah, thank you. Thank you for the kind intro. >> People who follow you know, you've been out there in the front line looking at the evolution of gaming obviously, from gaming and then obviously mobile gaming hit a thing. Then gen 2, gen 3. I don't know what generation we're on, but certainly Samsung is converging. That's their message here. Trying to keep these smart things, the cloud message a little bit of an IOT. Feels like an enterprise show a little bit. But, at the end of the day, it's the consumer connection. >> It's all coming together now >> It's all coming together. What's your rapport? What are you seeing? What are you reporting on? >> Well, I cover everything from the smallest startups, including the small game companies. I try to pay attention to Silicon Valley in general. And then the big companies as well. So, the relationships pertain to developers, who are on the small side to the platform owners on the big side. And, I see a really big war going on among all the platform owners. They're trying to get the hearts and minds of those developers. They're trying to bring in, trying to do what Samsung is doing. Which is integrate a lot of different things onto their platform. And, we'll see how much sort of openness is sort of left at the end of this. Or how much of a commons there is across the whole tech landscape, or the whole game industry. And I don't know who's going to win, who's going to own it all. But, everybody's trying. >> It's a war. Platform wars immediately. The device here, my new Samsung 8 is nice. It's got a big screen. It's gameable. Mobile gaming obviously hot. But again, the platform wars are interesting. Now they have the living room, they've got the kitchen, the smart family hub. All this stuff they're talking about. They had the smart TV for a while. The question that I have is, developers don't want hassles. They want the distribution and all the goodness of the big vendor, but one of the things Samsung seems to be trying to create is this unified fabric of breaking down the stovepipes within their company. Problem is, developers won't tolerate different API documentation. This is an issue we've heard from developers here in theCUBE is how does Samsung do that? Because that'll really be, that's the kryptonite for developers. That'll keep the super developers away. >> Yeah. Like the announcement they had here about the Internet of Things and trying to sort of integrate three different standards down into one is the kind of move that you have to make or you have to seek. Some of these come in through acquisitions but, yeah. The developers don't want to mess around with the multiple APIs. >> It's interesting. We cover, as you know, we cover a lot of the enterprise and the emerging tech with SiliconANGLE and theCUBE, and we see the enterprise is clear, right? 
DevOps, the cloud native, the Linux Foundation, those worlds are exploding. Open source is exploding. And then you've got companies like Intel, which cares about field programmable gate arrays and 5G, enabling that end to end. And then you've got the consumer companies, whether it's Ali Baba or Samsung or a Google or an Apple, really caring about the device side. So, everything is kind of coming into the middle, where cloud is the engine, right? So, the interesting thing I'd love to get your perspective on: are developers sensitive to the fact that they can have more compute because of augmented reality, even virtual reality? We've had one VIP influencer here on theCUBE say VR is done. 1.0 is done. But we learn from it. It didn't really hunt. It didn't really go off the shelves. But augmented reality is hotter, because it's more realistic. Drones are using augmented reality in industrial IOT. >> Augmented reality has a nice launch pad, right. It's got a long runway off of smartphones. You create your app for smartphones and eventually it's going to run on all these other new things that come out, like the glasses. Once those are established, that's great. But in the meantime, the apps and developers can sort of make this living on the smartphone. >> So it's not a big bad like a Google Glass where it just kind of crashes and burns >> Yeah. >> Kind of thing. So they can get some beachhead with mobile. >> Yeah. >> So the question for you is how you vet the signal from noise on companies. Obviously you look for the ones that have more of a pragmatic business model. Get in on mobile gaming. Obviously Google is on stage with Android. So you're starting to see more openness with APIs. Differentiating from Apple's ecosystem, which is what it is. How do you see companies differentiating and being real? >> Signal from noise, you do look at everything from who their alliances are with, to how many people, do they have enough people to get the job done? Do they have the funding? It's sort of figuring out whether the team has experience at what they're doing. So, a lot of the basics of journalism. Just finding out facts about a company. >> So, Magic Leap. Have you dug into those guys? I saw the funding news yesterday. Another $500 Million. >> Yeah. >> I haven't seen the product. I haven't seen the demo. I'm not covering gaming like you are. But you have seen their demo. Have you? >> I haven't seen their demo. >> I think a half, a half a billion dollars more. That's a war chest. >> Yeah. They're out in Florida. So they're a bit far from me. They are very lucky to have convinced someone to give them some additional money, when they've burned through a billion dollars plus already, so, $1.4 billion >> Insane. And nothing to show for it. >> $500 million more, yeah. And they're very ambitious and that's good but, >> It better be good. >> They almost seem like they were trying to say we're going to accelerate and beat Moore's Law. We're going to do something impossible, put these things into little glasses and it's going to be amazing. It's going to be like, so you can't distinguish augmented reality from reality, right? And surprise, surprise, you can't really rush Moore's Law. >> And by the way, I'm surprised they're not in Silicon Valley, because it seems like that's a go big or go home strategy. Certainly, a billion dollars they've burned through, another half a billion. No one can do that. It's hard to do. So, back down to the more pragmatic ecosystem, you're seeing Samsung here. I like their approach.
I think that it's a good strategy. They didn't overplay their hand at the show on talking about where the data resides. That was one thing I'm still not seeing but maybe they're going to bring that out later. Maybe it's not ready yet. The cloud, I didn't really see the cloud story there as much. I don't know what that means. So, those are open discussion points for me. But, certainly leveraging the device, leveraging the distribution is what they're offering. But then they made a comment here on theCUBE, "We're open." What does that mean? I mean Android's obviously got a benefit of being open. But what does open mean to you and how do you see that? >> I think that, you could argue that for smart things where it's connecting to something like an Invidia Shield. And you can use the remote control on an Invidia Shield to change your lighting, or something like that. So, it's sort of overlapping circles of certain, you know, I don't know if that's open. But it works. If you deliver something that works, your consumers, you know, it's relatively open. >> Yeah. And the glam is obviously electronics. Consumer electronics base. You've got a little bit of the IOT. I find this fascinating story of the IOT because people are things too. I mean, you're walking around with the phones. We have the fashion tech happening. And obviously gaming. Alright, what's the big surprise for you here at the show. Give me some positive review. What you liked about it, and what critical analysis, where they need to improve. What are some of your thoughts? >> I think there is always sort of that challenge for a big company like this that has a worldwide consumer base. How much do they want to cater to or appeal to the hardcore crowd? So, say like gaming and non-gamers is a good example of that. And they're not really trying to get everybody in gaming onto their platform or onto their side. They're saying that they're welcome. They can come. We've built this as an all-purpose sort of platform. And, they're not going out to invest in a lot of the game companies. They didn't put money into Magic Leap. They're not sort of trying to pull people in and >> They're not giving the hard sell. >> Yeah. The challenge then is that other companies are. Microsoft, Sony, and Nintendo of course are doing it. But Amazon, Google, even Apple to some degree is embracing a lot of gamers on the game platforms. Making their platforms fairly friendly. So, I think Samsung needs to decide whether it's going to step up in that space. Other territories, yeah. It's on a very good march, I think. To continuously come out with new tech that gets widely adopted. They're doing well in VR. But I think, it almost seems like they've embraced 360 video a lot more than they have on the game side. >> We'd certainly love to get those 360 cameras here. Apple versus Samsung. Obviously, World Wide Developer Conference is legendary. Samsung 4th year now doing this event. Compare, close, getting there, leveling up? >> Well, I think Apple's event was underwhelming in a lot of ways as far as just what they announced. And say even the performance of the phones. It doesn't really, it's kind of flatish compared to the performance of Samsung phones. I think Samsung has maybe a broader following and broader base. And they have the potential to draw >> And Android's global appeal >> draw more >> is pretty interesting. >> Yeah, draw more developers over who might find it easier. >> Interesting to see the psychographic profile of developer makeup from Apple and Samsung. 
Dean, thanks for coming on theCUBE. Really appreciate it. Dean Takahashi here inside theCUBE. Lead writer for GamesBeat, part of VentureBeat blog in Silicon Valley. Check them out, VentureBeat.com. Of course you've got siliconangle.com and thecube.net. That's our content there. This is theCUBE live coverage from Samsung Developer Conference. I'm John Furrier, right back with more after the short break. >> And also plug our GamesBeat conference. >> GamesBeat conference. >> GamesBeat Summit in April. April 9th and 10th in Berkeley. >> Yep, get the plug in. GamesBeat Conference in April. Check it out. Dean co-chairs the committee for getting the great content. Hardcore gamers, thought leaders. Check out GamesBeat Summit in April. Of course, this is theCUBE live coverage here in San Francisco. More after this short break. (electronic music)

Published Date : Oct 19 2017


Dustin Kirkland, Canonical | AWS Summit 2017


 

>> Announcer: Live from Manhattan, it's theCube, covering AWS Summit, New York City, 2017. Brought to you by Amazon Web Services. >> Welcome back to the Big Apple as we continue our coverage here on theCube of AWS Summit 2017. We're at the Javits Center. We're in midtown. A lot of hustle and bustle outside and inside there, good buzz on the show floor, with about 5,000 strong attending and some 20,000 registrants also for today's show. Along with Stu Miniman, I'm John Walls, and glad to have you here on theCube. And Dustin Kirkland now joins us. He's on the Ubuntu product and strategy side of things at Canonical, and Dustin, good to see you back on theCube. >> Thank you very much. >> You just threw a big number out at us when we were talking off camera. I'll let you take it from there, but it shows you about the presence, you might say, of Ubuntu and AWS, what that nexus is right now. >> Ubuntu easily leads as the operating system in Amazon. About 70%, seven zero, 70% of all instances running in Amazon right now are running Ubuntu. And that's actually despite the fact that Amazon has their own Amazon Linux, and there are others, Windows, RHEL, SUSE, Debian, Fedora, other alternatives. Ubuntu still represents seven out of 10 workloads in Amazon running right now. >> John: Huge number. >> So, Dustin, maybe give us a little insight as to what kind of workloads you're seeing. How much of this was people that, Ubuntu has a great footprint everywhere and therefore it kind of moved there? And how much of it is new and interesting things, IOT and machine learning and everything like that, where you also have support? >> When you're talking about that many instances, that's quite a bit of both, right? So if you look at just EC2 and the two types of workloads, there are the long-running workloads, the workloads that are up for many months, years in some cases. I met a number of customers here this week that are running older versions of Ubuntu, like 12.04, which are actually end of life, but for customers of Canonical we continue providing security updates. So we have a product called Extended Security Maintenance. There's over a million instances of Ubuntu 12.04, which is already end of life, but Canonical can continue providing security updates, critical security updates. That's great for the long-running workloads. The other thing that we do for long-running workloads are kernel live patches. So we're able to actually fix vulnerabilities in the Linux kernel without rebooting, using entirely upstream and open source technology to do that. So for those workloads that stay up for months or years, the combination of Extended Security Maintenance, covering it for a very long time, and the kernel live patch, ensuring that you're able to patch those vulnerabilities without rebooting those systems, is great for hosting providers and some enterprise workloads. Now on the flip side, you also see a lot of workloads that are spiky, right, workloads that come and go in bursts. Maybe they run at night or in the morning or just whenever an event happens. We see a lot of Ubuntu running there. A lot of that is focused on data and machine learning, artificial intelligence workloads, that run in that sort of bursty manner. >> Okay, so it was interesting, when I hear you talk about some things that have been running for a bunch of years, and on the other side of the spectrum is serverless and the new machine learning stuff where it tends to be there, what's Canonical doing there?
What kind of exciting, any of the news, Macie, Glue, some of these other ones that came out, how much do those fit into the conversations you're having? >> Sure, they all really fit. When we talk about what we're doing to tune Ubuntu for those machine learning workloads, it really starts with the kernel. So we actually have an AWS-optimized Linux kernel. We've taken the Ubuntu Linux kernel and we've tuned it, working with the Amazon kernel engineers, to ensure that we've carved out everything in that kernel that's not relevant inside of an Amazon data center and taken it out. And in doing so, we've actually made the kernel 15% smaller, which reduces the security footprint and the storage footprint of that kernel. And that means smaller downloads, smaller updates, and we've made it boot 30% faster. We've done that by adding support for, turning on, and configuring some parameters that enable the virtualization or virtio drivers, or specifically the Amazon drivers, to work really well. We've also removed things like floppy disk drives and Bluetooth drivers, which you'll never find in a virtual machine in Amazon. And when you take all of those things in aggregate and you remove them from the kernel, you end up with a much smaller, better, more efficient package. So that's a great starting point. The other piece is we've ensured that with the latest and greatest graphics adapters, the GPUs, GPGPUs from Invidia, the experience on Ubuntu out of the box just works. It works really well, and well at scale. You'll find almost all machine learning workloads are drastically improved inside of GPGPU instances. And for the dollar, you're able to compute sometimes hundreds or thousands of times more efficiently than a pure CPU type workload. >> You're talking about machine learning, but on the artificial intelligence side of life, a lot of conversation about that at the keynotes this morning. A lot of good services, whatever, again, your activity in that and where that's going, do you think, over the next 12, 16 months? >> Yes, so artificial intelligence is a really nice place where we see a lot of Ubuntu, mainly because of the nature of how AI is infiltrating our lives. It has these two sides. One side is at the edge, and those are really fundamentally connected devices. And for every one of those billions of devices out there, there are necessarily connections to an instance in the cloud somewhere. So if we take just one example, right, an autonomous vehicle. That vehicle is connected to the internet. Sometimes well, when you're at home, parked in the garage or parked at Whole Foods, right? But sometimes it's not. You're in the middle of the desert out in West Texas. That autonomous vehicle needs to have a lot of intelligence local to that vehicle. It gets downloaded opportunistically. And what gets downloaded are the results of that machine learning, the results of that artificial intelligence process. So we heard in the keynotes quite a bit about data modeling, right? Data modeling means putting a whole bunch of data into Amazon, which Amazon has made really easy to do with things like Snowball and so forth. Once the data is there, then the big GPGPU instances crunch that data, and the result is actually a very tight, tightly compressed bit of insight that then gets fed to devices.
So an autonomous vehicle that every single night gets a little bit better by tweaking its algorithms, when to brake, when to change lanes, when to make a left turn safely or a right turn safely, those are constantly being updated by all the data that we're feeding it. Now, why I said that's important from an Ubuntu perspective is that we find Ubuntu in both of those locations. So we opened this by saying that Ubuntu is the leading operating system inside of Amazon, representing 70% of those instances. Ubuntu is also, across the board, right now in 100% of the autonomous vehicles that are running today. So Uber's autonomous vehicle, the Tesla vehicles, the Google vehicles, a number of others from other manufacturers, are all running Ubuntu on the CPU. There's usually three CPUs in a smart car. The CPU that's running the autonomous driving engine is, across the board, running Ubuntu today. The fact that it's the same OS makes life quite nice for the developers, the developers who are writing that software that's crunching the numbers in the cloud and making the critical real-time decisions in the vehicle. >> You talk about autonomous vehicles, I mean, it's about a car in general, thousands of data points coming in, in continual real time. >> Dustin: Right. >> So it's not just autonomous -- >> Dustin: Right. >> operations, right? So are you working in that way, diagnostics, navigation, all those areas? >> Yes, so what we catch as headlines are a lot of the hobbyist projects, the fun stuff coming out of universities or the startup space. Drones and robots and vacuum cleaners, right? And there's a lot of Ubuntu running there, anything from Raspberry Pis to smart appliances at home. But where those artificially intelligent systems are really going to change our lives, I think, is in the industrial space. It's not the drone that some kids are flying around in the park, it's the drone that's surveying crops, that's coming to understand what areas of a field need more fertilizer or less water, right. And that's happening in an artificially intelligent way as smarter and smarter algorithms make their way onto those drones. It's less about running Pandora and Spotify having to choose the right music for you when you're sitting in your car, and a lot more about every taxicab in the city taking data and analytics and understanding what's going on around them. It's a great way to detect traffic patterns, potentially threats of danger, or something like that. That's far more industrial and less interesting than the fun stuff, you know, the fireworks that are shot off by a drone. >> Not nearly as sexy, right? It's not as much fun. >> But that's where the business is, you know. >> That's right. >> One of the things people have been looking at is how Amazon's really maturing their discussion of hybrid cloud. Now, you said that data centers, public cloud, edge devices, lots of mobile, we talked about IOT and everything, what do you see from customers, what do you think we're going to see from Amazon going forward to build these hybrid architectures, and how does that fit into autonomous vehicles and the like? >> So in the keynote we saw a couple of organizations who were spotlighted as all-in on Amazon, and that's great. And actually almost all of those logos that are all-in on Amazon are all-in on Amazon on Ubuntu, and that's great. But that's a very small number of logos compared to the number of organizations out there that are actually hybrid.
Hybrid is certainly a ramp to being all-in, but for quite a bit of the industry, that's the journey and the destination, too, in fact. There's always going to be some amount of compute that happens locally and some amount of compute that happens in the cloud. Ubuntu helps provide an important portability layer. Knowing something runs well on Ubuntu locally, it's going to run well on Ubuntu in Amazon, or vice versa. The fact that it runs well in Amazon means it will also run well on Ubuntu locally. Now we have a support -- >> Yeah, I was just curious, you talked about some of the optimization you made for AWS. >> Dustin: Right. >> Is that now finding its way into other environments, or do we have a little bit of a fork? >> We do, it does find its way back into other environments. So, you know, the Amazon hypervisors are usually Xen-based, although there are some interesting other things coming from Amazon there. Typically what we find on-prem is usually more KVM or VMware based. Now, most of what goes into that virtual kernel that we build for Amazon actually applies to the virtual kernel that we build for Ubuntu that runs on Xen and VMware and KVM. There are some subtle differences, a few things that we've done very specifically for Amazon, but for the most part it's perfectly compatible all the way back to the virtual machines that you would run on-prem. >> Well, Dustin, always a pleasure, >> Yeah. >> to have you here on theCube. >> Thanks, John. >> You're welcome back any time. >> All right. >> We appreciate the time and wish you the best of luck here the rest of the day, too. >> Great. >> Good deal. >> Thank you. >> Glad to have you with us. Dustin Kirkland from Canonical joining us here on theCube. Back with more from AWS Summit 2017 here in New York City right after this.
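A hedged way to check two of the things Dustin describes from inside an Ubuntu instance: whether the machine is booted on the AWS-tuned kernel flavor (the release string conventionally carries an "-aws" suffix, for example "4.4.0-1022-aws"), and whether the Canonical Livepatch client reports itself active. Treat both checks as conventions rather than guarantees, and note that the livepatch client may simply not be installed.

```python
# Sketch: inspect an Ubuntu instance for the AWS-optimized kernel flavor and for
# an active Canonical Livepatch client. The "-aws" suffix convention and the CLI
# output handling are assumptions; report formats vary across versions.
import platform
import shutil
import subprocess

def kernel_flavor():
    release = platform.release()
    if release.endswith("-aws"):
        return f"AWS-optimized Ubuntu kernel detected: {release}"
    return f"generic or non-AWS kernel: {release}"

def livepatch_status():
    if shutil.which("canonical-livepatch") is None:
        return "canonical-livepatch client not installed on this host"
    result = subprocess.run(["canonical-livepatch", "status"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return f"livepatch not active: {result.stderr.strip()}"
    return result.stdout.strip()

print(kernel_flavor())
print(livepatch_status())
```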

Published Date : Aug 14 2017
