HPE Compute Engineered for Your Hybrid World - Next Gen Enhanced Scalable Processors
>> Welcome to "theCUBE's" coverage of "Compute Engineered for Your Hybrid World," sponsored by HPE and Intel. I'm John Furrier, host of "theCUBE." With the new fourth gen Intel Xeon Scalable processors being announced, HPE is releasing four new HPE ProLiant Gen 11 servers, and here to talk about the features of those servers as well as the partnership between HPE and Intel, we have Darren Anthony, director of compute server product management with HPE, and Suzi Jewett, general manager of the Xeon products with Intel. Thanks for joining us folks. Appreciate you coming on. >> Thanks for having us. (Suzi's speech drowned out) >> This segment is about next gen enhanced scalable processors, obviously the Xeon fourth gen. This is really cool stuff. What's the most exciting element of the new Intel fourth gen Xeon processor? >> Yeah, John, thanks for asking. Of course, I'm very excited about the fourth gen Intel Xeon processor. I think the best thing that we'll be delivering is our new on-package accelerators, which, you know, allow us to service the majority of the server market, which still is buying in that mid core count range, and provide workload acceleration that matters for every one of the products that we sell. And that workload acceleration allows us to drive better efficiency and allows us to really dive into improved sustainability and workload optimizations for the data center. >> It's all the rage about the cores. Now we've got the acceleration. Continued innovation with Xeon. Congratulations. Darren, what do the new Intel fourth gen Xeon processors mean for HPE from the ProLiant perspective? You're on Gen 11 servers. What's in it? What's it mean for you guys and for your customers? >> Well, John, first we've got to talk about the great partnership. HPE and Intel have been partners delivering innovation for our server products for over 30 years, and we're continuing that partnership with HPE ProLiant Gen 11 servers to deliver compelling business outcomes for our customers. Customers are on a digital transformation journey, and they need the right compute to power applications, accelerate analytics, and turn data into value. HPE ProLiant Compute is engineered for your hybrid world and delivers optimized performance for your workloads. With HPE ProLiant Gen 11 servers and Intel fourth gen Xeon processors, you can have the performance to accelerate workloads from the data center to the edge. With Gen 11, we have more. More performance to meet new workload demands. With PCIe Gen 5, which delivers increased bandwidth, there's room for more data and graphics accelerators for workloads like VDI or new demands at the edge. DDR5 memory brings greater bandwidth and performance increases for low latency, in-memory solutions for database and analytics workloads, and higher clock speed CPU chipset combinations for processor intensive AI and machine learning applications. >> Got to love the low latency. Got to love the more performance. Got to love the engineered for the hybrid world. You mentioned that. Can you elaborate more on engineered for the hybrid world? What does that mean? Can you elaborate? >> Well, HPE ProLiant Compute is based on three pillars. First, an intuitive cloud operating experience with HPE GreenLake compute ops management. Second, trusted security by design with a zero trust approach from silicon to cloud. 
And third, optimized performance for your workloads, whether you deploy as traditional infrastructure or a pay-as-you-go model with HPE GreenLake, on-premises, at the edge, in a colo, and in the public cloud. >> Well, thanks Suzi and Darren, we'll be right back. We're going to take a quick break. We're going to come back and do a deep dive and get into the ProLiant Gen 11 servers. We're going to dig into it. You're watching "theCUBE," the leader in high tech enterprise coverage. We'll be right back. (upbeat music) >> Hello everyone. Welcome back to continuing coverage of "theCUBE's" "Compute Engineered for Your Hybrid World" with HPE and Intel. I'm John Furrier, host of "theCUBE," joined again by Darren Anthony from HPE and Suzi Jewett from Intel, as we continue our conversation on the fourth gen Xeon Scalable processor and HPE Gen 11 servers. Suzi, we'll start with you first. Can you give us some use cases around the new fourth gen Intel Xeon Scalable processors? >> Yeah, I'd love to. What we're really seeing with an ever-changing market, and, you know, adapting to that, is we're leading with that workload-focused approach. Some examples, you know, that we see are with vRAN. For vRAN, we estimate the 2021 market size was about 150 million, and we expect a CAGR of almost 30% all the way through 2030. So we're really focused on that, on, you know, deployed edge use cases, growing about 10% to over 50% in 2026. And HPC use cases, of course, continue to grow at a steady CAGR of, you know, about 7%. Then last but not least is cloud. So we're, you know, targeting a growth rate of almost 20% over a five year CAGR. And the fourth gen Xeon is targeted to all of those workloads, both through our architectural improvements that, you know, deliver node level performance, as well as our operational improvements that deliver data center performance. And wrapping that all around with the accelerators that I talked about earlier that provide workload-specific improvements gets us to where our customers need to operationalize in their data center. >> I love the focused solutions, around seeing compute used that way and the processors. Great stuff. Darren, how do you see the new ProLiant Gen 11 servers being used on your side? I mean obviously, you've got the customers deploying the servers. What are you seeing on those workloads? Those targeted workloads? (John chuckling) >> Well, you know, very much in line with what Suzi was talking about. The generational improvements that we're seeing in performance for Gen 11, they're outstanding for many different use cases. You know, obviously VDI. What we're seeing a lot is around the analytics. You know, with moving to the edge, there's a lot more data. Customers need to convert that data into something tangible. Something that's actionable. And so we're really seeing the strong use cases around analytics in order to mine that data and to make better, faster decisions for the customers. >> You know what I love about this market is people really want to hear about performance. They love speed, they love the power, and low power, by the way, on the other side. So, you know, this has really been a big part of the focus now this year. We're seeing a lot more discussion. Suzi, can you tell us more about the key performance improvements on the processors? And Darren, if you don't mind, if you can follow up on the benefits of the new servers relative to the performance. Suzi? 
>> Sure, so, you know, at a standard expected rate, we're looking at, you know, 60% gen over gen from our previous third gen Xeon, but more importantly, as we've been mentioning, is the performance improvement we get with the accelerators. As an example, an average accelerator proof point that we have is 2.9 times improvement in performance per watt for accelerated workloads versus non-accelerated workloads. Additionally, we're seeing really great performance improvement in low jitter, so almost 20 to 50 times improvement versus previous gen in jitter on particular workloads, which is really important, you know, to our cloud service providers. >> Darren, what's your follow up on this? This obviously translates into the Gen 11 servers. >> Well, you know, this generation. Huge improvements across the board. And what we're seeing is that not only are customers prepared for what they need now, you know, workloads are evolving and transitioning. Customers need more. They're doing more. They're doing more analytics. And so not only do you have the performance you need now, but it's actually built for the future. We know that customers are looking to take in that data and do something and work with the data wherever it resides within their infrastructure. We also see customers that are beginning to move servers out of a centralized data center more to the edge, closer to where the data resides. And so this new generation is really tremendous for that. Seeing a lot of benefits for the customers from that perspective. >> Okay, Suzi, Darren, I want to get your thoughts on one of the hottest trends happening right now. Obviously machine learning and AI has always been hot, but recently more and more focus has been on AI. As you start to see this kind of next gen kind of AI coming on, and the younger generation of developers, you know, they're all into this. This is really one of the hottest trends of AI. We've seen the momentum and acceleration kind of going next level. Can you guys comment on how Xeon here and Gen 11 are tying into that? What's that mean for AI? >> So, exactly. With the fourth gen Intel Xeon, one of our key, you know, on-package accelerators in every core is our AMX. It delivers up to 10 times improvement on inference and training versus previous gens, and, you know, blows the competition out of the water. So we are really excited for our AI performance leading with Xeon. >> And- >> And John, what we're seeing is that this next generation, you know, you're absolutely right, you know. Workloads are a lot more focused. A lot more taking advantage of AI and machine learning capabilities. And with this generation, together with the Intel Xeon fourth gen, you know, what we're seeing is the opportunity, with that increase in IO bandwidth, that now we have an opportunity for those applications and those use cases and those workloads to take advantage of this capability. We haven't had that before, but now more than ever, we've actually, you know, opened the throttle with the performance and with the capabilities to support those workloads. >> That's great stuff. And you know, the AI stuff also does a lot of differentiated heavy lifting, and it needs processing power. It needs the servers. This is just, (John chuckling) it creates more and more value. This is right in line. Congratulations. Super excited by that call out. Really appreciate it. Thanks Suzi and Darren. Really appreciate. A lot more to discuss with you guys as we go a little bit deeper. 
We're going to talk about security and wrap things up after this short break. I'm John Furrier, "theCUBE," the leader in enterprise tech coverage. (upbeat music) >> Welcome back to "theCUBE's" coverage of "Compute Engineered for Your Hybrid World." I'm John Furrier, host of "theCUBE," joined by Darren Anthony from HPE and Suzi Jewett from Intel as we turn our discussion to security. A lot of great features with the new Xeon Scalable processor's gen four and the ProLiant Gen 11. Let's get into it. Suzi, what are some of the cool features of the fourth gen Intel Xeon Scalable processors? >> Sure, John, I'd love to talk about it. With fourth gen, Intel offers the most comprehensive confidential computing portfolio to really enhance data security and address regulatory compliance and sovereignty concerns. A couple examples of those features and technologies that we've included are a larger baseline enclave with the SGX technology, which is our application isolation technology, and our Intel CET, which substantially reduces the risk of a whole class of software-based attacks. That, wrapped around at a platform level, really allows us, you know, to secure workload acceleration software and ensure platform integrity. >> Darren, this is a great enablement for HPE. Can you tell us about the security with the new HPE ProLiant Gen 11 servers? >> Absolutely, John. So HPE ProLiant is engineered with a fundamental security approach to defend against increasingly complex threats and an uncompromising focus on state-of-the-art security innovations that are built right into our DNA, from silicon to software, from the factory to the cloud. It's our goal to protect the customer's infrastructure, workloads, and data from threats to hardware and risk from third party software and devices. So Gen 11 is just a continuation of the great technological innovations that we've had around providing a zero trust architecture. We're extending our Silicon Root of Trust, and it's just a motion forward for innovating on that Silicon Root of Trust that we've had. So with Silicon Root of Trust, we protect millions of lines of firmware code from malware and ransomware with a digital footprint that's unique to the server. With this Silicon Root of Trust, we're securing over 4 million HPE servers around the world. And beyond that silicon, we're extending this to our partner ecosystem: the authentication of platform components, such as network interface cards and storage controllers, gives us that protection against additional entry points of security threats that can compromise the entire server infrastructure. With this latest version, we're also doing authentication integrity with those components using the Security Protocol and Data Model, or SPDM. But we know that trusted and protected infrastructure begins with a secure supply chain, a layer of protection that starts at the manufacturing floor. HPE provides you optimized protection for ProLiant servers from trusted suppliers to the factories and in transit to the customer. >> Any final messages, Darren, you'd like to share with your audience on engineering for the hybrid world, security overall, and the new Gen 11 servers with the Xeon fourth generation scalable processors? >> Well, it's really about choice. Having the right choice for your compute, and we know HPE ProLiant servers, ProLiant Gen 11 servers together with the new Xeon processors, is the right choice. 
Delivering the capabilities, the performance, and the efficiency that customers need to run their most complex workloads and their most performance-hungry workloads. We're really excited about this next generation of platforms. >> ProLiant Gen 11. Suzi, great customer for Intel. You've got the fourth generation Xeon Scalable processors. We've been tracking multiple generations for both of you guys for many, many years now, the past decade. A lot of growth, a lot of innovation. I'll give you the last word on the series here on this segment. Can you share the collaboration between Intel and HPE? What does it mean and what's that mean for customers? Can you give your thoughts and share your views on the relationship with HPE? >> Yeah, we value, obviously, HPE as one of our key customers. We partner with them from the beginning of when we are defining the product all the way through the development and validation. HPE has been a great partner in making sure that we deliver collaboratively to the needs of their customers and our customers all together, to make sure that we get the best product in the market that meets our customer needs, allowing for the flexibility, the operational efficiency, and the security that our markets demand. >> Darren, Suzi, thank you so much. You know, "Compute Engineered for Your Hybrid World" is really important. Compute is... (John stuttering) We need more compute. (John chuckling) Give us more power and less power on the sustainability side. So a lot of great advances. Thank you so much for spending the time and giving us an overview on the innovation around the Xeon and the ProLiant Gen 11. Appreciate your time. Appreciate it. >> You're welcome. Thanks for having us. >> You're watching "theCUBE's" coverage of "Compute Engineered for Your Hybrid World" sponsored by HPE and Intel. I'm John Furrier with "theCUBE." Thanks for watching. (upbeat music)
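To put Suzi's vRAN market sizing from earlier in the segment in perspective (a roughly $150 million market in 2021 growing at almost 30% CAGR through 2030), the short Python sketch below works out what that compound growth rate implies. It simply takes the quoted base and rate at face value and assumes the rate holds every year; it is an illustration of the arithmetic, not an Intel or HPE forecast.

```python
# Compound-growth sketch for the vRAN market sizing quoted above.
# Assumes the ~30% CAGR applies each year from the 2021 base through 2030;
# illustrative figures only, not a vendor forecast.

base_2021_millions = 150     # "about 150 million" in 2021
cagr = 0.30                  # "almost 30%" compound annual growth rate

size = base_2021_millions
for year in range(2022, 2031):
    size *= 1 + cagr
    if year in (2026, 2030):
        print(f"{year}: roughly ${size:,.0f}M at a {cagr:.0%} CAGR")
```

At that rate the market grows roughly tenfold by 2030, which is why vRAN gets called out alongside edge, HPC, and cloud as a workload worth optimizing for.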
Mohan Rokkam & Greg Gibby | 4th Gen AMD EPYC on Dell PowerEdge: Virtualization
(cheerful music) >> Welcome to theCUBE's continuing coverage of AMD's 4th Generation EPYC launch. I'm Dave Nicholson, and I'm here in our Palo Alto studios talking to Greg Gibby, senior product manager, data center products from AMD, and Mohan Rokkam, technical marketing engineer at Dell. Welcome, gentlemen. >> Mohan: Hello, hello. >> Greg: Thank you. Glad to be here. >> Good to see each of you. Just really quickly, I want to start out. Let us know a little bit about yourselves. Mohan, let's start with you. What do you do at Dell exactly? >> So I'm a technical marketing engineer at Dell. I've been with Dell for around 15 years now, and my goal is to really look at the Dell PowerEdge servers and see how customers take advantage of some of the features we have, especially with the AMD EPYC processors that have just come out. >> Greg, and what do you do at AMD? >> Yeah, so I manage our software-defined infrastructure solutions team, and really it's a cradle to grave where we work with the ISVs in the market, so VMware, Nutanix, Microsoft, et cetera, to integrate the features that we're putting into our processors and make sure they're ready to go and enabled. And then we work with our valued partners like Dell on putting those into actual solutions that customers can buy, and then we work with them to sell those solutions into the market. >> Before we get into the details on the 4th Generation EPYC launch and what that means and why people should care, Mohan, maybe you can tell us a little about the relationship between Dell and AMD, how that works, and then Greg, if you've got commentary on that afterwards, that'd be great. Yeah, Mohan. >> Absolutely. Dell and AMD have a long standing partnership, right? Especially now with the EPYC series. We have had products since EPYC first generation. We have been doing solutions across the whole range of the Dell ecosystem. We have integrated AMD quite thoroughly and effectively, and we really love how performant these systems are. So, yeah. >> Dave: Greg, what are your thoughts? >> Yeah, I would say the other thing too that we need to point out is that we both have really strong relationships across the entire ecosystem. So memory vendors, the software providers, et cetera, we have technical relationships. We're working with them to optimize solutions so that ultimately when the customer buys that, they get a great user experience right out of the box. >> So, Mohan, I know that you and your team do a lot of performance validation testing as time goes by. I suspect that you had early releases of the 4th Gen EPYC processor technology. What have you been seeing so far? What can you tell us? >> AMD has definitely knocked it out of the park. Time and again, in the past four generations, in the past five years alone, we have done some database work where, in five years, we have seen 5X the performance. And across the board, AMD is the leader in benchmarks. We have done virtualization where we would consolidate from five systems into one system. We have world records in AI, we have world records in databases, we have world records in virtualization. The AMD EPYC solutions have been absolutely performant. I'll leave you with one number here. When we went from top of stack Milan to top of stack Genoa, we saw a performance bump of 120%. And that number just blew my mind. >> So that prompts a question for Greg. Often we, as industry insiders, think in terms of performance gains over the last generation or the current generation. 
A lot of customers in the real world, however, are N - 2. They're a ways back, so I guess two points on that. First of all, the kinds of increases the average person is going to see when they move to this architecture, correct me if I'm wrong, but it's even more significant than a lot of the headline numbers because they're moving two generations, number one. Correct me if I'm wrong on that, but then the other thing is the question to you, Greg. I like very long complicated questions, as you can tell. The question is, is it okay for people to skip generations, or make the case for upgrades, I guess is the problem? >> Well, yeah, so a couple thoughts on that first too. Mohan talked about that five X over the generation improvements that we've seen. The other key point with that too is that we've made significant process improvements along the way, moving from seven nanometer to now five nanometer, and that's really reducing the total amount of power, or improving the performance per watt, that customers can realize as well. And when we look at why would a customer want to upgrade, right? And I want to rephrase that as to why aren't you? And there is a real cost of not upgrading. And so when you look at infrastructure, the average age of a server in the data center is over five years old. And if you look at the most popular processors that were sold in that timeframe, it's 8, 10, 12 cores. So now you've got a bunch of servers that you need in order to deliver the applications and meet your SLAs to your end users, and all those servers pull power. They require maintenance. They have the opportunity to go down, et cetera. You've got to pay licensing and service and support costs and all those. And when you look at all the costs that roll up, even though the hardware is paid for, just to keep the lights on, and not even talking about the soft costs of unplanned downtime, and, "I'm not meeting your SLAs," et cetera, it's very expensive to keep those servers running. Now, if you refresh, and now you have processors that have 32, 64, 96 cores, now you can consolidate that infrastructure and reduce your total power bill. You can reduce your CapEx, you reduce your ongoing OpEx, you improve your performance, and you improve your security profile. So it really is more cost effective to refresh than not to refresh. >> So, Mohan, what has your experience been double clicking on this topic of consolidation? I know that we're going to talk about virtualization in some of the results that you've seen. What have you seen in that regard? Does this favor better consolidation in virtualized environments? And are you both assuring us that the ROI and TCO pencil out on these new big, bad machines? >> Greg definitely hit the nail on the head, right? We are seeing tremendous savings really, if you're consolidating from two generations old. We went from, as I said, five to one. You're going from five full servers, probably paid off, down to one single server. That itself is significant if you look at licensing costs, which, with things like VMware, do get pretty expensive. If you move to a single system, yes, we are at 32, 64, 96 cores, but if you compare to the licensing costs of 10 cores, two sockets, that's still pretty significant, right? That's one huge thing. Another thing which actually really drives the upgrade is security, and in today's environment, security becomes a major driving factor for upgrades. 
Dell has its own cyber-resilient architecture, as we call it, and that really is integrated from the processor all the way up into the OS. And those are some of the features which customers really can take advantage of to help protect their ecosystems. >> So what kinds of virtualized environments did you test? >> We have done virtualization across private clouds with VMware, but also Azure Stack, and we have looked at Nutanix. PowerFlex is another one within Dell. We have vSAN Ready Nodes. All of these, OpenShift, we have a broad variety of solutions from Dell, and AMD really fits into almost every one of them very well. >> So where does hyper-converged infrastructure fit into this puzzle? We can think of a server as something that contains not only AMD's latest architecture but also the latest PCIe bus technology and all of the faster memory, faster storage cards, faster NICs, all of that comes together. But how does that play out in Dell's hyper-converged infrastructure or HCI strategy? >> Dell is a leader in hyper-converged infrastructure. We have the very popular VxRail line, we have PowerFlex, which is now going into the AWS ecosystem as well, Nutanix, and of course, Azure Stack. With all these, when you look at AMD, we have up to 96 cores coming in. We have PCIe Gen 5, which means you can now connect dual port 100 and 200 gig NICs and get line rate on those, so you can connect to your ecosystem. And I don't know if you've seen the news, 200 and 400 gig routers and switches are selling out. That's not slowing down. The network infrastructure is booming. If you want to look at the AI/ML side of things, the VDI side of things, accelerator cards are becoming more and more powerful, more and more popular. And of course they need that higher end data path that PCIe Gen 5 brings to the table. DDR5 is another huge improvement in terms of performance and latencies. So when we take all this together, you talk about hyper-converged, all of them add up to making sure that A, with hyper-converged, you get ease of management, but B, just 'cause you have ease of management doesn't mean you need to compromise on anything. And the AMD servers effectively are a no compromise offering that we at Dell are able to offer to our customers. >> So Greg, I've got a question a little bit from left field for you. We covered Supercompute Conference 2022. We were in Dallas a couple of weeks ago, and there was a lot of discussion of the current processor manufacturer battles, and a lot of buzz around 4th Gen EPYC being launched and what's coming over the next year. Do you have any thoughts on what this architecture can deliver for us in terms of things like AI? We talk about virtualization, but if you look out over the next year, do you see this kind of architecture driving significant change in the world? >> Yeah, yeah, yeah, yeah. It has the real potential to do that from just the building blocks. So we have what we call our chiplet architecture. You have an IO die and then you have your core complexes that go around that. And we integrate it all with our Infinity Fabric. That architecture allows you, if we wanted to, to replace some of those CCDs with specific accelerators. And so when we look two, three, four years down the road, that architecture and that capability are already built into what we're delivering and can easily be moved in. 
We just need to make sure that when you look at doing that, the power that's required to do that and the software, et cetera, and those accelerators actually deliver better performance as a dedicated engine versus just using standard CPUs. The other thing that I would say too is to look at emerging workloads. So data center modernization is one of the buzzwords in cloud native, right? And these container environments, well, AMD's architecture really just screams support for those types of environments, right? Where when you get into these larger core counts and the consolidation that Mohan talked about. Now when I'm in a container environment, that blast radius that a lot of customers have concerns around, "Hey, having a single point of failure and having more than X number of cores concerns me," if I'm in containers, that becomes less of a concern. And so when you look at cloud native, containerized applications, data center modernization, AMD's extremely well positioned to take advantage of those use cases as well. >> Yeah, Mohan, and when we talk about virtualization, I think sometimes we have to remind everyone that yeah, we're talking about not only virtualization that has a full-blown operating system in the bucket, but also virtualization where the containers have microservices and things like that. I think you had something to add, Mohan. >> I did, and I think going back to the accelerator side of business, right? When we are looking at the current technology and looking at accelerators, AMD has done a fantastic job of adding in features like AVX-512, and we have the bfloat16 and INT8 features. And some of what these do is they're effectively built-in accelerators for certain workloads, especially in the AI and media spaces. And some of these use cases we look at, for example, are inference. Traditionally we have used external accelerator cards, but for some of the entry level and mid-level use cases, the CPU is going to work just fine, especially with the newer CPUs that we are seeing this fantastic performance from. The accelerators just help get us to the point where if I'm at the edge, if I'm in certain use cases, I don't need to have an accelerator in there. I can run most of my inference workloads right on the CPU. >> Yeah, yeah. You know the game. It's an endless chase to find the bottleneck. And once we've solved the puzzle, we've created a bottleneck somewhere else. Back to the supercompute conversations we had, specifically about some of the AMD EPYC processor technology and the way that Dell is packaging it up and leveraging things like connectivity. That was one of the things that was also highlighted. This idea that increasingly connectivity is critically important, not just for supercomputing, but for high-performance computing that's finding its way out of the realms of Los Alamos and down to the enterprise level. Gentlemen, any more thoughts about the partnership or maybe a hint at what's coming in the future? I know that the original AMD announcement was announcing and previewing some things that are rolling out over the next several months. So let me just toss it to Greg. What are we going to see in 2023 in terms of rollouts that you can share with us? >> That I can share with you? Yeah, so I think look forward to more advancements in the technology at the core level. I think we've already announced our product codenamed Bergamo, where we'll have up to 128 cores per socket. 
And then as we look at how we continually address this demand for data, this demand for "I need actionable insights immediately," look for us to continue to drive performance leadership in our products that are coming out, and to address specific workloads with accelerators where appropriate and where we see a growing market. >> Mohan, final thoughts. >> On the Dell side, of course, we have four very rich and configurable options with AMD EPYC servers. But beyond that, you'll see a lot more solutions. Some of what Greg has been talking about around the next generation of processors or the next updated processors, you'll start seeing some of those. And you'll definitely see more use cases from us and how customers can implement them and take advantage of the features. It's just exciting stuff. >> Exciting stuff indeed. Gentlemen, we have a great year ahead of us. As we approach the holiday season, I wish both of you well. Thank you for joining us. From here in the Palo Alto studios, again, Dave Nicholson here. Stay tuned for our continuing coverage of AMD's 4th Generation EPYC launch. Thanks for joining us. (cheerful music)
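The consolidation and licensing argument Greg and Mohan make in this segment (several paid-off but low-core-count servers replaced by one high-core-count system, with per-socket virtualization licensing often the largest line item) comes down to simple arithmetic. The Python sketch below runs that math; every input, from server counts and wattages to the electricity price and the per-socket license cost, is an illustrative assumption rather than a Dell, AMD, or VMware figure.

```python
# Rough server-consolidation cost sketch. All numbers are illustrative
# assumptions, not vendor pricing.

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(servers, avg_watts, dollars_per_kwh=0.12):
    """Yearly electricity cost for a fleet at an assumed average draw."""
    kwh = servers * avg_watts * HOURS_PER_YEAR / 1000
    return kwh * dollars_per_kwh

# Legacy fleet: five older two-socket servers (say, 10-12 cores per socket).
legacy_power = annual_power_cost(servers=5, avg_watts=450)
legacy_licenses = 5 * 2 * 4_000          # assumed $4,000 per socket per year

# Consolidated target: one current two-socket, high-core-count server.
new_power = annual_power_cost(servers=1, avg_watts=700)
new_licenses = 1 * 2 * 4_000

legacy_total = legacy_power + legacy_licenses
new_total = new_power + new_licenses
print(f"Legacy fleet : ${legacy_total:,.0f}/yr in power and licenses")
print(f"Consolidated : ${new_total:,.0f}/yr in power and licenses")
print(f"Indicative savings: ${legacy_total - new_total:,.0f}/yr")
```

With these assumed inputs the per-socket licensing, not the electricity, dominates the savings, which matches Mohan's point that license consolidation is often the biggest reason the five-to-one move pencils out.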
Next Gen Servers Ready to Hit the Market
(upbeat music) >> The market for enterprise servers is large and it generates well north of $100 billion in annual revenue, and it's growing consistently in the mid to high single digit range. Right now, like many segments, the market for servers is, it's like slingshotting, right? Organizations, they've been replenishing their install bases and upgrading, especially at HQs coming out of the isolation economy. But the macro headwinds, as we've reported, are impacting all segments of the market. CIOs, you know, they're tapping the brakes a little bit, sometimes quite a bit and being cautious with both capital expenditures and discretionary opex, particularly in the cloud. They're dialing it down and just being a little bit more, you know, cautious. The market for enterprise servers, it's dominated as you know, by x86 based systems with an increasingly large contribution coming from alternatives like ARM and NVIDIA. Intel, of course, is the largest supplier, but AMD has been incredibly successful competing with Intel because of its focus, its outsourced manufacturing model and its innovation and very solid execution. Intel's frequent delays with its next generation Sapphire Rapids CPUs, now slated for January 2023, have created an opportunity for AMD. Specifically, AMD's next generation EPYC CPUs, codenamed Genoa, will offer as many as 96 Zen 4 cores per CPU when it launches later on this month. Observers can expect really three classes of Genoa. There's a standard Zen 4 compute platform for general purpose workloads, there's a compute density optimized Zen 4 package and then a cache optimized version for data intensive workloads. Indeed, the makers of enterprise servers are responding to customer requirements for more diversity in server platforms to handle different workloads, especially those high performance data-oriented workloads that are being driven by AI and machine learning and high performance computing, HPC needs. OEMs like Dell, they're going to be tapping these innovations and trying to get to the market early. Dell, in particular, will be using these systems as the basis for its next generation Gen 16 servers, which are going to bring new capabilities to the market. Now, of course, Dell is not alone. You've got other OEMs, you've got HPE, Lenovo, you've got ODMs, you've got the cloud players, they're all going to be looking to keep pace with the market. Now, the other big trend that we've seen in the market is the way customers are thinking about or should be thinking about performance. No longer is the clock speed of the CPU the sole and most indicative performance metric. There's much more emphasis in innovation around all those supporting components in a system, specifically the parts of the system that take advantage, for example, of faster bus speeds. We're talking about things like network interface cards and RAID controllers and memories and other peripheral devices that, in combination with microprocessors, determine how well systems can perform, and those kinds of things around compute operations, IO and other critical tasks. Now, the combinatorial factors ultimately determine the overall performance of the system and how well suited a particular server is to handling different workloads. So we're seeing OEMs like Dell, they're building flexibility into their offerings and putting out products in their portfolios that can meet the changing needs of their customers. Welcome to our ongoing series where we investigate the critical question, does hardware matter? 
My name is Dave Vellante, and with me today to discuss these trends and the things that you should know about for the next generation of server architectures is former CTO from Oracle and EMC and adjunct faculty at the Wharton CTO Academy, David Nicholson. Dave, always great to have you on "theCUBE." Thanks for making some time with me. >> Yeah, of course, Dave, great to be here. >> All right, so you heard my little spiel in the intro, that summary, >> Yeah. >> Was it accurate? What would you add? What do people need to know? >> Yeah, no, no, no, 100% accurate, but you know, I'm a resident nerd, so just, you know, some kind of clarification. If we think of things like microprocessor release cycles, it's always going to be characterized as rolling thunder. I think 2023 in particular is going to be this constant release cycle that we're going to see. You mentioned the, (clears throat) excuse me, general processors with 96 cores; shortly after the 96 core release, we'll see that 128 core release that you referenced in terms of compute density. And then, we can talk about what it means in terms of, you know, nanometers and performance per core and everything else. But yeah, no, that's the main thing I would say, is just people shouldn't look at this like a new car's being released on Saturday. This is going to happen over the next 18 months, really. >> All right, so to that point, you think about Dell's next generation systems, they're going to be featuring these new AMD processors, but to your point, when you think about performance claims, in this industry, it's a moving target. It's that, you call it a rolling thunder. So what does that game of hopscotch, if you will, look like? How do you see it unfolding over the next 12 to 18 months? >> So out of the gate, you know, slated as of right now for a November 10th release, AMD's going to be first to market with, you know, everyone will argue, but first to market with five nanometer technology in production systems, 96 cores. What's important though is, those microprocessors are going to be resident on motherboards from Dell that feature things like PCIe 5.0 technology. So everything surrounding the microprocessor complex is faster. Again, going back to this idea of rolling thunder, we expect the Gen 16 PowerEdge servers from Dell to similarly be rolled out in stages with initial releases that will address certain specific kinds of workloads and follow on releases with a variety of systems configured in a variety of ways. >> So I appreciate you painting a picture. Let's kind of stay inside under the hood, if we can, >> Sure. >> And share with us what we should know about these kind of next generation CPUs. How are companies like Dell going to be configuring them? How important are clock speeds and core counts in these new systems? And what about, you mentioned motherboards, what about next gen motherboards? You mentioned PCIe Gen 5, where does that fit in? So take us inside deeper into the system, please. >> Yeah, so if you will, you know, if you will join me for a moment, let's crack open the box and look inside. It's not just microprocessors. Like I said, they're plugged into a bus architecture that interconnects them. How quickly that interconnect performs is critical. Now, I'm going to give you a statistic that doesn't require a PhD to understand. When we go from PCIe Gen 4 to Gen 5, which is going to be featured in all of these systems, we double the performance. So just, you can write that down, two, 2X. 
The performance is doubled, but the numbers are pretty staggering in terms of giga transfers per second, 128 gigabytes per second of aggregate bandwidth on the motherboard. Again, doubling when going from 4th Gen to 5th Gen. But the reality is, most users of these systems are still on PCIe Gen 3 based systems. So for them, just from a bus architecture perspective, you're doing a 4X or 8X leap in performance, and then all of the peripherals that plug into that faster bus are faster, whether it's RAID controllers or storage controllers or network interface cards. Companies like Broadcom come to mind. All of their components are leapfrogging their prior generation to fit into this ecosystem. >> So I wonder if we could stay with PCIe for a moment and, you know, just understand what Gen 5 brings. You said, you know, 2X, I think we're talking bandwidth here. Is there a latency impact? You know, why does this matter? And just, you know, this premise that these other components increasingly matter more. Which components of the system are we talking about that can actually take advantage of PCIe Gen 5? >> Pretty much all of them, Dave. So whether it's memory plugged in or network interface cards, so communication to the outside world, which computer servers tend to want to do in 2022, controllers that are attached to internal and external storage devices. All of them benefit from this enhancement in performance. And it's, you know, PCI Express performance is measured in essentially bandwidth and throughput in the sense of the numbers of transactions per second that you can do. It's mind numbing, I want to say it's 32 giga transfers per second. And then in terms of bandwidth, again, across the lanes that are available, 128 gigabytes per second. I'm going to have to check if it's gigabits or gigabytes. It's a massive number. And again, it's double what PCIe 4 was before. So what does that mean? Just like the advances in microprocessor technology, you can consolidate massive amounts of work into a much smaller footprint. That's critical because everything in that server is consuming power. So when you look at next generation hardware that's driven by things like AMD Genoa or, you know, the EPYC processors with the Zen 4 microprocessors, for every dollar that you're spending on power and equipment and everything else, you're getting far greater return on your investment. Now, I need to say that we anticipate that these individual servers, if you're out shopping for a server, and that's a very nebulous term because they come in all sorts of shapes and sizes, I think there's going to be a little bit of sticker shock at first until you run the numbers. People will look at an individual server and they'll say, wow, this is expensive, and the peripherals, the things that are going into those slots, are more expensive, but you're getting more bang for your buck. You're getting much more consolidation, lower power usage, and for every dollar, you're getting a greater amount of performance and transactions, which translates up the stack through the application layer and, you know, out to the end user's desire to get work done. >> So I want to come back to that, but let me stay on performance for a minute. You know, we all used to be, when you'd go buy a new PC, you'd be like, what's the clock speed of that? And so, when you think about performance of a system today and how measurements are changing, how should customers think about performance in these next gen systems? 
And where does that, again, where does that supporting ecosystem play? >> So if you are really into the speeds and feeds and what's under the covers, from an academic perspective, you can go in and you can look at the die size that was used to create the microprocessors, the clock speeds, how many cores there are. But really, the answer is look at the benchmarks that are created through testing, especially from third party organizations that test these things for workloads that you intend to use these servers for. So if you are looking to support something like a high performance environment for artificial intelligence or machine learning, look at the benchmarks as they're recorded, as they're delivered, by the entire system. So it's not just about the core. So yeah, it's interesting to look at clock speeds to kind of compare where we are with regards to Moore's Law. Have we been able to continue to track along that path? We know there are physical limitations to Moore's Law from an individual microprocessor perspective, but none of that really matters. What really matters is what can this system that I'm buying deliver in terms of application performance and user requirement performance? So that's what I'd say you want to look for. >> So I presume we're going to see these benchmarks at some point, I'm hoping we can, I'm hoping we can have you back on to talk about them. Is that something that we can expect in the future? >> Yeah, 100%, 100%. Dell, and I'm sure other companies, are furiously working away to demonstrate the advantages of this next gen architecture. If I had to guess, I would say that we are going to see quite a few world records set because of the combination of things, like faster network interface cards, faster storage cards, faster memory, more memory, faster cache, more cache, along with the enhanced microprocessors that are going to be delivered. And you mentioned this is, you know, AMD is sort of starting off this season of rolling thunder, and in a few months, we'll start getting the initial entries from Intel also, and we'll be able to compare where they fit in with what AMD is offering. I'd expect OEMs like Dell to have, you know, a portfolio of products that highlight the advantages of each processor set. >> Yeah, I talked in my open Dave about the diversity of workloads. What are some of those emerging workloads and how will companies like Dell address them in your view? >> So a lot of the applications that are going to be supported are what we think of as legacy application environments. A lot of Oracle databases, workloads associated with ERP, all of those things are just going to get better bang for their buck from a compute perspective. But what we're going to be hearing a lot about, and what the future really holds for us that's exciting, is this arena of artificial intelligence and machine learning. These next gen platforms offer performance that allows us to do things in areas like natural language processing that we just couldn't do before cost effectively. So I think the next few years are going to see a lot of advances in AI and ML that will be debated in the larger culture and that will excite a lot of computer scientists. So that's it, AI/ML are going to be the big buzzwords moving forward. >> So Dave, you talked earlier about this, some people might have sticker shock. So some of the infrastructure pros that are watching this might be, oh, okay, I'm going to have to pitch this, especially in this, you know, tough macro environment. 
I'm going to have to sell this to my CIO, my CFO. So what does this all mean? You know, if they're going to have to pay more, how is it going to affect TCO? How would you pitch that to your management? >> As long as you stay away from per unit cost, you're fine. And again, we don't have necessarily, or I don't have necessarily insider access to street pricing on next gen servers yet, but what I do know from examining what the component suppliers tell us is that these systems are going to be significantly more expensive on a per unit basis. But what does that mean? If the server that you're used to buying for five bucks is now 10 bucks, but it's doing five times as much work, it's a great deal, and anyone who looks at it and says, 10 bucks? It used to only be five bucks, well, the ROI and the TCO, that's where all of this really needs to be measured, and a huge part of that is going to be power consumption. And along with the performance tests that we expect to see coming out imminently, we should also be expecting to see some of those ROI metrics, especially around power consumption. So I don't think it's going to be a problem moving forward, but there will be some sticker shock. I imagine you're going to be able to go in and configure a very, very expensive, fully loaded system on some of these configurators online over the next year. >> So it's consolidation, which means you could do more with less. It's going to be, or more with the same, it's going to be lower power, less cooling, less floor space and lower management overhead, which is kind of now you get into staff, so you're going to have to sort of identify how the staff can be productive in other areas. You're probably not going to fire people hopefully. But yeah, it sounds like it's going to be really a consolidation play. I talked at the open about Intel and AMD and Intel coming out with Sapphire Rapids, you know, of course it's been well documented, it's late but they're now scheduled for January. Pat Gelsinger's talked about this, and of course they're going to try to leapfrog AMD and then AMD is going to respond, you talked about this earlier, so that game is going to continue. How long do you think this cycle will last? >> Forever. (laughs) It's just that there will be periods of excitement like we're going to experience over at least the next year, and then there will be a lull, and then there will be a period of excitement. But along the way, we've got lurkers who are trying to disrupt this market completely. You know, specifically you think about ARM, where the original design point was, okay, you're powered by a battery, you have to fit in someone's pocket. You can't catch on fire and burn their leg. That's sort of the requirement, as opposed to the, you know, the x86 model, which is okay, you have a data center with a raised floor and you have a nuclear power plant down the street. So don't worry about it. As long as an 18-wheeler can get it to where it needs to be, we'll be okay. And so, you would think that over time, ARM is going to creep up as all disruptive technologies do, and we've seen that, we've definitely seen that. But I would argue that we haven't seen it happen as quickly as maybe some of us expected. And then you've got NVIDIA kind of off to the side starting out, you know, heavy in the GPU space saying, hey, you know what, you can use the stuff we build for a whole lot of really cool new stuff. So they're running in a different direction, sort of gnawing at the traditional x86 vendors certainly. 
>> Yes, so I'm glad- >> That's going to be forever. >> I'm glad you brought up ARM and NVIDIA, I think, but you know, maybe it hasn't happened as quickly as many thought, although there's clearly pockets and examples where it is taking shape. But this to me, Dave, talks to the supporting cast. It's not just about the microprocessor unit anymore, specifically, you know, generally, but specifically the x86. It's the supporting, it's the CPU, the NPU, the XPU, if you will, but also all those surrounding components that, to your earlier point, are taking advantage of the faster bus speeds. >> Yeah, no, 100%. You know, look at it this way. A server used to be measured, well, they still are, you know, how many U of rack space does it take up? You had pizza box servers with a physical enclosure. Increasingly, you have the concept of a server in quotes being the aggregation of components that are all plugged together that share maybe a bus architecture. But those things are all connected internally and externally, especially externally, whether it's external storage, certainly networks. You talk about HPC, it's not just one server. It's hundreds or thousands of servers. So you could argue that we are in the era of connectivity, and the real critical changes that we're going to see with these next generation server platforms are really centered on the bus architecture, PCIe 5, and the things that get plugged into those slots. So if you're looking at 25 gig or 100 gig NICs and what that means from a performance and/or consolidation perspective, or things like RDMA over Converged Ethernet, what that means for connecting systems, those factors will be at least as important as the microprocessor complexes. I imagine IT professionals going out and making the decision, okay, we're going to buy these systems with these microprocessors, with this number of cores and memory. Okay, great. But the real work starts when you start talking about connecting all of them together. What does that look like? So yeah, the definition of what constitutes a server and what's critically important I think has definitely changed. >> Dave, let's wrap. What can our audience expect in the future? You talked earlier about you're going to be able to get benchmarks, so that we can quantify these innovations that we've been talking about, bring us home. >> Yeah, I'm looking forward to taking a solid look at some of the performance benchmarking that's going to come out, these legitimate attempts to set world records, and those questions about ROI and TCO. I want solid information about what my dollar is getting me. I think it helps the server vendors to be able to express that in a concrete way, because our understanding is these things on a per unit basis are going to be more expensive and you're going to have to justify them. So that's really what, it's the details that are going to come the day of the launch and in subsequent weeks. So I think we're going to be busy for the next year focusing on a lot of hardware that, yes, does matter. So, you know, hang on, it's going to be a fun ride. >> All right, Dave, we're going to leave it there. Thank you so much, my friend. Appreciate you coming on. >> Thanks, Dave. >> Okay, and don't forget to check out the special website that we've set up for this ongoing series. Go to doeshardwarematter.com and you'll see commentary from industry leaders, we've got analysts on there, technical experts from all over the world. Thanks for watching, and we'll see you next time. (upbeat music)
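The PCIe figures David Nicholson cites in this segment (32 giga transfers per second per lane and roughly 128 gigabytes per second of aggregate bandwidth, doubling each generation) can be sanity-checked with a few lines of Python. The sketch below assumes a x16 link, the 128b/130b line encoding used by PCIe 3.0 through 5.0, and that the aggregate figure counts both directions; those assumptions are ours, not stated in the conversation.

```python
# Back-of-the-envelope PCIe bandwidth check for a x16 link.

GENERATIONS = {
    "PCIe 3.0": 8.0,     # giga transfers per second, per lane
    "PCIe 4.0": 16.0,
    "PCIe 5.0": 32.0,
}
ENCODING = 128 / 130     # usable payload fraction with 128b/130b encoding

def per_direction_gb_per_s(gt_per_s, lanes=16):
    """Usable bandwidth in gigabytes per second, one direction."""
    return gt_per_s * ENCODING * lanes / 8   # 8 bits per byte

for gen, rate in GENERATIONS.items():
    one_way = per_direction_gb_per_s(rate)
    print(f"{gen}: ~{one_way:.0f} GB/s per direction on x16, "
          f"~{2 * one_way:.0f} GB/s counting both directions")
```

The per-lane rate doubles each generation, which is where the 2X gen-over-gen claim comes from; a Gen 5 x16 link lands right around the quoted 128 GB/s once both directions are counted, and jumping from Gen 3 straight to Gen 5 is the roughly 4X leap mentioned in the conversation.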
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
David Nicholson | PERSON | 0.99+ |
January 2023 | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
January | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
November 10th | DATE | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
10 bucks | QUANTITY | 0.99+ |
five bucks | QUANTITY | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
100 gig | QUANTITY | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Saturday | DATE | 0.99+ |
128 core | QUANTITY | 0.99+ |
25 gig | QUANTITY | 0.99+ |
96 cores | QUANTITY | 0.99+ |
five times | QUANTITY | 0.99+ |
2X | QUANTITY | 0.99+ |
96 core | QUANTITY | 0.99+ |
8X | QUANTITY | 0.99+ |
4X | QUANTITY | 0.99+ |
96 | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
2022 | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
doeshardwarematter.com | OTHER | 0.98+ |
5th Gen. | QUANTITY | 0.98+ |
4th Gen | QUANTITY | 0.98+ |
ARM | ORGANIZATION | 0.98+ |
18-wheeler | QUANTITY | 0.98+ |
Z4 | COMMERCIAL_ITEM | 0.97+ |
first | QUANTITY | 0.97+ |
Intel | ORGANIZATION | 0.97+ |
2023 | DATE | 0.97+ |
Zen 4 | COMMERCIAL_ITEM | 0.97+ |
Sapphire Rapids | COMMERCIAL_ITEM | 0.97+ |
thousands | QUANTITY | 0.96+ |
one server | QUANTITY | 0.96+ |
double | QUANTITY | 0.95+ |
PCIe Gen 4 | OTHER | 0.95+ |
Sapphire Rapid CPUs | COMMERCIAL_ITEM | 0.94+ |
PCIe Gen 3 | OTHER | 0.93+ |
PCIe 4 | OTHER | 0.93+ |
x86 | COMMERCIAL_ITEM | 0.92+ |
Wharton CTO Academy | ORGANIZATION | 0.92+ |
Next Gen Analytics & Data Services for the Cloud that Comes to You | An HPE GreenLake Announcement
(upbeat music) >> Welcome back to theCUBE's coverage of HPE GreenLake announcements. We're seeing the transition of Hewlett Packard Enterprise as a company, yes they're going all in for as a service, but we're also seeing a transition from a hardware company to what I look at increasingly as a data management company. We're going to talk today to Vishal Lall who's GreenLake cloud services solutions at HPE and Matt Maccaux who's a global field CTO, Ezmeral Software at HPE. Gents welcome back to theCube. Good to see you again. >> Thank you for having us here. >> Thanks Dave. >> So Vishal let's start with you. What are the big mega trends that you're seeing in data? When you talk to customers, when you talk to partners, what are they telling you? What's your optic say? >> Yeah, I mean, I would say the first thing is data is getting even more important. It's not that data hasn't been important for enterprises, but as you look at the last, I would say 24 to 36 months has become really important, right? And it's become important because customers look at data and they're trying to stitch data together across different sources, whether it's marketing data, it's supply chain data, it's financial data. And they're looking at that as a source of competitive advantage. So, customers were able to make sense out of the data, enterprises that are able to make sense out of that data, really do have a competitive advantage, right? And they actually get better business outcomes. So that's really important, right? If you start looking at, where we are from an analytics perspective, I would argue we are in maybe the third generation of data analytics. Kind of the first one was in the 80's and 90's with data warehousing kind of EDW. A lot of companies still have that, but think of Teradata, right? The second generation more in the 2000's was around data lakes, right? And that was all about Hadoop and others, and really the difference between the first and the second generation was the first generation was more around structured data, right? Second became more about unstructured data, but you really couldn't run transactions on that data. And I would say, now we are entering this third generation, which is about data lake houses, right? Customers what they want really is, or enterprises, what they want really is they want structured data. They want unstructured data altogether. They want to run transactions on them, right? They want to use the data to mine it for machine learning purposes, right? Use it for SQL as well as non-SQL, right? And that's kind of where we are today. So, that's really what we are hearing from our customers in terms of at least the top trends. And that's how we are thinking about our strategy in context of those trends. >> So lake house use that term. It's an increasing popular term. It connotes, "Okay, I've got the best of data warehouse "and I've got the best of data lake. "I'm going to try to simplify the data warehouse. "And I'm going to try to clean up the data swamp "if you will." Matt, so, talk a little bit more about what you guys are doing specifically and what that means for your customers. >> Well, what we think is important is that there has to be a hybrid solution, that organizations are going to build their analytics. They're going to deploy algorithms, where the data either is being produced or where it's going to be stored. And that could be anywhere. That could be in the trunk of a vehicle. 
It could be in a public cloud or in many cases, it's on-premises in the data center. And where organizations struggle is they feel like they have to make a choice and a trade-off going from one to the other. And so what HPE is offering is a way to unify the experiences of these different applications, workloads, and algorithms, while connecting them together through a fabric so that the experience is tied together with consistent, security policies, not having to refactor your applications and deploying tools like Delta lake to ensure that the organization that needs to build a data product in one cloud or deploy another data product in the trunk of an automobile can do so. >> So, Vishal I wonder if we could talk about some of the patterns that you're seeing with customers as you go to deploy solutions. Are there other industry patterns? Are there any sort of things you can share that you're discerning? >> Yeah, no, absolutely. As we kind of hear back from our customers across industries, I think the problem sets are very similar, right? Whether you look at healthcare customers. You look at telco customers, you look at consumer goods, financial services, they're all quite similar. I mean, what are they looking for? They're looking for making sense, making business value from the data, breaking down the silos that I think Matt spoke about just now, right? How do I stitch intelligence across my data silos to get more business intelligence out of it. They're looking for openness. I think the problem that's happened is over time, people have realized that they are locked in with certain vendors or certain technologies. So, they're looking for openness and choice. So that's an important one that we've at least heard back from our customers. The other one is just being able to run machine learning on algorithms on the data. I think that's another important one for them as well. And I think the last one I would say is, TCO is important as customers over the last few years have realized going to public cloud is starting to become quite expensive, to run really large workloads on public cloud, especially as they want to egress data. So, cost performance, trade offs are starting to become really important and starting to enter into the conversation now. So, I would say those are some of the key things and themes that we are hearing from customers cutting across industries. >> And you talked to Matt about basically being able to essentially leave the data where it belongs, bring the compute to data. We talk about that all the time. And so that has to include on-prem, it's got to include the cloud. And I'm kind of curious on the edge, where you see that 'cause that's... Is that an eventual piece? Is that something that's actually moving in parallel? There's lot of fuzziness as an observer in the edge. >> I think the edge is driving the most interesting use cases. The challenge up until recently has been, well, I think it's always been connectivity, right? Whether we have poor connection, little connection or no connection, being able to asynchronously deploy machine learning jobs into some sort of remote location. Whether it's a very tiny edge or it's a very large edge, like a factory floor, the challenge as Vishal mentioned is that if we're going to deploy machine learning, we need some sort of consistency of runtime to be able to execute those machine learning models. Yes, we need consistent access to data, but consistent access in terms of runtime is so important. 
And I think Hadoop got us started down this path, the ability to very efficiently and cost-effectively run large data jobs against large data sets. And it attempted to work in the open source ecosystem, but because of the monolithic deployment, the tight coupling of the compute and the data, it never achieved that cloud native vision. And so what Ezmeral at HPE through GreenLake services is delivering, with open source-based Kubernetes, open source Apache Spark, open source Delta Lake libraries, is those same cloud native services that you can develop on your workstation, deploy in your data center, and deploy in the same way through automation out at the edge. And I think that is what's so critical about what we're going to see over the next couple of years. The edge is driving these use cases, but it's consistency to build and deploy those machine learning models and connect them consistently with data that's what's going to drive organizations to success. >> So you're saying you're able to decouple the compute from the storage. >> Absolutely. You wouldn't have a cloud if you didn't decouple compute from storage. And I think this is sort of the demise of Hadoop: it was forcing that coupling. We have high-speed networks now. Whether I'm in a cloud or in my data center, even at the edge, I have high-performance networks, I can now do distributed computing and separate compute from storage. And so if I want to, I can have high-performance compute for my really data intensive applications and I can have cost-effective storage where I need to. And by separating that off, I can now innovate at the pace of those individual tools in that open source ecosystem. >> So, can I stay on this for a second 'cause you certainly saw Snowflake popularize that, they were kind of early on. I don't know if they're the first, but they're certainly one of the most successful. And you saw Amazon Redshift copy it. And Redshift was kind of a bolt on. What essentially they did is they tiered it off. You could never turn off the compute. You still had to pay for a little bit of compute, that's kind of interesting. Snowflake has the t-shirt sizes, so there's trade offs there. There's a lot of ways to skin the cat. How did you guys skin the cat? >> What we believe we're doing is we're taking the best of those worlds. Through GreenLake cloud services, the ability to pay for and provision on demand the computational services you need. So, if someone needs to spin up a Delta Lake job to execute a machine learning model, you spin up that. We're of course spinning that up behind the scenes. The job executes, it spins down, and you only pay for what you need. And we've got reserve capacity there, of course, just like you would in the public cloud. But more importantly, being able to then extend that through a fabric across clouds and edge locations, so that if a customer wants to deploy in some public cloud service, like we know we're going to, again, we're giving that consistency across that, and exposing it through an S3 API. >> So, Vishal, at the end of the day, I mean, I love to talk about the plumbing and the tech, but the customer doesn't care, right? They want the lowest cost. They want the fastest outcome. They want the greatest value. My question is, how are you seeing data organizations evolve to sort of accommodate this third era of this next generation? >> Yeah. I mean, the way I at least kind of look at it, from a customer perspective, what they're trying to do is, first of all, I think Matt addressed it somewhat.
They're looking at a consistent experience across the different groups of people within the company that do something to data, right? It could be SQL users, people who are just writing SQL code. It could be people who are writing machine learning models and running them. It could be people who are writing code in Spark. Right now, you know, the experience is completely disjointed across them, across the three types of users or more. And so that's one thing that they're trying to do, is just try to get that consistency. We spoke about performance. I mean, the disjointedness between compute and storage does provide the agility, because customers are looking for elasticity. How can I have an elastic environment? So, that's kind of the other thing they're looking at. And performance and TCO, I think, are a big deal now. So, I think that that's definitely on a customer's mind. So, as enterprises are looking at their data journey, those are at least the attributes that they are trying to hit as they organize themselves to make the most out of the data. >> Matt, you and I have talked about this sort of trend to the decentralized future. We're sort of hitting on that. And whether it's in a first gen data warehouse, second gen data lake, data hub, bucket, whatever, that essentially should ideally stay where it is, wherever it should be from a performance standpoint, from a governance standpoint and a cost perspective, and just be a node on this, I like the term data mesh, but be a node on that, and essentially allow the business owners, those with domain context, you've mentioned data products before, to actually build data products, maybe air quotes, but a data product is something that can be monetized. Maybe it cuts costs. Maybe it adds value in other ways. How do you see HPE fitting into that long-term vision which we know is going to take some time to play out? >> I think what's important for organizations to realize is that they don't have to go to the public cloud to get that experience they're looking for. Many organizations are still reluctant to push all of their data, their critical data, that is going to be the next way to monetize the business, into the public cloud. And so what HPE is doing is bringing the cloud to them. Bringing that cloud from the infrastructure, the virtualization, the containerization, and most importantly, those cloud native services. So, they can do that development rapidly, test it, using those open source tools and frameworks we spoke about. And if that model ends up being deployed on a factory floor, on some common X86 infrastructure, that's okay, because the lingua franca is Kubernetes. And as Vishal mentioned, Apache Spark, these are the common tools and frameworks. And so I want organizations to think about this unified analytics experience, where they don't have to trade off security for cost, efficiency for reliability. HPE through GreenLake cloud services is delivering all of that where they need to do it. >> And what about the speed to quality trade-off? Have you seen that pop up in customer conversations, and how are organizations dealing with that? >> Like I said, it depends on what you mean by speed. Do you mean computational speed? >> No, accelerating the time to insights, if you will. We've got to go faster, faster, agile to the data. And it's like, whoa, move fast, break things; whoa, whoa, what about data quality and governance, right? They seem to be at odds. >> Yeah, well, because the processes are fundamentally broken.
You've got a developer who maybe is able to spin up an instance in the public cloud to do their development, but then to actually do model training, they bring it back on-premises, but they're waiting for a data engineer to get them the data available. And then the tools to be provisioned, which is some esoteric stack. And then runtime is somewhere else. The entire process is broken. So again, by using consistent frameworks and tools, and bringing that computation to where the data is, and sort of blowing this construct of pipelines out of the water, I think is what is going to drive that success in the future. A lot of organizations are not there yet, but that's I think aspirationally where they want to be. >> Yeah, I think you're right. I think that is potentially an answer as to how you, not incrementally, but revolutionize sort of the data business. Last question: talking about GreenLake, how this all fits in. Why GreenLake? Why do you guys feel as though it's differentiable in the marketplace? >> So, I mean, something that you asked earlier as well, time to value, right? I think that's a very important attribute and kind of a design factor as we look at GreenLake. If you look at GreenLake overall, kind of what does it stand for? It stands for experience. How do we make sure that we have the right experience for the users, right? We spoke about it in the context of data. How do we have a similar experience for different users of data, but just broadly across an enterprise? So, it's all about experience. How do you automate it, right? How do you automate the workloads? How do you provision fast? How do you give folks a cloud experience, an experience that they have been used to in the public cloud, or using an Apple iPhone? So it's all about experience, I think that's number one. Number two is about choice and openness. I mean, as we look at it, GreenLake is not a proprietary platform. We are very, very clear that one of the important design principles is about choice and openness. And that's the reason you hear us talk about Kubernetes, about Apache Spark, about Delta Lake, et cetera, et cetera, right? We're using kind of those open source models where customers have a choice. If they don't want to be on GreenLake, they can go to public cloud tomorrow. Or they can run in our colos if they want to do it that way, or in their colos, if they want to do it. So they should have the choice. Third is about performance. I mean, what we've done is, it's not just about the software, but we as a company know how to configure infrastructure for that workload. And that's an important part of it. I mean, if you think about the machine learning workloads, we have the right Nvidia chips that accelerate those transactions. So, that's kind of the third one, and the last one, I think, as I spoke about earlier, is cost. We are very focused on TCO, but from a customer perspective, we want to make sure that we are giving a value proposition which is not just about experience and performance and openness, but also about cost. So if you think about GreenLake, that's kind of the value proposition that we bring to our customers across those four dimensions. >> Guys, great conversation. Thanks so much, really appreciate your time and insights. >> Matt: Thanks for having us here, David. >> All right, you're welcome. And thank you for watching everybody. Keep it right there for more great content from HPE GreenLake announcements. You're watching theCUBE. (upbeat music)
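To make the pattern discussed above a bit more concrete, the sketch below shows what spinning up a Spark job against Delta Lake tables over an S3-compatible endpoint can look like. This is an illustrative sketch only: the endpoint, bucket, table path, credentials, and library versions are assumptions, not details of GreenLake or Ezmeral themselves.

```python
# A minimal sketch: write and read a Delta Lake table through an S3-compatible
# object store, the kind of flow described in the conversation above.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("greenlake-delta-sketch")
    # Delta Lake and S3A support; versions are illustrative, pick ones matching your Spark
    .config("spark.jars.packages",
            "io.delta:delta-spark_2.12:3.1.0,org.apache.hadoop:hadoop-aws:3.3.4")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    # Point the S3A connector at an on-prem, S3-compatible endpoint (hypothetical URL)
    .config("spark.hadoop.fs.s3a.endpoint", "https://objects.greenlake.example.local")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Land raw events as a Delta table, then read them back for a training job.
events = spark.createDataFrame(
    [("sensor-01", 21.4), ("sensor-02", 19.8)], ["device_id", "temp_c"]
)
events.write.format("delta").mode("append").save("s3a://edge-landing/events")

training_df = spark.read.format("delta").load("s3a://edge-landing/events")
print(training_df.count())
```

Because the same open source pieces (Kubernetes, Spark, Delta Lake) run in the data center, at the edge, or in a public cloud, a job like this can move between those locations without being rewritten.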
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Vishal | PERSON | 0.99+ |
Matt Maccaux | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Matt | PERSON | 0.99+ |
24 | QUANTITY | 0.99+ |
Vishal Lall | PERSON | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
second generation | QUANTITY | 0.99+ |
first generation | QUANTITY | 0.99+ |
third generation | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Spark | TITLE | 0.99+ |
Third | QUANTITY | 0.99+ |
first one | QUANTITY | 0.99+ |
36 months | QUANTITY | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
second generation | QUANTITY | 0.99+ |
telco | ORGANIZATION | 0.99+ |
GreenLake | ORGANIZATION | 0.98+ |
Redshift | TITLE | 0.98+ |
first gen | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
one thing | QUANTITY | 0.98+ |
Teradata | ORGANIZATION | 0.98+ |
third one | QUANTITY | 0.97+ |
SQL | TITLE | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
second gen | QUANTITY | 0.96+ |
S3 | TITLE | 0.96+ |
today | DATE | 0.96+ |
Ezmeral Software | ORGANIZATION | 0.96+ |
Apple | ORGANIZATION | 0.96+ |
three types | QUANTITY | 0.96+ |
2000's | DATE | 0.95+ |
third | QUANTITY | 0.95+ |
90's | DATE | 0.95+ |
HPE GreenLake | ORGANIZATION | 0.95+ |
TCO | ORGANIZATION | 0.94+ |
Delta lake | ORGANIZATION | 0.93+ |
80's | DATE | 0.91+ |
Number two | QUANTITY | 0.88+ |
last | DATE | 0.88+ |
theCube | ORGANIZATION | 0.87+ |
Amazon | ORGANIZATION | 0.87+ |
Apache | ORGANIZATION | 0.87+ |
Kubernetes | TITLE | 0.86+ |
Kubernetes | ORGANIZATION | 0.83+ |
Hadoop | TITLE | 0.83+ |
first thing | QUANTITY | 0.82+ |
Snowflake | TITLE | 0.82+ |
four dimensions | QUANTITY | 0.8+ |
Holos | TITLE | 0.79+ |
years | DATE | 0.78+ |
second | QUANTITY | 0.75+ |
X86 | TITLE | 0.73+ |
next couple of years | DATE | 0.73+ |
Delta lake | TITLE | 0.69+ |
Apaches Spark | ORGANIZATION | 0.65+ |
RETAIL Next Gen 3soft
>> Hello everyone. And thanks for joining us today. My name is Brent Biddulph, managing director of retail and consumer goods here at Cloudera. Cloudera is very proud to be partnering with companies like 3Soft to provide data and analytic capabilities for over 200 retailers across the world, and understanding why demand forecasting could be considered the heartbeat of retail, and what's at stake, is really no mystery to most retailers. And really just a quick level set before handing this over to my good friend, Kamil at 3Soft. IDC, Gartner, many other analysts kind of summed up an average here that I thought would be important to share just to level set the importance of demand forecasting in retail, and what's at stake, meaning the combined business value for retailers leveraging AI and IOT. So this, above and beyond what demand forecasting has been in the past, is a $371 billion opportunity. And what's critically important to understand about demand forecasting is it directly impacts both the top line and the bottom line of retail. So how does it affect the top line? Retailers that leverage AI and IOT for demand forecasting are seeing average revenue increases of 2%, and think of that as addressing the in stock or out of stock issue in retail, and retail has become much more complex now, in that it's no longer just brick and mortar, of course, but it's fulfillment centers driven by e-commerce. So inventory is now having to be spread over multiple channels. Being able to leverage AI and IOT is driving 2% average revenue increases. Now, if you think about the size of most retailers or the average retailer, that on its face is worth millions of dollars of improvement for any individual retailer. On top of that is balancing your inventory, getting the right product in the right place, and having productive inventory. And that is the bottom line. So the average inventory reduction, leveraging AI and IOT as the analysts have found, and frankly, having spent time in this space myself in the past, a 15% average inventory reduction is significant for retailers, not being overstocked on product in the wrong place at the wrong time. And it touches everything from replenishment to out-of-stocks, labor planning, and customer engagement. For purposes of today's conversation, we're going to focus on inventory and inventory optimization and reducing out-of-stocks. And of course, even small incremental improvements, as I mentioned before, in demand forecast accuracy have millions of dollars of direct business impact, especially when it comes to inventory optimization. Okay. So without further ado, I would like to now introduce Dr. Kamil Volker to share with you what his team has been up to, and some of the amazing things they are driving at top retailers today. So over to you, Kamil. >> I'm happy to be here and I'm happy to speak to you about what we deliver to our customers, but let me first introduce 3Soft. We are a 100 person company based in Europe, in Southern Poland, and with 18 years of experience we specialize in providing what we call a data driven business approach to our customers. Our roots are in solutions and in services. We originally started as a software house, and on top of that, we build our solutions. We've been building automation and delivering software for the biggest enterprises in Poland; further, we understood the meaning of data and data management and how it can be translated into business profits.
Adding artificial intelligence on top of that makes our solutions portfolio holistic, which enables us to realize very complex projects, which leverage all of those three pillars of our business. However, in recent times, we also understood that services are something which only the best and biggest companies can afford at scale. And we believe that the future of retail demand forecasting is in product solutions. So that's why we created Occubee, our AI platform for data driven retail that also covers this area that we talked about today. I'm personally proud to be responsible for our technology partnerships with Cloudera and Microsoft. It's a great pleasure to work with such great companies and to be able to deliver the solutions to our customers together, based on a common trust and understanding of the business, which culminates in customer success at the end. So why should we analyze data in retail? Why is it so important? It's kind of obvious that there is a lot of potential in the data per se, but also understanding the different areas where it can be used in retail is very important. We believe that thanks to using data, it's basically easier to derive good decisions for the business based on facts and not intuition anymore. The four areas that we observe in retail are, first, online data analysis; that's the fastest growing sector, let's say, for those data analytics services, which is of course based on the e-commerce and online channels' availability to the customer. The pandemic only sped up this process of engagement of the customers in that channel, of course, but traditional offline, let's say brick and mortar, shops still play the biggest role for most of the retailers, especially in the FMCG sector. However, it's also very important to remember that there are plenty of business related questions that need to be answered from the headquarters perspective. So is it actually a good idea to open a store in a certain place? Is it a good idea to optimize the stock of a certain producer? Is it a good idea to allocate the goods to the online channel in a specific way? Those kinds of questions need to be answered in retail every day. And with that massive amount of factors coming into the equation, it's really not that easy to rely only on intuition and expert knowledge. Of course, as Brent mentioned at the beginning, the supply chain and everything that relates to it is also super important. We observe our customers seeking huge improvements in revenue just from that one single area as well. So let me present you a case study of one of our solutions, one that was delivered to a leading global grocery retailer. The project started with a set of challenges that we had to conquer, and of course the most important was how to limit overstocks and out of stocks. That's like the holy grail in retail, of course: how to do it without flooding the stores with goods and, at the same time, how to avoid empty shelves.
From the perspective of the customer, it was obvious that we needed to provide a very high quality sales forecast, to be able to say what will be the actual sales of the individual product in each store every day. Considering the huge role of perishable goods at this specific grocery retailer, it was a huge challenge to provide a solution that was able to analyze and provide meaningful information about what's there in the sales data and the other factors we analyzed on a daily basis, at scale. However, our holistic approach, implementing AI with a data management background and automation solutions, all together created a platform that was able to significantly increase the sales for our customer just by minimizing out of stocks. At the same time, we managed to not overflood the stock, the shops, with goods, which actually decreased losses significantly, especially on fresh fruit. Having said that, these results of course translate into an increase in revenue, which can be calculated in hundreds of millions of dollars per year. So how does the solution actually work? Well, in principle, it's quite simple. We just collect the data. We do it online, we put that in our data lake based in the cloud, through Cloudera technology, and we implement our artificial intelligence models on top of it. And then, based on the aggregated information, we create the forecast, and we do it every day or every night for every single product in every single store. This information is sent to the warehouses and then the automated replenishment based on the forecast is on the way. The huge and most important aspect of that is the use of the good tools to do the right job. Having said that, you can be sure that there is too much information in this data, and there are actually too many forecasts created every night for any expert to ever check. This means our solution needs to be very robust. It needs to provide information with high quality and high veracity. There are plenty of different business processes which are based on our forecasts, which need to be delivered on time for every product in each individual shop. Observing the success of this project and having the huge market potential in mind, we decided to create our Occubee, which can be used by many retailers who don't want to create dedicated software that will be solving this kind of problem. Occubee is our software as a service offering, which is enabling retailers to go down the data driven path. We create Occubee with retailers, for retailers, implementing artificial intelligence on top of data science models created by our experts, having data analysis in place based on the data management tools that we use, with a retail-first attitude. The uncertain times of the pandemic clearly show that it's very important to apply correction factors, which are sometimes required because we need to respond quickly to the changes in the sales characteristics. That's why Occubee is an open box solution, which means that you basically can implement that in your organization without changing the process internally. It's all about mapping your process into the system, not the other way around. The fast trend and product data collection possibilities allow the retailers to react to any changes which occur in sales every day. Also, it's worth mentioning that really it's not only FMCG, and we believe that the different use cases which we observe in the fashion, health and beauty, home and garden, pharmacy, and electronics flavors of retail are also very meaningful.
They also have one common thread. That's the growing importance of e-commerce. That's why we didn't want to leave that aside of Occubee. And we made everything we can to implement a solution, which covers all the needs. When you think about the factors that affect sales, there is actually huge variety of data that we can analyze. Of course, the transactional data that every dealer possesses, like sales data from sale from stores, from e-commerce channel, also averaging numbers from weeks, months, and years makes sense, but it's also worth to mention that using the right tool that allows you to collect that data from also internal and external sources makes perfect sense for retail. It's very hard to imagine a competitive retailer that is not analyzing the competitor's activity, changes in weather or information about some seasonal stores, which can be very important during the summer and other holidays, for example. But on the other hand, having this information in one place makes the actual benefit and environment for the customer. Demand forecasting seems to be like the most important and promising use case. We can talk about when I think about retail, but it's also the whole process of replenishment that can cover with different sets of machine learning models, and data management tools. We believe that analyzing data from different parts of the retail replenishment process can be achieved with implementing a data management solution based on Cloudera products and with adding some AI on top of it, it makes perfect sense to focus on not only demand forecasting, but also further use cases down the line. When it comes to the actual benefits from implementing solutions for demand management, we believe it's really important to analyze them holistically first it's of course, out of stock minimization, which can be provided by simply better size focus, but also reducing overstocks by better inventory management can be achieved by us in the same time. Having said that, we believe that analyzing data without any specific new equipment required in point of sales is the low hanging fruit that can be easily achieved in almost every industry, in almost every regular customer. >> Hey, thanks, Kamil. Having worked with retailers in this space for a couple of decades, myself, I was really impressed by a couple of things and they might've been understated, frankly, the results of course. I mean, as I kind of set up this session, you doubled the numbers on the statistics that the analysts found. So obviously in customers, you're working with... you're doubling average numbers that the industry overall is having, and most notably how the use of AI or Occubee has automated so many manual tasks of the past, like tour tuning, item profiles, adding new items, et cetera, and also how quickly it felt like, and this is my core question. Your team can cover or provide the solution to not only core center store, for example, in grocery, but you're covering fresh products. And frankly, there are solutions out on the market today that only focus on center store non-perishable departments. I was really impressed by the coverage that you're able to provide as well. So can you articulate kind of what it takes to get up and running and your overall process to roll out the solution? 
I feel like, based on what you talked about and how you were approaching this in leveraging AI, that you're streamlining processes of legacy demand forecasting solutions that required more manual intervention. How quickly can you get people set up? And what is the overall process like to get started with this software? >> Yeah, usually it takes three to six months to onboard a new customer to that kind of solution. And frankly, it depends on the data that the customer has. Usually it's different for smaller and bigger companies, of course, but we believe that it's very important to start with a good foundation. The platform needs to be there, the platform that is able to basically analyze or process different types of data, structured, unstructured, internal, external, and so on. But when you have this platform set, it's all about starting to ingest data there. And usually for smaller companies, it's easier to start with those, let's say, low hanging fruits. So the internal data, which is there, this data has the highest veracity. It's really easy to start with and to work with, because everyone in the organization understands this data. For the bigger companies it might be important to also ingest kind of more unstructured data, some kind of external data that needs to be acquired. So that may influence the length of the process. But we usually start with the customers with workshops. That's very important to understand the reasons, because not every deal is the same. Of course, we believe that the success of our customers comes also due to the fact that we train those models, those AI models, individually to the needs of our customers. >> Totally understand. And POS data, every retailer has, right, in one way, shape or form. And it is the fundamental data point; whether it's e-comm or the brick and mortar data, every retailer has that data. So, that totally makes sense. But what you just described was months; with legacy and other solutions out there, this could be a year or longer process to roll out to the number of stores, for example, that you're scaling to. So that's highly impressive. And my guess is a lot of the barriers that have been knocked down with your solution are the fact that you're running this in the cloud, from a compute standpoint on Cloudera, from a public cloud standpoint on Microsoft. So there's no IT intervention, if you will, or hurdles in preparation to get the database set up and all of the work. I would imagine that's part of the time savings to getting started. Would that be an accurate description? >> Yeah, absolutely. At the same time, this is actually lowering the business risks, because we take the same data and put that into the data lake, which is in the cloud. We do not interfere with the existing processes which are processing this data in the company. So we just use the same data that is already in the company, and we add some external data if needed, but it's all alongside the current customer's infrastructure. So this is also a huge gain, as you said. >> Right. And you're meeting customers where they are, right? So as I said, foundationally, every retailer has POS data; if they want to add weather data or calendar event data, or want to incorporate, of course, online data with offline data, you have a roadmap and the ability to do that. So it is a building block process. So getting started with core data such as POS, online or offline, is the foundational component, which obviously you're very good at.
And then having that ability to then incorporate other data sets is critically important because that just improves demand forecast accuracy, right. By being able to pull in those, those other data sources, if you will. So Kamil, I just have one final question for you. There are plenty of... not plenty, but I mean, there's enough demand forecasting solutions out on the market today for retailers. One of the things that really caught my eye, especially being a former retailer and talking with retailers was the fact that you're promoting an open box solution. And that is a key challenge for a lot of retailers that have seen black box solutions come and go. And especially in this space where you really need direct input from the customer to continue to fine tune and improve forecast accuracy. Could you give just a little bit more of a description or response to your approach to open box versus black box? >> Yeah, of course. So, we've seen in the past the failures of the projects based on the black box approach, and we believe that this is not the way to go, especially with this kind of, let's say specialized services that we provide in meaning of understanding the customer's business first and then applying the solution, because what stands behind our concept in Occubee is the, basically your process in the organization as a retailer, they have been optimized for years already. That's where retailers put their focus for many years. We don't want to change that. We are not able to optimize it properly for sure as IT combined, we are able to provide you a tool which can then be used for mapping those very well optimized process and not to change them. That's our idea. And the open box means that in every process that you will map in the solution, you can then in real time monitor the execution of those processes and see what is the result of every step. That way, we create truly explainable experience for our customers, then can easily go for the whole process and see how the forecast was calculated. And what is the reason for a specific number to be there at the end of the day? >> I think that is invaluable. (indistinct) I really think that is a differentiator and what 3Soft is bringing to market. With that, thanks everyone for joining us today. Let's stay in touch. I want to make sure to leave Kamil's information here. So reach out to him directly, or feel free at any point in time obviously to reach out to me. Again, so glad everyone was able to join today, look forward to talking to you soon.
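As an illustration of the nightly, per-store, per-SKU forecasting flow Kamil describes, here is a minimal Python sketch. The column names, the simple day-of-week seasonal rule, and the hand-off to replenishment are illustrative assumptions, not Occubee's actual models or interfaces.

```python
# A minimal sketch of the nightly flow described above: score every (store, SKU)
# pair from recent sales history and hand the result to replenishment.
import pandas as pd

def nightly_forecast(sales: pd.DataFrame, horizon_days: int = 1) -> pd.DataFrame:
    """sales: columns [date, store_id, sku, units_sold], one row per store/SKU/day."""
    sales = sales.copy()
    sales["date"] = pd.to_datetime(sales["date"])
    sales["dow"] = sales["date"].dt.dayofweek

    # Day-of-week seasonal average over the trailing 8 weeks, per store and SKU.
    recent = sales[sales["date"] >= sales["date"].max() - pd.Timedelta(weeks=8)]
    profile = (recent.groupby(["store_id", "sku", "dow"])["units_sold"]
                     .mean()
                     .rename("forecast_units")
                     .reset_index())

    target_date = sales["date"].max() + pd.Timedelta(days=horizon_days)
    forecast = profile[profile["dow"] == target_date.dayofweek].copy()
    forecast["target_date"] = target_date
    return forecast.drop(columns="dow")

# Example: the output frame is what a warehouse/replenishment system would consume.
history = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=56).repeat(2),
    "store_id": ["S001", "S002"] * 56,
    "sku": ["MILK-1L"] * 112,
    "units_sold": [12, 7] * 56,
})
print(nightly_forecast(history).head())
```

In a production setting this kind of job would run against the full data lake every night and apply far richer models plus correction factors; the point of the sketch is only the shape of the flow: history in, one forecast row per store and product out.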
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kamil | PERSON | 0.99+ |
3Soft | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Brent Biddulph | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Brent | PERSON | 0.99+ |
Cloudera | ORGANIZATION | 0.99+ |
Poland | LOCATION | 0.99+ |
two-minute | QUANTITY | 0.99+ |
$371 billion | QUANTITY | 0.99+ |
Kamil Volker | PERSON | 0.99+ |
2% | QUANTITY | 0.99+ |
18 years | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
15% | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
100 person | QUANTITY | 0.99+ |
Southern Poland | LOCATION | 0.99+ |
IDC | ORGANIZATION | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
3soft | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.98+ |
each store | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
over 200 retailers | QUANTITY | 0.98+ |
six months | QUANTITY | 0.98+ |
a year | QUANTITY | 0.97+ |
each individual shop | QUANTITY | 0.97+ |
one final question | QUANTITY | 0.96+ |
millions of dollars | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
One | QUANTITY | 0.95+ |
Occubee | LOCATION | 0.95+ |
one way | QUANTITY | 0.93+ |
one single area | QUANTITY | 0.91+ |
pandemic | EVENT | 0.9+ |
one place | QUANTITY | 0.88+ |
one common thread | QUANTITY | 0.86+ |
every single product | QUANTITY | 0.86+ |
hundreds of millions of dollars per year | QUANTITY | 0.83+ |
first attitude | QUANTITY | 0.81+ |
Occubee | ORGANIZATION | 0.8+ |
every night | QUANTITY | 0.8+ |
every single store | QUANTITY | 0.75+ |
three pillars | QUANTITY | 0.73+ |
Cloudera | TITLE | 0.7+ |
couple of decades | QUANTITY | 0.66+ |
product | QUANTITY | 0.58+ |
day | QUANTITY | 0.53+ |
The Value of Oracle’s Gen 2 Cloud Infrastructure + Oracle Consulting
>> From the Cube Studios in Palo Alto and Boston, it's the Cube, covering empowering the autonomous enterprise, brought to you by Oracle Consulting. >> Welcome back everybody, this is Dave Vellante. We've been covering the transformation of Oracle Consulting and really, its rebirth. And I'm here with Chris Fox, who's the group vice president for Enterprise Cloud Architects and chief technologist for the North America Tech Cloud at Oracle. Chris, thanks so much for coming on the Cube. >> Thanks, Dave, great to be here. >> So I love this title. You know, years ago, there was no such thing as a cloud architect. Certainly there were chief technologists, but so really, those are your peeps, is that right? >> That's right. That's right. That's really my team and I. That's all we do. So our focus is really helping our customers take this journey from when they were on premise to really transforming with cloud. And when we think about cloud, really, for us, it's a combination. It's our hybrid cloud, which happens to be on premise, and then, of course, the true public cloud, like most people are familiar with. So, very exciting journey and frankly, I've seen just a lot of success for our customers. You know what I think we're seeing at Oracle, though? Because we're so connected with SaaS, and then we're also connected with the traditional applications that have run the business for years, the legacy applications that have been, you know, servicing us for 20 years, and then the cloud native developers. So what my team and I are constantly focused on now is things like digital transformation and really wiring up all three of these across. So if we think of, like, a customer outcome, like I want to have a package delivered to me from a retailer, that actual process flow could touch a brand new cloud native site for e-commerce, it could touch essentially maybe a traditional application that used to be on prem that's now in the cloud, and then it might even use a new SaaS application, maybe for a permit process or delivery vehicle and scheduling. So what my team does, we actually connect all three. So what I always mention to my team and all of our customers, we have to be able to service all three of those constituents and really think about process flows. So I take the cloud native developer, we help them become efficient. We take the person who's used to running a traditional application, and we help them become more efficient. And then we have the SaaS applications, which are now rolling out new features on a quarterly basis, and it's a whole new delivery model. But the real key is connecting all three of these into your business process flow. That makes the customer's life much more efficient. >> So I want to get into this cloud conversation. You guys are using this term last mover advantage. I always thought being last, you know, wasn't an advantage. But let me start there. >> People always say, you know, of course, we want to get out of the data center. We're going zero data center. And I always say, well, how are you going to handle that back office stuff, right? The stuff that's really big, it's cranky, um, doesn't handle just, you know, instances dying or things going away too easily. It needs predictable performance, it needs scale. It absolutely needs security. And ultimately, you know, a lot of these applications truly have relied on an Oracle database. The Oracle database has its own specific characteristics that it needs to run really well.
So we actually looked at the cloud and we said, let's take the first generation clouds, they're doing great, but let's add the features that, specifically, a lot of times the Oracle workload needed in order to run very well and in a cost effective manner. So that's what we mean when we say last mover advantage. We said, let's take the best of the clouds that are out there today. Let's look at the workloads that, frankly, Oracle runs and has been running for years. What do customers need? And then let's build those features right into this, uh, this next version of the cloud to service the enterprise. So our goal, honestly, which is interesting, is even that first discussion we had about cloud native and legacy applications and also the new SaaS applications, we built a cloud that handles all three use cases at scale, resiliently, in a very secure manner, and I don't know of any other cloud that's handling those three use cases all in, we'll call it, the same tenancy process, at Oracle. >> My question is, why was it important for Oracle, and is it important for Oracle and its customers, that you participate in IaaS and PaaS and SaaS? Why not just the last two layers of that? Um, what does that mean from a strategic advantage standpoint? What does that do for you? >> Yeah, great question. So the number one reason why we needed to have all three was that we have so many customers who today are in a data center. They're running a lot of our workloads on premise, and they absolutely are trying to find a better way to deliver lower cost services to their customers. And so we couldn't just say everyone needs to just become net new, everyone just needs to ditch the old and go brand new alone. Too hard, too expensive at times. So we said, you know, let's give our customers the ultimate amount of choice. So let's even go back against that developer conversation and SaaS. Um, if you didn't have IaaS, we couldn't help customers achieve a zero data center strategy with their traditional applications, we'll call it PeopleSoft or JD Edwards or E-Business Suite, or even... there's some massive applications that are running on the Oracle cloud right now that are custom applications built on the Oracle database. What they want is, they said, give me the lowest cost, predictable performance IaaS; I'll run my apps tier on this. Number two, give me a platform service for database, because, frankly, I don't really want to run the database, like, with all the manual effort. I want someone to automate patching, scale up and down, and all these types of features, like you should have given us. And then number three, you know, I do want SaaS over time. So we spend a lot of time with our customers really saying, how do I take this traditional application, run it on IaaS and PaaS, and then, number two, let's modernize it at scale. Maybe I want to start peeling off functionality and running in the cloud native services right alongside, right? That's something again that we're doing at scale, and other people are having a hard time running these traditional workloads on prem in the cloud. The second part is they say, you know, I've got this legacy traditional ERP that's been servicing me well, or maybe a supply chain system; I ultimately want to get out of this. How do I get to SaaS? You say, okay, here's the way to do this. First bring it into the cloud running on IaaS and PaaS, and then selectively, I call it cloud slicing, take a piece of functionality and put it into SaaS. We're helping customers move to the cloud at scale.
We're helping them do it at their rate, with whatever level of change they want. And when they're ready for SaaS, we're ready for them. >> How does autonomous fit into this whole architecture? Let's get that description. I mean, it's nuanced, but it's important. I'm sure you have this conversation with a lot of cloud architects and chief technologists. They want to know this stuff. They want to know how it works. Um, you know, we will talk about what the business impact is, but, yeah, let's talk about autonomous and where that fits. >> So the autonomous database, what we've done is really gone and looked at all the runtime operations of an Oracle database, so tuning, patching, securing, all these different features, and what we've done is taken the best of the Oracle database, the best of something called Exadata, right, which we run in the cloud, which really helps a lot of our customers, and then we wrapped it with a set of automation and security tools to help it really manage and self-tune itself, patch itself, scale up and down, independent between compute and storage. So why that's important, though, is that, really, our goal is to help people run the Oracle databases they have for years, but with far less effort, and, hopefully, you know, not taking man out of the equation; what we always talk about is that man plus machine is greater than man alone. So being assisted by, um, artificial intelligence and machine learning to perform those database operations, we should provide a better service to our customers with far less effort. Our hope and goal is that, for people who have been running Oracle databases, you know, how can we help them do it with far less effort and maybe spend more time on what the data can do for the organization, right? Improve customer experience, et cetera, versus, maybe, like, you know, how do I spin up the table?
We usually well, after doing it the first time. We'll sit back and say, Let the customer do in the next few times and essentially help them through the process. And our goal at that point is to leave only if the customer wants us to. But ultimately our goal is to implemented, get it to go live on time and then help the customer learn this journey to the cloud and without them. Frankly, uh, you know, I think these systems were sometimes too complex and difficult to do on your own. Maybe the first time, especially cause I could say they're closing the books. They might be running your entire supply chain. They run your entire HR system, whatever they might be, uh, too important, leading a chance. So they really help us with helping a customer become live and become very confident. Skilled. They could do themselves >>of the conversation. We have to leave it right there. But thanks so much for coming on the Cube and sharing your insights. Great stuff. >>Absolutely. Thanks for having me on. >>All right. You're welcome. And thank you for watching everybody. This is Dave Volante for the Cube. We are covering the oracle of North American Consulting. Transformation. And it's rebirth in this digital event. Keep it right there. We'll be right back.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Chris Fox | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
Mike | PERSON | 0.99+ |
second part | QUANTITY | 0.99+ |
SAS | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
First | QUANTITY | 0.99+ |
Oracle Consulting | ORGANIZATION | 0.99+ |
Centra | ORGANIZATION | 0.99+ |
Hana Way | ORGANIZATION | 0.99+ |
first time | QUANTITY | 0.99+ |
three use cases | QUANTITY | 0.98+ |
North American Consulting | ORGANIZATION | 0.98+ |
third part | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
one | QUANTITY | 0.96+ |
Cube Studios | ORGANIZATION | 0.96+ |
three | QUANTITY | 0.96+ |
first generation | QUANTITY | 0.95+ |
North America Tech Cloud | ORGANIZATION | 0.94+ |
Frankie | ORGANIZATION | 0.92+ |
PeopleSoft | ORGANIZATION | 0.91+ |
JD Edwards | ORGANIZATION | 0.87+ |
Enterprise Cloud Architects | ORGANIZATION | 0.87+ |
two layers | QUANTITY | 0.86+ |
years | QUANTITY | 0.86+ |
SAS | TITLE | 0.84+ |
years | DATE | 0.84+ |
2 | QUANTITY | 0.83+ |
first discussion | QUANTITY | 0.79+ |
tier one | OTHER | 0.79+ |
Cube | ORGANIZATION | 0.79+ |
Revisit | TITLE | 0.75+ |
Suite | ORGANIZATION | 0.71+ |
one reason | QUANTITY | 0.71+ |
zero data | QUANTITY | 0.7+ |
tier two | OTHER | 0.68+ |
Pass | TITLE | 0.67+ |
tier zero | OTHER | 0.66+ |
IAS | TITLE | 0.65+ |
two | QUANTITY | 0.64+ |
Archite | PERSON | 0.61+ |
Herman | TITLE | 0.61+ |
zero | QUANTITY | 0.52+ |
number | QUANTITY | 0.51+ |
Cube | COMMERCIAL_ITEM | 0.51+ |
The Value of Oracle’s Gen 2 Cloud Infrastructure + Oracle Consulting
>> Narrator: From theCUBE studios in Palo Alto and Boston, it's theCUBE! Covering empowering the autonomous enterprise. Brought to you by ORACLE Consulting. >> Back to theCUBE everybody, this is Dave Vellante. We've been covering the transformation of ORACLE Consulting, and really its rebirth, and I'm here with Chris Fox, who's the Group Vice President for Enterprise Cloud Architects and Chief Technologist for the North America Tech Cloud at ORACLE. Chris, thanks so much for coming on theCUBE. >> Thanks Dave, glad to be here. >> So, I love this title. I mean, years ago, there was no such thing as a cloud architect. Certainly there were chief technologists, but so, you are really, those are your peeps, is that right? >> That's right, that's right. That's really my team and I, that's all we do. So, our focus is really helping our customers take this journey from when they were on-premise to really transforming with cloud, and when we think about cloud, really, for us, it's a combination. It's our hybrid cloud, which happens to be on-premise, and then, of course, the true public cloud, like most people are familiar with. So, very exciting journey and, frankly, I've seen just a lot of success for our customers. You know, Dave, what I think we're seeing at ORACLE though, because we're so connected with SaaS, and then we're also connected with the traditional applications that have run the business for years, the legacy applications that have been, you know, servicing us for 20 years, and then the cloud-native developers. So, what my team and I are constantly focused on now is things like digital transformation and really wiring up all three of these across. So, if we think of, like, a customer outcome like I want to have a package delivered to me from a retailer, that actual process flow could touch a brand new cloud-native site from eCommerce, it could touch, essentially, maybe a traditional application that used to be on-prem that's now on the cloud, and then it might even use a new SaaS application, maybe, for maybe a permit process or delivery vehicle and scheduling. So, what my team does, we actually connect all three. So, what I always mention to my team and all of our customers, we have to be able to service all three of those constituents and really think about process flows. So, I take the cloud-native developer, we help them become efficient. We take the person who's been running that traditional application and we help them become more efficient, and then we have the SaaS applications, which are now rolling out new features on a quarterly basis and it's a whole new delivery model, but the real key is connecting all three of these into a business process flow that makes the customer's life much more efficient. People always say, you know, Chris, we want to get out of the data center, we're going zero data center, and I always say, well, how are you going to handle that back office stuff? Right? The stuff that's really big, it's cranky, doesn't handle just, you know, instances dying or things going away too easily. It needs predictable performance, it needs scale, it absolutely needs security, and ultimately, you know, a lot of these applications truly have relied on an ORACLE database. The ORACLE database has its own specific characteristics that it needs to run really well.
So, we actually looked at the cloud and we said, let's take the first generation clouds, which are doing great, but let's add the features that specifically, a lot of times, the ORACLE workload needed in order to run very well and in a cost-effective manner. So, that's what we mean when we say last mover advantage. We said, let's take the best of the clouds that are out there today, let's look at the workloads that, frankly, ORACLE runs and has been running for years, what our customers needed, and then let's build those features right into this next version of the cloud which can service the enterprise. So, our goal, honestly, which is interesting, is even that first discussion we had about cloud-native and legacy applications and also the new SaaS applications, we built a cloud that handles all three use cases at scale, resiliently, in a very secure manner, and I don't know of any other cloud that's handling those three use cases all in, we'll call it the same tenancy for us at ORACLE. >> My question is why was it important for ORACLE, and is it important for ORACLE and its customers, to participate in IaaS and PaaS and SaaS? Why not just the last two layers of that? What does that give you from a strategic advantage standpoint and what does that do for your customer? >> Yeah, great question. So, the number one reason why we needed to have all three was that we have so many customers who, today, are in a data center. They're running a lot of our workloads on-premise and they absolutely are trying to find a better way to deliver lower-cost services to their customers and so we couldn't just say, let's just, everyone needs to just become net new, everyone just needs to ditch the old and go just to brand-new alone. Too hard, too expensive, at times. So we said, you know, let's give our customers the ultimate amount of choice. So, let's even go back again to that developer conversation in SaaS. If you didn't have IaaS, we couldn't help customers achieve a zero data center strategy with their traditional application, we'll call it PeopleSoft or JD Edwards or E-Business Suite or even, there's some massive applications that are running on the ORACLE cloud right now that are custom applications built on the ORACLE database. What they want is they said, give me the lowest cost but yet predictable performance IaaS. I'll run my apps tier on this. Number two, give me a platform service for database, 'cause frankly, I don't really want to run your database, like, with all the menial effort. I want someone to automate patching, scale up and down, and all these types of features like the cloud should have given us. And then number three, I do want SaaS over time. So, we spend a lot of time with our customers really saying, how do I take this traditional application, run it on IaaS and PaaS, and then number two, let's modernize it at scale. Maybe I want to start peeling off functionality and running them as cloud-native services right alongside, right? That's something, again, that we're doing at scale and other people are having a hard time running these traditional workloads on-prem in the cloud. The second part is they say, you know, I've got this legacy traditional ERP. It's been servicing me well, or maybe a supply chain system. Ultimately I want to get out of this. How do I get to SaaS? And we say, okay, here's the way to do this. First, bring it to the cloud, run it on IaaS and PaaS, and then selectively, I call it cloud slicing, take a piece of functionality and put it into SaaS.
We're helping customers move to the cloud at scale. We're helping 'em do it at their rate, with whatever level of change they want, and when they are ready for SaaS, we're ready for them. >> And how does autonomous fit into this whole architecture? Thank you, by the way, for that description. I mean, it's nuanced but it's important. I'm sure you're having this conversation with a lot of cloud architects and chief technologists. They want to know this stuff, and they want to know how it works. And then, obviously, we'll talk about what the business impact is, but talk about autonomous and where that fits. >> So, the autonomous database, what we've done is really taken a look at all the runtime operations of an ORACLE database, so tuning, patching, securing, all these different features, and what we've done is taken the best of the ORACLE database, the best of something called Exadata, right, which we run on the cloud, which really helps a lot of our customers, and then we've wrapped it with a set of automation and security tools to help it really manage itself, tune itself, patch itself, scale up and down independently between compute and storage. So, why that's important though is that it really, our goal is to help people run the ORACLE database as they have for years but with far less effort, and then even not only far less effort, hopefully, you know, a machine plus man, kind of the equation we always talk about is man plus machine is greater than man alone. So, being assisted by artificial intelligence and machine learning to perform those database operations, we should provide a better service to our customers with far less cost. Our hope and goal is that people have been running ORACLE databases. How can we help them do it with far less effort, and maybe spend more time on what the data can do for the organization, right? Improve customer experience, etc. Versus maybe, like, how do I spin up (breaks up). >> So, let's talk about the business impact. So, you go into customers, you talk to the cloud architects, the chief technologists, you pass that test. Now you got to deliver the business impact. Where does ORACLE Consulting fit with regard to that? And maybe you could talk about where you guys want to take this thing. >> Yeah, absolutely. I mean, the cloud is a great set of technologies, but where ORACLE Consulting is really helping us deliver is in the outcome. One of the things, I think, that's been fantastic working with the ORACLE Consulting team is that, you know, cloud is new. For a lot of customers who've been running these environments for a number of years, there's always some fear and a little bit of trepidation saying, how do I learn this new cloud? I mean, the workloads we're talking about, Dave, are like tier zero, tier one, tier two and, you know, all the way up to DEV and TEST and DR. ORACLE Consulting does really a couple of things in particular. Number one, they start with the end in mind, and number two, what they start to do is really help implement these systems, and there's a lot of different assurances that we have that we're going to get it done on time and better be under budget, 'cause ultimately, again, that's something that's really paramount for us. And then the third part of it, a lot of times it's runbooks, right? We actually don't want to just live in our customers' environments. We want to help them understand how to run this new system, so in training and change management, a lot of times ORACLE Consulting is helping with runbooks.
We usually will, after doing it the first time, we'll sit back and let the customer do it the next few times and essentially help them through the process, and our goal at that point is to leave. Only if the customer wants us to, but ultimately our goal is to implement it, get it to go live on time, and then help the customer learn this journey to the cloud. And without them, frankly, I think these systems are sometimes too complex and difficult to do on your own maybe the first time, especially 'cause like I say, they're closing the books. They might be running your entire supply chain. They run your entire HR system or whatever they might be. Too important to leave to chance. So, they really help us with helping the customer become live and become very confident and skilled 'cause they can do it themselves. >> Well Chris, we've covered the gamut. Loved the conversation. We'll have to leave it right there, but thanks so much for coming on theCUBE and sharing your insights. Great stuff. >> Absolutely, thanks Dave, and thanks for having me on. >> All right, you're welcome, and thank you for watching everybody. This is Dave Vellante for theCUBE. We are covering the ORACLE of North America Consulting transformation and its rebirth in this digital event. Keep it right there, we'll be right back.
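As a rough illustration of the "cloud slicing" sequence Chris outlines, that is, lift the estate onto IaaS and a database platform service first, then peel individual functions off into SaaS as the business is ready, a migration plan could be modeled along the lines of the sketch below. The application, modules, and phases are hypothetical examples, not a prescribed Oracle Consulting methodology.

```python
# Illustrative only: a simple way to model the phased "cloud slicing"
# approach described above. Names and modules are hypothetical examples.
migration_plan = {
    "finance_erp": {
        "today": "on-premises E-Business Suite on an Oracle database",
        "phase_1": "rehost the app tier on IaaS, move the database to a managed PaaS service",
        "phase_2_slices": [
            {"module": "expense reporting", "target": "SaaS"},
            {"module": "procurement",       "target": "SaaS"},
        ],
        "phase_3": "retire remaining custom code or rebuild it as cloud-native services",
    },
}

def next_action(app: str, completed: set[str]) -> str:
    """Return the next step for an application, given which phases are done."""
    plan = migration_plan[app]
    if "phase_1" not in completed:
        return plan["phase_1"]
    for s in plan["phase_2_slices"]:
        if f"slice:{s['module']}" not in completed:
            return f"move {s['module']} to {s['target']}"
    return plan["phase_3"]

print(next_action("finance_erp", {"phase_1"}))
# -> "move expense reporting to SaaS"
```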
SUMMARY :
Brought to you by ORACLE Consulting. and I'm here with Chris Fox, So, I love this title. and then we have the SaaS applications, and go just to brand-new alone. and they want to know how it works. and machine learning to perform the business impact. and our goal at that point is to leave. and sharing your insights. and thanks for having me on. and thank you for watching everybody.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Chris Fox | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
ORACLE Consulting | ORGANIZATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
second part | QUANTITY | 0.99+ |
ORACLE Consulting | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
Oracle Consulting | ORGANIZATION | 0.99+ |
ORACLE | ORGANIZATION | 0.99+ |
three use cases | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
third part | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
first discussion | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
ORACLE | TITLE | 0.98+ |
three | QUANTITY | 0.98+ |
zero | QUANTITY | 0.97+ |
first generation | QUANTITY | 0.96+ |
two | QUANTITY | 0.96+ |
theCUBE | ORGANIZATION | 0.96+ |
North America Consulting | ORGANIZATION | 0.96+ |
today | DATE | 0.95+ |
one | QUANTITY | 0.95+ |
JD Edwards | ORGANIZATION | 0.93+ |
North America Tech Cloud | ORGANIZATION | 0.91+ |
IaaS | TITLE | 0.9+ |
Number two | QUANTITY | 0.87+ |
PaaS | TITLE | 0.84+ |
years ago | DATE | 0.8+ |
Exadata | ORGANIZATION | 0.8+ |
E-Business Suite | TITLE | 0.79+ |
number three | QUANTITY | 0.74+ |
tier zero | OTHER | 0.68+ |
SaaS | TITLE | 0.67+ |
tier one | OTHER | 0.67+ |
years | QUANTITY | 0.63+ |
Cloud | ORGANIZATION | 0.62+ |
tier two | OTHER | 0.61+ |
PeopleSoft | ORGANIZATION | 0.59+ |
DEV | ORGANIZATION | 0.58+ |
Gen 2 | QUANTITY | 0.57+ |
number two | QUANTITY | 0.51+ |
runbooks | TITLE | 0.45+ |
Dell EMC Next-Gen Data Protection
(intense orchestral music) >> Hi everybody this is Dave Vellante, welcome to this special CUBE presentation, where we're covering the Dell EMC Integrated Data Protection Appliance announcement. You can see we also are running a crowd chat, it's an ask me anything crowd chat; you can log in with Twitter, LinkedIn, or Facebook, and ask any question. We've got Dell EMC executives, we're gonna hear from VMware executives, we've got the analyst perspective, we're gonna hear from customers and then of course we're gonna jump into the crowd chat. With me is Beth Phalen, who is the President of Dell EMC's Data Protection Division, Beth, great to see you again. >> Good to be here, Dave. >> Okay so, we know that 80% of the workloads are virtualized, we also know that when virtualization came on the scene it caused customers to really rethink their data protection strategies. Cloud is another force that's causing them to change the way in which they approach data protection, but let's start with virtualization. What are you guys doing for those virtualized customers? >> Data protection is crucial for our customers today, and more and more the vAdmins are being expected to protect their own environments. So we've been working very closely with VMware to make sure we're delivering the simplest data protection for VMware, taking into account all of the cloud capabilities that VMware is bringing to market and making sure we're protecting those as well. We have to do that without compromise, and so we have some really exciting innovations to talk about today. The first of those is the DP4400, we announced this a few weeks ago, it is a purpose-built appliance for mid-sized customers that brings forward all of our learnings from enterprise data protection, and makes it simple and easy to use, and at the right price point for our mid-sized customers, with extensions into VMware environments and extensions into the cloud. >> Okay, so I mentioned up front that cloud is this disruptive force. You know people expect the outcome of cloud to be simplicity, ease of management, but the cloud adds IT complexity. How are you making data protection simpler for the cloud? >> And the cloud has many different ways the customers can leverage it. The two that we're gonna highlight today are for those customers that are using VMware Cloud on AWS, we're now enabling a seamless disaster recovery option, so customers can fail over to VMware Cloud on AWS for their DR configurations. And on top of that, we're very excited to talk about data protection as a service. We all know how wildly popular that is and how rapidly it's growing, and we've now integrated with VMware vCloud Director to allow customers to not have to have a separate backup as a service portal, but provide management for both their VMware environments and their data protection, all integrated within VCD. >> Okay great, so, we know that VMware of course is the leader in virtualization, we're gonna cut away for a moment and hear from VMware executives, then we're gonna come back here and do a deep dive, as I say we've got a great agenda, we're gonna explore some of these things; and then of course there's the crowd chat, the ask me anything crowd chat. So let's cut over to Palo Alto, California, in our studios over there, and let's hear from the VMware perspective and Peter Burris, take it away, Peter. (intense orchestral music) >> Thanks, Dave! And this is Peter Burris, and I can report that in fact we have another beautiful day here in California.
And also, we've got a great VMware executive to talk a bit about this important announcement. Yanbing Li is the Senior Vice President and GM for the Storage and Availability Business Unit at VMware, welcome back to theCUBE Yanbing. >> It's great to be here, thank you for having me Peter. >> Oh absolutely we've got a lot of great stuff to talk about but let's start with the obvious question. Why is it so important to VMware and Dell EMC to work on this question, data availability, data protection? >> You know I have a very simple answer for you. You know Dell EMC has been the market leader for the past decade, and they are also a leading solution for all of our VMware environments, it's very natural that we do a lot of collaboration with them. And what's most important, is our collaboration is not only go-to-market collaboration, in enabling our joint customers, but also deep engineering level collaboration, and that is very very exciting. Lots of our solutions are really co-engineered together. >> So, that is in service to something. And now putting all this knowledge, all this product together to create a solution, is in service of data protection but especially as it relates to spanning the cloud. So talk to us a little bit about how this is gonna make it easier for customers to be where they need to be in their infrastructure. >> Certainly VMware has been also on a journey to help with our customers, their transition from data center to the cloud, and data protection is a very crucial aspect of that; and we're looking for simpler, scalable, more robust data protection solutions. You know VMware launched our VMware Cloud on AWS service last year, and Dell EMC has been with us since day one; they're the first solution to be certified as a data protection service for VMware Cloud on AWS. We also work with 4500 VCCP partners, this is the VMware Cloud partner program partners that, you know they are building cloud services based on the VMware software defined data center stack. And we are also working with Dell EMC on integrating their data protection software with vCloud, their vCloud Director software, so that you know our customer has integrated data protection for our VCCP partners. So you know across all the cloud initiatives, we are working very closely with Dell EMC. >> So bringing the best of the technology, the best of this massive ecosystem together, to help customers protect their data and give them options about where they operate their infrastructure. >> Definitely. I'm personally very excited about their recent announcement around the Data Domain Virtual Edition, where they're offering a subscription-based data protection bundle that can allow a VMware Cloud on AWS instance to back up their data, you know, using a subscription model, and you can back up 96 terabytes for any single SDDC cluster in VMware Cloud on AWS. So they're definitely driving a lot of innovation not only in technology, but also in consumption, how to make it easier for customers to consume. And we're excited to be a partner with Dell EMC together on this. >> Fantastic! Yanbing Li, VMware, back to you, Dave! >> Thanks, Peter. We're back for the deep dive; Beth Phalen is joining us again, and Ruya Barrett, who's the Vice President of Marketing for Dell EMC's Data Protection Division, thanks guys for coming on. Ruya, let me start with you. Why are customers, and what are they telling you, in terms of why they're acquiring your data protection solutions?
>> Well, Beth talked a little bit about the engineering effort, and collaboration we've been putting in place, and so did Yanbing with VMware, so whether that's integration into vCenter, or vSphere, or vRealize Operations Manager, vRealize Automation or vCloud Director, all of this work, all of this engineering effort, and engineering hours is really to do two things: deliver simply powerful data protection for VMware customers >> But what do you mean by simple? >> Simple. Well, simple comes in two types of approaches, right? Simple is through automation. One of the things that we've done is really automate across the data protection stack for VMware. Whereas 99% of the market solutions really leave it off at policy management, so they automate the policy layer. We automate not only the policy layer, but the vProxy deployment, as well as the data movement. We have five types of data movement capabilities that have been automated. Whether you're going directly from storage to protection storage, whether you're doing client to protection storage, whether you're doing application to protection storage, or whether you're doing Hypervisor Direct to application storage. So it really is to automate, and to maximize the performance to meet the customer's service levels, so automation is critical when you're doing that. The other part of automation could be in how easy cloud is for the admins and users, it really has to do with being able to orchestrate all of the activities, you know very simply and easily. Simplicity is also management. We are hearing more and more that the admins are taking on the role of doing their backups and restores, so, our efforts with VMware have been to really simplify the management so that they can use their native tools. We've integrated with VMware for the vAdmins to be able to make backup and restore just a part of their daily operational tasks. >> So, when you talk about power, is that performance, you reference performance, but is it just performance, or is it more than that? >> That's also a great question, Dave, thank you. Power really, in terms of data protection, is threefold, it's power in making sure that you have a single, powerful solution, that really covers a comprehensive set of applications and requirements, not only for today, but also tomorrow's needs. So that comprehensive coverage, whether you're on-premise, or in the cloud is really critical. Power means performance, of course it means performance. Being able to deliver the highest performing protection, and more importantly restores, is really critical to our customers. Power also means not sacrificing efficiency to get that performance. So efficiency, we have the best source-side deduplication technology in the market, that coupled with the performance is really critical to our customers. So all of these, the simplicity, the comprehensive coverage, the performance, the efficiency, also drives the lowest cost to protect for our customers. >> Alright, I wanna bring Beth Phalen into the conversation, Beth, let's talk about cloud a little bit. A lot of people feel as though I can take data, I can dump it into an object store in the cloud, and I'm protected. Your thoughts? >> Yeah, we hear that same misconception, and in fact the exact opposite is true; it's even more important that people have world class data protection when they're bringing cloud into that IT environment, they have to know where their data is, and how it's protected and how to restore it.
So we have a few innovations that are going on here. For a long time, we've had our hyper cloud extensions, you can do cloud tiering directly from Data Domain. And now we've also extended what you can do if you're a VMware Cloud on AWS customer, so that you can use that for your cloud DR configuration, fail over to AWS with VMware Cloud, and then fail back with vMotion if you choose to; and that's great for customers who don't wanna have a second site, but they do wanna have confidence that they can recover if there's a disaster. On top of that we've also been doing some really great work with VMware, with vCloud Director integration. Data protection as a service is growing like crazy, it's highly popular around the globe as a way to consume data protection. And so now you can integrate both your VMware tasks, and your data protection tasks, from one UI in the Cloud Director. These are just a few of the things that we're doing; comprehensively bringing data protection to the cloud is essential. >> Great, okay. Dell EMC just recently made an announcement, the IDPA DP4400, Ruya what's it all about? Explain. >> Absolutely, so, what we announced is really an integrated data protection appliance, turnkey, purpose-built, to meet the specific requirements of mid-sized customers, it's really, to bring that enterprise sensibility and protection to our mid-sized customers. It's all inclusive in terms of capabilities, so if you're talking about backup, restore, replication, disaster recovery, cloud disaster recovery, and cloud long-term retention, all at your fingertips, all included; as well as all of the capabilities we talked about in terms of enabling VM admins to be able to do all of their daily tasks and operations through their own native tools and UIs. So it's really all about bringing simply powerful protection to mid-sized customers at the lowest cost to protect. And we now also have a guarantee under our Future-Proof Loyalty Program, we are introducing a 55 to one deduplication guarantee for those exact customers. >> Okay. Beth, could you talk about the motivation for this product? Why did you build it, and why is it relevant to mid-sized customers? >> So we're known as number one in enterprise data protection, we're known for our world-class dedupe, best in class, best in the world dedupe capabilities. And what we've done is we've taken the learnings and the IP that we have that's served enterprise customers for all of these years, and then we're making that accessible to mid-sized customers. And there were so many companies out there that can take advantage of our technology that maybe couldn't before these announcements. So by building this, we've created a product that a mid-sized company, may have a small IT staff, like I said at the beginning, may have VM admins who are also responsible for data protection, that they can have what we bring to the market with best-in-class data protection. >> I wanna follow up with you on simple and powerful. What is your perspective on simple, what does it mean for customers? >> Yeah, I mean if you break it down, simple means simple to deploy, two times faster than traditional data protection, simple means easier to manage with modern HTML5 interfaces that include the data protection day-to-day tasks, also include reporting. Simple means easy to grow, growing in place from 24 terabytes up to 96 terabytes with just a simple software license to add in 12 terabyte increments.
So all of those things come together to reduce the amount of time that an IT admin has to spend on data protection. >> So, when I hear powerful and hear mid-sized customers, I'm thinking okay I wanna bring enterprise-class data protection down to the mid-sized organization. Is that what you mean? Can you actually succeed in doing that? >> Yeah. If I'm an IT admin I wanna make sure that I can protect all of my data as quickly and efficiently as possible. And so, we have the broadest support matrix in the industry, I don't have to bring in multiple products to support protection on my different applications, that's key, that's one thing. The other thing is I wanna be able to scale, and I don't wanna have to be forced to bring in new products. With this you have a logical five terabytes on-prem, you can grow to protecting an additional 10 terabytes in the cloud, so that's another key piece of it, scalability. >> Petabytes, sorry. >> And then-- >> Sorry. Petabytes-- >> Petabytes. >> You said terabytes. (laughs) >> You live in a petabyte world! >> Of course, yes, what am I thinking. (all laugh) and then last but not least, it's just performance, right? This runs on a 14G PowerEdge server; you're gonna get the efficiency, you can protect five times as many VMs as you could without this kind of product. So, all of those things come together with power, scalability, support matrix, and performance. >> Great, thank you. Okay, Ruya, let's talk about the business impact. Start with this IT operations person, what does it mean for that individual? >> Yeah, absolutely. So first, you're gonna get your weekends back, right? So, the product is just faster, we talked about it, it's simpler, you're not gonna have to get a PhD on how to do data protection, to be able to do your business. You're gonna enable your vAdmins to be able to take on some of the tasks. So it's really about freeing up your weekends, having that, you know, sound mind that data protection's just happening, it works! We've already tried and tested this with some of the most crucial businesses, with the most stringent service-level requirements; it's just gonna work. And, by the way, you're gonna look like a hero, because with this 2U appliance, you're gonna be able to support 15 petabytes across the most comprehensive coverage in the data center, so your boss is gonna think you're just a superhero. >> Petabytes. >> Yeah exactly, petabytes, exactly. (all laugh) So it's tremendous for the IT user, and also the business user. >> By the way, what about the boss? What about the line of business, what does it mean to that individual? >> So if I'm the CEO or the CIO, I really wanna think about where am I putting my most skilled personnel? And my most skilled personnel, especially as IT is becoming so core to the business, is probably not best served doing data protection. So just being able to free up those resources to really drive applications or initiatives that are driving revenue for the business is critical. Number two, if I'm the boss, I don't wanna overpay for data protection. Data protection is insurance for the business, you need it, but you don't wanna overpay for it. So I think that lowest cost is a really critical requirement. The third one is really minimizing risk and compliance issues for the business. If I have the sound mind, and the trust that this is just gonna work, then I'm gonna be able to recover my business no matter what the scenario; and that it's been tried and true in the biggest accounts across the world.
I'm gonna rest assured that I have less exposure to my business. >> Great. Ruya, Beth, thank you very much, don't forget, we have an ask me anything crowd chat at the end of this session, so you can go in, login with Twitter, LinkedIn, or Facebook, and ask any question. Alright, let's take a look at the product, and then we're gonna come back and get the analyst's perspective, keep it right there. (intense music) >> Organizations today, especially mid-sized organizations, are faced with increased complexity; driving the need for data protection solutions that enable them to do more with less. The Dell EMC IDPA DP4400 packages the proven enterprise class technologies that have made us the number one provider in data protection into a converged appliance specifically designed for mid-sized organizations. While other solutions sacrifice power in the name of simplicity, the IDPA DP4400 delivers simply powerful data protection. The IDPA DP4400 combines protection software and storage, search and analytics, and cloud readiness, in one appliance. To save you time and money, we made it simple for you to deploy and upgrade, and, easily grow in place without disruption, adding capacity with simple license upgrades without buying more hardware. Data protection management is also a snap with the IDPA System Manager. IDPA is optimized for VMware data protection. It is also integrated with vSphere, SQL, and Oracle, to enable a wider IT audience to manage data protection. The IDPA DP4400 provides protection across the largest application ecosystem, delivers breakneck backup speeds, more efficient network usage, and unmatched 55 to one average deduplication. The IDPA DP4400 is natively extensible to the cloud for long-term retention. And, also enables simple, and cost effective cloud disaster recovery. Deduplicated data is stored in AWS with minimal footprint, with failover to AWS and failback to on-premises quickly, easily, and cost effectively. The IDPA DP4400 delivers all this at the lowest cost-to-protect. It includes a three year satisfaction guarantee, as well as an up to 55 to one data protection deduplication guarantee. The Dell EMC IDPA DP4400 provides backup, replication, deduplication, search, analytics, instant access for application testing and development, as well as DR and long-term retention to the cloud. Everything you need to deliver enterprise-class data protection, in a small integrated system, optimized for mid-sized environments. It's simply powerful. (upbeat music and rhythmic claps) >> Cool video! Alright, we're back, with Vinny Choinski, who is the Senior Analyst for the Validation Practice at ESG, Enterprise Strategy Group. ESG is a company that does a lot of research, and one of the areas is they have these lab reports, and they basically validate vendor claims, it's an awesome service, they've had it for a number of years and Vinny is an expert in this area. Vinny Choinski, welcome to theCUBE, great to see you. >> How you doin' Dave? Great to see you. >> So, when you talk to customers they tell you they hate complexity, first of all, and specifically in the context of data protection, they want high performance, they don't wanna have to mess with this stuff, and they want low cost. What are you seeing in the marketplace? >> So our research is lining up with those challenges; and that's why I've recently done three reports. We looked at how Dell EMC is addressing those challenges and how they are making it easier, faster, and less expensive to do data protection.
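As an aside, the capacity claims quoted in this segment hold together arithmetically. The quick back-of-envelope check below uses only the figures mentioned above (96 terabytes usable grown in 12 terabyte license increments, an up to 55 to one deduplication guarantee, and roughly 10 petabytes more via cloud long-term retention); real deduplication ratios vary by workload, so treat it as a sanity check rather than a sizing guide.

```python
# Back-of-envelope check of the capacity figures quoted in this segment.
# Assumptions, not vendor specs: the dedup ratio and usable capacities are
# simply the numbers mentioned above; actual ratios vary by workload.
usable_on_prem_tb = 96          # DP4400 grows in place from 24 TB to 96 TB usable
dedup_ratio = 55                # "up to 55 to one" guarantee cited above

license_steps = (96 - 24) // 12
print(f"Grow-in-place license upgrades from 24 TB to 96 TB: {license_steps} x 12 TB")

logical_on_prem_pb = usable_on_prem_tb * dedup_ratio / 1000
print(f"Logical protected capacity on-prem: ~{logical_on_prem_pb:.1f} PB")
# ~5.3 PB, in line with the "five petabytes on-prem" figure in the conversation

logical_cloud_pb = 10           # "an additional 10 petabytes in the cloud" (quoted)
total_pb = logical_on_prem_pb + logical_cloud_pb
print(f"Total with cloud long-term retention: ~{total_pb:.1f} PB")
# ~15 PB, matching the "15 petabytes" claim for the 2U appliance
```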
>> So people don't wanna do a lot of heavy lifting. They worry about the time it takes to do deployment. So, what did you find, hands on, what'd you find with regards to deployment? >> Yeah, so for the deployment, we really yeah, we focused on the DP4400 and you know how that's making it easier for the IT generalist to do data protection deployment, and management. And what we did, I actually walked through the whole process from the delivery truck to first backup. We had it off the truck and racked up and powered up in about 30 minutes, so, it's a service sized appliance, pretty easy, easy to install. Spent 10 minutes in the server room kinda configuring it to the network, and then we went up to an office, and finished the configuration. After that I basically hit go on the configuration button, completely automated. And I simply monitored the process until the appliance was fully configured. Took me about 20 minutes, you know, to add that configuration to the appliance, hit go, and at the end, I had an appliance that was ready for on-site, and backups extended to the cloud. >> So, that met your expectations? It meshed with the vendors claims? >> It was real easy. We actually had to move it around a couple times, and you know, this stuff used to be huge you know, big box, metal gear. >> Refrigerators. (laughs) >> Refrigerators. It was a small appliance, once we installed it, got a note from the IT guy, had to move it. No tools, easy rack, the configuration was automated. We had to set network parameters, that's about it. >> How about your performance testing, what did that show? >> So we did some pretty extensive performance testing. We actually compared the IDPA Dell appliances to the industry recognized server grid scaled architecture. And basically we started by matching the hardware parameters of the box, CPU, memory, disk, network, flash, so once we had the boxes configured apples to apples shall we say, we ran a rigorous set of tests. We scaled the environment from a hundred to a thousand VMs, adding a hundred VMs in between each backup run. And what we found as we were doing the test was that the IDPA reduced the backup window significantly over the competitive solution. A 54 to 68% reduction in the backup window. >> Okay. So again, you're kind of expectations tied into the vendor claims? >> Yep. You know the reduction in backup time was pretty significant that's a pretty good environment, pretty good test environment, right, you got the hundred to a thousand VMs. We also looked at the efficiency of data transfer, and we found that IDPA outperformed the competitor there as well, significantly. And we found that this is do to the the mature data domain deduplication technology. It not only leverages, like most companies will, the VMware Changed Block Tracking API, but it has it's own client-side software that really reduces, significantly reduces the amount of data that needs to be transferred over the network for each backup. And we found that reduced the amount of data that needs to be transferred against the competitor by 74%. >> What about the economics, it's the one of the key paying points obviously for IT professionals. What did you see there? >> Yep, so, there's a lot that goes into the economics of a data protection environment. We summed it up into what we call the cost to protect. We actually collected call home data from 15,000 Dell EMC data protection appliances deployed worldwide. >> Oh cool, real data. >> Real data. 
So, we had the real data, we got it from 15,000 different environments, we took that data and we used some of the stuff that we analyzed, the price that they paid for it, how long it has been in service, what the deduplication rates they're getting, and then the amount of data. So we had all the components that told us what was happening with that box. So that allowed us to distill that into this InstaGraphic that we see up here, which shows 12 of the customers that we analyzed. Different industries, different architectures, on the far left of this InstaGraphic you're gonna see that we had a Data Domain box connected to a third-party backup application, still performing economically, quite well. On the far right we have the fully integrated IDPA solution, you'll see that as you put things better together, the economics get even better, right? So, what we found was that both Data Domain and the IDPA can easily serve data protection environments storage for a fraction of a penny per month. >> Okay. Important to point out this is metadata, no customer data involved here, right, it's just. >> It's metadata that's correct. >> Right, okay. Summarize your impressions based on your research, and your hands on lab work. >> Yeah, so I've been doing this for almost 25 plus years, I've been in the data protection space, I was an end user, I actually ran backup environments, I worked in the reseller space, sold the gear, and now I'm an analyst with ESG, taking a look at all the different solutions that are out there, and, you know data protection has never been easy, and there's always a lot of moving parts, and it gets harder when you really need a solution that backs up everything, right? From your physical, virtual, to the cloud, the legacy stuff, right? Dell EMC has packaged this up, in my opinion, quite well. They've looked at the economics, they've looked at the ease of use, they've looked at the performance, and they've put the right components in there; they have the data protection software, they have the target storage, they have the analytics, you can do it with an agent, you can do it without an agent. So I think they've put all the pieces in here, so it's not an easy thing in my opinion, and I think they've nailed this one. >> Excellent. Well Vinny, thanks so much for comin' on and sharing the results of your research, really appreciate it. Alright, let's hear from the customer, and then we're gonna come back with Beth Phalen and wrap, keep it right there. (upbeat techno music) >> We're a Fortune 500 company, a global provider of product solutions and services, and enterprise computing solutions. The DP4400 is attractive because customers have different consumption models. There are those that like to build their own, and there are those that want an integrated solution, they want to focus on their core business as opposed to engineering a solution. So for those customers that are looking for that type of experience, the DP4400 will address a full data protection solution that has a single pane of glass, simplified management, simplified deployment, and also, ease-of-management over time. >> Vollrath is a food service industry manufacturer, it's been in business for 144 years, in some way we probably touch your life every day. From a semantic perspective, things that weren't meeting our needs really come around to the management of all of your backup sets.
We had backup windows for four to eight hours, and we were to the point where when those backups failed, which was fairly regular, we didn't have enough time to run them again. With Dell EMC data protection, we're getting phenomenal returns, shorter times. What took us eight hours is taking under an hour, maybe it's upwards of two at times for even larger sets. Its single interface really does help. So when you take into account how much time you spend trying to manage with old solutions, that's another unparalleled piece. >> I'm the IT Director for Melanson Heath, we are a full service accounting firm. The top three benefits of the DP4400 are the simplicity of not having to do a lot of research, the ease of deployment, and not having to go back or have external resources; it's really designed so that I can rack it, stack it, and get going. Having a data protection solution that works with all of my software and systems is vital. We are completely reliant on our technology infrastructure, and we need to know that if something happens, we have a plan B that can be deployed quickly and easily. (upbeat techno music)
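For readers who want to see how the "cost to protect" metric Vinny described earlier might fall out of call-home-style data, here is a simplified sketch. The record, price, and service life below are hypothetical, and ESG's actual model and inputs are not spelled out in this conversation; the point is only that amortizing an appliance over its protected logical capacity lands in the "fraction of a penny per month" range cited above.

```python
# Simplified, hypothetical sketch of the "cost to protect" metric described above.
# ESG's real model and call-home fields are not detailed in this transcript.
appliance = {
    "purchase_price_usd": 150_000,   # hypothetical price paid
    "months_in_service": 36,         # hypothetical service life so far
    "dedup_ratio": 55,               # deduplication rate reported by the system
    "usable_capacity_tb": 96,        # physical usable capacity
}

# Assume the appliance is fully utilized; logical capacity = usable x dedup ratio.
logical_protected_gb = appliance["usable_capacity_tb"] * 1000 * appliance["dedup_ratio"]

# Amortize the purchase price over the logical capacity and months in service.
cost_per_gb_month = appliance["purchase_price_usd"] / (
    logical_protected_gb * appliance["months_in_service"]
)
print(f"Cost to protect: ${cost_per_gb_month:.5f} per GB per month")
# With these example inputs the result is well under a cent per GB per month,
# which is the "fraction of a penny" ballpark cited in the segment.
```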
Lena Smart & Tara Hernandez, MongoDB | International Women's Day
(upbeat music) >> Hello and welcome to theCube's coverage of International Women's Day. I'm John Furrier, your host of "theCUBE." We've got two great remote guests coming into our Palo Alto Studios, some tech athletes, as we say, people that've been in the trenches, years of experience, Lena Smart, CISO at MongoDB, Cube alumni, and Tara Hernandez, VP of Developer Productivity at MongoDB as well. Thanks for coming in to this program and supporting our efforts today. Thanks so much. >> Thanks for having us. >> Yeah, everyone talks about the journey in tech, where it all started. Before we get there, talk about what you guys are doing at MongoDB specifically. MongoDB has kind of gone to the next level as a platform. You have your own ecosystem, lot of developers, very technical crowd, but it's changing the business transformation. What do you guys do at Mongo? We'll start with you, Lena. >> So I'm the CISO, so all security goes through me. I like to say, well, I don't like to say, I'm described as the one throat to choke. So anything to do with security basically starts and ends with me. We do have a fantastic Cloud engineering security team and a product security team, and they don't report directly to me, but obviously we have very close relationships. I like to keep that kind of church and state separate and I know I've spoken about that before. And we just recently set up a physical security team with an amazing gentleman who left the FBI and came to join us after 26 years at the agency. So, really starting to look at the physical aspects of what we offer as well. >> I interviewed a CISO the other day and she said, "Every day is day zero for me." Kind of goofing on the Amazon Day one thing, but Tara, go ahead. Tara, go ahead. What's your role there, developer productivity? What are you focusing on? >> Sure. Developer productivity is kind of the latest description for things that we've described over the years as, you know, DevOps oriented engineering or platform engineering or build and release engineering, development infrastructure. It's all part and parcel, which is how do we actually get our code from developer to customer, you know, and all the mechanics that go into that. It's been something I discovered from my first job way back in the early '90s at Borland. And the art has just evolved enormously ever since, so. >> Yeah, this is a very great conversation, both of you guys right in the middle of all the action, and data infrastructures are changing, exploding, and evolving big time, AI and data tsunami, and security never stops. Well, let's get into, we'll talk about that later, but let's get into what motivated you guys to pursue a career in tech and what were some of the challenges that you faced along the way? >> I'll go first. The fact of the matter was I intended to be a double major in history and literature when I went off to university, but I was informed that I had to do a math or a science degree or else the university would not be paid for. At the time, UC Santa Cruz had a policy called Open Access Computing. This is, you know, the late '80s, early '90s. And anybody at the university could get an email account, and that was unusual at the time. For those of us who remember, you used to have to pay for that, CompuServe or AOL or, there's another one, I forget what it was called, but any student at Santa Cruz could have an email account. And because of that email account, I met people who were computer science majors and I'm like, "Okay, I'll try that."
That seems good. And it was a little bit of a struggle for me, a lot, I won't lie, but I can't complain about how it ended up. And certainly once I found my niche, which was development infrastructure, I found my true love and I've been doing it for almost 30 years now. >> Awesome. Great story. Can't wait to ask a few questions on that. We'll go back to that late '80s, early '90s. Lena, your journey, how you got into it. >> So slightly different start. I did not go to university. I had to leave school when I was 16, got a job, had to help support my family. Worked a bunch of various jobs till I was about 21 and then computers became more, I think, I wouldn't say they were ubiquitous, but they were certainly out there. And I'd also been saving up every penny I could earn to buy my own computer and bought an Amstrad 1640, 20 meg hard drive. It rocked. And kind of took that apart, put it back together again, and thought there could be money in this. And so basically just teaching myself about computers in any job that I got. 'Cause most of my jobs were like clerical work and secretarial work at that point. But any job that had a computer in front of it, I would make it my business to go find the guy who did computing, 'cause it was always a guy. And I would say, you know, I want to learn how these work. Like, you know, show me. And, you know, I would take my lunch hour and after work and any time I could with these people, and they were very kind with their time, and I just kept learning, so yep. >> Yeah, those early days remind me of the inflection point we're going through now. This major sea change coming. Back then, if you had a computer, you had to kind of be your own internal engineer to fix things. Remember back in the systems revolution, late '80s, Tara, when, you know, your career started, those were major inflection points. Now we're seeing a similar wave right now, security, infrastructure. It feels like it's going to a whole nother level. At Mongo, you guys certainly see this as well, with this AI surge coming in. A lot more action is coming in. And so there's a lot of parallels between these inflection points. How do you guys see this next wave of change? Obviously, the AI stuff's blowing everyone away. Oh, new user interface. It's been called the browser moment, the mobile iPhone moment, kind of for this generation. There's a lot of people out there who are watching that are young in their careers, what's your take on this? How would you talk to those folks around how important this wave is? >> It, you know, it's funny, I've been having this conversation quite a bit recently, in part because, you know, to me AI in a lot of ways is very similar to, you know, back in the '90s when we were talking about bringing the World Wide Web to the forefront of the world, right. And we tended to think in terms of all the optimistic benefits that would come of it. You know, free passing of information, availability to anyone, anywhere. You just needed an internet connection, which back then of course meant a modem. >> John: Not everyone had one though. >> Exactly. But what we found in the subsequent years is that human beings are what they are and we bring ourselves to whatever platforms are there, right. And so, you know, as much as it was amazing to have this freely available HTML based internet experience, it also meant that the negatives came to the forefront quite quickly. And there were ramifications of that. And so to me, when I look at AI, we're already seeing the ramifications of that.
Yes, are there these amazing, optimistic, wonderful things that can be done? Yes. >> Yeah. >> But we're also human and the bad stuff's going to come out too. And how do we- >> Yeah. >> How do we as an industry, as a community, you know, understand and mitigate those ramifications so that we can benefit more from the positive than the negative? So it is interesting that it comes kind of full circle in really interesting ways. >> Yeah. The underbelly takes place first, gets it in the early adopter mode. Normally industries with, you know, money involved, arbitrage, no standards. But we've seen this movie before. Is there hope, Lena, that we can have a more secure environment? >> I would hope so. (Lena laughs) Although depressingly, we've been in this, well, for 30 years now and we're, at the end of the day, still telling people not to click links in emails. So yeah, that kind of still keeps me awake at night a wee bit. The whole thing about AI, I mean, it's, obviously I am not an expert by any stretch of the imagination in AI. I did read (indistinct) book recently about AI and that was kind of interesting. And I'm just trying to teach myself as much as I can about it, to the extent of even buying the "Dummies Guide to AI." Just because, it's actually not a dummies guide. It's actually fairly interesting, but I'm always thinking about it from a security standpoint. So it's kind of my worst nightmare and the best thing that could ever happen in the same dream. You know, you've got this technology where I can ask it a question and, you know, it spits out generally a reasonable answer. And my team are working with Mark Porter, our CTO, and his team on almost like an incubation of AI: what would it look like from MongoDB? What are the legal ramifications? 'Cause there will be legal ramifications, even though it's the wild, wild west just now, I think. Regulation's going to catch up to us pretty quickly, I would think. >> John: Yeah, yeah. >> And so I think, you know, as long as companies have a seat at the table and governments perhaps don't become too dictatorial over this, then hopefully we'll be in a good place. But we'll see. I think it's really interesting, there's that curse, we're living in interesting times. I think that's where we are. >> It's interesting just to stay on this tech trend for a minute. The standards bodies are different now. Back in the old days there were, you know, IEEE standards, IETF standards. >> Tara: TPC. >> The developers are the new standard. I mean, now you're seeing open source completely different from where it was in the '90s to here. In the beginning, that was gen one, some say gen two, but I say gen one, now we're exploding with open source. You have kind of developers setting the standards. If developers like it in droves, it becomes de facto, which then kind of rolls into implementation. >> Yeah, I mean I think if you don't have developer input, and this is why I love working with Tara and her team so much, is 'cause they get it. If we don't have input from developers, it's not going to get used. There's going to be ways of working around it, especially when it comes to security. If they don't, you know, if you're a developer and you're sat at your screen and you don't want to do that particular thing, you're going to find a way around it. You're a smart person. >> Yeah. >> So. >> Developers on the front lines now versus, even back in the '90s, they're like, "Okay, consider the devs, got a QA team."
Everything was Waterfall, now it's Cloud, and developers are on the front lines of everything. Tara, I mean, this is where the standards are being met. What's your reaction to that? >> Well, I think it's outstanding. I mean, you know, like I was at Netscape and part of the crowd that released the browser as open source and we founded mozilla.org, right. And that was, you know, in many ways kind of the birth of the modern open source movement beyond what we used to have, what was basically free software foundation was sort of the only game in town. And I think it is so incredibly valuable. I want to emphasize, you know, and pile onto what Lena was saying, it's not just that the developers are having input on a sort of company by company basis. Open source to me is like a checks and balance, where it allows us as a broader community to be able to agree on and enforce certain standards in order to try and keep the technology platforms as accessible as possible. I think Kubernetes is a great example of that, right. If we didn't have Kubernetes, that would've really changed the nature of how we think about container orchestration. But even before that, Linux, right. Linux allowed us as an industry to end the Unix Wars and as someone who was on the front lines of that as well and having to support 42 different operating systems with our product, you know, that was a huge win. And it allowed us to stop arguing about operating systems and start arguing about software or not arguing, but developing it in positive ways. So with, you know, with Kubernetes, with container orchestration, we all agree, okay, that's just how we're going to orchestrate. Now we can build up this huge ecosystem, everybody gets taken along, right. And now it changes the game for what we're defining as business differentials, right. And so when we talk about crypto, that's a little bit harder, but certainly with AI, right, you know, what are the checks and balances that as an industry and as the developers around this, that we can in, you know, enforce to make sure that no one company or no one body is able to overly control how these things are managed, how it's defined. And I think that is only for the benefit in the industry as a whole, particularly when we think about the only other option is it gets regulated in ways that do not involve the people who actually know the details of what they're talking about. >> Regulated and or thrown away or bankrupt or- >> Driven underground. >> Yeah. >> Which would be even worse actually. >> Yeah, that's a really interesting, the checks and balances. I love that call out. And I was just talking with another interview part of the series around women being represented in the 51% ratio. Software is for everybody. So that we believe that open source movement around the collective intelligence of the participants in the industry and independent of gender, this is going to be the next wave. You're starting to see these videos really have impact because there are a lot more leaders now at the table in companies developing software systems and with AI, the aperture increases for applications. And this is the new dynamic. What's your guys view on this dynamic? How does this go forward in a positive way? Is there a certain trajectory you see? For women in the industry? 
>> I mean, I think some of the states are trying to, again, from the government angle, some of the states are trying to force women into the boardroom, for example, California, which can be no bad thing, but I don't know, sometimes I feel a bit iffy about all this kind of forced- >> John: Yeah. >> You know, making, I don't even know how to say it properly so you can cut this part of the interview. (John laughs) >> Tara: Well, and I think that they're >> I'll say it's not organic. >> No, and I think they're already pulling it out, right. It's already been challenged so they're in the process- >> Well, this is the open source angle, Tara, you are getting at it. The change agent is open, right? So to me, the history of the proven model is openness drives transparency drives progress. >> No, it's- >> If you believe that to be true, this could have another impact. >> Yeah, it's so interesting, right. Because if you look at McKinsey Consulting or Boston Consulting or some of the other, I'm blocking on all of the names. There has been a decade or more of research that shows that a non homogeneous employee base, be it gender or ethnicity or whatever, generates more revenue, right? There's dollar signs that can be attached to this, but it's not enough for all companies to want to invest in that way. And it's not enough for all, you know, venture firms or investment firms to grant that seed money or do those seed rounds. I think it's getting better very slowly, but socialization is a much harder thing to overcome over time. Particularly, when you're not just talking about one country like the United States in our case, but around the world. You know, tech centers now exist all over the world, including places that even 10 years ago we might not have expected like Nairobi, right. Which I think is amazing, but you have to factor in the cultural implications of that as well, right. So yes, the openness is important and we have, it's important that we have those voices, but I don't think it's a panacea solution, right. It's just one more piece. I think honestly that one of the most important opportunities has been with Cloud computing and Cloud's been around for a while. So why would I say that? It's because if you think about like everybody holds up the Steve Jobs, Steve Wozniak, back in the '70s, or Sergey and Larry for Google, you know, you had to have access to enough credit card limit to go to Fry's and buy your servers and then access to somebody like Susan Wojcicki to borrow the garage or whatever. But there was still a certain amount of upfrontness that you had to be able to commit to, whereas now, and we've, I think, seen a really good evidence of this being able to lease server resources by the second and have development platforms that you can do on your phone. I mean, for a while I think Africa, that the majority of development happened on mobile devices because there wasn't a sufficient supply chain of laptops yet. And that's no longer true now as far as I know. But like the power that that enables for people who would otherwise be underrepresented in our industry instantly opens it up, right? And so to me that's I think probably the biggest opportunity that we've seen from an industry on how to make more availability in underrepresented representation for entrepreneurship. >> Yeah. >> Something like AI, I think that's actually going to take us backwards if we're not careful. >> Yeah. >> Because of we're reinforcing that socialization. >> Well, also the bias. 
A lot of people commenting on the biases of the large language inherently built in are also problem. Lena, I want you to weigh on this too, because I think the skills question comes up here and I've been advocating that you don't need the pedigree, college pedigree, to get into a certain jobs, you mentioned Cloud computing. I mean, it's been around for you think a long time, but not really, really think about it. The ability to level up, okay, if you're going to join something new and half the jobs in cybersecurity are created in the past year, right? So, you have this what used to be a barrier, your degree, your pedigree, your certification would take years, would be a blocker. Now that's gone. >> Lena: Yeah, it's the opposite. >> That's, in fact, psychology. >> I think so, but the people who I, by and large, who I interview for jobs, they have, I think security people and also I work with our compliance folks and I can't forget them, but let's talk about security just now. I've always found a particular kind of mindset with security folks. We're very curious, not very good at following rules a lot of the time, and we'd love to teach others. I mean, that's one of the big things stem from the start of my career. People were always interested in teaching and I was interested in learning. So it was perfect. And I think also having, you know, strong women leaders at MongoDB allows other underrepresented groups to actually apply to the company 'cause they see that we're kind of talking the talk. And that's been important. I think it's really important. You know, you've got Tara and I on here today. There's obviously other senior women at MongoDB that you can talk to as well. There's a bunch of us. There's not a whole ton of us, but there's a bunch of us. And it's good. It's definitely growing. I've been there for four years now and I've seen a growth in women in senior leadership positions. And I think having that kind of track record of getting really good quality underrepresented candidates to not just interview, but come and join us, it's seen. And it's seen in the industry and people take notice and they're like, "Oh, okay, well if that person's working, you know, if Tara Hernandez is working there, I'm going to apply for that." And that in itself I think can really, you know, reap the rewards. But it's getting started. It's like how do you get your first strong female into that position or your first strong underrepresented person into that position? It's hard. I get it. If it was easy, we would've sold already. >> It's like anything. I want to see people like me, my friends in there. Am I going to be alone? Am I going to be of a group? It's a group psychology. Why wouldn't? So getting it out there is key. Is there skills that you think that people should pay attention to? One's come up as curiosity, learning. What are some of the best practices for folks trying to get into the tech field or that's in the tech field and advancing through? What advice are you guys- >> I mean, yeah, definitely, what I say to my team is within my budget, we try and give every at least one training course a year. And there's so much free stuff out there as well. But, you know, keep learning. And even if it's not right in your wheelhouse, don't pick about it. Don't, you know, take a look at what else could be out there that could interest you and then go for it. You know, what does it take you few minutes each night to read a book on something that might change your entire career? 
You know, be enthusiastic about the opportunities out there. And there's so many opportunities in security. Just so many. >> Tara, what's your advice for folks out there? Tons of stuff to taste, taste test, try things. >> Absolutely. I mean, I always say, you know, my primary qualifications for people, I'm looking for them to be smart and motivated, right. Because the industry changes so quickly. What we're doing now versus what we did even last year versus five years ago, you know, is completely different though themes are certainly the same. You know, we still have to code and we still have to compile that code or package the code and ship the code so, you know, how well can we adapt to these new things instead of creating floppy disks, which was my first job. Five and a quarters, even. The big ones. >> That's old school, OG. There it is. Well done. >> And now it's, you know, containers, you know, (indistinct) image containers. And so, you know, I've gotten a lot of really great success hiring boot campers, you know, career transitioners. Because they bring a lot experience in addition to the technical skills. I think the most important thing is to experiment and figuring out what do you like, because, you know, maybe you are really into security or maybe you're really into like deep level coding and you want to go back, you know, try to go to school to get a degree where you would actually want that level of learning. Or maybe you're a front end engineer, you want to be full stacked. Like there's so many different things, data science, right. Maybe you want to go learn R right. You know, I think it's like figure out what you like because once you find that, that in turn is going to energize you 'cause you're going to feel motivated. I think the worst thing you could do is try to force yourself to learn something that you really could not care less about. That's just the worst. You're going in handicapped. >> Yeah and there's choices now versus when we were breaking into the business. It was like, okay, you software engineer. They call it software engineering, that's all it was. You were that or you were in sales. Like, you know, some sort of systems engineer or sales and now it's,- >> I had never heard of my job when I was in school, right. I didn't even know it was a possibility. But there's so many different types of technical roles, you know, absolutely. >> It's so exciting. I wish I was young again. >> One of the- >> Me too. (Lena laughs) >> I don't. I like the age I am. So one of the things that I did to kind of harness that curiosity is we've set up a security champions programs. About 120, I guess, volunteers globally. And these are people from all different backgrounds and all genders, diversity groups, underrepresented groups, we feel are now represented within this champions program. And people basically give up about an hour or two of their time each week, with their supervisors permission, and we basically teach them different things about security. And we've now had seven full-time people move from different areas within MongoDB into my team as a result of that program. So, you know, monetarily and time, yeah, saved us both. But also we're showing people that there is a path, you know, if you start off in Tara's team, for example, doing X, you join the champions program, you're like, "You know, I'd really like to get into red teaming. That would be so cool." If it fits, then we make that happen. 
And that has been really important for me, especially to give, you know, the women in the underrepresented groups within MongoDB just that window into something they might never have seen otherwise. >> That's a great common fit is fit matters. Also that getting access to what you fit is also access to either mentoring or sponsorship or some sort of, at least some navigation. Like what's out there and not being afraid to like, you know, just ask. >> Yeah, we just actually kicked off our big mentor program last week, so I'm the executive sponsor of that. I know Tara is part of it, which is fantastic. >> We'll put a plug in for it. Go ahead. >> Yeah, no, it's amazing. There's, gosh, I don't even know the numbers anymore, but there's a lot of people involved in this and so much so that we've had to set up mentoring groups rather than one-on-one. And I think it was 45% of the mentors are actually male, which is quite incredible for a program called Mentor Her. And then what we want to do in the future is actually create a program called Mentor Them so that it's not, you know, not just on the female and so that we can live other groups represented and, you know, kind of break down those groups a wee bit more and have some more granularity in the offering. >> Tara, talk about mentoring and sponsorship. Open source has been there for a long time. People help each other. It's community-oriented. What's your view of how to work with mentors and sponsors if someone's moving through ranks? >> You know, one of the things that was really interesting, unfortunately, in some of the earliest open source communities is there was a lot of pervasive misogyny to be perfectly honest. >> Yeah. >> And one of the important adaptations that we made as an open source community was the idea, an introduction of code of conducts. And so when I'm talking to women who are thinking about expanding their skills, I encourage them to join open source communities to have opportunity, even if they're not getting paid for it, you know, to develop their skills to work with people to get those code reviews, right. I'm like, "Whatever you join, make sure they have a code of conduct and a good leadership team. It's very important." And there are plenty, right. And then that idea has come into, you know, conferences now. So now conferences have codes of contact, if there are any good, and maybe not all of them, but most of them, right. And the ideas of expanding that idea of intentional healthy culture. >> John: Yeah. >> As a business goal and business differentiator. I mean, I won't lie, when I was recruited to come to MongoDB, the culture that I was able to discern through talking to people, in addition to seeing that there was actually women in senior leadership roles like Lena, like Kayla Nelson, that was a huge win. And so it just builds on momentum. And so now, you know, those of us who are in that are now representing. And so that kind of reinforces, but it's all ties together, right. As the open source world goes, particularly for a company like MongoDB, which has an open source product, you know, and our community builds. You know, it's a good thing to be mindful of for us, how we interact with the community and you know, because that could also become an opportunity for recruiting. >> John: Yeah. >> Right. So we, in addition to people who might become advocates on Mongo's behalf in their own company as a solution for themselves, so. >> You guys had great successful company and great leadership there. 
I mean, I can't tell you how many times someone's told me "MongoDB doesn't scale. It's going to be dead next year." I mean, I was going back 10 years. It's like, just keeps getting better and better. You guys do a great job. So it's so fun to see the success of developers. Really appreciate you guys coming on the program. Final question, what are you guys excited about to end the segment? We'll give you guys the last word. Lena will start with you and Tara, you can wrap us up. What are you excited about? >> I'm excited to see what this year brings. I think with ChatGPT and its copycats, I think it'll be a very interesting year when it comes to AI and always in the lookout for the authentic deep fakes that we see coming out. So just trying to make people aware that this is a real thing. It's not just pretend. And then of course, our old friend ransomware, let's see where that's going to go. >> John: Yeah. >> And let's see where we get to and just genuine hygiene and housekeeping when it comes to security. >> Excellent. Tara. >> Ah, well for us, you know, we're always constantly trying to up our game from a security perspective in the software development life cycle. But also, you know, what can we do? You know, one interesting application of AI that maybe Google doesn't like to talk about is it is really cool as an addendum to search and you know, how we might incorporate that as far as our learning environment and developer productivity, and how can we enable our developers to be more efficient, productive in their day-to-day work. So, I don't know, there's all kinds of opportunities that we're looking at for how we might improve that process here at MongoDB and then maybe be able to share it with the world. One of the things I love about working at MongoDB is we get to use our own products, right. And so being able to have this interesting document database in order to put information and then maybe apply some sort of AI to get it out again, is something that we may well be looking at, if not this year, then certainly in the coming year. >> Awesome. Lena Smart, the chief information security officer. Tara Hernandez, vice president developer of productivity from MongoDB. Thank you so much for sharing here on International Women's Day. We're going to do this quarterly every year. We're going to do it and then we're going to do quarterly updates. Thank you so much for being part of this program. >> Thank you. >> Thanks for having us. >> Okay, this is theCube's coverage of International Women's Day. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Krista Satterthwaite | International Women's Day
(upbeat music) >> Hello, welcome to the Cube's coverage of International Women's Day 2023. I'm John Furrier, host of the CUBE series of profiles around leaders in the tech industry sharing their stories, advice, best practices, what they're doing in their jobs, their vision of the future, and more importantly, passing it on and encouraging more and more networking and telling the stories that matter. Our next guest is a great executive leader talking about how to lead in challenging times. Krista Satterthwaite, who is Senior Vice President and GM of Mainstream Compute. Krista, great to see you, you're a Cube alumni. We've had you on before talking about compute power. And by the way, congratulations on your BPTN, Black Professionals in Tech Network, 2023 Black Tech Exec of the Year Award. >> Thank you very much. Appreciate it. And thanks for having me. >> I knew I liked you the first time we were doing interviews together. You were so smart and so on top of it. Thanks for coming on. >> No problem. >> All kidding aside, let's get into it. You know, one of the things that's coming out on these interviews is leadership is being showcased and there's a network effect happening in the industry, and you're starting to see people look and hear stories that they may or may not have heard before, or news stories are coming out. So, one of the things that's interesting is that also in the backdrop of post pandemic, there's been a turn in the industry a little bit, there's a little bit of headwind in certain areas, some tailwinds in cloud and other areas. Compute, your area, is doing very well. It could be challenging. And as a leader, has the conversation changed? And where are you at right now in the network of folks you're working with? What's the mood? >> Yeah, so actually, things are much better. Obviously we had a chip shortage last year. Things are much, much better. But I learned a lot when it came to going through challenging times and leadership. And I think when we talk to customers, a lot of 'em are in challenging situations. Sometimes it's budget, sometimes it's attracting and retaining talent, and sometimes it's just demands because, it's really exciting that technology is behind everything. But that means the demands on IT are bigger than ever before. So what I find when it comes to challenging times is that there's really three qualities that are game changers when it comes to leading in challenging times. And the first one is positivity. People have to feel like there's a light at the end of the tunnel to make sure that their attitudes stay up, that they stay working really, really hard, and they look to the leader for that. The second one is communication. And I read somewhere that communication is leadership. And we had a great example from our CEO Antonio Neri when the pandemic hit and everything shut down. He had an all employee meeting every week for a month, and we have tens of thousands of employees. And then even after that month, we had 'em very regularly. But he wanted to make sure that everybody heard from him, his thoughts, had all the updates, knew how their peers were doing, how we were helping customers. And I really learned a lot from that in terms of communicating, and communicating more during tough times. And then I would say the third one is making sure that they are informed and they feel empowered. So I would say a leader who is able to do that really, really stands out in a challenging time. >> So how do you get yourself together?
Obviously, the chip shortage, everyone knows it in the industry, and for the folks not in the tech industry, it was a potential economic disaster, because you don't get the chips you need. You guys make servers and technology, chips power everything. If you miss a shipment, it could cause a lot of backlash. So Cisco had an earnings impact. It has an impact on the business. When do you have that code red moment where it's like, okay, we have to kind of put the pause on and go into emergency mode? And how do you handle that? >> Well, you know, it is funny 'cause when we have challenges, I've come to learn that people can look at challenges and hard work as a burden or a mission, and they behave totally differently. If they see it as a burden, then they're doing the bare minimum and they're pointing fingers and they're complaining and they're probably not getting a whole lot done. If they see it as a mission, then all of a sudden they're going above and beyond. They're working really hard, they're really partnering. And if it affects customers, for HPE, obviously we, HPE is a very customer centric company, so everyone pays attention and tries to pitch in. But when it comes to a mission, I started thinking, what are the real ingredients for a mission? And I think it's important. I think it's that people feel like they can make an impact. And then I think the third one is that the goal is clear, even if the path isn't, 'cause you may have to pivot a lot if it's a challenge. And so when it came to the chip shortage, it was a mission. We wanted to make sure that we could ship to customers as quickly as possible. And it was a mission. Everybody pulled together. I learned how much our team could pull off and pull together through that challenge. >> And the consequences can be quantified in economics. So it's like the burn the boats example, you got to burn the boats, you're stuck. You got to figure out a solution. How does that change the demands on people? Because this is, okay, there's a mission, it's not normal. What are some of those new demands that arise during those times and how do you manage that? How do you be a leader? >> Yeah, so it's funny, I was reading this statement from James White, who used to be the CEO of Jamba Juice. And he was talking about how he got that job. He said, "I think it was one thing I said that really convinced them that I was the right person." And what he said was something like, "I will get more out of people than nine out of 10 leaders on the planet." He said, "Because I will look at their strengths and their capabilities and I will play to their passions." And getting the most out of people in difficult times, it is all about how much you can get out of people for their own sake and for the company's sake. >> That's great feedback. And to people watching who are early in their careers, leading is getting the best out of your team, attitude. Some of the things you mentioned. What advice would you give folks that are starting to get into the workforce, that are starting to get into that leadership track, or might have a trajectory, or even might have an innate ability that they know they have and they want to pursue that dream? >> Yeah so. >> What advice would you give them? >> Yeah, what I would say, I say this all the time: for the first half of my career I was very job conscious, but I wasn't very career conscious.
So I'd get in a role and I'd stay in that role for long periods of time and I'd do a good job, but I wasn't really very career conscious. And what I would say is, everybody says how important risk taking is. Well, risk taking can be a little bit of a scary word, right? Or term. And the way I see it is give it a shot and see what happens. You're interested in something, give it a shot and see what happens. It's kind of a less intimidating way of looking at risk because even though I was job conscious, and not career conscious, one thing I did when people asked me to take something on, hey Krista, would you like to take on more responsibility here? The answer was always yes, yes, yes, yes. So I said yes because I said, hey I'll give it a shot and see what happens. And that helped me tremendously because I felt like I am giving it a try. And the more you do that, the the better it is. >> It's great. >> And actually the the less scary it is because you do that, a few times and it goes well. It's like a muscle that builds. >> It's funny, a woman executive was on the program. I said, the word balance comes up a lot. And she stopped and said, "Let's just talk about balance for a second." And then she went contrarian and said, "It's about not being unbalanced. It's about being, taking a chance and being a little bit off balance to put yourself outside your comfort zone to try new things." And then she also came up and followed and said, "If you do that alone, you increase your risk. But if you do it with people, a team that you trust and you're authentic and you're vulnerable and you're communicating, that is the chemistry." And that was a really good point. What's your reaction? 'Cause you were talking about authentic conversations good communications with Antonio. How does someone get, feel, find that team and do you agree with it? And what was your, how would you react to that? >> Yes, I agree with that. And when it comes to being authentic, that's the magic and when someone isn't, if someone's not really being themselves, it's really funny because you can feel it, you can sense it. There's kind of a wall between you and them. And over time people won't be able to put their finger on it, but they'll feel a distance from you. But when you're authentic and you share who you are, what you find is you find things in common with other people. 'Cause you're sharing more of who you are and it's like, oh, I do that too. Oh, I'm interested in that too. And build the bonds between people and the authenticity. And that's what people crave. They want people to be authentic and people can tell when you're authentic and when you're not. >> Is managing and leading through a crisis a born talent or can you learn it? >> Oh, definitely learned. I think that we're born knowing nothing and I once read people are nurtured into greatness and I think that's true. So yeah, definitely learned. >> What are some examples that can come out of a tough time as folks may look at a crisis and be shy away from it? How do they lean into it? What advice would you give folks? How do you handle it? I mean, everyone's got different personality. Okay, they get to a position but stepping through that door. >> Yeah, well, I do this presentation called, "10 things I Wish I Knew Earlier in my Career." And one of those things is about the growth mindset and the growth mindset. 
There's a book called "Mindset" by Carol Dweck and the growth mindset is all about learning and not always having to know everything, but really the winning is in the learning. And so if you have a growth mindset it makes you feel better about everything because you can't lose. You're winning because you're learning. So when I've learned that, I started looking at things much differently. And when it comes to going through tough times, what I find is you're exercising muscles that you didn't even know you had, which makes you stronger when the crisis is over, obviously. And I also feel like you become a lot a much more creative when you're in challenging times. You're forced to do things that you hadn't had to do before. And it also bonds the team. It's almost like going through bootcamp together. When you go through a challenge together it bonds you for life. >> I mean, you could have bonding, could be trauma bonding or success bonding. People love to be on the success side because that's positive and that's really the key mindset. You're always winning if you have that attitude. And learnings is also positive. So it's not, it's never a failure unless you make it. >> That's right, exactly. As long as you learn from it. And that's the name of the game. So, learning is the goal. >> So I have to ask you, on your job now, you have a really big responsibility HPE compute and big division. What's the current mindset that you have right now in your career, where you're at? What are some of the things on your mind that you think about? We had other, other seniors leaders say, hey, you know I got the software as my brain and the hardware's my body. I like to keep software and hardware working together. What is your current state of your career and how you looking at it, what's next and what's going on in your mind right now? >> Yeah, so for me, I really want to make sure that for my team we're nurturing the next generation of leadership and that we're helping with career development and career growth. And people feel like they can grow their careers here. Luckily at HPE, we have a lot of people stay at HPE a long time, and even people who leave HPE a lot of times they come back because the culture's fantastic. So I just want to make sure I'm contributing to that culture and I'm bringing up the next generation of leaders. >> What's next for you? What are you looking at from a career personal standpoint? >> You know, it's funny, I, I love what I'm doing right now. I'm actually on a joint venture board with H3C, which is HPE Joint Venture Company. And so I'm really enjoying that and exploring more board service opportunities. >> You have a focus of good growth mindset, challenging through, managing through tough times. How do you stay focused on that North star? How do you keep the reinforcement of the mission? How do you nurture the team to greatness? >> Yeah, so I think it's a lot of clarity, providing a lot of clarity about what's important right now. And it goes back to some of the communication that I mentioned earlier, making sure that everybody knows where the North Star is, so everybody's focused on the same thing, because I feel like with the, I always felt like throughout my career I was set up for success if I had the right information, the right guidance and the right goals. And I try to make sure that I do that with my team. 
>> What are some of the things that you could share as we wrap up here for the folks watching, as the networks increase, as the stories start to unfold more and more on digital like we're doing here, what do you hope people walk away with? What's working, what needs work, and what is some things that people aren't talking about that should be discussed publicly? >> Do you mean from a career standpoint or? >> For career? For growing into tech and into leadership positions. >> Okay. >> Big migration tech is now a wide field. I mean, when I grew up, broke into the eighties, it was computer science, software engineering, and three degrees in engineering, right? >> I see huge swath of AI coming. So many technical careers. There's a lot more women. >> Yeah. And that's what's so exciting about being in a technical career, technical company, is that everything's always changing. There's always opportunity to learn something new. And frankly, you know, every company is in the business of technology right now, because they want to closer to their customers. Typically, they're using technology to do that. Everyone's digitally transforming. And so what I would say is that there's so much opportunity, keep your mind open, explore what interests you and keep learning because it's changing all the time. >> You know I was talking with Sue, former HP, she's on a lot of boards. The balance at the board level still needs a lot of work and the leaderships are getting better, but the board at the seats at the table needs work. Where do you see that transition for you in the future? Is that something on your mind? Maybe a board seat? You mentioned you're on a board with HPE, but maybe sitting on some other boards? Any, any? >> Yes, actually, actually, we actually have a program here at HPE called the Board Ready Now program that I'm a part of. And so HPE is very supportive of me exploring an independent board seat. And so they have some education and programming around that. And I know Sue well, she's awesome. And so yes, I'm looking into those opportunities right now. >> She advises do one no more than two. The day job. >> Yeah, I would only be doing one current job that I have. >> Well, kris, it was great to chat with you about these topics and leadership and challenging times. Great masterclass, great advice. As SVP and GM of mainstream compute for HPE, what's going on in your job these days? What's the most exciting thing happening? Share some of your work situations. >> Sure, so the most exciting thing happening right now is HPE Gen 11, which we just announced and started shipping, brings tremendous performance benefit, has an intuitive operating experience, a trusted security by design, and it's optimized to run workloads so much faster. So if anybody is interested, they should go check it out on hpe.com. >> And of course the CUBE will be at HPE Discover. We'll see you there. Any final wisdom you'd like to share as we wrap up the last minute here? >> Yeah, so I think the last thing I'll say is that when it comes to setting your sights, I think, expecting it, good things to happen usually happens when you believe you deserve it. So what happens is you believe you deserve it, then you expect it and you get it. And so sometimes that's about making sure you raise your thermostat to expect more. And I always talk about you don't have to raise it all up at once. 
You could do that incrementally and other people can set your thermostat too when they say, hey, you should be, you should get a level this high or that high, but raise your thermostat because what you expect is what you get. >> Krista, thank you so much for contributing to this program. We're going to do it quarterly. We're going to be getting more stories out there, so we'll have you back, and if you know anyone with good stories, send them our way. And congratulations on your BPTN Tech Executive of the Year award for 2023. Congratulations, great prize there and great recognition for your hard work. >> Thank you so much, John, I appreciate it. >> Okay, this is the Cube's coverage of International Women's Day. I'm John Furrier, stories from the front lines, management ranks, developers, all there, global coverage of international events with theCUBE. Thanks for watching. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nutanix | ORGANIZATION | 0.99+ |
Western Digital | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Krista | PERSON | 0.99+ |
Bernie Hannon | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Bernie | PERSON | 0.99+ |
H3C | ORGANIZATION | 0.99+ |
Citrix | ORGANIZATION | 0.99+ |
September of 2015 | DATE | 0.99+ |
Dave Tang | PERSON | 0.99+ |
Krista Satterthwaite | PERSON | 0.99+ |
SanDisk | ORGANIZATION | 0.99+ |
Martin | PERSON | 0.99+ |
James White | PERSON | 0.99+ |
Sue | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Carol Dweck | PERSON | 0.99+ |
Martin Fink | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Raghu | PERSON | 0.99+ |
Raghu Nandan | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
three | QUANTITY | 0.99+ |
Lee Caswell | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Antonio Neri | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
three-month | QUANTITY | 0.99+ |
four-year | QUANTITY | 0.99+ |
one minute | QUANTITY | 0.99+ |
Gary | PERSON | 0.99+ |
Antonio | PERSON | 0.99+ |
Feb 2018 | DATE | 0.99+ |
2023 | DATE | 0.99+ |
seven dollars | QUANTITY | 0.99+ |
three months | QUANTITY | 0.99+ |
Arm Holdings | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Joseph Nelson, Roboflow | Cube Conversation
(gentle music) >> Hello everyone. Welcome to this CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great remote guest coming in. Joseph Nelson, co-founder and CEO of RoboFlow hot startup in AI, computer vision. Really interesting topic in this wave of AI next gen hitting. Joseph, thanks for coming on this CUBE conversation. >> Thanks for having me. >> Yeah, I love the startup tsunami that's happening here in this wave. RoboFlow, you're in the middle of it. Exciting opportunities, you guys are in the cutting edge. I think computer vision's been talked about more as just as much as the large language models and these foundational models are merging. You're in the middle of it. What's it like right now as a startup and growing in this new wave hitting? >> It's kind of funny, it's, you know, I kind of describe it like sometimes you're in a garden of gnomes. It's like we feel like we've got this giant headstart with hundreds of thousands of people building with computer vision, training their own models, but that's a fraction of what it's going to be in six months, 12 months, 24 months. So, as you described it, a wave is a good way to think about it. And the wave is still building before it gets to its full size. So it's a ton of fun. >> Yeah, I think it's one of the most exciting areas in computer science. I wish I was in my twenties again, because I would be all over this. It's the intersection, there's so many disciplines, right? It's not just tech computer science, it's computer science, it's systems, it's software, it's data. There's so much aperture of things going on around your world. So, I mean, you got to be batting all the students away kind of trying to get hired in there, probably. I can only imagine you're hiring regiment. I'll ask that later, but first talk about what the company is that you're doing. How it's positioned, what's the market you're going after, and what's the origination story? How did you guys get here? How did you just say, hey, want to do this? What was the origination story? What do you do and how did you start the company? >> Yeah, yeah. I'll give you the what we do today and then I'll shift into the origin. RoboFlow builds tools for making the world programmable. Like anything that you see should be read write access if you think about it with a programmer's mind or legible. And computer vision is a technology that enables software to be added to these real world objects that we see. And so any sort of interface, any sort of object, any sort of scene, we can interact with it, we can make it more efficient, we can make it more entertaining by adding the ability for the tools that we use and the software that we write to understand those objects. And at RoboFlow, we've empowered a little over a hundred thousand developers, including those in half the Fortune 100 so far in that mission. Whether that's Walmart understanding the retail in their stores, Cardinal Health understanding the ways that they're helping their patients, or even electric vehicle manufacturers ensuring that they're making the right stuff at the right time. As you mentioned, it's early. Like I think maybe computer vision has touched one, maybe 2% of the whole economy and it'll be like everything in a very short period of time. And so we're focused on enabling that transformation. I think it's it, as far as I think about it, I've been fortunate to start companies before, start, sell these sorts of things. 
This is the last company I ever wanted to start and I think it will be, should we do it right, the world's largest in riding the wave of bringing together the disparate pieces of that technology. >> What was the motivating point of the formation? Was it, you know, you guys were hanging around? Was there some catalyst? What was the moment where it all kind of came together for you? >> You know what's funny is my co-founder, Brad and I, we were making computer vision apps for making board games more fun to play. So in 2017, Apple released AR kit, augmented reality kit for building augmented reality applications. And Brad and I are both sort of like hacker persona types. We feel like we don't really understand the technology until we build something with it and so we decided that we should make an app that if you point your phone at a Sudoku puzzle, it understands the state of the board and then it kind of magically fills in that experience with all the digits in real time, which totally ruins the game of Sudoku to be clear. But it also just creates this like aha moment of like, oh wow, like the ability for our pocket devices to understand and see the world as good or better than we can is possible. And so, you know, we actually did that as I mentioned in 2017, and the app went viral. It was, you know, top of some subreddits, top of Injure, Reddit, the hacker community as well as Product Hunt really liked it. So it actually won Product Hunt AR app of the year, which was the same year that the Tesla model three won the product of the year. So we joked that we share an award with Elon our shared (indistinct) But frankly, so that was 2017. RoboFlow wasn't incorporated as a business until 2019. And so, you know, when we made Magic Sudoku, I was running a different company at the time, Brad was running a different company at the time, and we kind of just put it out there and were excited by how many people liked it. And we assumed that other curious developers would see this inevitable future of, oh wow, you know. This is much more than just a pedestrian point your phone at a board game. This is everything can be seen and understood and rewritten in a different way. Things like, you know, maybe your fridge. Knowing what ingredients you have and suggesting recipes or auto ordering for you, or we were talking about some retail use cases of automated checkout. Like anything can be seen and observed and we presume that that would kick off a Cambrian explosion of applications. It didn't. So you fast forward to 2019, we said, well we might as well be the guys to start to tackle this sort of problem. And because of our success with board games before, we returned to making more board game solving applications. So we made one that solves Boggle, you know, the four by four word game, we made one that solves chess, you point your phone at a chess board and it understands the state of the board and then can make move recommendations. And each additional board game that we added, we realized that the tooling was really immature. The process of collecting images, knowing which images are actually going to be useful for improving model performance, training those models, deploying those models. And if we really wanted to make the world programmable, developers waiting for us to make an app for their thing of interest is a lot less efficient, less impactful than taking our tool chain and releasing that externally. And so, that's what RoboFlow became. 
RoboFlow became the internal tools that we used to make these game changing applications readily available. And as you know, when you give developers new tools, they create new billion dollar industries, let alone all sorts of fun hobbyist projects along the way. >> I love that story. Curious, inventive, little radical. Let's break the rules, see how we can push the envelope on the board games. That's how companies get started. It's a great story. I got to ask you, okay, what happens next? Now, okay, you realize this new tooling, but this is like how companies get built. Like they solve their own problem that they had 'cause they realized there's one, but then there has to be a market for it. So you actually guys knew that this was coming around the corner. So okay, you got your hacker mentality, you did that thing, you got the award and now you're like, okay, wow. Were you guys conscious of the wave coming? Was it one of those things where you said, look, if we do this, we solve our own problem, this will be big for everybody. Did you have that moment? Was that in 2019 or was that more of like, it kind of was obvious to you guys? >> Absolutely. I mean Brad puts this pretty effectively where he describes how we lived through the initial internet revolution, but we were kind of too young to really recognize and comprehend what was happening at the time. And then mobile happened and we were working on different companies that were not in the mobile space. And computer vision feels like the wave that we've caught. Like, this is a technology and capability that rewrites how we interact with the world, how everyone will interact with the world. And so we feel we've been kind of lucky this time, right place, right time of every enterprise will have the ability to improve their operations with computer vision. And so we've been very cognizant of the fact that computer vision is one of those groundbreaking technologies that every company will have as a part of their products and services and offerings, and we can provide the tooling to accelerate that future. >> Yeah, and the developer angle, by the way, I love that because I think, you know, as we've been saying in theCUBE all the time, developer's the new defacto standard bodies because what they adopt is pure, you know, meritocracy. And they pick the best. If it's sell service and it's good and it's got open source community around it, its all in. And they'll vote. They'll vote with their code and that is clear. Now I got to ask you, as you look at the market, we were just having this conversation on theCUBE in Barcelona at recent Mobile World Congress, now called MWC, around 5G versus wifi. And the debate was specifically computer vision, like facial recognition. We were talking about how the Cleveland Browns were using facial recognition for people coming into the stadium they were using it for ships in international ports. So the question was 5G versus wifi. My question is what infrastructure or what are the areas that need to be in place to make computer vision work? If you have developers building apps, apps got to run on stuff. So how do you sort that out in your mind? What's your reaction to that? >> A lot of the times when we see applications that need to run in real time and on video, they'll actually run at the edge without internet. And so a lot of our users will actually take their models and run it in a fully offline environment. 
Now to act on that information, you'll often need to have internet signal at some point 'cause you'll need to know how many people were in the stadium or what shipping crates are in my port at this point in time. You'll need to relay that information somewhere else, which will require connectivity. But actually using the model and creating the insights at the edge does not require internet. I mean we have users that deploy models on underwater submarines just as much as in outer space actually. And those are not very friendly environments to internet, let alone 5G. And so what you do is you use an edge device, like an Nvidia Jetson is common, mobile devices are common. Intel has some strong edge devices, the Movidius family of chips for example. And you use that compute that runs completely offline in real time to process those signals. Now again, what you do with those signals may require connectivity and that becomes a question of the problem you're solving of how soon you need to relay that information to another place. >> So, that's an architectural issue on the infrastructure. If you're a tactical edge war fighter for instance, you might want to have highly available and maybe high availability. I mean, these are words that mean something. You got storage, but it's not at the edge in real time. But you can trickle it back and pull it down. That's management. So that's more of a business by business decision or environment, right? >> That's right, that's right. Yeah. So I mean we can talk through some specifics. So for example, RoboFlow actually powers the broadcaster that does the tennis ball tracking at Wimbledon. That runs completely at the edge in real time; you know, technically to track the tennis ball and point the camera, you actually don't need internet. Now they do have internet of course to do the broadcasting and relay the signal and feeds and these sorts of things. And so that's a case where you have both edge deployment of running the model and high availability to act on that model. We have other instances where customers will run their models on drones and the drone will go and do a flight and it'll say, you know, this many residential homes are in this given area, or this many cargo containers are in this given shipping yard. Or maybe we saw these environmental considerations of soil erosion along this riverbank. The model in that case can run on the drone during flight without internet, but then you only need internet once the drone lands and you're going to act on that information, because for example, if you're doing like a study of soil erosion, you don't need to be real time. You just need to be able to process and make use of that information once the drone finishes its flight. >> Well I can imagine a zillion use cases. I heard of a use case in an interview at a company that does computer vision to help people see if anyone's jumping the fence at their company. Like, they know what a body looks like climbing a fence and they can spot it. Pretty easy use case compared to probably some of the other things, but this is the horizontal use cases, it's so many use cases. So how do you guys talk to the marketplace when you say, hey, we have generative AI for computer vision? You might know language models; that's a completely different animal because vision's like the world, right? So you got a lot more to do. What's the difference? How do you explain that to customers? What can I build and what's their reaction?
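The offline-edge pattern described above, running the model on-device with no connectivity and relaying results once a link comes back, can be sketched roughly as follows. This is a minimal illustration, not RoboFlow's or Dish's actual tooling: it assumes a generic open-source detector (ultralytics YOLO as a stand-in), a hypothetical collection endpoint, and a drone-style workflow where detections are buffered locally during flight and uploaded after landing.

```python
import json
import time
from pathlib import Path

import requests  # assumed available on the edge device
from ultralytics import YOLO  # stand-in detector, not RoboFlow's SDK

BUFFER = Path("detections.jsonl")                     # local buffer used while offline
COLLECTOR_URL = "https://example.com/api/detections"  # hypothetical upstream endpoint

model = YOLO("yolov8n.pt")  # small model suited to a Jetson-class device

def run_offline(frame_paths):
    """Run inference with no connectivity; append results to the local buffer."""
    with BUFFER.open("a") as f:
        for frame in frame_paths:
            result = model(str(frame))[0]
            record = {
                "frame": str(frame),
                "timestamp": time.time(),
                "boxes": result.boxes.xyxy.tolist(),
                "classes": result.boxes.cls.tolist(),
            }
            f.write(json.dumps(record) + "\n")

def relay_when_connected():
    """Once a link is available (drone landed, ship in port), send the buffer upstream."""
    if not BUFFER.exists():
        return
    with BUFFER.open() as f:
        records = [json.loads(line) for line in f]
    resp = requests.post(COLLECTOR_URL, json=records, timeout=30)
    if resp.ok:
        BUFFER.unlink()  # clear the buffer only after a successful upload
```

The design choice here mirrors the conversation: inference never depends on the network, and connectivity only matters for how soon the buffered results reach another place.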
>> Because we're such a developer centric company, developers are usually creative and show you the ways that they want to take advantage of new technologies. I mean, we've had people use things for identifying conveyor belt debris, doing gas leak detection, measuring the size of fish, airplane maintenance. We even had someone that like a hobby use case where they did like a specific sushi identifier. I dunno if you know this, but there's a specific type of whitefish that if you grew up in the western hemisphere and you eat it in the eastern hemisphere, you get very sick. And so there was someone that made an app that tells you if you happen to have that fish in the sushi that you're eating. But security camera analysis, transportation flows, plant disease detection, really, you know, smarter cities. We have people that are doing curb management identifying, and a lot of these use cases, the fantastic thing about building tools for developers is they're a creative bunch and they have these ideas that if you and I sat down for 15 minutes and said, let's guess every way computer vision can be used, we would need weeks to list all the example use cases. >> We'd miss everything. >> And we'd miss. And so having the community show us the ways that they're using computer vision is impactful. Now that said, there are of course commercial industries that have discovered the value and been able to be out of the gate. And that's where we have the Fortune 100 customers, like we do. Like the retail customers in the Walmart sector, healthcare providers like Medtronic, or vehicle manufacturers like Rivian who all have very difficult either supply chain, quality assurance, in stock, out of stock, anti-theft protection considerations that require successfully making sense of the real world. >> Let me ask you a question. This is maybe a little bit in the weeds, but it's more developer focused. What are some of the developer profiles that you're seeing right now in terms of low-hanging fruit applications? And can you talk about the academic impact? Because I imagine if I was in school right now, I'd be all over it. Are you seeing Master's thesis' being worked on with some of your stuff? Is the uptake in both areas of younger pre-graduates? And then inside the workforce, What are some of the devs like? Can you share just either what their makeup is, what they work on, give a little insight into the devs you're working with. >> Leading developers that want to be on state-of-the-art technology build with RoboFlow because they know they can use the best in class open source. They know that they can get the most out of their data. They know that they can deploy extremely quickly. That's true among students as you mentioned, just as much as as industries. So we welcome students and I mean, we have research grants that will regularly support for people to publish. I mean we actually have a channel inside our internal slack where every day, more student publications that cite building with RoboFlow pop up. And so, that helps inspire some of the use cases. Now what's interesting is that the use case is relatively, you know, useful or applicable for the business or the student. In other words, if a student does a thesis on how to do, we'll say like shingle damage detection from satellite imagery and they're just doing that as a master's thesis, in fact most insurance businesses would be interested in that sort of application. 
So, that's kind of how we see uptick and adoption both among researchers who want to be on the cutting edge and publish, both with RoboFlow and making use of open source tools in tandem with the tool that we provide, just as much as industry. And you know, I'm a big believer in the philosophy that kind of like what the hackers are doing nights and weekends, the Fortune 500 are doing in a pretty short order period of time and we're experiencing that transition. Computer vision used to be, you know, kind of like a PhD, multi-year investment endeavor. And now with some of the tooling that we're working on in open source technologies and the compute that's available, these science fiction ideas are possible in an afternoon. And so you have this idea of maybe doing asset management or the aerial observation of your shingles or things like this. You have a few hundred images and you can de-risk whether that's possible for your business today. So there's pretty broad-based adoption among both researchers that want to be on the state of the art, as much as companies that want to reduce the time to value. >> You know, Joseph, you guys and your partner have got a great front row seat, ground floor, presented creation wave here. I'm seeing a pattern emerging from all my conversations on theCUBE with founders that are successful, like yourselves, that there's two kind of real things going on. You got the enterprises grabbing the products and retrofitting into their legacy and rebuilding their business. And then you have startups coming out of the woodwork. Young, seeing greenfield or pick a specific niche or focus and making that the signature lever to move the market. >> That's right. >> So can you share your thoughts on the startup scene, other founders out there and talk about that? And then I have a couple questions for like the enterprises, the old school, the existing legacy. Little slower, but the startups are moving fast. What are some of the things you're seeing as startups are emerging in this field? >> I think you make a great point that independent of RoboFlow, very successful, especially developer focused businesses, kind of have three customer types. You have the startups and maybe like series A, series B startups that you're building a product as fast as you can to keep up with them, and they're really moving just as fast as as you are and pulling the product out at you for things that they need. The second segment that you have might be, call it SMB but not enterprise, who are able to purchase and aren't, you know, as fast of moving, but are stable and getting value and able to get to production. And then the third type is enterprise, and that's where you have typically larger contract value sizes, slower moving in terms of adoption and feedback for your product. And I think what you see is that successful companies balance having those three customer personas because you have the small startups, small fast moving upstarts that are discerning buyers who know the market and elect to build on tooling that is best in class. And so you basically kind of pass the smell test of companies who are quite discerning in their purchases, plus are moving so quick they're pulling their product out of you. Concurrently, you have a product that's enterprise ready to service the scalability, availability, and trust of enterprise buyers. And that's ultimately where a lot of companies will see tremendous commercial success. 
I mean I remember seeing the Twilio IPO, Uber being like a full 20% of their revenue, right? And so there's this very common pattern where you have the ability to find some of those upstarts that you make bets on, like the next Ubers of the world, the smaller companies that continue to get developed with the product and then the enterprise whom allows you to really fund the commercial success of the business, and validate the size of the opportunity in market that's being creative. >> It's interesting, there's so many things happening there. It's like, in a way it's a new category, but it's not a new category. It becomes a new category because of the capabilities, right? So, it's really interesting, 'cause that's what you're talking about is a category, creating. >> I think developer tools. So people often talk about B to B and B to C businesses. I think developer tools are in some ways a third way. I mean ultimately they're B to B, you're selling to other businesses and that's where your revenue's coming from. However, you look kind of like a B to C company in the ways that you measure product adoption and kind of go to market. In other words, you know, we're often tracking the leading indicators of commercial success in the form of usage, adoption, retention. Really consumer app, traditionally based metrics of how to know you're building the right stuff, and that's what product led growth companies do. And then you ultimately have commercial traction in a B to B way. And I think that that actually kind of looks like a third thing, right? Like you can do these sort of funny zany marketing examples that you might see historically from consumer businesses, but yet you ultimately make your money from the enterprise who has these de-risked high value problems you can solve for them. And I selfishly think that that's the best of both worlds because I don't have to be like Evan Spiegel, guessing the next consumer trend or maybe creating the next consumer trend and catching lightning in a bottle over and over again on the consumer side. But I still get to have fun in our marketing and make sort of fun, like we're launching the world's largest game of rock paper scissors being played with computer vision, right? Like that's sort of like a fun thing you can do, but then you can concurrently have the commercial validation and customers telling you the things that they need to be built for them next to solve commercial pain points for them. So I really do think that you're right by calling this a new category and it really is the best of both worlds. >> It's a great call out, it's a great call out. In fact, I always juggle with the VC. I'm like, it's so easy. Your job is so easy to pick the winners. What are you talking about its so easy? I go, just watch what the developers jump on. And it's not about who started, it could be someone in the dorm room to the boardroom person. You don't know because that B to C, the C, it's B to D you know? You know it's developer 'cause that's a human right? That's a consumer of the tool which influences the business that never was there before. So I think this direct business model evolution, whether it's media going direct or going direct to the developers rather than going to a gatekeeper, this is the reality. >> That's right. >> Well I got to ask you while we got some time left to describe, I want to get into this topic of multi-modality, okay? And can you describe what that means in computer vision? 
And what's the state of the growth of that portion of this piece? >> Multi modality refers to using multiple traditionally siloed problem types, meaning text, image, video, audio. So you could treat an audio problem as only processing audio signal. That is not multimodal, but you could use the audio signal at the same time as a video feed. Now you're talking about multi modality. In computer vision, multi modality is predominantly happening with images and text. And one of the biggest releases in this space is actually two years old now, was clip, contrastive language image pre-training, which took 400 million image text pairs and basically instead of previously when you do classification, you basically map every single image to a single class, right? Like here's a bunch of images of chairs, here's a bunch of images of dogs. What clip did is used, you can think about it like, the class for an image being the Instagram caption for the image. So it's not one single thing. And by training on understanding the corpora, you basically see which words, which concepts are associated with which pixels. And this opens up the aperture for the types of problems and generalizability of models. So what does this mean? This means that you can get to value more quickly from an existing trained model, or at least validate that what you want to tackle with a computer vision, you can get there more quickly. It also opens up the, I mean. Clip has been the bedrock of some of the generative image techniques that have come to bear, just as much as some of the LLMs. And increasingly we're going to see more and more of multi modality being a theme simply because at its core, you're including more context into what you're trying to understand about the world. I mean, in its most basic sense, you could ask yourself, if I have an image, can I know more about that image with just the pixels? Or if I have the image and the sound of when that image was captured or it had someone describe what they see in that image when the image was captured, which one's going to be able to get you more signal? And so multi modality helps expand the ability for us to understand signal processing. >> Awesome. And can you just real quick, define clip for the folks that don't know what that means? >> Yeah. Clip is a model architecture, it's an acronym for contrastive language image pre-training and like, you know, model architectures that have come before it captures the almost like, models are kind of like brands. So I guess it's a brand of a model where you've done these 400 million image text pairs to match up which visual concepts are associated with which text concepts. And there have been new releases of clip, just at bigger sizes of bigger encoding's, of longer strings of texture, or larger image windows. But it's been a really exciting advancement that OpenAI released in January, 2021. >> All right, well great stuff. We got a couple minutes left. Just I want to get into more of a company-specific question around culture. All startups have, you know, some sort of cultural vibe. You know, Intel has Moore's law doubles every whatever, six months. What's your culture like at RoboFlow? I mean, if you had to describe that culture, obviously love the hacking story, you and your partner with the games going number one on Product Hunt next to Elon and Tesla and then hey, we should start a company two years later. That's kind of like a curious, inventing, building, hard charging, but laid back. That's my take. 
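The contrastive image-text scoring described above can be sketched with an off-the-shelf CLIP checkpoint through the Hugging Face transformers API. The checkpoint name, image path, and candidate captions below are only examples, and this is a generic CLIP-style illustration rather than anything specific to RoboFlow's products.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Example checkpoint; any CLIP-style contrastive image-text model would do.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("frame.jpg")  # assumed local image, e.g. a camera frame
captions = [
    "a conveyor belt with debris on it",
    "a clean conveyor belt",
    "a person climbing a fence",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# One similarity score per caption; softmax turns them into relative probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")
```

Because the model was trained on image-caption pairs rather than fixed class labels, the same checkpoint can score arbitrary text prompts against arbitrary images, which is the generalizability being described.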
How would you describe the culture? >> I think that you're right. The culture that we have is one of shipping, making things. So every week each team shares what they did for our customers on a weekly basis. And we have such a strong emphasis on being better week over week that those sorts of things compound. So one big emphasis in our culture is getting things done, shipping, doing things for our customers. The second is we're an incredibly transparent place to work. For example, how we think about giving decisions, where we're progressing against our goals, what problems are biggest and most important for the company is all open information for those that are inside the company to know and progress against. The third thing that I'd use to describe our culture is one that thrives with autonomy. So RoboFlow has a number of individuals who have founded companies before, some of which have sold their businesses for a hundred million plus upon exit. And the way that we've been able to attract talent like that is because the problems that we're tackling are so immense, yet individuals are able to charge at it with the way that they think is best. And this is what pairs well with transparency. If you have a strong sense of what the company's goals are, how we're progressing against it, and you have this ownership mentality of what can I do to change or drive progress against that given outcome, then you create a really healthy pairing of, okay cool, here's where the company's progressing. Here's where things are going really well, here's the places that we most need to improve and work on. And if you're inside that company as someone who has a preponderance to be a self-starter and even a history of building entire functions or companies yourself, then you're going to be a place where you can really thrive. You have the inputs of the things where we need to work on to progress the company's goals. And you have the background of someone that is just necessarily a fast moving and ambitious type of individual. So I think the best way to describe it is a transparent place with autonomy and an emphasis on getting things done. >> Getting shit done as they say. Getting stuff done. Great stuff. Hey, final question. Put a plug out there for the company. What are you going to hire? What's your pipeline look like for people? What jobs are open? I'm sure you got hiring all around. Give a quick plug for the company what you're looking for. >> I appreciate you asking. Basically you're either building the product or helping customers be successful with the product. So in the building product category, we have platform engineering roles, machine learning engineering roles, and we're solving some of the hardest and most impactful problems of bringing such a groundbreaking technology to the masses. And so it's a great place to be where you can kind of be your own user as an engineer. And then if you're enabling people to be successful with the products, I mean you're working in a place where there's already such a strong community around it and you can help shape, foster, cultivate, activate, and drive commercial success in that community. So those are roles that tend themselves to being those that build the product for developer advocacy, those that are account executives that are enabling our customers to realize commercial success, and even hybrid roles like we call it field engineering, where you are a technical resource to drive success within customer accounts. 
And so all this is listed on roboflow.com/careers. And one thing that I actually kind of want to mention John that's kind of novel about the thing that's working at RoboFlow. So there's been a lot of discussion around remote companies and there's been a lot of discussion around in-person companies and do you need to be in the office? And one thing that we've kind of recognized is you can actually chart a third way. You can create a third way which we call satellite, which basically means people can work from where they most like to work and there's clusters of people, regular onsite's. And at RoboFlow everyone gets, for example, $2,500 a year that they can use to spend on visiting coworkers. And so what's sort of organically happened is team numbers have started to pull together these resources and rent out like, lavish Airbnbs for like a week and then everyone kind of like descends in and works together for a week and makes and creates things. And we call this lighthouses because you know, a lighthouse kind of brings ships into harbor and we have an emphasis on shipping. >> Yeah, quality people that are creative and doers and builders. You give 'em some cash and let the self-governing begin, you know? And like, creativity goes through the roof. It's a great story. I think that sums up the culture right there, Joseph. Thanks for sharing that and thanks for this great conversation. I really appreciate it and it's very inspiring. Thanks for coming on. >> Yeah, thanks for having me, John. >> Joseph Nelson, co-founder and CEO of RoboFlow. Hot company, great culture in the right place in a hot area, computer vision. This is going to explode in value. The edge is exploding. More use cases, more development, and developers are driving the change. Check out RoboFlow. This is theCUBE. I'm John Furrier, your host. Thanks for watching. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brad | PERSON | 0.99+ |
Joseph | PERSON | 0.99+ |
Joseph Nelson | PERSON | 0.99+ |
January, 2021 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
Medtronic | ORGANIZATION | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
400 million | QUANTITY | 0.99+ |
Evan Spiegel | PERSON | 0.99+ |
24 months | QUANTITY | 0.99+ |
2017 | DATE | 0.99+ |
RoboFlow | ORGANIZATION | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
Rivian | ORGANIZATION | 0.99+ |
12 months | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
Cardinal Health | ORGANIZATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Wimbledon | EVENT | 0.99+ |
roboflow.com/careers | OTHER | 0.99+ |
first | QUANTITY | 0.99+ |
second segment | QUANTITY | 0.99+ |
each team | QUANTITY | 0.99+ |
six months | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
both worlds | QUANTITY | 0.99+ |
2% | QUANTITY | 0.99+ |
two years later | DATE | 0.98+ |
Mobile World Congress | EVENT | 0.98+ |
Ubers | ORGANIZATION | 0.98+ |
third way | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
a week | QUANTITY | 0.98+ |
Magic Sudoku | TITLE | 0.98+ |
second | QUANTITY | 0.98+ |
Nvidia | ORGANIZATION | 0.98+ |
Sudoku | TITLE | 0.98+ |
MWC | EVENT | 0.97+ |
today | DATE | 0.97+ |
billion dollar | QUANTITY | 0.97+ |
one single thing | QUANTITY | 0.97+ |
over a hundred thousand developers | QUANTITY | 0.97+ |
four | QUANTITY | 0.97+ |
third | QUANTITY | 0.96+ |
Elon | ORGANIZATION | 0.96+ |
third thing | QUANTITY | 0.96+ |
Tesla | ORGANIZATION | 0.96+ |
Jetson | COMMERCIAL_ITEM | 0.96+ |
Elon | PERSON | 0.96+ |
RoboFlow | TITLE | 0.96+ |
ORGANIZATION | 0.95+ | |
Twilio | ORGANIZATION | 0.95+ |
twenties | QUANTITY | 0.95+ |
Product Hunt AR | TITLE | 0.95+ |
Moore | PERSON | 0.95+ |
both researchers | QUANTITY | 0.95+ |
one thing | QUANTITY | 0.94+ |
Andy Sheahen, Dell Technologies & Marc Rouanne, DISH Wireless | MWC Barcelona 2023
>> (Narrator) The CUBE's live coverage is made possible by funding by Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Fira Barcelona. It's theCUBE live at MWC23 our third day of coverage of this great, huge event continues. Lisa Martin and Dave Nicholson here. We've got Dell and Dish here, we are going to be talking about what they're doing together. Andy Sheahen joins as global director of Telecom Cloud Core and Next Gen Ops at Dell. And Marc Rouanne, one of our alumni is back, EVP and Chief Network Officer at Dish Wireless. Welcome guys. >> Great to be here. >> (Both) Thank you. >> (Lisa) Great to have you. Mark, talk to us about what's going on at Dish wireless. Give us the update. >> Yeah so we've built a network from scratch in the US, that covered the US, we use a cloud base Cloud native, so from the bottom of the tower all the way to the internet uses cloud distributed cloud, emits it, so there are a lot of things about that. But it's unique, and now it's working, so we're starting to play with it and that's pretty cool. >> What's some of the proof points, proof in the pudding? >> Well, for us, first of all it was to do basic voice and data on a smartphone and for me the success would that you won't see the difference for a smartphone. That's base line. the next step is bringing this to the enterprise for their use case. So we've covered- now we have services for smartphones. We use our brand, Boost brand, and we are distributing that across the US. But as I said, the real good stuff is when you start to making you know the machines and all the data and the applications for the enterprise. >> Andy, how is Dell a facilitator of what Marc just described and the use cases and what their able to deliver? >> We're providing a number of the servers that are being used out in their radio access network. The virtual DU servers, we're also providing some bare metal orchestration capabilities to help automate the process of deploying all these hundreds and thousands of nodes out in the field. Both of these, the servers and the bare metal orchestra product are things that we developed in concert with Dish, working together to understand the way, the best way to automate, based on the tooling their using in other parts of their network, and we've been with you guys since day one, really. >> (Marc) Absolutely, yeah. >> Making each others solutions better the whole way. >> Marc, why Dell? >> So, the way the networks work is you have a cloud, and you have a distributed edge you need someone who understands the diversity of the edge in order to bring the cloud software to the edge, and Dell is the best there, you know, you can, we can ask them to mix and match accelerators, processors memory, it's very diverse distributed edge. We are building twenty thousands sides so you imagine the size and the complexity and Dell was the right partner for that. >> (Andy) Thank you. >> So you mentioned addressing enterprise leads, which is interesting because there's nothing that would prevent you from going after consumer wireless technically, right but it sounds like you have taken a look at the market and said "we're going to go after this segment of the market." >> (Marc) Yeah. >> At least for now. Are there significant differences between what an enterprise expects from a 5G network than, verses a consumer? >> Yeah. 
>> (Dave) They have higher expectations, maybe, number one I guess is, if my bill is 150 dollars a month I can have certain levels of expectations, whereas a large enterprise that may be making a much more significant investment, are their expectations greater? >> (Marc) Yeah. >> Do you have a higher bar to get over? >> So first, I mean first we use our network for consumers, but for us it's an enterprise. That consumer segment is, to us, an enterprise. So we expose the network like we would to a car manufacturer, or to a distributor of goods, of food and beverage. But what you expect when you are an enterprise, you expect managed services. You expect to control the goodness of your services, and for this you need to observe what's happening. Are you delivering the right service? What is the feedback from the enterprise users? And that's what we call the observability. We have a data centric network, so our enterprises are saying "Yeah, connecting is enough, but show us how it works, and show us how we can learn from the data, improve, improve, and become more competitive." That's the big difference. >> So what would you say, Marc, are some of the outcomes you've achieved working with Dell? TCO, ROI, CapEx, OpEx, what are some of the outcomes so far, that you've been able to accomplish? >> Yeah, so obviously we don't share our numbers, but we're very competitive. Both on the CapEx and the OpEx. And the second thing is that we are much faster in terms of innovation. You know, one of the things that telcos would not do was to tap into the IT industry. So we have access to the silicon and we have access to the software, and at a scale that none of the telcos could ever do, and for us it's like "wow," and it's a very powerful industry, and we've been driving the consist- it's a bit technical, but all the silicon, the accelerators, the processors, the GPUs, the TPUs, and it's like wow. It's really a transformation. >> Andy, is there anything analogous that you've dealt with in the past to the situation where you have this true core edge environment where you have to instrument the devices that you provide to give that level of observation or observability, whatever the new word is, that we've invented for that. >> Yeah, yeah. >> I mean has there, is there anything- >> Yeah absolutely. >> Is this unprecedented? >> No, no not at all. I mean Dell's been really working at the edge since before the edge was called the edge, right? We've been selling our hardware and infrastructure out to retail shops, branch office locations, you know, just smaller form factors outside of data centers for a very long time, and so that's sort of the consistency from what we've been doing for 30 years to now. The difference is the volume, the different number of permutations as Marc was saying. The different types of accelerator cards, the different SKUs of different server types, the sheer volume of nodes that you have in a nationwide wireless network. So the volumes are much different, the amount of data is much different, but the process is really the same. It's about having the infrastructure in the right place at the right time and being able to understand if it's working well or if it's not, and it's not just about a red light or a green light but healthy and unhealthy conditions and predicting when the red light's going to come on. And we've been doing that for a while, it's just a different scale, and a different level of complexity when you're trying to piece together all these different components from different vendors.
>> So we talk a lot about ecosystem, and sometimes because of the desire to talk about the outcomes and what the end users, customers, really care about, sometimes we will stop at the layer where say a Dell lives, and we'll see that as the sum total of the component when really, when you talk about a server that Dish is using, that in and of itself is an ecosystem >> Yep, yeah >> (Dave) or there's an ecosystem behind it, you just mentioned it, the kinds of components and the choices that you make when you optimize these devices determine how much value Dish, >> (Andy) Absolutely. >> Can get out of that. How deep are you on that hardware? I'm a knuckle dragging hardware guy. >> Deep, very deep, I mean just the number of permutations that we're working through with Dish and other operators as well, different accelerator cards that we talked about, different techniques for timing, obviously there's different SKUs with the silicon itself, different chipsets, different chips from different providers, all those things have to come together, and we build the basic foundation and then we also started working with our cloud partners, Red Hat, Wind River, all these guys, VMware, of course, and that's the next layer up. So you've got all the different hardware components, you've got the abstraction layer with your virtualization layer and/or Kubernetes layer, and all of that stuff together has to be managed. Compatibility matrices that get very deep and very big, very quickly, and that's really the foundational challenge we think of with open RAN, is thinking all these different pieces are going to fit together and not just work today but work every day as everything gets updated much more frequently than in the legacy world. >> So you care about those things, so we don't have to. >> That's right. >> That's the beauty of it. >> Yes. >> Well thank you. (laughter) >> You're welcome. >> I want to understand, you know, some of the things that we've been talking about, every company is a data company, regardless of whether it's telco, it's a retailer, if it's my bank, it's my grocery store, and they have to be able to use data as quickly as possible to make decisions. One of the things they've been talking about here is the monetization of data, the monetization of the network. How do you, how does Dell help, like a Dish, be able to achieve the monetization of their data? >> Well as Marc was saying before, the enterprise use cases are what we are all kind of betting on for 5G, right? And enterprises expect to have access to data and to telemetry to do whatever use cases they want to execute in their particular industry. So you know, if it's a health care provider, if it's a factory, an agricultural provider that's leveraging this network, they need to get the data from the network, from the devices, they need to correlate it, in order to do things like automatically turn on a watering system at a certain time, right? They need to know the weather around it to make sure it's not too windy and you're not going to waste a lot of water. All that has data, it's going to leverage data from the network, it's going to leverage data from devices, it's going to leverage data from applications, and that's data that can be monetized. When you have all that data and it's all correlated there's value inherent to it, and you can even go onto a forward looking state where you can intelligently move workloads around, based on the data.
Based on the clarity of the traffic of the network, where is the right place to put it, and even based on current pricing for things like on-demand instances from cloud providers. So having all that data correlated allows any enterprise to make an intelligent decision about how to move a workload around a network and get the most efficient placing of that workload. >> Marc, Andy mentions things like data and networks and moving data across the networks. You have on your business card, Chief Network Officer, what potentially either keeps you up at night in terror or gets you very excited about the future of your network? What's out there in the frontier and what are those key obstacles that have to be overcome that you work with? >> Yeah, I think we have the network, we have the baseline, but we don't yet have the consumption that is easy by the enterprise. You know, an enterprise likes to say "I have a 4K camera, I connect it to my software." Click, click, right? And that's where we need to be, so we're talking about APIs that are so simple that they become a click, and we engineers, we have a tendency to want to explain, but we should not, it should become a click. You know, and the phone revolution with the apps became those clicks, we have to do the same for the enterprise, for video, for surveillance, for analytics, it has to be clicks. >> While balancing flexibility, and agility of course, because you know the folks who are fans of CLIs, command line interfaces, who hate GUIs, it's because they feel they have the ability to go down to another level, so obviously that's a balancing act. >> But that's our job. >> Yeah. >> Our job is to hide the complexity, but of course there is complexity. It's like in the cloud, a hyperscaler, they manage complex things but it's successful if they hide it. >> (Dave) Yeah. >> It's the same. You know, we have to be the hyperscaler of connectivity but hide it. >> Yeah. >> So that people connect everything, right? >> Well it's Andy's servers, we're all magicians hiding it all. >> Yeah. >> It really is. >> It's like don't worry about it, just know, >> Let us do it. >> Sit down, we will serve you the meal. Don't worry how it's cooked. >> That's right, the enterprises want the outcome. >> (Dave) Yeah. >> They don't want to deal with that bottom layer. But it is tremendously complex and we want to take that on and make it better for the industry. >> That's critical. Marc, I'd love to go back to you, and just, I know that you've been in telco for such a long time, and here we are day three of MWC, the name changed this year from Mobile World Congress, reflecting that mobile isn't the only thing, obviously it was the catalyst, but what are some of the things that you've heard at the event, maybe seen at the event, that give you the confidence that the right players are here to help move Dish wireless forward, for example. >> You know this is the first, I've been here for decades, it's the first time, and I'm a Chief Network Officer, first time we don't talk about the network. >> (Andy) Yeah. >> Isn't that surprising? People don't tell me about speed, or latency, they talk about consumption. Apps, you know, video surveillance, or analytics, so I love that, because now we're starting to talk about how we can consume and monetize, but that's the first time. We used to talk about gigabytes and this and that, none of that, not once. >> What does that signify to you, in terms of the evolution?
>> Well you know, we've seen that the demand for the healthcare, for the smart cities, has been here for a decade, proof of concepts for a decade, but the consumption has been behind, and for me this is the whole ecosystem waking up to: we are going to make it easy, so that the consumption can take off. The demand is there, we have to serve it. And the fact that people are starting to say we hide the complexity, that's our problem, but don't even mention it, I love it. >> Yep. Drop the mic. >> (Andy and Marc) Yeah, yeah. >> Andy, last question for you, some of the things we know, Dell has a big and emerging presence in telco, we've had a chance to see the booth, see the cool things you guys are featuring there, Dave did a great tour of it, talk about some of the things you've heard, and maybe even from customers at this event, that demonstrate to you that Dell is going in the right direction with its telco strategy. >> Yeah, I mean personally for me this has been an unbelievable event for Dell. We've had tons and tons of customer meetings of course, and the feedback we're getting is that the things we're bringing to market, whether it's infra blocks, or purposeful servers that are designed for the telecom network, are what our customers need and have always wanted. We get a lot of wows, right? >> (Lisa) That's nice. >> "Wow, we didn't know Dell was doing this, we had no idea." And the other part of it is that not everybody was sure that we were going to move as fast as we have, so the speed in which we've been able to bring some of these things to market, and part of that was working with Dish, you know, a pioneer, to make sure we were building the right things, and I think a lot of the customers that we talked to really appreciate the fact that we're doing it with the industry, >> (Lisa) Yeah. >> You know, not at the industry, and that comes across in the way they are responding and what they're talking to us about now. >> And that came across in the interview that you just did. Thank you both for joining Dave and me. >> Thank you >> Talking about what Dell and Dish are doing together, the proof is in the pudding, and you did a great job at explaining that, thanks guys, we appreciate it. >> Thank you. >> All right, our pleasure. For our guest and for Dave Nicholson, I'm Lisa Martin, you're watching theCUBE live from MWC 23 day three. We will be back with our next guest, so don't go anywhere. (upbeat music)
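The data-driven workload placement Andy describes in this conversation, weighing network conditions, site load, and current on-demand pricing, can be reduced to a toy decision function. The metrics, field names, and thresholds below are hypothetical illustrations, not any Dell or Dish interface.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float      # observed network latency to the workload's users
    utilization: float     # current load on the site, 0.0 - 1.0
    price_per_hour: float  # current on-demand instance price at this site

def place_workload(sites, max_latency_ms=20.0, max_utilization=0.8):
    """Pick the cheapest site that still meets latency and headroom constraints."""
    feasible = [
        s for s in sites
        if s.latency_ms <= max_latency_ms and s.utilization <= max_utilization
    ]
    if not feasible:
        return None  # nothing meets the SLA; leave the workload where it is
    return min(feasible, key=lambda s: s.price_per_hour)

# Example: a correlated telemetry snapshot for three candidate locations.
candidates = [
    Site("edge-cell-site", latency_ms=5.0, utilization=0.9, price_per_hour=0.90),
    Site("regional-dc", latency_ms=12.0, utilization=0.6, price_per_hour=0.55),
    Site("public-cloud", latency_ms=35.0, utilization=0.2, price_per_hour=0.30),
]
best = place_workload(candidates)
print(best.name if best else "no placement change")  # -> regional-dc
```

The point of the sketch is only that the placement decision becomes mechanical once the network, device, and pricing data are correlated in one place.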
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Marc Rouanne | PERSON | 0.99+ |
Marc | PERSON | 0.99+ |
Andy Sheahen | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Telecorp | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
Wind River | ORGANIZATION | 0.99+ |
Mark | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
30 years | QUANTITY | 0.99+ |
Dish | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
DISH Wireless | ORGANIZATION | 0.99+ |
second thing | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
first | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Dish wireless | ORGANIZATION | 0.98+ |
Lisa | PERSON | 0.98+ |
MWC | EVENT | 0.98+ |
third day | QUANTITY | 0.98+ |
telco | ORGANIZATION | 0.98+ |
Mobile World Congress | EVENT | 0.98+ |
Next Gen Ops | ORGANIZATION | 0.97+ |
TCO | ORGANIZATION | 0.97+ |
Dish Wireless | ORGANIZATION | 0.97+ |
CapX | ORGANIZATION | 0.97+ |
this year | DATE | 0.96+ |
Boost | ORGANIZATION | 0.95+ |
150 dollars a month | QUANTITY | 0.94+ |
OpX | ORGANIZATION | 0.92+ |
Telecom Cloud Core | ORGANIZATION | 0.91+ |
thousands | QUANTITY | 0.9+ |
ROI | ORGANIZATION | 0.9+ |
tons and tons of customer | QUANTITY | 0.86+ |
SiliconANGLE News | Intel Accelerates 5G Network Virtualization
(energetic music) >> Welcome to the SiliconANGLE News update, Mobile World Congress, theCUBE coverage live on the floor for four days. I'm John Furrier, in the studio here. Dave Vellante, Lisa Martin onsite. Intel in the news: Intel accelerates 5G network virtualization with a radio access network boost for Xeon processors. Intel, well known for power and computing, today announced the integration of its virtual radio access network acceleration into its latest fourth gen Intel Xeon system on a chip. This move will help network operators gear up their efforts to deliver Cloud native features for next generation 5G core and edge networks. This announcement came today at MWC, formerly known as Mobile World Congress. In Barcelona, Intel is taking the latest step in its mission to virtualize the world's networks, including Core, Open RAN and Edge. Network virtualization is the key capability for communication service providers as they migrate from fixed function hardware to programmable, software defined platforms. This provides greater agility and greater cost efficiency. According to Intel, the demand for agile, high performance, scalable networks is requiring the adoption of fully virtualized, software based platforms that run on general purpose processors. Intel believes that network operators need to accelerate network virtualization to get the most out of these new architectures, and that's where it can make its mark. With Intel vRAN Boost, it delivers twice the capability and capacity gains over its previous generation of silicon within the same power envelope, with 20% in power savings that results from integrated acceleration. In addition, Intel announced new infrastructure power manager for 5G core reference software that's designed to work with vRAN Boost. Intel also showcased its new Intel Converged Edge media platform, designed to deliver multiple video services from a shared multi-tenant architecture. The platform leverages Cloud native scalability to respond to shifting demands. Lastly, Intel announced a range of Agilex 7 Field Programmable Gate Arrays and eASIC N5X structured application-specific integrated circuits designed for individual cloud, communications and embedded applications. Intel is targeting power consumption, which is energy, and more horsepower for chips, which is going to power the industrial internet edge. That's going to be Cloud native. Big news happening at Mobile World Congress. theCUBE is there. Go to siliconangle.com for all the news and the special report and live feed on theCUBE.net. (energetic music)
SUMMARY :
Intel accelerates 5G network virtualization, announcing integrated vRAN Boost acceleration for its 4th Gen Xeon system on a chip at Mobile World Congress in Barcelona, along with new power management reference software, the Converged Edge media platform, and Agilex 7 FPGAs and eASIC N5X structured ASICs.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
20% | QUANTITY | 0.99+ |
Barcelona | LOCATION | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Mobile World Congress | EVENT | 0.98+ |
twice | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
four days | QUANTITY | 0.98+ |
fourth gen | QUANTITY | 0.96+ |
theCUBE.net | OTHER | 0.9+ |
Xeon | COMMERCIAL_ITEM | 0.86+ |
MWC | EVENT | 0.84+ |
vRAN Boost | TITLE | 0.82+ |
Agilex | TITLE | 0.78+ |
Silicon Angle | ORGANIZATION | 0.77+ |
7 Field Programmable | COMMERCIAL_ITEM | 0.76+ |
SiliconANGLE News | ORGANIZATION | 0.76+ |
eASIC | TITLE | 0.75+ |
theCUBE | ORGANIZATION | 0.63+ |
N5X | COMMERCIAL_ITEM | 0.62+ |
5G | QUANTITY | 0.55+ |
Gate Arrays | OTHER | 0.41+ |
Humphreys & Ferron-Jones | Trusted security by design, Compute Engineered for your Hybrid World
(upbeat music) >> Welcome back, everyone, to our Cube special programming on "Securing Compute, Engineered for the Hybrid World." We got Cole Humphreys who's with HPE, global server security product manager, and Mike Ferron-Jones with Intel. He's the product manager for data security technology. Gentlemen, thank you for coming on this special presentation. >> All right, thanks for having us. >> So, securing compute, I mean, compute, everyone wants more compute. You can't have enough compute as far as we're concerned. You know, more bits are flying around the internet. Hardware's mattering more than ever. Performance markets hot right now for next-gen solutions. When you're talking about security, it's at the center of every single conversation. And Gen11 for the HPE has been big-time focus here. So let's get into the story. What's the market for Gen11, Cole, on the security piece? What's going on? How do you see this impacting the marketplace? >> Hey, you know, thanks. I think this is, again, just a moment in time where we're all working towards solving a problem that doesn't stop. You know, because we are looking at data protection. You know, in compute, you're looking out there, there's international impacts, there's federal impacts, there's state-level impacts, and even regulation to protect the data. So, you know, how do we do this stuff in an environment that keeps changing? >> And on the Intel side, you guys are a Tier 1 combination partner, Better Together. HPE has a deep bench on security, Intel, We know what your history is. You guys have a real root of trust with your code, down to the silicon level, continuing to be, and you're on the 4th Gen Xeon here. Mike, take us through the Intel's relationship with HPE. Super important. You guys have been working together for many, many years. Data security, chips, HPE, Gen11. Take us through the relationship. What's the update? >> Yeah, thanks and I mean, HPE and Intel have been partners in delivering technology and delivering security for decades. And when a customer invests in an HPE server, like at one of the new Gen11s, they're getting the benefit of the combined investment that these two great companies are putting into product security. On the Intel side, for example, we invest heavily in the way that we develop our products for security from the ground up, and also continue to support them once they're in the market. You know, launching a product isn't the end of our security investment. You know, our Intel Red Teams continue to hammer on Intel products looking for any kind of security vulnerability for a platform that's in the field. As well as we invest heavily in the external research community through our bug bounty programs to harness the entire creativity of the security community to find those vulnerabilities, because that allows us to patch them and make sure our customers are staying safe throughout that platform's deployed lifecycle. You know, in 2021, between Intel's internal red teams and our investments in external research, we found 93% of our own vulnerabilities. Only a small percentage were found by unaffiliated external entities. >> Cole, HPE has a great track record and long history serving customers around security, actually, with the solutions you guys had. With Gen11, it's more important than ever. Can you share your thoughts on the talent gap out there? People want to move faster, breaches are happening at a higher velocity. They need more protection now than ever before. 
Can you share your thoughts on why these breaches are happening, and what you guys are doing, and how you guys see this happening from a customer standpoint? What do you guys fill in with the Gen11 solution? >> You bet, you know, because when you hear about the relentless pursuit of innovation from our partners, with our engineering organizations in India, and Taiwan, and the Americas all collaborating together years in advance, it's about delivering solutions that help protect our customers' environments. But what you hear Mike talking about is it's also about keeping 'em safe. Because you look at the market, right? What you see, at least from our data from 2021, is that breaches are still happening, and a lot of it has to do with the fact that there is just a lack of adequate security staff with the necessary skills to protect the customer's applications and ultimately the workloads. And then that's how these breaches are happening. Because ultimately you need to see some sort of control and visibility of what's going on out there. And what we were talking about earlier is time. By the time you see some incident happen, the blast radius can be tremendous in today's technically advanced world. And so you have to identify it and then correct it quickly, and that's why this continued innovation and partnership is so important, to help work together to keep up. >> You guys have had a great track record with Intel-based platforms with HPE. Gen11's a really big part of the story. Where do you see that impacting customers? Can you explain the benefits of what's going on with Gen11? What's the key story? What's the most important thing we should be paying attention to here? >> I think there's probably three areas as we look into this generation. And again, this is a point in time, we will continue to evolve. But at this particular point it's about, you know, a fundamental approach to our security enablement, right? Partnering as a Tier 1 OEM with one of the best in the industry, right? We can deliver systems that help protect some of the most critical infrastructure on earth, right? I know of some things that are required to have a non-disclosure because it is some of the most important jobs that you would see out there. And working together with Intel to protect those specific compute workloads, that's a serious deal that protects not only state, and local, and federal interests, but, really, a global one. >> This is a really- >> And then there's another one- Oh sorry. >> No, go ahead. Finish your thought. >> And then there's another one that I would call our uncompromising focus. We work in the industry, we lead and partner with those, I would say, on the good side. And we want to focus on enablement through a specific capability set, let's call it our global operations, and that ability to protect our supply chain and deliver infrastructure that can be trusted into an operating environment. You put all those together and you see very significant and meaningful solutions together. >> The operating benefits are significant. I just want to go back to something you just said before about the joint NDAs and kind of the relationship you kind of unpacked, that to me, you know, I heard you guys say from sand to server, I love that phrase, because, you know, silicon into the server. But this is a combination you guys have with HPE and Intel supply-chain security. I mean, it's not just like you're getting chips and sticking them into a machine.
This is, like, there's an in-depth relationship on the supply chain that has a very intricate piece to it. Can you guys just double down on that and share that, how that works and why it's important? >> Sure, so why don't I go ahead and start on that one. So, you know, as you mentioned, the supply chain that ultimately results in an end user pulling, you know, a new Gen11 HPE server out of the box, you know, started, you know, way, way back. And we, you know, Intel, for our part, you know, invest heavily in making sure that our entire supply chain to deliver all of the Intel components that are inside that HPE platform has been protected and monitored ever since, you know, their inception at any one of our 14,000, you know, Intel vendors that we monitor as part of our supply-chain assurance program. I mean we, you know, Intel, you know, invest heavily in compliance with guidelines from places like NIST and ISO, as well as, you know, doing best practices under things like the Transported Asset Protection Alliance, TAPA. You know, we have been intensely invested in making sure that when a customer gets an Intel processor, or any other Intel silicon product, that it has not been tampered with or altered during its trip through the supply chain. HPE then is able to pick up those components that we deliver, and add onto that their own supply-chain assurance when it comes down to delivering, you know, the final product to the customer. >> Cole, do you want to- >> That's exactly right. Yeah, I feel like that integration point is a really good segue into why we're talking today, right? Because that then comes into a global operations network that is pulling together these servers and able to deploy 'em all over the world. And as part of the Gen11 launch, we have security services that allow 'em to be hardened from our factories to that next stage into that trusted partner ecosystem for system integration, or directly to customers, right? So that ability to have that chain of trust. And it's not only about attestation and knowing what, you know, came from whom, because, obviously, you want to trust and make sure you're getting the parts from Intel to build your technical solutions. But it's also about some of the provisioning we're doing in our global operations where we're putting cryptographic identities and manifests of the server and its components and moving it through that supply chain. So you talked about this common challenge we have of assuring no tampering of that device through the supply chain, and that's why this partnering is so important. We deliver secure solutions, we move them, you're able to see and control that information to verify they've not been tampered with, and you move on to your next stage of this very complicated and necessary chain of trust to build, you know, what some people are calling zero-trust type ecosystems. >> Yeah, it's interesting. You know, a lot goes on under the covers. That's good though, right? You want to have greater security and platform integrity, if you can abstract away the complexity, that's key. Now one of the things I like about this conversation is that you mentioned this idea of a hardware-root-of-trust set of technologies. Can you guys just quickly touch on that, because that's one of the major benefits we see from this combination of the partnership, is that it's not just one, each party doing something, it's the combination.
But this notion of hardware-root-of-trust technologies, what is that? >> Yeah, well let me, why don't I go ahead and start on that, and then, you know, Cole can take it from there. Because we provide some of the foundational technologies that underlie a root of trust. Now the idea behind a root of trust, of course, is that you want your platform to, you know, from the moment that first electron hits it from the power supply, that it has a chain of trust that all of the software, firmware, BIOS is loading, to bring that platform up into an operational state is trusted. If you have a breach in one of those lower-level code bases, like in the BIOS or in the system firmware, that can be a huge problem. It can undermine every other software-based security protection that you may have implemented up the stack. So, you know, Intel and HPE work together to coordinate our trusted boot and root-of-trust technologies to make sure that when a customer, you know, boots that platform up, it boots up into a known good state so that it is ready for the customer's workload. So on the Intel side, we've got technologies like our trusted execution technology, or Intel Boot Guard, that then feed into the HPE iLO system to help, you know, create that chain of trust that's rooted in silicon to be able to deliver that known good state to the customer so it's ready for workloads. >> All right, Cole, I got to ask you, with Gen11 HPE platforms that has 4th Gen Intel Xeon, what are the customers really getting? >> So, you know, what a great setup. I'm smiling because it's, like, it has a good answer, because one, this, you know, to be clear, this isn't the first time we've worked on this root-of-trust problem. You know, we have a construct that we call the HPE Silicon Root of Trust. You know, there are, it's an industry standard construct, it's not a proprietary solution to HPE, but it does follow some differentiated steps that we like to say make a little difference in how it's best implemented. And where you see that is that tight, you know, Intel Trusted Execution exchange. The Intel Trusted Execution exchange is a very important step to assuring that route of trust in that HPE Silicon Root of Trust construct, right? So they're not different things, right? We just have an umbrella that we pull under our ProLiant, because there's ILO, our BIOS team, CPLDs, firmware, but I'll tell you this, Gen11, you know, while all that, keeping that moving forward would be good enough, we are not holding to that. We are moving forward. Our uncompromising focus, we want to drive more visibility into that Gen11 server, specifically into the PCIE lanes. And now you're going to be able to see, and measure, and make policies to have control and visibility of the PCI devices, like storage controllers, NICs, direct connect, NVME drives, et cetera. You know, if you follow the trends of where the industry would like to go, all the components in a server would be able to be seen and attested for full infrastructure integrity, right? So, but this is a meaningful step forward between not only the greatness we do together, but, I would say, a little uncompromising focus on this problem and doing a little bit more to make Gen11 Intel's server just a little better for the challenges of the future. >> Yeah, the Tier 1 partnership is really kind of highlighted there. Great, great point. I got to ask you, Mike, on the 4th Gen Xeon Scalable capabilities, what does it do for the customer with Gen11 now that they have these breaches? 
Does it eliminate stuff? What's in it for the customer? What are some of the new things coming out with the Xeon? You're at Gen4, Gen11 for HP, but you guys have new stuff. What does it do for the customer? Does it help eliminate breaches? Are there things that are inherent in the product that HP is jointly working with you on or you were contributing in to the relationship that we should know about? What's new? >> Yeah, well there's so much great new stuff in our new 4th Gen Xeon Scalable processor. This is the one that was codenamed Sapphire Rapids. I mean, you know, more cores, more performance, AI acceleration, crypto acceleration, it's all in there. But one of my favorite security features, and it is one that's called Intel Control-Flow Enforcement Technology, or Intel CET. And why I like CET is because I find the attack that it is designed to mitigate is just evil genius. This type of attack, which is called a return, a jump, or a call-oriented programming attack, is designed to not bring a whole bunch of new identifiable malware into the system, you know, which could be picked up by security software. What it is designed to do is to look for little bits of existing, little bits of existing code already on the server. So if you're running, say, a web server, it's looking for little bits of that web-server code that it can then execute in a particular order to achieve a malicious outcome, something like open a command prompt, or escalate its privileges. Now in order to get those little code bits to execute in an order, it has a control mechanism. And there are different, each of the different types of attacks uses a different control mechanism. But what CET does is it gets in there and it disrupts those control mechanisms, uses hardware to prevent those particular techniques from being able to dig in and take effect. So CET can, you know, disrupt it and make sure that software behaves safely and as the programmer intended, rather than picking off these little arbitrary bits in one of these return, or jump, or call-oriented programming attacks. Now it is a technology that is included in every single one of the new 4th Gen Xeon Scalable processors. And so it's going to be an inherent characteristic the customers can benefit from when they buy a new Gen11 HPE server. >> Cole, more goodness from Intel there impacting Gen11 on the HPE side. What's your reaction to that? >> I mean, I feel like this is exactly why you do business with the big Tier 1 partners, because you can put, you know, trust in from where it comes from, through the global operations, literally, having it hardened from the factory it's finished in, moving into your operating environment, and then now protecting against attacks in your web hosting services, right? I mean, this is great. I mean, you'll always have an attack on data, you know, as you're seeing in the data. But the more contained, the more information, and the more control and trust we can give to our customers, it's going to make their job a little easier in protecting whatever job they're trying to do. >> Yeah, and enterprise customers, as you know, they're always trying to keep up to date on the skills and battle the threats. Having that built in under the covers is a real good way to kind of help them free up their time, and also protect them is really killer. This is a big, big part of the Gen11 story here. Securing the data, securing compute, that's the topic here for this special cube conversation, engineering for a hybrid world. 
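To make the shadow-stack idea behind Intel CET a little more concrete, here is a toy model of the return-address check Mike describes. It is an illustrative sketch only; the real mechanism is enforced by the processor, not by software, and the class and method names here are hypothetical.

```python
class ControlProtectionFault(Exception):
    """Raised when a return address disagrees with the shadow stack."""
    pass

class ToyCPU:
    def __init__(self):
        self.stack = []         # ordinary call stack: attacker-writable memory in a ROP attack
        self.shadow_stack = []  # CET shadow stack: protected, not writable by normal code

    def call(self, return_addr: int):
        # On a call, the return address is recorded on both stacks.
        self.stack.append(return_addr)
        self.shadow_stack.append(return_addr)

    def ret(self) -> int:
        # On a return, the two copies must match, or the CPU faults.
        addr = self.stack.pop()
        if addr != self.shadow_stack.pop():
            raise ControlProtectionFault(f"return to {hex(addr)} blocked")
        return addr

cpu = ToyCPU()
cpu.call(0x401000)           # legitimate call
cpu.stack[-1] = 0xdeadbeef   # ROP-style overwrite of the saved return address
try:
    cpu.ret()
except ControlProtectionFault as fault:
    print("CET-style check caught it:", fault)
```

The same line of thinking covers the jump- and call-oriented variants, where CET's indirect branch tracking only allows indirect jumps and calls to land on designated landing instructions.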
Cole, I'll give you the final word. What should people pay attention to, Gen11 from HPE, bottom line, what's the story? >> You know, it's, you know, it's not the first time, it's not the last time, but it's our fundamental security approach to just helping customers through their digital transformation, defending with an uncompromising focus to help protect our infrastructure in these technical solutions. >> Cole Humphreys is the global server security product manager at HPE. He's got his finger on the pulse and keeping everyone secure in the platform integrity there. Mike Ferron-Jones is the Intel product manager for data security technology. Gentlemen, thank you for this great conversation, getting into the weeds a little bit with Gen11, which is great. Love the hardware root-of-trust technologies, Better Together. Congratulations on Gen11 and your 4th Gen Xeon Scalable. Thanks for coming on. >> All right, thanks, John. >> Thank you very much, guys, appreciate it. Okay, you're watching "theCUBE's" special presentation, "Securing Compute, Engineered for the Hybrid World." I'm John Furrier, your host. Thanks for watching. (upbeat music)
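Stepping back to the chain-of-trust discussion above, the sketch below illustrates the basic idea of a measured boot chain: each stage is hashed and compared against a known-good value before control is handed off. This is a simplified illustration, not how Boot Guard, Trusted Execution Technology, or iLO are actually implemented; real platforms use signed images, fused keys, and hardware-protected measurement logs, and the names and values below are hypothetical.

```python
import hashlib

# Known-good measurements, provisioned ahead of time (illustrative values only).
KNOWN_GOOD = {
    "system_firmware": hashlib.sha256(b"firmware v1.2 image").hexdigest(),
    "bios":            hashlib.sha256(b"bios v3.4 image").hexdigest(),
    "bootloader":      hashlib.sha256(b"bootloader v0.9 image").hexdigest(),
}

def verified_boot(stages):
    """Walk the boot chain in order; halt at the first stage whose hash is wrong."""
    event_log = []  # analogous to extending measurements into a protected log
    for name, image in stages:
        digest = hashlib.sha256(image).hexdigest()
        event_log.append((name, digest))
        if digest != KNOWN_GOOD[name]:
            print(f"HALT: {name} failed verification, not handing off control")
            return False
        print(f"{name}: measurement matches known-good value")
    print("Platform reached a known good state, ready for workloads")
    return True

verified_boot([
    ("system_firmware", b"firmware v1.2 image"),
    ("bios",            b"bios v3.4 image"),
    ("bootloader",      b"bootloader v0.9 image"),
])
```

The supply-chain manifests Cole mentions follow the same pattern one step earlier: a signed list of component identities that can be re-hashed and checked against what actually arrives in the box.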
SUMMARY :
Cole Humphreys of HPE and Mike Ferron-Jones of Intel discuss trusted security by design in HPE ProLiant Gen11 servers with 4th Gen Intel Xeon Scalable processors: supply-chain assurance with cryptographic identities and component manifests, a silicon root of trust and verified boot chain spanning Intel Boot Guard, Trusted Execution Technology and HPE iLO, expanded attestation of PCIe devices, and Intel Control-Flow Enforcement Technology to block return-, jump- and call-oriented programming attacks.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
India | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
NIST | ORGANIZATION | 0.99+ |
ISO | ORGANIZATION | 0.99+ |
Mike | PERSON | 0.99+ |
Taiwan | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
Cole | PERSON | 0.99+ |
Transported Asset Protection Alliance | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
93% | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
Mike Ferron-Jones | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Cole Humphreys | PERSON | 0.99+ |
TAPA | ORGANIZATION | 0.99+ |
Gen11 | ORGANIZATION | 0.99+ |
today | DATE | 0.98+ |
first time | QUANTITY | 0.98+ |
14,000 | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Humphreys | PERSON | 0.98+ |
each party | QUANTITY | 0.98+ |
earth | LOCATION | 0.97+ |
Gen11 | COMMERCIAL_ITEM | 0.97+ |
Americas | LOCATION | 0.97+ |
Gen11s | COMMERCIAL_ITEM | 0.96+ |
Securing Compute, Engineered for the Hybrid World | TITLE | 0.96+ |
Xeon | COMMERCIAL_ITEM | 0.94+ |
4th Gen Xeon Scalable processor | COMMERCIAL_ITEM | 0.94+ |
each | QUANTITY | 0.93+ |
4th Gen Xeon | COMMERCIAL_ITEM | 0.92+ |
Ferron-Jones | PERSON | 0.91+ |
Sapphire Rapids | COMMERCIAL_ITEM | 0.91+ |
first electron | QUANTITY | 0.9+ |
two great companies | QUANTITY | 0.89+ |
decades | QUANTITY | 0.86+ |
three areas | QUANTITY | 0.85+ |
Gen11 | EVENT | 0.84+ |
ILO | ORGANIZATION | 0.83+ |
Control-Flow Enforcement Technology | OTHER | 0.82+ |
Breaking Analysis: ChatGPT Won't Give OpenAI First Mover Advantage
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> OpenAI, the company, and ChatGPT have taken the world by storm. Microsoft reportedly is investing an additional 10 billion dollars into the company. But in our view, while the hype around ChatGPT is justified, we don't believe OpenAI will lock up the market with its first mover advantage. Rather, we believe that success in this market will be directly proportional to the quality and quantity of data that a technology company has at its disposal, and the compute power that it can deploy to run its system. Hello and welcome to this week's Wikibon CUBE insights, powered by ETR. In this Breaking Analysis, we unpack the excitement around ChatGPT, and debate the premise that the company's early entry into the space may not confer winner take all advantage to OpenAI. And to do so, we welcome CUBE collaborator, alum, Sarbjeet Johal, (chuckles) and John Furrier, co-host of the Cube. Great to see you Sarbjeet, John. Really appreciate you guys coming to the program. >> Great to be on. >> Okay, so what is ChatGPT? Well, actually we asked ChatGPT, what is ChatGPT? So here's what it said. ChatGPT is a state-of-the-art language model developed by OpenAI that can generate human-like text. It could be fine tuned for a variety of language tasks, such as conversation, summarization, and language translation. So I asked it, give it to me in 50 words or less. How did it do? Anything to add? >> Yeah, I think it did good. It's a large language model, like previous models, but it started applying the transformer sort of mechanism to focus on the prompt you have given it, and then also on the answer it gave you in the first, sort of, one sentence or two sentences, and then it introspects on itself, like what it has already said to you, and just works on that. So it's a self sort of focus, if you will. The transformers help the large language models to do that. >> So to your point, it's a large language model, and GPT stands for generative pre-trained transformer. >> And if you put the definition back up there again, if you put it back up on the screen, let's see it back up. Okay, it actually missed the word, large. So one of the problems with ChatGPT, it's not always accurate. It's actually a large language model, and it says state of the art language model. And if you look at Google, Google has dominated AI for a long time and they're well known as being the best at this. And apparently Google has their own large language model, LLM, in play and have been holding it back to release because of backlash on the accuracy. Like, just that example you showed is a great point. It got it almost right, but it missed the key word. >> You know what's funny about that John, is I had previously asked it in my prompt to give it to me in less than a hundred words, and it was too long, I said it was too long for Breaking Analysis, and there it went into the fact that it's a large language model. So it gave me a really different answer both times. So, but it's still pretty amazing for those of you who haven't played with it yet. And one of the best examples that I saw was Ben Charrington from This Week In ML AI podcast. And I stumbled on this thanks to Brian Gracely, who was listening to one of his Cloudcasts.
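For readers who want to see the "transformer mechanism" Sarbjeet refers to in concrete terms, here is a minimal sketch of scaled dot-product attention, the core operation inside a GPT-style model. The token count, embedding size, and random weights are toy stand-ins, not values from any real model; a production model stacks many multi-head attention layers over billions of learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each token's output is a weighted mix of every token's value vector,
    weighted by how well its query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how strongly each token attends to each other token
    return softmax(scores, axis=-1) @ V  # (a real GPT also masks future positions)

rng = np.random.default_rng(0)
tokens, d_model = 4, 8                          # e.g. a 4-token prompt, toy embedding size
X = rng.normal(size=(tokens, d_model))          # stand-in for token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

out = attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (4, 8): one context-aware vector per token in the prompt
```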
Basically what Ben did is he took, he prompted ChatGPT to interview ChatGPT, and he simply gave the system the prompts, and then he ran the questions and answers into this avatar builder and sped it up 2X so it didn't sound like a machine. And voila, it was amazing. So John is ChatGPT going to take over as a cube host? >> Well, I was thinking, we get the questions in advance sometimes from PR people. We should actually just plug it in ChatGPT, add it to our notes, and saying, "Is this good enough for you? Let's ask the real question." So I think, you know, I think there's a lot of heavy lifting that gets done. I think the ChatGPT is a phenomenal revolution. I think it highlights the use case. Like that example we showed earlier. It gets most of it right. So it's directionally correct and it feels like it's an answer, but it's not a hundred percent accurate. And I think that's where people are seeing value in it. Writing marketing, copy, brainstorming, guest list, gift list for somebody. Write me some lyrics to a song. Give me a thesis about healthcare policy in the United States. It'll do a bang up job, and then you got to go in and you can massage it. So we're going to do three quarters of the work. That's why plagiarism and schools are kind of freaking out. And that's why Microsoft put 10 billion in, because why wouldn't this be a feature of Word, or the OS to help it do stuff on behalf of the user. So linguistically it's a beautiful thing. You can input a string and get a good answer. It's not a search result. >> And we're going to get your take on on Microsoft and, but it kind of levels the playing- but ChatGPT writes better than I do, Sarbjeet, and I know you have some good examples too. You mentioned the Reed Hastings example. >> Yeah, I was listening to Reed Hastings fireside chat with ChatGPT, and the answers were coming as sort of voice, in the voice format. And it was amazing what, he was having very sort of philosophy kind of talk with the ChatGPT, the longer sentences, like he was going on, like, just like we are talking, he was talking for like almost two minutes and then ChatGPT was answering. It was not one sentence question, and then a lot of answers from ChatGPT and yeah, you're right. I, this is our ability. I've been thinking deep about this since yesterday, we talked about, like, we want to do this segment. The data is fed into the data model. It can be the current data as well, but I think that, like, models like ChatGPT, other companies will have those too. They can, they're democratizing the intelligence, but they're not creating intelligence yet, definitely yet I can say that. They will give you all the finite answers. Like, okay, how do you do this for loop in Java, versus, you know, C sharp, and as a programmer you can do that, in, but they can't tell you that, how to write a new algorithm or write a new search algorithm for you. They cannot create a secretive code for you to- >> Not yet. >> Have competitive advantage. >> Not yet, not yet. >> but you- >> Can Google do that today? >> No one really can. The reasoning side of the data is, we talked about at our Supercloud event, with Zhamak Dehghani who's was CEO of, now of Nextdata. This next wave of data intelligence is going to come from entrepreneurs that are probably cross discipline, computer science and some other discipline. But they're going to be new things, for example, data, metadata, and data. It's hard to do reasoning like a human being, so that needs more data to train itself. 
So I think the first gen of this training module for the large language model they have is a corpus of text. Lot of that's why blog posts are, but the facts are wrong and sometimes out of context, because that contextual reasoning takes time, it takes intelligence. So machines need to become intelligent, and so therefore they need to be trained. So you're going to start to see, I think, a lot of acceleration on training the data sets. And again, it's only as good as the data you can get. And again, proprietary data sets will be a huge winner. Anyone who's got a large corpus of content, proprietary content like theCUBE or SiliconANGLE as a publisher will benefit from this. Large FinTech companies, anyone with large proprietary data will probably be a big winner on this generative AI wave, because it just, it will eat that up, and turn that back into something better. So I think there's going to be a lot of interesting things to look at here. And certainly productivity's going to be off the charts for vanilla and the internet is going to get swarmed with vanilla content. So if you're in the content business, and you're an original content producer of any kind, you're going to be not vanilla, so you're going to be better. So I think there's so much at play Dave (indistinct). >> I think the playing field has been risen, so we- >> Risen and leveled? >> Yeah, and leveled to certain extent. So it's now like that few people as consumers, as consumers of AI, we will have a advantage and others cannot have that advantage. So it will be democratized. That's, I'm sure about that. But if you take the example of calculator, when the calculator came in, and a lot of people are, "Oh, people can't do math anymore because calculator is there." right? So it's a similar sort of moment, just like a calculator for the next level. But, again- >> I see it more like open source, Sarbjeet, because like if you think about what ChatGPT's doing, you do a query and it comes from somewhere the value of a post from ChatGPT is just a reuse of AI. The original content accent will be come from a human. So if I lay out a paragraph from ChatGPT, did some heavy lifting on some facts, I check the facts, save me about maybe- >> Yeah, it's productive. >> An hour writing, and then I write a killer two, three sentences of, like, sharp original thinking or critical analysis. I then took that body of work, open source content, and then laid something on top of it. >> And Sarbjeet's example is a good one, because like if the calculator kids don't do math as well anymore, the slide rule, remember we had slide rules as kids, remember we first started using Waze, you know, we were this minority and you had an advantage over other drivers. Now Waze is like, you know, social traffic, you know, navigation, everybody had, you know- >> All the back roads are crowded. >> They're car crowded. (group laughs) Exactly. All right, let's, let's move on. What about this notion that futurist Ray Amara put forth and really Amara's Law that we're showing here, it's, the law is we, you know, "We tend to overestimate the effect of technology in the short run and underestimate it in the long run." Is that the case, do you think, with ChatGPT? What do you think Sarbjeet? >> I think that's true actually. There's a lot of, >> We don't debate this. >> There's a lot of awe, like when people see the results from ChatGPT, they say what, what the heck? Like, it can do this? 
But then if you use it more and more and more, and I ask the set of similar question, not the same question, and it gives you like same answer. It's like reading from the same bucket of text in, the interior read (indistinct) where the ChatGPT, you will see that in some couple of segments. It's very, it sounds so boring that the ChatGPT is coming out the same two sentences every time. So it is kind of good, but it's not as good as people think it is right now. But we will have, go through this, you know, hype sort of cycle and get realistic with it. And then in the long term, I think it's a great thing in the short term, it's not something which will (indistinct) >> What's your counter point? You're saying it's not. >> I, no I think the question was, it's hyped up in the short term and not it's underestimated long term. That's what I think what he said, quote. >> Yes, yeah. That's what he said. >> Okay, I think that's wrong with this, because this is a unique, ChatGPT is a unique kind of impact and it's very generational. People have been comparing it, I have been comparing to the internet, like the web, web browser Mosaic and Netscape, right, Navigator. I mean, I clearly still remember the days seeing Navigator for the first time, wow. And there weren't not many sites you could go to, everyone typed in, you know, cars.com, you know. >> That (indistinct) wasn't that overestimated, the overhyped at the beginning and underestimated. >> No, it was, it was underestimated long run, people thought. >> But that Amara's law. >> That's what is. >> No, they said overestimated? >> Overestimated near term underestimated- overhyped near term, underestimated long term. I got, right I mean? >> Well, I, yeah okay, so I would then agree, okay then- >> We were off the charts about the internet in the early days, and it actually exceeded our expectations. >> Well there were people who were, like, poo-pooing it early on. So when the browser came out, people were like, "Oh, the web's a toy for kids." I mean, in 1995 the web was a joke, right? So '96, you had online populations growing, so you had structural changes going on around the browser, internet population. And then that replaced other things, direct mail, other business activities that were once analog then went to the web, kind of read only as you, as we always talk about. So I think that's a moment where the hype long term, the smart money, and the smart industry experts all get the long term. And in this case, there's more poo-pooing in the short term. "Ah, it's not a big deal, it's just AI." I've heard many people poo-pooing ChatGPT, and a lot of smart people saying, "No this is next gen, this is different and it's only going to get better." So I think people are estimating a big long game on this one. >> So you're saying it's bifurcated. There's those who say- >> Yes. >> Okay, all right, let's get to the heart of the premise, and possibly the debate for today's episode. Will OpenAI's early entry into the market confer sustainable competitive advantage for the company. And if you look at the history of tech, the technology industry, it's kind of littered with first mover failures. Altair, IBM, Tandy, Commodore, they and Apple even, they were really early in the PC game. They took a backseat to Dell who came in the scene years later with a better business model. Netscape, you were just talking about, was all the rage in Silicon Valley, with the first browser, drove up all the housing prices out here. 
AltaVista was the first search engine to really, you know, index full text. >> Owned by Dell, I mean DEC. >> Owned by Digital. >> Yeah, Digital Equipment >> Compaq bought it. And of course as an aside, Digital, they wanted to showcase their hardware, right? Their super computer stuff. And then so Friendster and MySpace, they came before Facebook. The iPhone certainly wasn't the first mobile device. So lots of failed examples, but there are some recent successes like AWS and cloud. >> You could say smartphone. So I mean. >> Well I know, and you can, we can parse this so we'll debate it. Now Twitter, you could argue, had first mover advantage. You kind of gave me that one John. Bitcoin and crypto clearly had first mover advantage, and sustaining that. Guys, will OpenAI make it to the list on the right with ChatGPT, what do you think? >> I think categorically as a company, it probably won't, but as a category, I think what they're doing will, so OpenAI as a company, they get funding, there's power dynamics involved. Microsoft put a billion dollars in early on, then they just pony it up. Now they're reporting 10 billion more. So, like, if the browsers, Microsoft had competitive advantage over Netscape, and used monopoly power, and convicted by the Department of Justice for killing Netscape with their monopoly, Netscape should have had won that battle, but Microsoft killed it. In this case, Microsoft's not killing it, they're buying into it. So I think the embrace extend Microsoft power here makes OpenAI vulnerable for that one vendor solution. So the AI as a company might not make the list, but the category of what this is, large language model AI, is probably will be on the right hand side. >> Okay, we're going to come back to the government intervention and maybe do some comparisons, but what are your thoughts on this premise here? That, it will basically set- put forth the premise that it, that ChatGPT, its early entry into the market will not confer competitive advantage to >> For OpenAI. >> To Open- Yeah, do you agree with that? >> I agree with that actually. It, because Google has been at it, and they have been holding back, as John said because of the scrutiny from the Fed, right, so- >> And privacy too. >> And the privacy and the accuracy as well. But I think Sam Altman and the company on those guys, right? They have put this in a hasty way out there, you know, because it makes mistakes, and there are a lot of questions around the, sort of, where the content is coming from. You saw that as your example, it just stole the content, and without your permission, you know? >> Yeah. So as quick this aside- >> And it codes on people's behalf and the, those codes are wrong. So there's a lot of, sort of, false information it's putting out there. So it's a very vulnerable thing to do what Sam Altman- >> So even though it'll get better, others will compete. >> So look, just side note, a term which Reid Hoffman used a little bit. Like he said, it's experimental launch, like, you know, it's- >> It's pretty damn good. >> It is clever because according to Sam- >> It's more than clever. It's good. >> It's awesome, if you haven't used it. I mean you write- you read what it writes and you go, "This thing writes so well, it writes so much better than you." >> The human emotion drives that too. I think that's a big thing. But- >> I Want to add one more- >> Make your last point. >> Last one. Okay. So, but he's still holding back. He's conducting quite a few interviews. 
If you want to get the gist of it, there's an interview with StrictlyVC interview from yesterday with Sam Altman. Listen to that one it's an eye opening what they want- where they want to take it. But my last one I want to make it on this point is that Satya Nadella yesterday did an interview with Wall Street Journal. I think he was doing- >> You were not impressed. >> I was not impressed because he was pushing it too much. So Sam Altman's holding back so there's less backlash. >> Got 10 billion reasons to push. >> I think he's almost- >> Microsoft just laid off 10000 people. Hey ChatGPT, find me a job. You know like. (group laughs) >> He's overselling it to an extent that I think it will backfire on Microsoft. And he's over promising a lot of stuff right now, I think. I don't know why he's very jittery about all these things. And he did the same thing during Ignite as well. So he said, "Oh, this AI will write code for you and this and that." Like you called him out- >> The hyperbole- >> During your- >> from Satya Nadella, he's got a lot of hyperbole. (group talks over each other) >> All right, Let's, go ahead. >> Well, can I weigh in on the whole- >> Yeah, sure. >> Microsoft thing on whether OpenAI, here's the take on this. I think it's more like the browser moment to me, because I could relate to that experience with ChatG, personally, emotionally, when I saw that, and I remember vividly- >> You mean that aha moment (indistinct). >> Like this is obviously the future. Anything else in the old world is dead, website's going to be everywhere. It was just instant dot connection for me. And a lot of other smart people who saw this. Lot of people by the way, didn't see it. Someone said the web's a toy. At the company I was worked for at the time, Hewlett Packard, they like, they could have been in, they had invented HTML, and so like all this stuff was, like, they just passed, the web was just being passed over. But at that time, the browser got better, more websites came on board. So the structural advantage there was online web usage was growing, online user population. So that was growing exponentially with the rise of the Netscape browser. So OpenAI could stay on the right side of your list as durable, if they leverage the category that they're creating, can get the scale. And if they can get the scale, just like Twitter, that failed so many times that they still hung around. So it was a product that was always successful, right? So I mean, it should have- >> You're right, it was terrible, we kept coming back. >> The fail whale, but it still grew. So OpenAI has that moment. They could do it if Microsoft doesn't meddle too much with too much power as a vendor. They could be the Netscape Navigator, without the anti-competitive behavior of somebody else. So to me, they have the pole position. So they have an opportunity. So if not, if they don't execute, then there's opportunity. There's not a lot of barriers to entry, vis-a-vis say the CapEx of say a cloud company like AWS. You can't replicate that, Many have tried, but I think you can replicate OpenAI. >> And we're going to talk about that. Okay, so real quick, I want to bring in some ETR data. This isn't an ETR heavy segment, only because this so new, you know, they haven't coverage yet, but they do cover AI. So basically what we're seeing here is a slide on the vertical axis's net score, which is a measure of spending momentum, and in the horizontal axis's is presence in the dataset. Think of it as, like, market presence. 
And in the insert right there, you can see how the dots are plotted, the two columns. And so, but the key point here that we want to make, there's a bunch of companies on the left, is he like, you know, DataRobot and C3 AI and some others, but the big whales, Google, AWS, Microsoft, are really dominant in this market. So that's really the key takeaway that, can we- >> I notice IBM is way low. >> Yeah, IBM's low, and actually bring that back up and you, but then you see Oracle who actually is injecting. So I guess that's the other point is, you're not necessarily going to go buy AI, and you know, build your own AI, you're going to, it's going to be there and, it, Salesforce is going to embed it into its platform, the SaaS companies, and you're going to purchase AI. You're not necessarily going to build it. But some companies obviously are. >> I mean to quote IBM's general manager Rob Thomas, "You can't have AI with IA." information architecture and David Flynn- >> You can't Have AI without IA >> without, you can't have AI without IA. You can't have, if you have an Information Architecture, you then can power AI. Yesterday David Flynn, with Hammersmith, was on our Supercloud. He was pointing out that the relationship of storage, where you store things, also impacts the data and stressablity, and Zhamak from Nextdata, she was pointing out that same thing. So the data problem factors into all this too, Dave. >> So you got the big cloud and internet giants, they're all poised to go after this opportunity. Microsoft is investing up to 10 billion. Google's code red, which was, you know, the headline in the New York Times. Of course Apple is there and several alternatives in the market today. Guys like Chinchilla, Bloom, and there's a company Jasper and several others, and then Lena Khan looms large and the government's around the world, EU, US, China, all taking notice before the market really is coalesced around a single player. You know, John, you mentioned Netscape, they kind of really, the US government was way late to that game. It was kind of game over. And Netscape, I remember Barksdale was like, "Eh, we're going to be selling software in the enterprise anyway." and then, pshew, the company just dissipated. So, but it looks like the US government, especially with Lena Khan, they're changing the definition of antitrust and what the cause is to go after people, and they're really much more aggressive. It's only what, two years ago that (indistinct). >> Yeah, the problem I have with the federal oversight is this, they're always like late to the game, and they're slow to catch up. So in other words, they're working on stuff that should have been solved a year and a half, two years ago around some of the social networks hiding behind some of the rules around open web back in the days, and I think- >> But they're like 15 years late to that. >> Yeah, and now they got this new thing on top of it. So like, I just worry about them getting their fingers. >> But there's only two years, you know, OpenAI. >> No, but the thing (indistinct). >> No, they're still fighting other battles. But the problem with government is that they're going to label Big Tech as like a evil thing like Pharma, it's like smoke- >> You know Lena Khan wants to kill Big Tech, there's no question. >> So I think Big Tech is getting a very seriously bad rap. And I think anything that the government does that shades darkness on tech, is politically motivated in most cases. 
You can almost look at everything, and my 80 20 rule is in play here. 80% of the government activity around tech is bullshit, it's politically motivated, and the 20% is probably relevant, but off the mark and not organized. >> Well market forces have always been the determining factor of success. The governments, you know, have been pretty much failed. I mean you look at IBM's antitrust, that, what did that do? The market ultimately beat them. You look at Microsoft back in the day, right? Windows 95 was peaking, the government came in. But you know, like you said, they missed the web, right, and >> so they were hanging on- >> There's nobody in government >> to Windows. >> that actually knows- >> And so, you, I think you're right. It's market forces that are going to determine this. But Sarbjeet, what do you make of Microsoft's big bet here, you weren't impressed with with Nadella. How do you think, where are they going to apply it? Is this going to be a Hail Mary for Bing, or is it going to be applied elsewhere? What do you think. >> They are saying that they will, sort of, weave this into their products, office products, productivity and also to write code as well, developer productivity as well. That's a big play for them. But coming back to your antitrust sort of comments, right? I believe the, your comment was like, oh, fed was late 10 years or 15 years earlier, but now they're two years. But things are moving very fast now as compared to they used to move. >> So two years is like 10 Years. >> Yeah, two years is like 10 years. Just want to make that point. (Dave laughs) This thing is going like wildfire. Any new tech which comes in that I think they're going against distribution channels. Lina Khan has commented time and again that the marketplace model is that she wants to have some grip on. Cloud marketplaces are a kind of monopolistic kind of way. >> I don't, I don't see this, I don't see a Chat AI. >> You told me it's not Bing, you had an interesting comment. >> No, no. First of all, this is great from Microsoft. If you're Microsoft- >> Why? >> Because Microsoft doesn't have the AI chops that Google has, right? Google is got so much core competency on how they run their search, how they run their backends, their cloud, even though they don't get a lot of cloud market share in the enterprise, they got a kick ass cloud cause they needed one. >> Totally. >> They've invented SRE. I mean Google's development and engineering chops are off the scales, right? Amazon's got some good chops, but Google's got like 10 times more chops than AWS in my opinion. Cloud's a whole different story. Microsoft gets AI, they get a playbook, they get a product they can render into, the not only Bing, productivity software, helping people write papers, PowerPoint, also don't forget the cloud AI can super help. We had this conversation on our Supercloud event, where AI's going to do a lot of the heavy lifting around understanding observability and managing service meshes, to managing microservices, to turning on and off applications, and or maybe writing code in real time. So there's a plethora of use cases for Microsoft to deploy this. combined with their R and D budgets, they can then turbocharge more research, build on it. So I think this gives them a car in the game, Google may have pole position with AI, but this puts Microsoft right in the game, and they already have a lot of stuff going on. But this just, I mean everything gets lifted up. Security, cloud, productivity suite, everything. 
>> What's under the hood at Google, and why aren't they talking about it? I mean they got to be freaked out about this. No? Or do they have kind of a magic bullet? >> I think they have the, they have the chops definitely. Magic bullet, I don't know where they are, as compared to the ChatGPT 3 or 4 models. Like they, but if you look at the online sort of activity and the videos put out there from Google folks, Google technology folks, that's account you should look at if you are looking there, they have put all these distinctions what ChatGPT 3 has used, they have been talking about for a while as well. So it's not like it's a secret thing that you cannot replicate. As you said earlier, like in the beginning of this segment, that anybody who has more data and the capacity to process that data, which Google has both, I think they will win this. >> Obviously living in Palo Alto where the Google founders are, and Google's headquarters next town over we have- >> We're so close to them. We have inside information on some of the thinking and that hasn't been reported by any outlet yet. And that is, is that, from what I'm hearing from my sources, is Google has it, they don't want to release it for many reasons. One is it might screw up their search monopoly, one, two, they're worried about the accuracy, 'cause Google will get sued. 'Cause a lot of people are jamming on this ChatGPT as, "Oh it does everything for me." when it's clearly not a hundred percent accurate all the time. >> So Lina Kahn is looming, and so Google's like be careful. >> Yeah so Google's just like, this is the third, could be a third rail. >> But the first thing you said is a concern. >> Well no. >> The disruptive (indistinct) >> What they will do is do a Waymo kind of thing, where they spin out a separate company. >> They're doing that. >> The discussions happening, they're going to spin out the separate company and put it over there, and saying, "This is AI, got search over there, don't touch that search, 'cause that's where all the revenue is." (chuckles) >> So, okay, so that's how they deal with the Clay Christensen dilemma. What's the business model here? I mean it's not advertising, right? Is it to charge you for a query? What, how do you make money at this? >> It's a good question, I mean my thinking is, first of all, it's cool to type stuff in and see a paper get written, or write a blog post, or gimme a marketing slogan for this or that or write some code. I think the API side of the business will be critical. And I think Howie Xu, I know you're going to reference some of his comments yesterday on Supercloud, I think this brings a whole 'nother user interface into technology consumption. I think the business model, not yet clear, but it will probably be some sort of either API and developer environment or just a straight up free consumer product, with some sort of freemium backend thing for business. >> And he was saying too, it's natural language is the way in which you're going to interact with these systems. >> I think it's APIs, it's APIs, APIs, APIs, because these people who are cooking up these models, and it takes a lot of compute power to train these and to, for inference as well. Somebody did the analysis on the how many cents a Google search costs to Google, and how many cents the ChatGPT query costs. It's, you know, 100x or something on that. You can take a look at that. >> A 100x on which side? >> You're saying two orders of magnitude more expensive for ChatGPT >> Much more, yeah. >> Than for Google. 
>> It's very expensive. >> So Google's got the data, they got the infrastructure and they got, you're saying they got the cost (indistinct) >> No actually it's a simple query as well, but they are trying to put together the answers, and they're going through a lot more data versus index data already, you know. >> Let me clarify, you're saying that Google's version of ChatGPT is more efficient? >> No, I'm, I'm saying Google search results. >> Ah, search results. >> What are used to today, but cheaper. >> But that, does that, is that going to confer advantage to Google's large language (indistinct)? >> It will, because there were deep science (indistinct). >> Google, I don't think Google search is doing a large language model on their search, it's keyword search. You know, what's the weather in Santa Cruz? Or how, what's the weather going to be? Or you know, how do I find this? Now they have done a smart job of doing some things with those queries, auto complete, re direct navigation. But it's, it's not entity. It's not like, "Hey, what's Dave Vellante thinking this week in Breaking Analysis?" ChatGPT might get that, because it'll get your Breaking Analysis, it'll synthesize it. There'll be some, maybe some clips. It'll be like, you know, I mean. >> Well I got to tell you, I asked ChatGPT to, like, I said, I'm going to enter a transcript of a discussion I had with Nir Zuk, the CTO of Palo Alto Networks, And I want you to write a 750 word blog. I never input the transcript. It wrote a 750 word blog. It attributed quotes to him, and it just pulled a bunch of stuff that, and said, okay, here it is. It talked about Supercloud, it defined Supercloud. >> It's made, it makes you- >> Wow, But it was a big lie. It was fraudulent, but still, blew me away. >> Again, vanilla content and non accurate content. So we are going to see a surge of misinformation on steroids, but I call it the vanilla content. Wow, that's just so boring, (indistinct). >> There's so many dangers. >> Make your point, cause we got to, almost out of time. >> Okay, so the consumption, like how do you consume this thing. As humans, we are consuming it and we are, like, getting a nicely, like, surprisingly shocked, you know, wow, that's cool. It's going to increase productivity and all that stuff, right? And on the danger side as well, the bad actors can take hold of it and create fake content and we have the fake sort of intelligence, if you go out there. So that's one thing. The second thing is, we are as humans are consuming this as language. Like we read that, we listen to it, whatever format we consume that is, but the ultimate usage of that will be when the machines can take that output from likes of ChatGPT, and do actions based on that. The robots can work, the robot can paint your house, we were talking about, right? Right now we can't do that. >> Data apps. >> So the data has to be ingested by the machines. It has to be digestible by the machines. And the machines cannot digest unorganized data right now, we will get better on the ingestion side as well. So we are getting better. >> Data, reasoning, insights, and action. >> I like that mall, paint my house. >> So, okay- >> By the way, that means drones that'll come in. Spray painting your house. >> Hey, it wasn't too long ago that robots couldn't climb stairs, as I like to point out. Okay, and of course it's no surprise the venture capitalists are lining up to eat at the trough, as I'd like to say. 
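Circling back to the query-cost point raised a moment ago, here is a rough sketch of why the roughly 100x gap matters. The per-query figures are purely illustrative stand-ins, since the real numbers are not public; the point is the scale of the difference at search-like volumes, not the exact cents.

```python
# Illustrative stand-in figures only; actual per-query costs are not public.
search_cost_cents = 0.02          # assumed cost of one conventional keyword search
llm_cost_cents = 2.0              # assumed cost of one LLM-generated answer (~100x)
queries_per_day = 1_000_000_000   # illustrative search-scale volume

def daily_cost_usd(cost_cents: float) -> float:
    return cost_cents * queries_per_day / 100.0

print(f"search-style backend: ${daily_cost_usd(search_cost_cents):,.0f} per day")
print(f"LLM-style backend:    ${daily_cost_usd(llm_cost_cents):,.0f} per day")
# On these assumptions the gap is roughly $19.8M per day, which is why inference
# efficiency and who ultimately pays per query sit at the center of the business-model debate.
```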
Let's hear, you'd referenced this earlier, John, let's hear what AI expert Howie Xu said at the Supercloud event, about what it takes to clone ChatGPT. Please, play the clip. >> So one of the VCs actually asked me the other day, right? "Hey, how much money do I need to spend, invest to get a, you know, another shot to the openAI sort of the level." You know, I did a (indistinct) >> Line up. >> A hundred million dollar is the order of magnitude that I came up with, right? You know, not a billion, not 10 million, right? So a hundred- >> Guys a hundred million dollars, that's an astoundingly low figure. What do you make of it? >> I was in an interview with, I was interviewing, I think he said hundred million or so, but in the hundreds of millions, not a billion right? >> You were trying to get him up, you were like "Hundreds of millions." >> Well I think, I- >> He's like, eh, not 10, not a billion. >> Well first of all, Howie Xu's an expert machine learning. He's at Zscaler, he's a machine learning AI guy. But he comes from VMware, he's got his technology pedigrees really off the chart. Great friend of theCUBE and kind of like a CUBE analyst for us. And he's smart. He's right. I think the barriers to entry from a dollar standpoint are lower than say the CapEx required to compete with AWS. Clearly, the CapEx spending to build all the tech for the run a cloud. >> And you don't need a huge sales force. >> And in some case apps too, it's the same thing. But I think it's not that hard. >> But am I right about that? You don't need a huge sales force either. It's, what, you know >> If the product's good, it will sell, this is a new era. The better mouse trap will win. This is the new economics in software, right? So- >> Because you look at the amount of money Lacework, and Snyk, Snowflake, Databrooks. Look at the amount of money they've raised. I mean it's like a billion dollars before they get to IPO or more. 'Cause they need promotion, they need go to market. You don't need (indistinct) >> OpenAI's been working on this for multiple five years plus it's, hasn't, wasn't born yesterday. Took a lot of years to get going. And Sam is depositioning all the success, because he's trying to manage expectations, To your point Sarbjeet, earlier. It's like, yeah, he's trying to "Whoa, whoa, settle down everybody, (Dave laughs) it's not that great." because he doesn't want to fall into that, you know, hero and then get taken down, so. >> It may take a 100 million or 150 or 200 million to train the model. But to, for the inference to, yeah to for the inference machine, It will take a lot more, I believe. >> Give it, so imagine, >> Because- >> Go ahead, sorry. >> Go ahead. But because it consumes a lot more compute cycles and it's certain level of storage and everything, right, which they already have. So I think to compute is different. To frame the model is a different cost. But to run the business is different, because I think 100 million can go into just fighting the Fed. >> Well there's a flywheel too. >> Oh that's (indistinct) >> (indistinct) >> We are running the business, right? >> It's an interesting number, but it's also kind of, like, context to it. So here, a hundred million spend it, you get there, but you got to factor in the fact that the ways companies win these days is critical mass scale, hitting a flywheel. If they can keep that flywheel of the value that they got going on and get better, you can almost imagine a marketplace where, hey, we have proprietary data, we're SiliconANGLE in theCUBE. 
We have proprietary content, CUBE videos, transcripts. Well wouldn't it be great if someone in a marketplace could sell a module for us, right? We buy that, Amazon's thing and things like that. So if they can get a marketplace going where you can apply to data sets that may be proprietary, you can start to see this become bigger. And so I think the key barriers to entry is going to be success. I'll give you an example, Reddit. Reddit is successful and it's hard to copy, not because of the software. >> They built the moat. >> Because you can, buy Reddit open source software and try To compete. >> They built the moat with their community. >> Their community, their scale, their user expectation. Twitter, we referenced earlier, that thing should have gone under the first two years, but there was such a great emotional product. People would tolerate the fail whale. And then, you know, well that was a whole 'nother thing. >> Then a plane landed in (John laughs) the Hudson and it was over. >> I think verticals, a lot of verticals will build applications using these models like for lawyers, for doctors, for scientists, for content creators, for- >> So you'll have many hundreds of millions of dollars investments that are going to be seeping out. If, all right, we got to wrap, if you had to put odds on it that that OpenAI is going to be the leader, maybe not a winner take all leader, but like you look at like Amazon and cloud, they're not winner take all, these aren't necessarily winner take all markets. It's not necessarily a zero sum game, but let's call it winner take most. What odds would you give that open AI 10 years from now will be in that position. >> If I'm 0 to 10 kind of thing? >> Yeah, it's like horse race, 3 to 1, 2 to 1, even money, 10 to 1, 50 to 1. >> Maybe 2 to 1, >> 2 to 1, that's pretty low odds. That's basically saying they're the favorite, they're the front runner. Would you agree with that? >> I'd say 4 to 1. >> Yeah, I was going to say I'm like a 5 to 1, 7 to 1 type of person, 'cause I'm a skeptic with, you know, there's so much competition, but- >> I think they're definitely the leader. I mean you got to say, I mean. >> Oh there's no question. There's no question about it. >> The question is can they execute? >> They're not Friendster, is what you're saying. >> They're not Friendster and they're more like Twitter and Reddit where they have momentum. If they can execute on the product side, and if they don't stumble on that, they will continue to have the lead. >> If they say stay neutral, as Sam is, has been saying, that, hey, Microsoft is one of our partners, if you look at their company model, how they have structured the company, then they're going to pay back to the investors, like Microsoft is the biggest one, up to certain, like by certain number of years, they're going to pay back from all the money they make, and after that, they're going to give the money back to the public, to the, I don't know who they give it to, like non-profit or something. (indistinct) >> Okay, the odds are dropping. (group talks over each other) That's a good point though >> Actually they might have done that to fend off the criticism of this. But it's really interesting to see the model they have adopted. 
>> The wildcard in all this, My last word on this is that, if there's a developer shift in how developers and data can come together again, we have conferences around the future of data, Supercloud and meshs versus, you know, how the data world, coding with data, how that evolves will also dictate, 'cause a wild card could be a shift in the landscape around how developers are using either machine learning or AI like techniques to code into their apps, so. >> That's fantastic insight. I can't thank you enough for your time, on the heels of Supercloud 2, really appreciate it. All right, thanks to John and Sarbjeet for the outstanding conversation today. Special thanks to the Palo Alto studio team. My goodness, Anderson, this great backdrop. You guys got it all out here, I'm jealous. And Noah, really appreciate it, Chuck, Andrew Frick and Cameron, Andrew Frick switching, Cameron on the video lake, great job. And Alex Myerson, he's on production, manages the podcast for us, Ken Schiffman as well. Kristen Martin and Cheryl Knight help get the word out on social media and our newsletters. Rob Hof is our editor-in-chief over at SiliconANGLE, does some great editing, thanks to all. Remember, all these episodes are available as podcasts. All you got to do is search Breaking Analysis podcast, wherever you listen. Publish each week on wikibon.com and siliconangle.com. Want to get in touch, email me directly, david.vellante@siliconangle.com or DM me at dvellante, or comment on our LinkedIn post. And by all means, check out etr.ai. They got really great survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, We'll see you next time on Breaking Analysis. (electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Sarbjeet | PERSON | 0.99+ |
Brian Gracely | PERSON | 0.99+ |
Lina Khan | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Reid Hoffman | PERSON | 0.99+ |
Alex Myerson | PERSON | 0.99+ |
Lena Khan | PERSON | 0.99+ |
Sam Altman | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Rob Thomas | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Ken Schiffman | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
David Flynn | PERSON | 0.99+ |
Sam | PERSON | 0.99+ |
Noah | PERSON | 0.99+ |
Ray Amara | PERSON | 0.99+ |
10 billion | QUANTITY | 0.99+ |
150 | QUANTITY | 0.99+ |
Rob Hof | PERSON | 0.99+ |
Chuck | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Howie Xu | PERSON | 0.99+ |
Anderson | PERSON | 0.99+ |
Cheryl Knight | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Hewlett Packard | ORGANIZATION | 0.99+ |
Santa Cruz | LOCATION | 0.99+ |
1995 | DATE | 0.99+ |
Lina Kahn | PERSON | 0.99+ |
Zhamak Dehghani | PERSON | 0.99+ |
50 words | QUANTITY | 0.99+ |
Hundreds of millions | QUANTITY | 0.99+ |
Compaq | ORGANIZATION | 0.99+ |
10 | QUANTITY | 0.99+ |
Kristen Martin | PERSON | 0.99+ |
two sentences | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
hundreds of millions | QUANTITY | 0.99+ |
Satya Nadella | PERSON | 0.99+ |
Cameron | PERSON | 0.99+ |
100 million | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
one sentence | QUANTITY | 0.99+ |
10 million | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Clay Christensen | PERSON | 0.99+ |
Sarbjeet Johal | PERSON | 0.99+ |
Netscape | ORGANIZATION | 0.99+ |
Meet the new HPE ProLiant Gen11 Servers
>> Hello, everyone. Welcome to theCUBE's coverage of Compute Engineered For Your Hybrid World, sponsored by HPE and Intel. I'm John Furrier, host of theCUBE. I'm pleased to be joined by Krista Satterthwaite, SVP and general manager for HPE Mainstream Compute, and Lisa Spelman, corporate vice president, and general manager of Intel Xeon Products, here to discuss the major announcement. Thanks for joining us today. Thanks for coming on theCUBE. >> Thanks for having us. >> Great to be here. >> Great to see you guys. And exciting announcement. Krista, Compute continues to evolve to meet the challenges of businesses. We're seeing more and more high performance, more Compute, I mean, it's getting more Compute every day. You guys officially announced this next generation of ProLiant Gen11s in November. Can you share and talk about what this means? >> Yeah, so first of all, thanks so much for having me. I'm really excited about this announcement. And yeah, in November we announced our HPE ProLiant NextGen, and it really was about one thing. It's about engineering Compute for customers' hybrid world. And we have three different design principles when we designed this generation. First is intuitive cloud operating experience, and that's with our HPE GreenLake for Compute Ops Management. And that's all about management that is simple, unified, and automated. So it's all about seeing everything from one council. So you have a customer that's using this, and they were so surprised at how much they could see, and they were excited because they had servers in multiple locations. This was a hotel, so they had servers everywhere, and they can now see all their different firmware levels. And with that type of visibility, they thought their planning was going to be much, much easier. And then when it comes to updates, they're much quicker and much easier, so it's an exciting thing, whether you have servers just in the data center, or you have them distributed, you could see and do more than you ever could before with HPE GreenLake for Compute Ops Management. So that's number one. Number two is trusted security by design. Now, when we launched our HPE ProLiant Gen10 servers years ago, we launched groundbreaking innovative security features, and we haven't stopped, we've continued to enhance that every since then. And this generation's no exception. So we have new innovations around security. Security is a huge focus area for us, and so we're excited about delivering those. And then lastly, performance for every workload. We have a huge increase in performance with HPE ProLiant Gen11, and we have customers that are clamoring for this additional performance right now. And what's great about this is that, it doesn't matter where the bottleneck is, whether it's CPU, memory or IO, we have advancements across the board that are going to make real differences in what customers are going to be able to get out of their workloads. And then we have customers that are trying to build headroom in. So even if they don't need a today, what they put in their environment today, they know needs to last and need to be built for the future. >> That's awesome. Thanks for the recap. And that's great news for folks looking to power those workloads, more and more optimizations needed. I got to ask though, how is what you guys are announcing today, meeting these customer needs for the future, and what are your customers looking for and what are HPE and Intel announcing today? 
>> Yeah, so customers are doing more than ever before with their servers. So they're really pushing things to the max. I'll give you an example. There's a retail customer that is waiting to get their hands on our ProLiant Gen11 servers, because they want to do video streaming in every one of their retail stores, and when they were scoping what they need, we started talking to 'em about what their needs were today, and they were like, "Forget about what my needs are today. We're buying for headroom. We don't want to touch these servers for a while." So they're maxing things out, because they know the needs are coming. And so what you'll see with this generation is that we've built all of that in so that customers can deploy with confidence and know they have the headroom for all the things they want to do. The applications that we see and what people are trying to do with their servers is light years different than the last big announcement we had, which was our ProLiant Gen10 servers. People are trying to do more than ever before and they're trying to do that at the Edge as well as in the data center. So I'll tell you a little bit about the servers we have. So in partnership with Intel, we're really excited to announce a new batch of servers. And these servers feature the 4th Gen Intel Xeon scalable processors, bringing a lot more performance and efficiency. And I'll talk about the servers, one, the first one is the HPE ProLiant DL320 Gen11. Now, I told you about that retail customer that's trying to do video streaming in their stores. This is the server they were looking at. This server is a new server, we didn't have a Gen10 or a Gen10+ version of the server. This is a new server and it's optimized for Edge use cases. It's a rack-based server and it's very, very flexible. So different types of storage, different types of GPU configurations, really designed to take care of many, many use cases at the Edge and doing more at the Edge than ever before. So I mentioned video streaming, but also VDI and analytics at the Edge. The next two servers are some of our most popular servers, our HPE ProLiant DL360 Gen11, and that's our density-optimized server for enterprise. And that is getting an upgrade across the board as well, big, big improvements in terms of performance and expansion. And for those customers that need even more expansion when it comes to, let's say, storage or accelerators, then the DL 380 Gen11 is a server that's new as well. And that's really for folks that need more expandability than the DL360, which is a 1U server. And then lastly, our ML350, which is a tower server. These tower servers are typically used at remote sites and branch offices, and this particular server holds a world record for energy efficiency for tower servers. So those are some of the servers we have today that we're announcing. I also want to talk a little bit about our Cray portfolio. So we're announcing two new servers with our HPE Cray portfolio. And what's great about this is that these servers make supercomputing more accessible to more enterprise customers. These servers are going to be smaller, they're going to come in at lower price points, and deliver tremendous energy efficiency. So these are the Cray XD servers, and there's more servers to come, but these are the ones that we're announcing with this first iteration. >> Great stuff. I can talk about servers all day long, I love server innovation. I've been following it for many, many years, and you guys know.
Lisa, we'll bring you in. Servers have been powered by Intel Xeon, we've been talking a lot about the scalable processors. This is your 4th Gen, they're in Gen11 and you're at 4th Gen. Krista mentioned this generation's about Security Edge, which is essentially becoming like a data center model now, the Edges are exploding. What are some of the design principles that went into the 4th Gen this time around the scalable processor? Can you share the Intel role here? >> Sure. I love what Krista said about headroom. If there's anything we've learned in these past few years, it's that you can plan for today, and you can even plan for tomorrow, but your tomorrow might look a lot different than what you thought it was going to. So to meet these business challenges, as we think about the underlying processor that powers all that amazing server lineup that Krista just went through, we are really looking at delivering that increased performance, the power efficient compute and then strong security. And of course, attention to the overall operating cost of the customer environment. Intel's focused on a very workload-first approach to solving our customers' real problems. So this is the applications that they're running every day to drive their digital transformation, and we really like to focus our innovation, and leadership for those highest value, and also the highest growth workloads. Some of those that we've uniquely focused on in 4th Gen Xeon, our artificial intelligence, high performance computing, network, storage, and as well as the deployments, like you were mentioning, ranging from the cloud all the way out to the Edge. And those are all satisfied by 4th Gen Xeon scalable. So our strategy for architecting is based off of all of that. And in addition to doing things like adding core count, improving the platform, updating the memory and the IO, all those standard things that you do, we've invested deeply in delivering the industry's CPU with the most built-in accelerators. And I'll just give an example, in artificial intelligence with built-in AMX acceleration, plus the framework optimizations, customers can see a 10X performance improvement gen over gen, that's on both training and inference. So it further cements Xeon as the world's foundation for inference, and it now delivers performance equivalent of a modern GPU, but all within your CPU. The flexibility that, that opens up for customers is tremendous and it's so many new ways to utilize their infrastructure. And like Krista said, I just want to say that, that best-in-class security, and security solutions are an absolute requirement. We believe that starts at the hardware level, and we continue to invest in our security features with that full ecosystem support so that our customers, like HPE, can deliver that full stacked solution to really deliver on that promise. >> I love that scalable processor messaging too around the silicon and all those advanced features, the accelerators. AI's certainly seeing a lot of that in demand now. Krista, similar question to you on your end. How do you guys look at these, your core design principles around the ProLiant Gen11, and how that helps solve the challenges for your customers that are living in this hybrid world today? >> Yeah, so we see how fast things are changing and we kept that in mind when we decided to design this generation. We talked all already about distributed environments. 
We see the intensity of the requirements that are at the Edge, and that's part of what we're trying to address with the new platform that I mentioned. It's also part of what we're trying to address with our management, making sure that people can manage no matter where a server is and get a great experience. The other thing we're realizing when it comes to what's happening is customers are looking at how they operate. Many want to buy as a service and with HPE GreenLake, we see that becoming more and more popular. With HPE GreenLake, we can offer that to customers, which is really helpful, especially when they're trying to get new technology like this. Sometimes they don't have it in the budget. With something like HP GreenLake, there's no upfront costs so they can enjoy this technology without having to come up with a big capital outlay for it. So that's great. Another one is around, I liked what Lisa said about security starting at the hardware. And that's exactly, the foundation has to be secure, or you're starting at the wrong place. So that's also something that we feel like we've advanced this time around. This secure root of trust that we started in Gen10, we've extended that to additional partners, so we're excited about that as well. >> That's great, Krista. We're seeing and hearing a lot about customers challenges at the Edge. Lisa, I want to bring you back in on this one. What are the needs that you see at the Edge from an Intel perspective? How is Intel addressing the Edge? >> Yeah, thanks, John. You know, one of the best things about Xeon is that it can span workloads and environments all the way from the Edge back to the core data center all within the same software environment. Customers really love that portability. For the Edge, we have seen an explosion of use cases coming from all industries and I think Krista would say the same. Where we're focused on delivering is that performant-enough compute that can fit into a constrained environment, and those constraints can be physical space, they can be the thermal environment. The Network Edge has been a big focus for us. Not only adding features and integrating acceleration, but investing deeply in that software environment so that more and more critical applications can be ported to Xeon and HPE industry standard servers versus requiring expensive, proprietary systems that were quite frankly not designed for this explosion of use cases that we're seeing. Across a variety of Edge to cloud use cases, we have identified ways to provide step function improvements in both performance and that power efficiency. For example, in this generation, we're delivering an up to 2.9X average improvement in performance per watt versus not using accelerators, and up to 70 watt power savings per CPU opportunity with some unique power management features, and improve total cost of ownership, and just overall power- >> What's the closing thoughts? What should people take away from this announcement around scalable processors, 4th Gen Intel, and then Gen11 ProLiant? What's the walkaway? What's the main super thought here? >> So I can go first. I think the main thought is that, obviously, we have partnered with Intel for many, many years. We continue to partner this generation with years in the making. In fact, we've been working on this for years, so we're both very excited that it's finally here. 
But we're laser focused on making sure that customers get the most out of their workloads, the most out of their infrastructure, and that they can meet those challenges that people are throwing at 'em. I think IT is under more pressure than ever before and the demands are there. They're critical to the business success with digital transformation and our job is to make sure they have everything they need, and they could do and meet the business needs as they come at 'em. >> Lisa, your thoughts on this reflection point we're in right now? >> Well, I agree with everything that Krista said. It's just a really exciting time right now. There's a ton of challenges in front of us, but the opportunity to bring technology solutions to our customers' digital transformation is tremendous right now. I think I would also like our customers to take away that between the work that Intel and HPE have done together for generations, they have a community that they can trust. We are committed to delivering customer-led solutions that do solve these business transformation challenges that we know are in front of everyone, and we're pretty excited for this launch. >> Yeah, I'm super enthusiastic right now. I think you guys are on the right track. This title Compute Engineered for Hybrid World really kind of highlights the word, "Engineered." You're starting to see this distributed computing architecture take shape with the Edge. Cloud on-premise computing is everywhere. This is real relevant to your customers, and it's a great announcement. Thanks for taking the time and joining us today. >> Thank you. >> Yeah, thank you. >> This is the first episode of theCUBE's coverage of Compute Engineered For Your Hybrid World. Please continue to check out thecube.net, our site, for the future episodes where we'll discuss how to build high performance AI applications, transforming compute management experiences, and accelerating VDI at the Edge. Also, to learn more about the new HPE ProLiant servers with the 4th Gen Intel Xeon processors, you can go to hpe.com. And check out the URL below, click on it. I'm John Furrier at theCUBE. You're watching theCUBE, the leader in high tech, enterprise coverage. (bright music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Krista | PERSON | 0.99+ |
Lisa Spelman | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
John | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Krista Satterthwaite | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
November | DATE | 0.99+ |
10X | QUANTITY | 0.99+ |
DL360 | COMMERCIAL_ITEM | 0.99+ |
First | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
DL 380 Gen11 | COMMERCIAL_ITEM | 0.99+ |
ProLiant Gen11 | COMMERCIAL_ITEM | 0.99+ |
both | QUANTITY | 0.98+ |
first iteration | QUANTITY | 0.98+ |
ML350 | COMMERCIAL_ITEM | 0.98+ |
first | QUANTITY | 0.98+ |
Xeon | COMMERCIAL_ITEM | 0.98+ |
theCUBE | ORGANIZATION | 0.97+ |
ProLiant Gen11s | COMMERCIAL_ITEM | 0.97+ |
first episode | QUANTITY | 0.97+ |
HPE Mainstream Compute | ORGANIZATION | 0.97+ |
thecube.net | OTHER | 0.97+ |
two servers | QUANTITY | 0.97+ |
4th Gen | QUANTITY | 0.96+ |
Edge | ORGANIZATION | 0.96+ |
Intel Xeon Products | ORGANIZATION | 0.96+ |
hpe.com | OTHER | 0.95+ |
one | QUANTITY | 0.95+ |
4th Gen. | QUANTITY | 0.95+ |
HPE GreenLake | ORGANIZATION | 0.93+ |
Gen10 | COMMERCIAL_ITEM | 0.93+ |
two new servers | QUANTITY | 0.92+ |
up to 70 watt | QUANTITY | 0.92+ |
one thing | QUANTITY | 0.91+ |
HPE ProLiant Gen11 | COMMERCIAL_ITEM | 0.91+ |
one council | QUANTITY | 0.91+ |
HPE ProLiant NextGen | COMMERCIAL_ITEM | 0.89+ |
first one | QUANTITY | 0.87+ |
Cray | ORGANIZATION | 0.86+ |
Gen11 ProLiant | COMMERCIAL_ITEM | 0.85+ |
Edge | TITLE | 0.83+ |
three different design principles | QUANTITY | 0.83+ |
HP GreenLake | ORGANIZATION | 0.82+ |
Number two | QUANTITY | 0.81+ |
HPE Compute Engineered for your Hybrid World - Transform Your Compute Management Experience
>> Welcome everyone to "theCUBE's" coverage of "Compute engineered for your hybrid world," sponsored by HP and Intel. Today we're going to going to discuss how to transform your compute management experience with the new 4th Gen Intel Xeon scalable processors. Hello, I'm John Furrier, host of "theCUBE," and my guests today are Chinmay Ashok, director cloud engineering at Intel, and Koichiro Nakajima, principal product manager, compute at cloud services with HPE. Gentlemen, thanks for coming on this segment, "Transform your compute management experience." >> Thanks for having us. >> Great topic. A lot of people want to see that system management one pane of glass and want to manage everything. This is a really important topic and they started getting into distributed computing and cloud and hybrid. This is a major discussion point. What are some of the major trends you guys see in the system management space? >> Yeah, so system management is trying to help user manage their IT infrastructure effectively and efficiently. So, the system management is evolving along with the IT infrastructures which is trying to accommodate market trends. We have been observing the continuous trends like digital transformation, edge computing, and exponential data growth never stops. AI, machine learning, deep learning, cloud native applications, hybrid cloud, multi-cloud strategies. There's a lot of things going on. Also, COVID-19 pandemic has changed the way we live and work. These are all the things that, given a profound implication to the system design architectures that system management has to consider. Also, security has always been the very important topic, but it has become more important than ever before. Some of the research is saying that the cyber criminals becoming like a $10.5 trillion per year. We all do our efforts on the solution provider size and on the user side, but still cyber criminals are growing 15% year by year. So, with all this kind of thing in the mind, system management really have to evolve in a way to help user efficiently and effectively manage their more and more distributed IT infrastructure. >> Chinmay, what's your thoughts on the major trends in system management space? >> Thanks, John, Yeah, to add to what Koichiro said, I think especially with the view of the system or the service provider, as he was saying, is changing, is evolving over the last few years, especially with the advent of the cloud and the different types of cloud usage models like platform as a service, on-premises, of course, infrastructure is a service, but the traditional software as a service implies that the service provider needs a different view of the system and the context in which we need the CPU vendor, or the platform vendor needs to provide that, is changing. That includes both in-band telemetry being able to monitor what is going on on the system through traditional in-band methods, but also the advent of the out-of-band methods to do this without end user disruption is a key element to the enhancements that our customers are expecting from us as we deploy CPUs and platforms. >> That's great. You know what I love about this discussion is we had multiple generation enhancements, 4th Gen Xeon, 11th Gen ProLiant, iLOs going to come up with got another generation increase on that one. We'll get into that on the next segment, but while we're here, what is iLO? Can you guys define what that is and why it's important? >> Yeah, great question. 
Real quick, so HPE Integrated Lights-Out is the formal name of the product and we tend to call it iLO for short. iLO is HPE's BMC. If you're familiar with this topic it's a Baseboard Management Controller. If not, this is a small computer on the server motherboard and it runs independently from the host CPU and the operating system. So, that's why it's named Lights-Out. Now what can you do with iLO? iLO really helps a user manage, use, and monitor the server remotely and securely throughout its life, from deployment to retirement. So, you can really do things like, you know, turning the server power on and off, installing an operating system, accessing it remotely, updating firmware, and when you decide to retire a server, you can completely wipe the data off that server so it's ready to be disposed of. iLO is really the best solution to manage a single server, but when you try to manage hundreds or thousands of servers in a larger scale environment, then managing servers one by one through iLO is not practical. So, HPE has two options. One of them is HPE OneView. OneView is the best solution to manage a very complex, on-prem IT infrastructure that involves thousands of servers as well as the other IT elements like Fibre Channel storage through the storage area network and so on. Another option that we have is HPE GreenLake for Compute Ops Management. This is our latest, greatest product that we recently launched and this is the best solution to manage a distributed IT environment with multiple edge points or multiple clouds. I was recently involved in a customer conversation about Compute Ops Management with a global hotel chain with 9,000 locations worldwide, and each location only has, like, a couple of servers to manage, but combined it's, you know, 27,000 servers over the 9,000 locations. We didn't really have a great answer for that kind of environment before, but now HPE has GreenLake for Compute Ops Management to deal with, you know, exactly that kind of environment. >> Awesome. We're going to do a big dive on iLO in the next segment, but Chinmay, before we end this segment, what is PMT? >> Sure, so yeah, with the introduction of the 4th Gen Intel Xeon scalable processor, we of course introduce many new technologies like PCI Gen 5, DDR5, et cetera. And these are very key to general system provision, if you will. But with all of these new technologies come new sources of telemetry that the service provider now has to manage, right? So, PMT is a technology called Platform Monitoring Technology. That is a capability that we introduced with the Intel 4th Gen Xeon scalable processor that allows the service provider to monitor all of these sources of telemetry within the system, within the system on chip, the CPU SoC, in all of these contexts that we talked about, like the hybrid cloud and cloud infrastructure as a service or platform as a service, both in their in-band traditional telemetry collection models and also out-of-band collection models such as the ones that Koichiro was talking about through the BMC, et cetera. So, this is a key enhancement that we believe takes the Intel product line closer to what the service providers require for managing their end user experience. >> Awesome, well thanks so much for spending the time in this segment. We're going to take a quick break, we're going to come back and we're going to discuss more what's new with Gen 11 and iLO 6.
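For a concrete picture of the kind of per-server access Koichiro describes, here is a minimal sketch of reading a server's model, power state, and health over the DMTF Redfish REST API that BMCs such as iLO expose. The hostname and credentials are placeholders, and this is a generic Redfish illustration under those assumptions, not HPE's documented tooling.

```python
# Minimal sketch: query basic system state from a Redfish-capable BMC.
# The address and credentials below are placeholders for illustration.
import requests

BMC_URL = "https://ilo.example.internal"   # placeholder iLO address
AUTH = ("admin", "password")               # placeholder credentials

# /redfish/v1/Systems/1/ is the conventional resource for the first
# (and usually only) system behind a server's BMC. TLS verification is
# disabled here only because self-signed certificates are common on BMCs.
resp = requests.get(f"{BMC_URL}/redfish/v1/Systems/1/",
                    auth=AUTH, verify=False, timeout=10)
resp.raise_for_status()
system = resp.json()

print("Model:     ", system.get("Model"))
print("PowerState:", system.get("PowerState"))
print("Health:    ", system.get("Status", {}).get("Health"))
```

Fleet-level tools such as OneView and Compute Ops Management sit above this kind of per-server interface, which is why managing servers one by one stops being necessary at scale.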
You're watching "theCUBE," the leader in high tech enterprise coverage. We'll be right back. (light music) Welcome back. We're continuing the coverage of "theCUBE's" coverage of compute engineered for your hybrid world. I'm John Furrier, I'm joined by Chinmay Ashok who's from Intel and Koichiro Nakajima with HPE. We're going to dive deeper into transforming your compute management experience with 4th Gen Intel Xeon scalable processors and HP ProLiant Gen11. Okay, let's get into it. We want to talk about Gen11. What's new with Gen11? What's new with iLO 6? So, NexGen increases in performance capabilities. What's new, what's new at Gen11 and iLO 6 let's go. >> Yeah, iLO 6 accommodates a lot of new features and the latest, greatest technology advancements like a new generation CPUs, DDR5 memories, PCI Gen 5, GPGPUs, SmartNICs. There's a lot of great feature functions. So, it's an iLO, make sure that supports all the use cases that associate with those latest, greatest advancements. For instance, like you know, some of the higher thermal design point CPU SKUs that requires a liquid cooling. We all support those kind of things. And also iLO6 accommodates latest, greatest industry standard system management, standard specifications, for instance, like an DMTF, TLDN, DMTF, RDE, SPDM. And what are these means for the iLO6 and Gen11? iLO6 really offers the greatest manageability and monitoring user experiences as well as the greatest automation through the refresh APIs. >> Chinmay, what's your thoughts on the Gen11 and iLO6? You're at Intel, you're enabling all this innovation. >> Yeah. >> What's the new features? >> Yeah, thanks John. Yeah, so yeah, to add to what Koichiro said, I think with the introduction of Gen11, 4th Gen Intel Xeon scalable processor, we have all of these rich new feature sets, right? With the DDR5, PCI Gen5, liquid cooling, et cetera. And then all of these new accelerators for various specific workloads that customers can use using this processor. So, as we were discussing previously, what this brings is all of these different sources of telemetry, right? So, our sources of data that the system provider or the service provider then needs to utilize to manage the compute experience for their end user. And so, what's new from that perspective is Intel realized that these new different sources of telemetry and the new mechanisms by which the service provider has to extract this telemetry required us to fundamentally think about how we provide the telemetry experience to the service provider. And that meant extending our existing best-in-class, in-band telemetry capabilities that we have today already built into in market Intel processors. But now, extending that with the introduction of the PMT, the Platform Monitoring Technology, that allows us to expand on that in-band telemetry, but also include all of these new sources of telemetry data through all of these new accelerators through the new features like PCI Gen5, DDR5, et cetera, but also bring in that out-of-band telemetry management experience. And so, I think that's a key innovation here, helping prepare for the world that the cloud is enabling. >> It's interesting, you know, Koichiro you had mentioned on the previous segment, COVID-19, we all know the impact of how that changed, how IT at the managed, you know, all of a sudden remote work, right? So, as you have cloud go to hybrid, now we got the edge coming, we're talking about a distributed computing environment, we got telemetry, you got management. 
This is a huge shift and it's happening super fast. What do Gen11 and iLO6 mean for architects as they start to look at going beyond hybrid and going to the edge? You're going to need all this telemetry. What's the impact? Can you guys just riff and share your thoughts on what this means for that kind of next-gen cloud that we see coming on, which is essentially distributed computing? >> Yeah, that's a great topic to discuss. So, there's a couple of things. Really, to manage those remote environments and distributed IT environments, system management has to reach across remote locations, across internet connections and connectivity. So, the system management protocols, for instance, like traditionally IPMI or SNMP, those things have got to be modernized into more RESTful APIs that are integration-friendly to the modern tool chains. So, we're investing in those, like the Redfish APIs, and also, again, security becomes of paramount importance, because those are exposed to bad actors who snoop and try to do bad things like man-in-the-middle attacks, things like that. So we really, you know, focus on the security side on those two aspects with iLO6 and Gen11. One other thing is we continue our industry-unique silicon root of trust technology. That one is for the platform, making sure that only an authentic and legitimate image of the platform firmware can run on an HPE server. And when validating the firmware images, the root of trust resides in the silicon. So, no one can change it. Even if bad actors try to change the root of trust, it's bound in the chip, so you cannot really change it. And that's why, even if bad actors try to compromise, you know, install a compromised firmware image on HPE servers, they cannot do that. Another thing is we're making a lot of enhancements to securely onboard an HPE server into your network or onto a service like GreenLake. To give you a couple of examples, for instance, IDevID, Initial Device ID. That one conforms to IEEE 802.1AR and it's immutable, so no one can change it. And by using the IDevID, you can really verify you are not onboarding a rogue server or an unknown server, but the server that you want to onboard, right? It's absolutely important. Another thing is the platform certificate. The platform certificate really is a measurement of the configuration. So again, this is a great feature that makes sure that when you receive a server from the factory, no one touched the server during transportation and altered the configuration.
But again, going back to the point Koichiro was making where if you go to the edge, you go to the cloud and then have the edge connect to the cloud you have independent networks for system management, independent networks for user data, et cetera. So, you need the ability to create that isolation. All of this telemetry data that needs to be isolated from the user, but used by the service provider to provide the best experience. All of these are built on the foundations of technologies such as TDX, PMT, iLO6, et cetera. >> Great stuff, gentlemen. Well, we have a lot more to discuss on our next segment. We're going to take a break here before wrapping up. We'll be right back with more. You're watching "theCUBE," the leader in high tech coverage. (light music) Okay, welcome back here, on "theCUBE's" coverage of "Compute engineered for your hybrid world." I'm John Furrier, host of the Cube. We're wrapping up our discussion here on transforming compute management experience with 4th Gen Intel Xeon scalable processors and obviously HPE ProLiant Gen11. Gentlemen, welcome back. Let's get into the takeaways for this discussion. Obviously, systems management has been around for a while, but transforming that experience on the management side is super important as the environment just radically changing for the better. What are some of the key takeaways for the audience watching here that they should put into their kind of tickler file and/or put on their to-do list to keep an eye on? >> Yeah, so Gen11 and iLO6 offers the latest, greatest technologies with new generation CPUs, DDR5, PCI Gen5, and so on and on. There's a lot of things in there and also iLO6 is the most mature version of iLO and it offers the best manageability and security. On top of iLO, HP offers the best of read management options like HP OneView and Compute Ops Management. It's really a lot of the things that help user achieve a lot of the things regardless of the use case like edge computing, or distributed IT, or hybrid strategy and so on and on. And you could also have a great system management that you can unleash all the full potential of latest, greatest technology. >> Chinmay, what's your thoughts on the key takeaways? Obviously as the world's changing, more gen chips are coming out, specialized workloads, performance. I mean, I've never met anyone that says they want to run on slower infrastructure. I mean, come on, performance matters. >> Yes, no, it definitely, I think one of the key things I would say is yes, with Gen11 Intel for gen scalable we're introducing all of these technologies, but I think one of the key things that has grown over the last few years is the view of the system provider, the abstraction that's needed, right? Like the end user today is migrating a lot of what they're traditionally used to from a physical compute perspective to the cloud. Everything goes to the cloud and when that happens there's a lot of just the experience that the end user sees, but everything underneath is abstracted away and then managed by the system provider, right? So we at Intel, and of course, our partners at HP, we have spent a lot of time figuring out what are the best sets of features that provide that best system management experience that allow for that abstraction to work seamlessly without the end user noticing? And I think from that perspective, the 4th Gen Intel Xeon scalable processors is so far the best Intel product that we have introduced that is prepared for that type of abstraction. 
>> So, I'm going to put my customer hat on for a second. I'll ask you both. What's in it for me? I'm the customer. What's in it for me? What's the benefit to me? What does this all mean to me? What's my win? >> Yeah, I can start there. I think the key thing here is that when we create capabilities that allow you to build the best cloud, at the end of the day that efficiency, that performance, all of that translates to a better experience for the consumer, right? So, as the service provider is able to have all of these myriad capabilities to use and choose from and then manage the system experience, what that implies is that the end user sees a seamless experience as they go from one application to another as they go about their daily lives. >> Koichiro, what's your thoughts on what's in it for me? You guys got a lot of engineering going on in Gen11, every gen increase always is a step function and increase of value. What's in it for me? What do I care? What's in it for me? I'm the customer. >> Alright. Yeah, so I fully agree with Chinmay's point. You know, he lays out the all the good points, right? Again, you know what the Gen11 and iLO6 offer all the latest, greatest features and all the technology and advancements are packed in the Gen11 platform and iLO6 unleash all full potentials for those benefits. And things are really dynamic in today's world and IT system also going to be agile and the system management get really far, to the point like we never imagine what the system management can do in the past. For instance, the managing on-prem devices across multiple locations from a single point, like a single pane of glass on the cloud management system, management on the cloud, that's what really the compute office management that HP offers. It's all new and it's really help customers unleash full potential of the gear and their investment and provide the best TCO and ROIs, right? I'm very excited that all the things that all the teams have worked for the multiple years have finally come to their life and to the public. And I can't really wait to see our customers start putting their hands on and enjoy the benefit of the latest, greatest offerings. >> Yeah, 4th Gen Xeon, Gen11 ProLiant, I mean, all the things coming together, accelerators, more cores. You got data, you got compute, and you got now this idea of security, I mean, you got hitting all the points, data and security big features here, right? Data being computed in a way with Gen4 and Gen11. This is like the big theme, data security, kind of the the big part of the core here in this announcement, in this relationship. >> Absolutely. I believe, I think the key things as these new generations of processors enable is new types of compute which imply is more types of data, more types of and hence, with more types of data, more types of compute. You have more types of system management more differentiation that the service provider has to then deal with, the disaggregation that they have to deal with. So yes, absolutely this is, I think exciting times for end users, but also for new frontiers for service providers to go tackle. And we believe that the features that we're introducing with this CPU and this platform will enable them to do so. >> Well Chinmay thank you so much for sharing your Intel perspective, Koichiro with HPE. Congratulations on all that hard work and engineering coming together. Bearing fruit, as you said, Koichiro, this is an exciting time. And again, keep moving the needle. 
This is an important inflection point in the industry and now more than ever this compute is needed and this kind of specialization's all awesome. So, congratulations and participating in the "Transforming your compute management experience" segment. >> Thank you very much. >> Okay. I'm John Furrier with "theCUBE." You're watching the "Compute Engineered for your Hybrid World Series" sponsored by HP and Intel. Thanks for watching. (light music)
SUMMARY :
how to transform your in the system management space? that the cyber criminals becoming of the out-of-band methods to do this We'll get into that on the next segment, of the product and we tend to on iLO in the next segment, of telemetry that the service provider now for spending the time in this segment. and the latest, greatest on the Gen11 and iLO6? that the system provider at the managed, you know, and legitimate image of the move the compute to the data, by the service provider to I'm John Furrier, host of the Cube. a lot of the things Obviously as the world's experience that the end user sees, What's the benefit to me? that the end user sees I'm the customer. that all the things that kind of the the big part of the core here that the service provider And again, keep moving the needle. for your Hybrid World Series"
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Koichiro | PERSON | 0.99+ |
Koichiro Nakajima | PERSON | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Chinmay Ashok | PERSON | 0.99+ |
hundreds | QUANTITY | 0.99+ |
iLO 6 | COMMERCIAL_ITEM | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
27,000 servers | QUANTITY | 0.99+ |
9,000 locations | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
each | QUANTITY | 0.99+ |
COVID-19 | OTHER | 0.99+ |
two options | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
iLO6 | COMMERCIAL_ITEM | 0.99+ |
Chinmay | PERSON | 0.99+ |
BMC | ORGANIZATION | 0.98+ |
two aspects | QUANTITY | 0.98+ |
COVID-19 pandemic | EVENT | 0.97+ |
iLO | TITLE | 0.97+ |
single point | QUANTITY | 0.96+ |
IEEE 802.1AR | OTHER | 0.96+ |
Gen11 | COMMERCIAL_ITEM | 0.96+ |
PCI Gen 5 | OTHER | 0.96+ |
one | QUANTITY | 0.96+ |
Today | DATE | 0.96+ |
4th Generation Xeon | COMMERCIAL_ITEM | 0.95+ |
today | DATE | 0.95+ |
PCI Gen5 | OTHER | 0.95+ |
single server | QUANTITY | 0.94+ |
HPE ProLiant Gen11 | COMMERCIAL_ITEM | 0.94+ |
Gen11 ProLiant | COMMERCIAL_ITEM | 0.93+ |
4th Gen Xeon | COMMERCIAL_ITEM | 0.91+ |
NexGen | COMMERCIAL_ITEM | 0.91+ |
$10.5 trillion per year | QUANTITY | 0.9+ |
Xeon | COMMERCIAL_ITEM | 0.89+ |
HPE Compute Engineered for your Hybrid World-Containers to Deploy Higher Performance AI Applications
>> Hello, everyone. Welcome to theCUBE's coverage of "Compute Engineered for your Hybrid World," sponsored by HPE and Intel. Today we're going to discuss the new 4th Gen Intel Xeon Scalable process impact on containers and AI. I'm John Furrier, your host of theCUBE, and I'm joined by three experts to guide us along. We have Jordan Plum, Senior Director of AI and products for Intel, Bradley Sweeney, Big Data and AI Product Manager, Mainstream Compute Workloads at HPE, and Gary Wang, Containers Product Manager, Mainstream Compute Workloads at HPE. Welcome to the program gentlemen. Thanks for coming on. >> Thanks John. >> Thank you for having us. >> This segment is going to be talking about containers to deploy high performance AI applications. This is a really important area right now. We're seeing a lot more AI deployed, kind of next gen AI coming. How is HPE supporting and testing and delivering containers for AI? >> Yeah, so what we're doing from HPE's perspective is we're taking these container platforms, combining with the next generation Intel servers to fully validate the deployment of the containers. So what we're doing is we're publishing the reference architectures. We're creating these automation scripts, and also creating a monitoring and security strategy for these container platforms. So for customers to easily deploy these Kubernete clusters and to easily secure their community environments. >> Gary, give us a quick overview of the new Proliant DL 360 and 380 Gen 11 servers. >> Yeah, the load, for example, for container platforms what we're seeing mostly is the DL 360 and DL 380 for matching really well for container use cases, especially for AI. The DL 360, with the expended now the DDR five memory and the new PCI five slots really, really helps the speeds to deploy these container environments and also to grow the data that's required to store it within these container environments. So for example, like the DL 380 if you want to deploy a data fabric whether it's the Ezmeral data fabric or different vendors data fabric software you can do so with the DL 360 and DL 380 with the new Intel Xeon processors. >> How does HP help customers with Kubernetes deployments? >> Yeah, like I mentioned earlier so we do a full validation to ensure the container deployment is easy and it's fast. So we create these automation scripts and then we publish them on GitHub for customers to use and to reference. So they can take that and then they can adjust as they need to. But following the deployment guide that we provide will make the, deploy the community deployment much easier, much faster. So we also have demo videos that's also published and then for reference architecture document that's published to guide the customer step by step through the process. >> Great stuff. Thanks everyone. We'll be going to take a quick break here and come back. We're going to do a deep dive on the fourth gen Intel Xeon scalable process and the impact on AI and containers. You're watching theCUBE, the leader in tech coverage. We'll be right back. (intense music) Hey, welcome back to theCUBE's continuing coverage of "Compute Engineered for your Hybrid World" series. I'm John Furrier with the Cube, joined by Jordan Plum with Intel, Bradley Sweeney with HPE, and Gary Wang from HPE. We're going to do a drill down and do a deeper dive into the AI containers with the fourth gen Intel Xeon scalable processors we appreciate your time coming in. Jordan, great to see you. 
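The scripted Kubernetes deployments Gary describes end the same way regardless of how the cluster was brought up: with a health check against the new cluster. The sketch below is a generic verification pass using the Kubernetes Python client, assuming a valid kubeconfig produced by whatever automation was used; it is illustrative only and is not taken from HPE's published GitHub scripts.

```python
# Hedged sketch: confirm a freshly deployed cluster is responding.
# Assumes the `kubernetes` Python client is installed and ~/.kube/config
# points at the cluster created by the deployment automation.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Report node readiness.
for node in core.list_node().items:
    ready = next((c.status for c in node.status.conditions
                  if c.type == "Ready"), "Unknown")
    print(f"node {node.metadata.name}: Ready={ready}")

# Count pods across all namespaces as a quick sanity check.
pods = core.list_pod_for_all_namespaces()
print(f"{len(pods.items)} pods across all namespaces")
```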
I got to ask you right out of the gate, what is the view right now in terms of Intel's approach to containers for AI? It's hot right now. AI is booming. You're seeing kind of next gen use cases. What's your approach to containers relative to AI? >> Thanks John and thanks for the question. With the fourth generation Xeon scalable processor launch we have tested and validated this platform with over 400 deep learning and machine learning models and workloads. These models and workloads are publicly available in the framework repositories and they can be downloaded by anybody. Yet customers are not only looking for model validation they're looking for model performance and performance is usually a combination of a given throughput at a target latency. And to do that in the data center all the way to the factory floor, this is not always delivered from these generic proxy models that are publicly available in the industry. >> You know, performance is critical. We're seeing more and more developers saying, "Hey, I want to go faster on a better platform, faster all the time." No one wants to run slower stuff, that's for sure. Can you talk more about the different container approaches Intel is pursuing? >> Sure. First our approach is to meet the customers where they are and help them build and deploy AI everywhere. Some customers just want to focus on deployment they have more mature use cases, and they just want to download a model that works that's high performing and run. Others are really focused more on development and innovation. They want to build and train models from scratch or at least highly customize them. Therefore we have several container approaches to accelerate the customer's time to solution and help them meet their business SLA along their AI journey. >> So what developers can just download these containers and just go? >> Yeah, so let me talk about the different kinds of containers we have. We start off with pre-trained containers. We'll have about 55 or more of these containers where the model is actually pre-trained, highly performant, some are optimized for low latency, others are optimized for throughput and the customers can just download these from Intel's website or from HPE and they can just go into production right away. >> That's great. A lot of choice. People can just get jump right in. That's awesome. Good, good choice for developers. They want more faster velocity. We know that. What else does Intel provide? Can you share some thoughts there? What you guys else provide developers? >> Yeah, so we talked about how hey some are just focused on deployment and they maybe they have more mature use cases. Other customers really want to do some more customization or optimization. So we have another class of containers called development containers and this includes not just the kind of a model itself but it's integrated with the framework and some other capabilities and techniques like model serving. So now that customers can download just not only the model but an entire AI stack and they can be sort of do some optimizations but they can also be sure that Intel has optimized that specific stack on top of the HPE servers. >> So it sounds simple to just get started using the DL model and containers. Is that it? Where, what else are customers looking for? What can you take a little bit deeper? >> Yeah, not quite. Well, while the customer customer's ability to reproduce performance on their site that HPE and Intel have measured in our own labs is fantastic. 
That's not actually what the customer is only trying to do. They're actually building very complex end-to-end AI pipelines, okay? And a lot of data scientists are really good at building models, really good at building algorithms but they're less experienced in building end-to-end pipelines especially 'cause the number of use cases end-to-end are kind of infinite. So we are building end-to-end pipeline containers for use cases like media analytics and sentiment analysis, anomaly detection. Therefore a customer can download these end-to-end containers, right? They can either use them as a reference, just like, see how we built them and maybe they have some changes in their own data center where they like to use different tools, but they can just see, "Okay this is what's possible with an end-to-end container on top of an HPE server." And other cases they could actually, if the overlap in the use case is pretty close, they can just take our containers and go directly into production. So this provides developers, all three types of containers that I discussed provide developers an easy starting point to get them up and running quickly and make them productive. And that's a really important point. You talked a lot about performance, John. But really when we talk to data scientists what they really want to be is productive, right? They're under pressure to change the business to transform the business and containers is a great way to get started fast >> People take product productivity, you know, seriously now with developer productivity is the hottest trend obviously they want performance. Totally nailed it. Where can customers get these containers? >> Right. Great, thank you John. Our pre-trained model containers, our developmental containers, and our end-to-end containers are available at intel.com at the developer catalog. But we'd also post these on many third party marketplaces that other people like to pull containers from. And they're frequently updated. >> Love the developer productivity angle. Great stuff. We've still got more to discuss with Jordan, Bradley, and Gary. We're going to take a short break here. You're watching theCUBE, the leader in high tech coverage. We'll be right back. (intense music) Welcome back to theCUBE's coverage of "Compute Engineered for your Hybrid World." I'm John Furrier with theCUBE and we'll be discussing and wrapping up our discussion on containers to deploy high performance AI. This is a great segment on really a lot of demand for AI and the applications involved. And we got the fourth gen Intel Xeon scalable processors with HP Gen 11 servers. Bradley, what is the top AI use case that Gen 11 HP Proliant servers are optimized for? >> Yeah, thanks John. I would have to say intelligent video analytics. It's a use case that's supplied across industries and verticals. For example, a smart hospital solution that we conducted with Nvidia and Artisight in our previous customer success we've seen 5% more hospital procedures, a 16 times return on investment using operating room coordination. With that IVA, so with the Gen 11 DL 380 that we provide using the the Intel four gen Xeon processors it can really support workloads at scale. Whether that is a smart hospital solution whether that's manufacturing at the edge security camera integration, we can do it all with Intel. 
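As a rough illustration of the "download and run" flow Jordan describes for the pre-trained model containers, the sketch below pulls and starts a container with Docker. The image name, tag, and port are hypothetical placeholders; the real names come from Intel's developer catalog or whichever marketplace a customer pulls from.

```python
# Illustrative only: pull and run a pre-trained inference container.
# The image name below is a hypothetical placeholder, not a real catalog entry.
import subprocess

IMAGE = "intel/example-pretrained-inference:latest"  # placeholder image name

# Pull the container image from the registry.
subprocess.run(["docker", "pull", IMAGE], check=True)

# Run it detached, exposing a (hypothetical) serving port so a client can send requests.
subprocess.run(
    ["docker", "run", "--rm", "-d", "-p", "8080:8080", "--name", "demo-inference", IMAGE],
    check=True,
)
print("Container started; point your client at localhost:8080")
```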
>> You know what's really great about AI right now is you're starting to see people figure out kind of where the value is: it does a lot of the heavy lifting on setting things up to make humans more productive. This has clearly now kind of gone next level. You're seeing it all in the media now and all these new tools coming out. How does HPE make it easier for customers to manage their AI workloads? I imagine there's going to be a surge in demand. How are you guys making it easier to manage their AI workloads? >> Well, I would say the biggest way we do this is through GreenLake, which is our IT as a service model. So customers deploying AI workloads can get fully-managed services to optimize not only their operations but also their spending and the cost that they're putting towards it. In addition to that we have our Gen 11 ProLiant servers equipped with iLO 6 technology. What this does is allows customers to securely manage their complete server environment from anywhere in the world remotely. >> Any last thoughts or message on the overall fourth gen Intel Xeon based ProLiant Gen 11 servers? How they will improve workload performance? >> You know, with this generation, obviously the performance is only getting ramped up as the needs and requirements for customers grow. We partner with Intel to support that. >> Jordan, gimme the last word on the containers' effect on AI applications. Your thoughts as we close out. >> Yeah, great. I think it's important to remember that containers themselves don't deliver performance, right? The AI stack is a very complex set of software that's compiled together, and what we're doing together is to make it easier for customers to get access to that software, to make sure it all works well together and that it can be easily installed and run on sort of a cloud native infrastructure that's hosted by HPE ProLiant servers. Hence the title of this talk, How to use Containers to Deploy High Performance AI Applications. Thank you. >> Gentlemen, thank you for your time on the Compute Engineered for your Hybrid World series sponsored by HPE and Intel. Again, I love this segment, Containers to Deploy Higher Performance AI Applications. This is a great topic. Thanks for your time. >> Thank you. >> Thanks John. >> Okay, I'm John. We'll be back with more coverage. See you soon. (soft music)
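Bradley's point about iLO 6 letting customers manage their servers remotely is typically exercised through the DMTF Redfish REST API that iLO exposes. The host name and credentials below are placeholders, and exact resource paths can vary by firmware version, so treat this as a hedged sketch rather than a definitive reference.

```python
# Hedged sketch: query basic system health from an iLO-class BMC over Redfish.
# Host, credentials, and resource path are placeholders; verify paths against
# your firmware's Redfish schema before relying on them.
import requests

ILO_HOST = "https://ilo.example.internal"      # placeholder BMC address
AUTH = ("admin", "change-me")                  # placeholder credentials

resp = requests.get(
    f"{ILO_HOST}/redfish/v1/Systems/1",        # common Redfish system resource
    auth=AUTH,
    verify=False,                              # lab-only: skip TLS verification
    timeout=10,
)
resp.raise_for_status()
system = resp.json()
print("Model: ", system.get("Model"))
print("Power: ", system.get("PowerState"))
print("Health:", system.get("Status", {}).get("Health"))
```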
HPE Compute Engineered for your Hybrid World - Accelerate VDI at the Edge
>> Hello everyone. Welcome to theCUBE's coverage of Compute Engineered for your Hybrid World sponsored by HPE and Intel. Today we're going to dive into advanced performance of VDI with the fourth gen Intel Xeon Scalable processors. Hello, I'm John Furrier, the host of theCUBE. My guests today are Alan Chu, Director of Data Center Performance and Competition for Intel, as well as Denis Kondakov who's the VDI product manager at HPE, and also joining us is Cynthia Sustiva, CAD/CAM product manager at HPE. Thanks for coming on, really appreciate you guys taking the time. >> Thank you. >> So accelerating VDI to the Edge. That's the topic here today. Let's get into it. Dennis, tell us about the new HPE ProLiant DL320 Gen 11 server. >> Okay, absolutely. Hello everybody. So the HP ProLiant DL320 Gen 11 server is the new edge-centric, cost and density optimized compact form factor server. It enables you to modernize and power the next generation of workloads in diverse rack environments at the Edge, in an industry standard design with flexible scale for advanced graphics and compute. So it is a one unit, one processor rack-optimized server that can be deployed in the enterprise data center as well as at the remote office and at the edge. >> Cynthia, HPE has announced another server, the ProLiant ML350. What can you tell us about that? >> Yeah, so the HPE ProLiant ML350 Gen 11 server is a powerful tower solution for a wide range of workloads. It is ideal for remote office compute with NextGen performance and expandability, with two processors in a tower form factor. This enables the server to be used not only in the data center environment, but also in the open office space as a powerful workstation use case. >> Dennis mentioned both servers are empowered by the fourth gen Intel Xeon Scalable processors. Can you talk about the relationship between Intel and HPE to get this done? How do you guys come together, what's behind the scenes? Share as much as you can. >> Yeah, thanks a lot John. So without a doubt it takes a lot to put all this together, and I think the partnership that HPE and Intel bring together is a little bit of a critical point for us to be able to deliver to our customers. And I'm really thrilled to say that these leading Edge solutions that Dennis and Cynthia just talked about are built on the foundation of our fourth gen Xeon Scalable platform that's trying to meet a wide variety of deployments for today and into the future. So I think the key point of it is we're together trying to drive leading performance with built-in acceleration, and in order to deliver a lot of the business values to our customers, both HP and Intel look to scale, drive down costs and deliver new services. >> You got the fourth gen Xeon, you got the Gen 11 and multiple ProLiants, a lot of action going on. Again, I love when these next gens come out. Can each of you guys comment and share what are the use cases for each of the systems? Because I think what we're looking at here is the next level innovation. What are some of the use cases on the systems? >> Yeah, so for the ML350, in the modern world where more and more data are generated at the Edge, we need to deploy compute infrastructure where the data is generated. So smaller form factor servers will satisfy the requirements of SMB customers or remote and branch offices to deliver required performance and redundancy where needed. 
These types of locations can be lacking dedicated facilities with strict humidity, temperature and noise isolation control. The server, the ML350 Gen 11, can be used as a powerful workstation sitting under a desk in the office or open space, as well as a server for virtualized workloads. It is a productivity workhorse with the ability to scale and adapt to any environment. One of the use cases can be hosting a digital workplace for manufacturing CAD/CAM engineering or oil and gas industry customers. So this server can be used as a high end bare metal workstation for local end users, or it can host virtualized desktop solution environments for local and remote users. And to talk about the DL320 Gen 11, I will pass it on to Dennis. >> Okay. >> Sure. So when we are talking about edge locations, we are talking about very specific requirements. So we need to provide solution building blocks that will be power and performance efficient, secure, available for scaling up and down in smaller increments compared to the enterprise data center, and of course redundant. So the DL320 Gen 11 server is the perfect server to satisfy all of those requirements. So for example, SMB customers can build a VDI solution, for example starting with just two HP ProLiant DL320 Gen 11 servers, that will provide sufficient performance for a high density VDI solution and at the same time be redundant and enabled for scaling up as required. So for VDI use cases it can be used for high density general VDI without GPU acceleration, or for a high performance VDI with virtual GPUs. So thanks to the modern modular architecture that is used on the server, it can be tailored for GPU or high density storage deployment with a software defined compute and storage environment. And to provide greater details on the Intel view, I'm going to pass it to Alan. >> Thanks a lot Dennis, and I loved how you're both seeing the importance of how we scale and the applicability of the use cases of both the ML350 and DL320 solutions. So scalability is certainly a key tenet towards how we're delivering Intel's Xeon Scalable platform. It is called Xeon Scalable after all. And we know that deployments are happening in all different sorts of environments. And I think Cynthia you talked a little bit about kind of the environmental factors that go into how we're designing, and I think a lot of people think of a traditional data center with all the bells and whistles and cooling technology, where it sometimes might just be a dusty closet in the Edge. So we're designing fourth gen Xeon Scalable to kind of tackle all those different environments and keep that in mind. Our SKUs range from low to high power, general purpose to segment optimized. We're supporting long life use cases, so all of that goes into account in delivering value to our customers. A lot of the latency sensitive nature of these Edge deployments also benefit greatly from monolithic architectures. And with our latest CPUs we do maintain quite a bit of that with many of our SKUs, delivering higher frequencies along with those SKUs optimized for those specific workloads in networking. So in the end we're looking to drive scalability. We're looking to drive value in a lot of our end users' most important KPIs, whether it's latency, throughput or efficiency, and 4th gen Xeon Scalable is looking to deliver that with up to 60 cores, the most built-in accelerators of any CPUs in the market, and really the true technology transitions of the platform with DDR5, PCIe Gen 5 and CXL. 
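One practical way to see whether the built-in acceleration Alan mentions is exposed on a given Linux host is to inspect the CPU flags the kernel reports. The flag names below (for example the AMX tile flags) are the ones commonly reported by recent kernels; exact availability depends on the CPU SKU, BIOS settings, and kernel version, so this is a rough check rather than an authoritative capability test.

```python
# Rough check of accelerator-related CPU flags on a Linux host.
# Flag names vary with kernel version and SKU; absence here does not prove
# the hardware lacks the feature, only that the kernel did not report it.
FLAGS_OF_INTEREST = ["amx_tile", "amx_bf16", "amx_int8", "avx512f", "avx512_vnni"]

with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

# The flags line is repeated per logical CPU; one copy is enough.
flags_line = next(line for line in cpuinfo.splitlines() if line.startswith("flags"))
flags = set(flags_line.split(":", 1)[1].split())

for flag in FLAGS_OF_INTEREST:
    print(f"{flag:12s} {'present' if flag in flags else 'not reported'}")
```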
>> Love the scalability story, love the performance. We're going to take a break. Thanks Cynthia, Dennis. Now we're going to come back on our next segment after a quick break to discuss the performance and the benefits of the fourth gen Intel Xeon Scalable. You're watching theCUBE, the leader in high tech coverage, be right back. Welcome back around. We're continuing theCUBE's coverage of Compute Engineered for your Hybrid World. I'm John Furrier, I'm joined by Alan Chu from Intel and Denis Kondakov and Cynthia Sustiva from HPE. Welcome back. Cynthia, let's start with you. Can you tell us the benefits of the fourth gen Intel Xeon Scalable processors for the HP Gen 11 servers? >> Yeah, so HP ProLiant Gen 11 servers support DDR5 memory, which delivers increased bandwidth and lower power consumption. There are 32 DDR5 DIMM slots with up to eight terabytes total on the ML350, and 16 DDR5 DIMM slots with up to two terabytes total on the DL320. So we deliver more memory at a greater bandwidth. Also, PCIe 5.0 delivers increased bandwidth and a greater number of lanes. So when we say increased number of lanes, we need to remember that each lane delivers more bandwidth than lanes of the previous generation. Also, a flexible storage configuration on the HPE DL320 Gen 11 makes it an ideal server for establishing a software defined compute and storage solution at the Edge. When we consider a server for VDI workloads, we need to keep the right balance between the number of cores and CPU frequency in order to deliver the desired environment density and uncompromised user experience. So the new server generation supports a greater number of single-wide and double-wide GPUs to deliver more graphics accelerated virtual desktops per server unit than ever before. The HPE ProLiant ML350 Gen 11 server supports up to four double-wide GPUs or up to eight single-wide GPUs. When designing GPU accelerated solutions, the number of GPUs available in the system, and consequently the number of vGPUs that can be provisioned for VMs, is the binding factor rather than CPU cores or memory. So HPE ProLiant Gen 11 servers with Intel fourth generation Xeon Scalable processors enable us to deliver more virtual desktops per server than ever before. And with that I will pass it on to Alan to provide more details on the new gen CPU performance. >> Thanks Cynthia. So you brought up I think a really great point earlier about the importance of achieving the right balance. So between the both of us, Intel and HPE, I'm sure we've heard countless feedback about how we should be optimizing efficiency for our customers, and with fourth gen Xeon Scalable in HP ProLiant Gen 11 servers I think we achieved just that with our built-in accelerators. So built-in acceleration delivers not only revolutionary performance, but enables significant offload from valuable core execution. That offload unlocks a lot of previously unrealized execution efficiency. So for example, with QuickAssist Technology built in, running NGINX TLS encryption to drive 65,000 connections per second, we can offload up to 47% of the cores to do other work. Accelerating AI inference with AMX, that's 10X higher performance, and we're now unlocking realtime inferencing. It's becoming an element in every workload from the data center to the Edge. And lastly, with faster and more efficient database performance with RocksDB, executing with the Intel In-Memory Analytics Accelerator we're able to deliver 2X the performance per watt of the prior gen. 
So I'll say it's that kind of offload that is really going to enable more and more virtualized desktops or users for any given deployment. >> Thanks everyone. We still got a lot more to discuss with Cynthia, Dennis and Alan, but we're going to take a break. Quick break before wrapping things up. You're watching theCUBE, the leader in tech coverage. We'll be right back. Okay, welcome back everyone to theCUBE's coverage of Compute Engineered for your Hybrid World. I'm John Furrier. We'll be wrapping up our discussion on advanced performance of VDI with the fourth gen Intel Xeon Scalable processors. Welcome back everyone. Dennis, we'll start with you. Let's continue our conversation and turn our attention to security. Obviously security is baked in from day zero as they say. What are some of the new security features or the key security features for the HP ProLiant Gen 11 servers? >> Sure, I would like to start with the balance, right? We were talking about performance, we were talking about density, but Alan mentioned the balance. So what about the security? Security is a really important aspect, especially if we're talking about solutions deployed at the edge. If security is not there, all other aspects of the environment become unimportant. And HP is uniquely positioned to deliver the best in class security solution on the market, starting with the trusted supply chain and factories and the silicon root of trust implemented from the factory. So the new iLO 6 supports added protection leveraging SPDM for component authorization, and it is not only enabled for the embedded server management, but also integrated with HPE GreenLake compute ops manager, which enables an environment for secure and optimized configuration, deployment and even lifecycle management, starting from a single server deployed on the Edge and all the way up to the full scale distributed data center. So it brings an uncompromised and trusted solution to customers, fully protected at all tiers: hardware, firmware, hypervisor, operating system, application and data. And the new Intel CPUs play an important role in the securing of the platform. So Alan- >> Yeah, thanks. So Intel, I think our zero trust strategy toward security is a really great and a really strong parallel to all the focus that HPE is also bringing to that segment and market. We have even invested in a lot of hardware enabled security technologies like SGX, designed to enhance data protection at rest, in motion and in use. SGX's application isolation is the most deployed, researched and battle tested confidential computing technology for the data center market, and with the smallest trust boundary of any solution in market. So as we've talked a little bit about virtualized use cases, a lot of virtualized applications also rely on encryption, whether bulk or specific ciphers. And this is again an area where we've seen the opportunity for offload to Intel's QuickAssist Technology to encrypt within a single data flow. I think Intel and HP together, we are really providing security at all facets of execution today. >> I love that Software Guard Extensions, SGX, also silicon root of trust. We've heard a lot about great stuff. Congratulations, security's very critical as we see more and more. Got to be embedded, got to be completely zero trust. Final question for you guys. Can you share any messages you'd like to share with the audience, each of you? What should they walk away from this? What's in it for them? What does all this mean? >> Yeah, so I'll start. 
Yes, so to wrap it up, HPE ProLiant Gen 11 servers are built on fourth generation Xeon Scalable processors to enable high density and extreme performance with high performance DDR5 memory and PCIe 5.0, plus HPE engineered and validated workload solutions provide better ROI in any consumption model preferred by a customer, from Edge to Cloud. >> Dennis? >> And yeah, so you are talking about all of the great features that the new generation servers are bringing to our customers, but at the same time, customer IT organizations should be ready to enable, configure, support, and fine tune all of these great features for the new server generation. And this is not an obvious task. It requires investments, skills, knowledge and experience. And HP is ready to step up and help customers at any desired level with the HPE GreenLake edge-to-cloud platform that enables customers with a cloud-like experience and convenience, and the flexibility, with the security of the infrastructure deployed in the private data center or at the Edge. So while consuming all of the HP solutions, customers have the flexibility to choose the right level of the service delivered from HPE GreenLake, starting from hardware as a service, and scaling up or down as required, to consuming the full stack of the hardware and software as a service with an option to pay per use. >> Awesome. Alan, final word. >> Yeah. What should we walk away with? >> Yeah, thanks. So I'd say that we've talked a lot about the systems here in question with HP ProLiant Gen 11, and they're delivering on a lot of the business outcomes that our customers require in order to optimize for operational efficiency, or maybe just to enable what they want to do with their customers, enabling new features, enabling new capabilities. Underpinning all of that is our fourth gen Xeon Scalable platform. Whether it's the technology transitions that we're driving with DDR5 and PCIe Gen 5, or the raw performance, efficiency and scalability of the platform in the CPU, I think we're here for our customers in delivering to it. >> That's great stuff. Alan, Dennis, Cynthia, thank you so much for taking the time to do a deep dive in the advanced performance of VDI with the fourth gen Intel Xeon Scalable processors. And congratulations on the Gen 11 ProLiant. You got some great servers there and again, next gen's here. Thanks for taking the time. >> Thank you so much for having us here. >> Okay, this is theCUBE's continuing coverage of Compute Engineered for your Hybrid World sponsored by HP and Intel. I'm John Furrier for theCUBE. Accelerate VDI at the Edge. Thanks for watching.
HPE Compute Security - Kevin Depew, HPE & David Chang, AMD
>> Hey everyone, welcome to this event, HPE Compute Security. I'm your host, Lisa Martin. Kevin Depew joins me next, Senior Director, Future Server Architecture at HPE. Kevin, it's great to have you back on the program. >>Thanks, Lisa. I'm glad to be here. >>One of the topics that we're gonna unpack in this segment is, is all about cybersecurity. And if we think of how dramatically the landscape has changed in the last couple of years, I was looking at some numbers that HPE had provided. Cybercrime will reach 10.5 trillion by 2025. It's a couple years away. The average total cost of a data breach is now over 4 million, 15% year over year crime growth predicted over the next five years. It's no longer if we get hit, it's when, it's how often. What's the severity? Talk to me about the current situation with the cybersecurity landscape that you're seeing. >>Yeah, I mean the, the numbers you're talking about are just staggering and then that's exactly what we're seeing and that's exactly what we're hearing from our customers is just absolutely key. Customers have too much to lose. The, the dollar cost is just, like I said, staggering. And, and here at HP we know we have a huge part to play, but we also know that we need partnerships across the industry to solve these problems. So we have partnered with, with our, our various partners to deliver these Gen 11 products. Whether we're talking about partners like AMD or partners like our NIC vendors, storage card vendors. We know we can't solve the problem alone. And we know this, the issue is huge. And like you said, the numbers are staggering. So we're really, we're really partnering with, with all the right players to ensure we have a secure solution so we can stay ahead of the bad guys to try to limit the, the attacks on our customers. >>Right. Limit the damage. What are some of the things that you've seen particularly change in the last 18 months or so? Anything that you can share with us that's eye-opening, more eye-opening than some of the stats we already shared? >>Well, there, there's been a massive number of attacks just in the last 12 months, but I wouldn't really say it's so much changed because the amount of attacks has been increasing dramatically over the years for many, many, many years. It's just a very lucrative area for the bad guys, whether it's ransomware or stealing personal data, whatever it is, it's there. There's unfortunately a lot of money to be made from it, and a lot of money to be lost by the good guys, the good guys being our customers. So it's not so much that it's changed, it's just that it's even accelerating faster. So the real change is, it's accelerating even faster because it's becoming even more lucrative. So we have to stay ahead of these bad guys. One of the statistics of Microsoft operating environments, the number of attacks in the last year, up 50% year over year, that's a huge acceleration and we've gotta stay ahead of that. We have to make sure our customers don't get impacted to the level that these, these staggering number of attacks are. The, the bad guys are out there. We've gotta protect, protect our customers from the bad guys. >>Absolutely. The acceleration that you talked about is, it's, it's kind of frightening. It's very eye-opening. We do know that security, you know, we've talked about it for so long as a, as a C-suite priority, a board level priority. 
We know that as some of the data that HPE e also sent over organizations are risking are, are listing cyber risks as a top five concern in their organization. IT budgets spend is going up where security is concerned. And so security security's on everyone's mind. In fact, the cube did, I guess in the middle part of last, I did a series on this really focusing on cybersecurity as a board issue and they went into how companies are structuring security teams changing their assumptions about the right security model, offense versus defense. But security's gone beyond the board, it's top of mind and it's on, it's in an integral part of every conversation. So my question for you is, when you're talking to customers, what are some of the key challenges that they're saying, Kevin, these are some of the things the landscape is accelerating, we know it's a matter of time. What are some of those challenges and that they're key pain points that they're coming to you to help solve? >>Yeah, at the highest level it's simply that security is incredibly important to them. We talked about the numbers. There's so much money to be lost that what they come to us and say, is security's important for us? What can you do to protect us? What can you do to prevent us from being one of those statistics? So at a high level, that's kind of what we're seeing at a, with a little more detail. We know that there's customers doing digital transformations. We know that there's customers going hybrid cloud, they've got a lot of initiatives on their own. They've gotta spend a lot of time and a lot of bandwidth tackling things that are important to their business. They just don't have the bandwidth to worry about yet. Another thing which is security. So we are doing everything we can and partnering with everyone we can to help solve those problems for customers. >>Cuz we're hearing, hey, this is huge, this is too big of a risk. How do you protect us? And by the way, we only have limited bandwidth, so what can we do? What we can do is make them assured that that platform is secure, that we're, we are creating a foundation for a very secure platform and that we've worked with our partners to secure all the pieces. So yes, they still have to worry about security, but there's pieces that we've taken care of that they don't have to worry about and there's capabilities that we've provided that they can use and we've made that easy so they can build su secure solutions on top of it. >>What are some of the things when you're in customer conversations, Kevin, that you talk about with customers in terms of what makes HPE E'S approach to security really unique? >>Well, I think a big thing is security is part of our, our dna. It's part of everything we do. Whether we're designing our own asics for our bmc, the ilo ASIC ILO six used on Gen 11, or whether it's our firmware stack, the ILO firmware, our our system, UFI firmware, all those pieces in everything we do. We're thinking about security. When we're building products in our factory, we're thinking about security. When we're think designing our supply chain, we're thinking about security. When we make requirements on our suppliers, we're driving security to be a key part of those components. So security is in our D N a security's top of mind. Security is something we think about in everything we do. We have to think like the bad guys, what could the bad guy take advantage of? What could the bad guy exploit? So we try to think like them so that we can protect our customers. 
>>And so security is something that that really is pervasive across all of our development organizations, our supply chain organizations, our factories, and our partners. So that's what we think is unique about HPE is because security is so important and there's a whole lot of pieces of our ProLiant servers that we do ourselves that many others don't do themselves. And since we do it ourselves, we can make sure that security's in the design from the start, that those pieces work together in a secure manner. So we think that gives us a, an advantage from a security standpoint. >>Security is very much intention based at HPE. I was reading in some notes, and you just did a great job of talking about this, that fundamental security approach: security is fundamental to defend against threats that are increasingly complex, through what you also call an uncompromising focus on state-of-the-art security, and innovations built into your DNA. And then organizations can protect their infrastructure, their workloads, their data from the bad guys. Talk to us briefly in our final few minutes here, Kevin, about fundamental, uncompromising, protect, and the value in it for me as an HPE customer. >>Yeah, when we talk about fundamental, we're talking about those fundamental technologies that are part of our platform. Things like we've integrated TPMs and soldered them down in our platforms. We now have platform certificates as a standard part of the platform. We have IDevID, and probably most importantly, our platforms continue to support what we really believe was a groundbreaking technology, Silicon Root of Trust, and what that's able to do. We have millions of lines of firmware code in our platforms, and with Silicon Root of Trust we can authenticate all of those lines of firmware. Whether we're talking about the iLO 6 firmware, our UEFI firmware, our CPLD in the system, there's other pieces of firmware. We authenticate all those to make sure that not a single line of code, not a single bit has been changed by a bad guy, even if the bad guy has physical access to the platform. 
So we knew to solve that problem we would have to partner with others in the industry, our nick vendors, our storage controller vendors, our G vendors. So we worked with industry standards bodies and those other partners to design a capability that allows us to authenticate all of those devices. And we worked with those vendors to get the support both in their side and in our platform side so that now Silicon Rivers and trust has been extended to where we protect and we trust those option cards as well. >>So that's when, when what we're talking about with Uncompromising and with with Protect, what we're talking about there is our capabilities around protecting against, for example, supply chain attacks. We have our, our trusted supply chain solution, which allows us to guarantee that our server, when it leaves our factory, what the server is, when it leaves our factory, will be what it is when it arrives at the customer. And if a bad guy does anything in that transition, the transit from our factory to the customer, they'll be able to detect that. So we enable certain capabilities by default capability called server configuration lock, which can ensure that nothing in the server exchange, whether it's firmware, hardware, configurations, swapping out processors, whatever it is, we'll detect if a bad guy did any of that and the customer will know it before they deploy the system. That gets enabled by default. >>We have an intrusion detection technology option when you use by the, the trusted supply chain that is included by default. That lets you know, did anybody open that system up, even if the system's not plugged in, did somebody take the hood off and potentially do something malicious to it? We also enable a capability called U EFI secure Boot, which can go authenticate some of the drivers that are located on the option card itself. Those kind of capabilities. Also ilo high security mode gets enabled by default. So all these things are enabled in the platform to ensure that if it's attacked going from our factory to the customer, it will be detected and the customer won't deploy a system that's been maliciously attacked. So that's got >>It, >>How we protect the customer through those capabilities. >>Outstanding. You mentioned partners, my last question for you, we've got about a minute left, Kevin is bring AMD into the conversation, where do they fit in this >>AMD's an absolutely crucial partner. No one company even HP can do it all themselves. There's a lot of partnerships, there's a lot of synergies working with amd. We've been working with AMD for almost 20 years since we delivered our first AM MD base ProLiant back in 2004 H HP ProLiant, DL 5 85. So we've been working with them a long time. We work with them years ahead of when a processor is announced, we benefit each other. We look at their designs and help them make their designs better. They let us know about their technology so we can take advantage of it in our designs. So they have a lot of security capabilities, like their memory encryption technologies, their a MD secure processor, their secure encrypted virtualization, which is an absolutely unique and breakthrough technology to protect virtual machines and hypervisor environments and protect them from malicious hypervisors. So they have some really great capabilities that they've built into their processor, and we also take advantage of the capabilities they have and ensure those are used in our solutions and in securing the platform. 
So a really such >>A great, great partnership. Great synergies there. Kevin, thank you so much for joining me on the program, talking about compute security, what HPE is doing to ensure that security is fundamental, that it is unpromised and that your customers are protected end to end. We appreciate your insights, we appreciate your time. >>Thank you very much, Lisa. >>We've just had a great conversation with Kevin Depu. Now I get to talk with David Chang, data center solutions marketing lead at a md. David, welcome to the program. >>Thank, thank you. And thank you for having me. >>So one of the hot topics of conversation that we can't avoid is security. Talk to me about some of the things that AMD is seeing from the customer's perspective, why security is so important for businesses across industries. >>Yeah, sure. Yeah. Security is, is top of mind for, for almost every, every customer I'm talking to right now. You know, there's several key market drivers and, and trends, you know, in, out there today that's really needing a better and innovative solution for, for security, right? So, you know, the high cost of data breaches, for example, will cost enterprises in downtime of, of the data center. And that time is time that you're not making money, right? And potentially even leading to your, to the loss of customer confidence in your, in your cust in your company's offerings. So there's real costs that you, you know, our customers are facing every day not being prepared and not having proper security measures set up in the data center. In fact, according to to one report, over 400 high-tech threats are being introduced every minute. So every day, numerous new threats are popping up and they're just, you know, the, you know, the bad guys are just getting more and more sophisticated. So you have to take, you know, measures today and you have to protect yourself, you know, end to end with solutions like what a AM MD and HPE has to offer. >>Yeah, you talked about some of the costs there. They're exorbitant. I've seen recent figures about the average, you know, cost of data breacher ransomware is, is close to, is over $4 million, the cost of, of brand reputation you brought up. That's a great point because nobody wants to be the next headline and security, I'm sure in your experiences. It's a board level conversation. It's, it's absolutely table stakes for every organization. Let's talk a little bit about some of the specific things now that A M D and HPE E are doing. I know that you have a really solid focus on building security features into the EPIC processors. Talk to me a little bit about that focus and some of the great things that you're doing there. >>Yeah, so, you know, we partner with H P E for a long time now. I think it's almost 20 years that we've been in business together. And, and you know, we, we help, you know, we, we work together design in security features even before the silicons even, you know, even born. So, you know, we have a great relationship with, with, with all our partners, including hpe and you know, HPE has, you know, an end really great end to end security story and AMD fits really well into that. You know, if you kind of think about how security all started, you know, in, in the data center, you, you've had strategies around encryption of the, you know, the data in, in flight, the network security, you know, you know, VPNs and, and, and security on the NS. And, and even on the, on the hard drives, you know, data that's at rest. 
>>You know, encryption has, you know, security has been sort of part of that strategy for a a long time and really for, you know, for ages, nobody really thought about the, the actual data in use, which is, you know, the, the information that's being passed from the C P U to the, the, the memory and, and even in virtualized environments to the, the, the virtual machines that, that everybody uses now. So, you know, for a long time nobody really thought about that app, you know, that third leg of, of encryption. And so a d comes in and says, Hey, you know, this is things that as, as the bad guys are getting more sophisticated, you, you have to start worrying about that, right? And, you know, for example, you know, you know, think, think people think about memory, you know, being sort of, you know, non-persistent and you know, when after, you know, after a certain time, the, the, you know, the, the data in the memory kind of goes away, right? >>But that's not true anymore because even in in memory data now, you know, there's a lot of memory modules that still can retain data up to 90 minutes even after p power loss. And with something as simple as compressed, compressed air or, or liquid nitrogen, you can actually freeze memory dams now long enough to extract the data from that memory module for up, you know, up, up to two or three hours, right? So lo more than enough time to read valuable data and, and, and even encryption keys off of that memory module. So our, our world's getting more complex and you know, more, the more data out there, the more insatiable need for compute and storage. You know, data management is becoming all, all the more important, you know, to keep all of that going and secure, you know, and, and creating security for those threats. It becomes more and more important. And, and again, especially in virtualized environments where, you know, like hyperconverged infrastructure or vir virtual desktop memories, it's really hard to keep up with all those different attacks, all those different attack surfaces. >>It sounds like what you were just talking about is what AMD has been able to do is identify yet another vulnerability Yes. Another attack surface in memory to be able to, to plug that hole for organizations that didn't, weren't able to do that before. >>Yeah. And, you know, and, and we kind of started out with that belief that security needed to be scalable and, and able to adapt to, to changing environments. So, you know, we, we came up with, you know, the, you know, the, the philosophy or the design philosophy that we're gonna continue to build on those security features generational generations and stay ahead of those evolving attacks. You know, great example is in, in the third gen, you know, epic C P U, that family that we had, we actually created this feature called S E V S N P, which stands for SECURENESS Paging. And it's really all around this, this new attack where, you know, your, the, the, you know, it's basically hypervisor based attacks where people are, you know, the bad actors are writing in to the memory and writing in basically bad data to corrupt the mem, you know, to corrupt the data in the memory. So s e V S and P is, was put in place to help, you know, secure that, you know, before that became a problem. And, you know, you heard in the news just recently that that becoming a more and more, more of a bigger issue. And the great news is that we had that feature built in, you know, before that became a big problem. 
>>And now you're on the fourth gen, those epic crosses talk of those epic processes. Talk to me a little bit about some of the innovations that are now in fourth gen. >>Yeah, so in fourth gen we actually added, you know, on top of that. So we've, we've got, you know, the sec the, the base of our, our, what we call infinity guard is, is all around the secure boot. The, you know, the, the, the, the secure root of trust that, you know, that we, we work with HPE on the, the strong memory encryption and the S E V, which is the secure encrypted virtualization. And so remember those s s and p, you know, incap capabilities that I talked about earlier. We've actually, in the fourth gen added two x the number of sev v s and P guests for even higher number of confidential VMs to support even more customers than before. Right? We've also added more guest protection from simultaneous multi threading or S M T side channel attacks. And, you know, while it's not officially part of Infinity Guard, we've actually added more APEC acceleration, which greatly benefits the security of those confidential VMs with the larger number of VCPUs, which basically means that you can build larger VMs and still be secured. And then lastly, we actually added even stronger a e s encryption. So we went from 128 bit to 256 bit, which is now military grade encryption on top of that. And, you know, and, and that's really, you know, the de facto crypto cryptography that is used for most of the applications for, you know, customers like the US federal government and, and all, you know, the, is really an essential element for memory security and the H B C applications. And I always say if it's good enough for the US government, it's good enough for you. >>Exactly. Well, it's got to be, talk a little bit about how AMD is doing this together with HPE a little bit about the partnership as we round out our conversation. >>Sure, absolutely. So security is only as strong as the layer below it, right? So, you know, that's why modern security must be built in rather than, than, you know, bolted on or, or, or, you know, added after the fact, right? So HPE and a MD actually developed this layered approach for protecting critical data together, right? Through our leadership and, and security features and innovations, we really deliver a set of hardware based features that, that help decrease potential attack surfaces. With, with that holistic approach that, you know, that safeguards the critical information across system, you know, the, the entire system lifecycle. And we provide the confidence of built-in silicon authentication on the world's most secure industry standard servers. And with a 360 degree approach that brings high availability to critical workloads while helping to defend, you know, against internal and external threats. So things like h hp, root of silicon root of trust with the trusted supply chain, which, you know, obviously AMD's part of that supply chain combined with AMD's Infinity guard technology really helps provide that end-to-end data protection in today's business. >>And that is so critical for businesses in every industry. As you mentioned, the attackers are getting more and more sophisticated, the vulnerabilities are increasing. The ability to have a pa, a partnership like H P E and a MD to deliver that end-to-end data protection is table stakes for businesses. 
David, thank you so much for joining me on the program, really walking us through what am MD is doing, the the fourth gen epic processors and how you're working together with HPE to really enable security to be successfully accomplished by businesses across industries. We appreciate your insights. >>Well, thank you again for having me, and we appreciate the partnership with hpe. >>Well, you wanna thank you for watching our special program HPE Compute Security. I do have a call to action for you. Go ahead and visit hpe com slash security slash compute. Thanks for watching.
Seamus Jones & Milind Damle
>>Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch. I'm Dave Nicholson, and I'm joining you here in our Palo Alto studios. We have two very interesting guests to dive into some of the announcements that have been made, and maybe take a look at this from an AI and ML perspective. Our first guest is Milind Damle. He's a senior director for software and solutions at AMD. And we're also joined by Seamus Jones, who's a director of server engineering at Dell Technologies. Welcome, gentlemen. How are you? >>Very good, thank you. >>Welcome to theCUBE. So let's start out really quickly. Seamus, give us a thumbnail sketch of what you do at Dell. >>Yeah, so I'm the director of technical marketing engineering here at Dell, and our team really takes a look at the technical server portfolio and solutions, and ensures that we can look at the performance metrics, benchmarks, and performance characteristics, so that we can give customers a good idea of what they can expect from the server portfolio when they're looking to buy PowerEdge from Dell. >>Milind, how about you? What's new at AMD? What do you do there? >>Great to be here, thank you for having me. At AMD, I'm the senior director of performance engineering and ISV ecosystem enablement, which is a long-winded way of saying we do a lot of benchmarks, improve performance, and demonstrate, with wonderful partners such as Seamus and Dell, the combined leverage that AMD fourth generation processors and Dell systems can bring to bear on a multitude of applications across the industry spectrum. >>Seamus, talk about that relationship a little bit more, the relationship between AMD and Dell. How far back does it go? What does it look like in practical terms? >>Absolutely. Ever since AMD reentered the server space, we've had a very close relationship. It's one of those things where we are offering solutions to our customers no matter what generation of portfolio they're demanding, whether from AMD or its competitor. What we're finding is that with their generational improvements, they're just getting better and better. Really exciting things are happening from AMD at the moment, and as we engineer those CPU stacks into our server portfolio, we're really seeing unprecedented performance across the board. So we're excited about the history. My team and Milind's team work very closely together, so much so that we're communicating almost on a daily basis around portfolio platforms and updates around the benchmark testing and validation efforts. >>So Milind, are you happy with these PowerEdge boxes that Seamus is building to house your baby? >>We are delighted. It's hard to find stronger partners than Seamus and Dell. With AMD's second generation EPYC server CPUs we already had indisputable industry performance leadership, and then with the third and now the fourth generation CPUs, we've just increased our lead over the competition. We've got so many outstanding features at the platform and CPU level. Everybody focuses on the high core counts, but there's also DDR5 memory, the I/O, and the storage subsystem.
So we believe we have a fantastic performance, performance-per-dollar, and performance-per-watt edge over the competition, and we look to partners such as Dell to help us showcase that leadership. >>Well, so Seamus- >>Yeah, go ahead, Dave. What I'd add, Dave, is that through the partnership we've had, we've been able to develop subsystems and platform features that historically we couldn't have, really things around thermals, power efficiency, and efficiency within the platform. That means that customers can get the most out of their compute infrastructure. >>So this is going to be a big question moving forward as next generation platforms are rolled out: there's the potential for people to have sticker shock. You talk about something that has eight or 12 cores in a physical enclosure versus 96 cores, and I guess the question is, do the ROI and TCO numbers look good for someone to make that upgrade? Seamus, do you want to hit that first, or are you guys integrated? >>Absolutely, yeah. I'll tell you what, at the moment customers really can't afford not to upgrade. We've taken a look at the cost basis of keeping older infrastructure in place, say five or seven year old servers that are drawing more power, maybe are poorly utilized within the infrastructure, and take more and more effort and time to manage, maintain, and really keep in production. So as customers look to upgrade or refresh their platforms, what we're finding is that they can do a dynamic consolidation, sometimes five, seven, or eight to one, depending on which platform they have historically and which one they're looking to upgrade to. Within AI specifically and machine learning frameworks, we're seeing really unprecedented performance. Milind's team partnered with us to deliver multiple benchmarks for the launch, some of which we're still continuing to see the goodness from, things like TPCx-AI as a framework, and I'm talking here specifically about the CPU-based performance. Even though in a lot of those AI frameworks you would also expect to have GPUs, all four of the platforms that we're offering in the AMD portfolio today offer multiple GPU options. So we're seeing a balance between a huge amount of CPU gain and performance, as well as more and more GPU offerings within the platform. That was a real challenge for us because of the thermal constraints. GPUs are going up to 300 or 400 watts, and these CPUs at 96 cores are quite demanding thermally. But what we're able to do, through some unique smart cooling engineering within the PowerEdge portfolio, is make the most efficient use case by having things like telemetry within the platform, so that we can dynamically change fan speeds and get customers the best performance without throttling, based on their need. >>Milind, theCUBE was at the Supercomputing conference in Dallas this year, Supercomputing 2022, and a lot of the discussion was around not only advances in microprocessor technology but also advances in interconnect technology. How do you manage that sort of research partnership with Dell when you aren't strictly focusing on the piece that you are bringing to the party?
It's kind of a potluck. We mentioned PCIe Gen 5, or 5.0, whatever you want to call it, new DDR, storage cards, NICs, accelerators, all of those things. How do you keep that straight when those aren't things that you actually build? >>Well, excellent question, Dave. As we are developing the next platform, obviously the ongoing relationship is there with Dell, but we start way before launch, sometimes multiple years before launch. So we are not just focusing on the super high core counts at the CPU level and the platform configurations, whether it's single socket or dual socket; we are looking at it from the memory subsystem and the IO subsystem. PCIe lanes for storage are a big deal, for example, in this generation. So it's really a holistic approach. And look, core counts are more important at the higher end for some customers, the HPC space and some of the AI applications. But on the lower end you have database applications or other ISV applications that care a lot about those other subsystems. So different things matter to different folks across verticals. We partnered with Dell very early in the cycle, and it's really a joint co-engineering effort. Seamus talked about the focus on AI with TPCx-AI, and we set five world records in that space just on that one benchmark with AMD and Dell, so a fantastic kickoff across a multitude of scale factors. But TPCx-AI is not the only thing we are focusing on. We are also collaborating with Dell on some of the transformer-based natural language processing models that we worked on, for example. So it's not just a CPU story; it's CPU, platform, subsystem, software, and the whole thing delivering goodness across the board to solve end user problems in AI and other verticals. >>Yeah, the two of you are at the tip of the spear from a performance perspective. So I know it's easy to get excited about world records, and they're fantastic. I know, Seamus, that end user customers might immediately have the reaction, "Well, I don't need a Ferrari in my data center," or "What I need is to be able to do more with less." Well, aren't we delivering that also? And Milind, you mentioned natural language processing. Seamus, are you thinking in 2023 that a lot more enterprises are going to be able to afford to do things like that? What are you hearing from customers on this front? >>While the adoption of the top-bin CPU stack is definitely the exception, not the rule, today we are seeing marked performance even when we look at the mid-bin CPU offerings from AMD; those are the most commonly sold SKUs. And when we look at customers' implementations, what we're really seeing is that they're trying to make the most not just of dollar spend, but also of the whole subsystem that Milind was talking about. Balanced memory configs can give you marked performance improvements, not just at the CPU level but all the way through to the application performance. So it's trying to find the correct balance between the application needs, your budget, power draw, and infrastructure within the data center, right?
Because not only could you be purchasing and looking to deploy the most powerful systems, but if you don't have an infrastructure that has the right power, which is a large challenge right now, and the right cooling to deal with the thermal differences of the systems, you want to ensure that you can accommodate those not just for today but for the future, right? >>So it's planning that balance. >>If I may just add onto that. When we launched, not just the fourth generation but any generation in the past, there's a natural tendency to zero in on the top bin and say, wow, we've got so many cores. But as Seamus correctly said, it's not just that one top-core-count part; it's the whole stack. And we believe with our fourth gen CPU stack we've simplified things so much. We don't have dozens and dozens of offerings; we have a fairly simple SKU stack, but also a very efficient one. So even though at the top end we've got 96 cores, the thermal budget that we require is fairly reasonable. And look, with the energy crisis going on, especially in Europe, this is a big deal. Not only do customers want performance, but they're also super focused on performance per watt. And so we believe with this generation we really delivered not just on raw performance, but also on performance per dollar and performance per watt. >>Yeah. And it's not just Europe. We are here in Palo Alto right now, which is in California, where we all know the cost of an individual kilowatt hour of electricity because it's quite high. So thermals, power, cooling, all of that goes together, and that drives cost. So it's a question of how much you can get done per dollar. Seamus, you made the point that you don't just have a one-size-fits-all solution, that it's fit for function. I'm curious to hear from the two of you what your thoughts are from a general AI and ML perspective. We're starting to see right now, if you hang out on any kind of social media, the rise of these experimental AI programs that are being presented to the public. Some will write stories for you based on prompts, some will create images for you. One of the more popular ones will create sort of a superhero alter ego for you; I can't wait to do it, I just got the app on my phone. So those are all fun and they're trivial, but they sort of get us used to this idea that, wow, these systems can do things. They can think on their own in a certain way. What do you see the future of that looking like over the next year in terms of enterprises and what they're going to do with it? Milind? >>Yeah, I can go first. >>Sure, good. >>So the couple of examples, Dave, that you mentioned are, I guess, a blend of novelty and curiosity. People using AI to write stories or poems, or even carve out little jokes, check grammar and spelling, very useful, but still kind of in the realm of novelty in the mainstream. In the enterprise, look, in my opinion AI is not just going to be a vertical, it's going to be a horizontal capability.
We are seeing AI deployed across the board, once the models have been suitably trained, for disparate functions ranging from fraud detection or anomaly detection, both in the financial markets and in manufacturing, to things like image classification or object detection that you talked about, in sort of the core AI space itself. So we don't think of AI necessarily as a vertical, although we are showcasing it with a specific benchmark for launch; we really look at AI emerging as a horizontal capability, and frankly, companies that don't adopt AI on a massive scale run the risk of being left behind. >>Yeah, absolutely. AI as an outcome is really something companies are adopting, and the frameworks you're now seeing as the novelty pieces that Milind was talking about are really indicative of the under-the-covers activity that's been happening within infrastructures and enterprises for the past, let's say, five, six, seven years. The fact that you have object detection within manufacturing, to be able to do defect detection on manufacturing lines, and that it can now be done on edge platforms, all the way at the device. So you're no longer only having to do things in the data center; you can bring it right out to the edge and have that high-performance inferencing, not necessarily training at the edge, but the inferencing models especially, so that you can have more and better use cases for some of these instances, things like smart cities with video detection. Especially during COVID, we saw a lot of hospitals and a lot of customers using image and spatial detection within their video feeds to determine which employees were at risk. So there are a lot of different use cases that have been coming around. I think the novelty aspect of it is really interesting, and I know my kids, my daughters, love that portion of it, but what's been happening has been exciting for quite a period of time in the enterprise space. We're just now starting to see those come to light in more of a consumer-relevant kind of use case. So the technology that's been developed in the data center around all of these different use cases is now starting to feed in, because we do have more powerful compute at our fingertips, and we do have the ability to talk more about the framework and infrastructure that's right out at the edge. I know, Dave, in the past you've said things like the data center of 20 years ago is now in my hand as my cell phone. That's a fact, and it's exciting to think where it's going to be in the next 10 or 20 years. >>One terabyte, baby. One terabyte. It's mind boggling. And it makes me feel old. >>Yeah, me too. And Seamus, that all sounded great. All I want is a picture of me as a superhero, though, so you guys are already way ahead of the curve. On that note, Seamus, wrap us up with kind of a summary of the highlights of what we just went through in terms of the performance you're seeing out of this latest gen architecture from AMD. >>Absolutely.
So within the TPCx-AI framework that Milind's team and my team worked on together, we're seeing unprecedented price performance. The fact that you can get a 220% uplift gen on gen for some of these benchmarks, and that you can do a five-to-one consolidation, means that if you're looking to refresh platforms that are historically legacy, you can get a huge amount of benefit, both in reducing the number of units you need to deploy and in the amount of performance you can get per unit. Milind mentioned earlier CPU performance and performance per watt: specifically on the two-socket 2U platform using the fourth generation AMD EPYC, we're seeing 55% higher CPU performance per watt. For people who aren't necessarily looking at these statistics every generation of servers, that is a huge leap forward. That, combined with 121% higher SPEC scores as a benchmark, is huge. Normally we see, let's say, a 40 to 60% performance improvement on the SPEC benchmarks; we're seeing 121%. So while that's really impressive at the top bin, we're actually seeing large percentage improvements across the mid bins as well, things in the range of 70 to 90% performance improvements in those standard bins. So it's a huge performance improvement and power efficiency gain, which means customers are able to save energy, space, and time based on their deployment size. >>Thanks for that, Seamus. Sadly, gentlemen, our time has expired. With that, I want to thank both of you. It's been a very interesting conversation, and thanks for being with us. Thanks for joining us here on theCUBE for our coverage of AMD's fourth generation EPYC launch. Additional information, including white papers and benchmarks, plus editorial coverage, can be found on doeshardwarematter.com.
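As a rough illustration of the consolidation math Seamus describes, the sketch below works through a hypothetical refresh. Only the five-to-one consolidation ratio is taken from the conversation; the fleet size, per-server power draws, and electricity price are invented placeholders, so treat the output as an example of the calculation rather than a measured result.

```python
"""Back-of-the-envelope sketch of the server consolidation math discussed above.

All inputs except the 5:1 consolidation ratio are hypothetical placeholders.
"""

legacy_servers = 100           # hypothetical fleet of older 2-socket servers
consolidation_ratio = 5        # "five to one consolidation" quoted above
legacy_watts_per_server = 750  # assumed average draw of an older server (illustrative)
new_watts_per_server = 900     # assumed average draw of a newer, denser server (illustrative)
price_per_kwh = 0.30           # assumed electricity price in $/kWh (illustrative)
hours_per_year = 24 * 365

new_servers = legacy_servers / consolidation_ratio
legacy_kwh = legacy_servers * legacy_watts_per_server * hours_per_year / 1000
new_kwh = new_servers * new_watts_per_server * hours_per_year / 1000

print(f"New fleet size: {new_servers:.0f} servers (was {legacy_servers})")
print(f"Annual energy: {legacy_kwh:,.0f} kWh -> {new_kwh:,.0f} kWh")
print(f"Annual energy cost saved: ${(legacy_kwh - new_kwh) * price_per_kwh:,.0f}")
```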
Evan Touger, Prowess | Prowess Benchmark Testing Results for AMD EPYC Genoa on Dell Servers
(upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch. I've got a special guest with me today from Prowess Consulting. His name is Evan Touger, he's a senior technical writer with Prowess. Evan, welcome. >> Hi, great to be here. Thanks. >> So tell us a little bit about Prowess, what does Prowess do? >> Yeah, we're a consulting firm. We've been around for quite a few years, based in Bellevue, Washington. And we do quite a few projects with folks from Dell to a lot of other companies, and dive in. We have engineers, writers, production folks, so pretty much end-to-end work, doing research testing and writing, and diving into different technical topics. >> So you- in this case what we're going to be talking about is some validation studies that you've done, looking at Dell PowerEdge servers that happened to be integrating in fourth-gen EPYC processors from AMD. What were the specific workloads that you were focused on in this study? >> Yeah, this particular one was honing in on virtualization, right? You know, obviously it's pretty much ubiquitous in the industry, everybody works with virtualization in one way or another. So just getting optimal performance for virtualization was critical, or is critical for most businesses. So we just wanted to look a little deeper into, you know, how do companies evaluate that? What are they going to use to make the determination for virtualization performance as it relates to their workloads? So that led us to this study, where we looked at some benchmarks, and then went a little deeper under the hood to see what led to the results that we saw from those benchmarks. >> So when you say virtualization, does that include virtual desktop infrastructure or are we just talking about virtual machines in general? >> No, it can include both. We looked at VMs, thinking in terms of what about database performance when you're working in VMs, all the way through to VDI and companies like healthcare organizations and so forth, where it's common to roll out lots of virtual desktops, and performance is critical there as well. >> Okay, you alluded to, sort of, looking under the covers to see, you know, where these performance results were coming from. I assume what you're referencing is the idea that it's not just all about the CPU when you talk about a system. Am I correct in that assumption and- >> Yeah, absolutely. >> What can you tell us? >> Well, you know, for companies evaluating, there's quite a bit to consider, obviously. So they're looking at not just raw performance but power performance. So that was part of it, and then what makes up that- those factors, right? So certainly CPU is critical to that, but then other things come into play, like the RAID controllers. So we looked a little bit there. And then networking, of course can be critical for configurations that are relying on good performance on their networks, both in terms of bandwidth and just reducing latency overall. So interconnects as well would be a big part of that. So with, with PCIe gen 5 or 5.0 pick your moniker. You know in this- in the infrastructure game, we're often playing a game of whack-a-mole, looking for the bottlenecks, you know, chasing the bottlenecks. PCIe 5 opens up a lot of bandwidth for memory and things like RAID controllers and NICs. I mean, is the bottleneck now just our imagination, Evan, have we reached a point where there are no bottlenecks? What did you see when you ran these tests? 
What, you know, what were you able to stress to a point where it was saturated, if anything? >> Yeah. Well, first of all, we didn't- these are particular tests were ones that we looked at industry benchmarks, and we were examining in particular to see where world records were set. And so we uncovered a few specific servers, PowerEdge servers that were pretty key there, or had a lot of- were leading in the category in a lot of areas. So that's what led us to then, okay, well why is that? What's in these servers, and what's responsible for that? So in a lot of cases they, we saw these results even with, you know, gen 4, PCIe gen 4. So there were situations where clearly there was benefit from faster interconnects and, and especially NVMe for RAID, you know, for supporting NVMe and SSDs. But all of that just leads you to the understanding that it means it can only get better, right? So going from gen 4 to- if you're seeing great results on gen 4, then gen 5 is probably going to be, you know, blow that away. >> And in this case, >> It'll be even better. >> In this case, gen 5 you're referencing PCIe >> PCIe right. Yeah, that's right. >> (indistinct) >> And then the same thing with EPYC actually holds true, some of the records, we saw records set for both 3rd and 4th gen, so- with EPYC, so the same thing there. Anywhere there's a record set on the 3rd gen, you know, makes us really- we're really looking forward to going back and seeing over the next few months, which of those records fall and are broken by newer generation versions of these servers, once they actually wrap to the newer generation processors. You know, based on, on what we're seeing for the- for what those processors can do, not only in. >> (indistinct) Go ahead. >> Sorry, just want to say, not only in terms of raw performance, but as I mentioned before, the power performance, 'cause they're very efficient, and that's a really critical consideration, right? I don't think you can overstate that for companies who are looking at, you know, have to consider expenditures and power and cooling and meeting sustainability goals and so forth. So that was really an important category in terms of what we looked at, was that power performance, not just raw performance. >> Yeah, I want to get back to that, that's a really good point. We should probably give credit where credit is due. Which Dell PowerEdge servers are we talking about that were tested and what did those interconnect components look like from a (indistinct) perspective? >> Yeah, so we focused primarily on a couple benchmarks that seemed most important for real world performance results for virtualization. TPCx-V and VMmark 3.x. the TPCx-V, that's where we saw PowerEdge R7525, R7515. They both had top scores in different categories there. That benchmark is great for looking at database workloads in particular, right? Running in virtualization settings. And then the VMmark 3.x was critical. We saw good, good results there for the 7525 and the R 7515 as well as the R 6525, in that one and that included, sorry, just checking notes to see what- >> Yeah, no, no, no, no, (indistinct) >> Included results for power performance, as I mentioned earlier, that's where we could see that. So we kind of, we saw this in a range of servers that included both 3rd gen AMD EPYC and newer 4th gen as well as I mentioned. The RAID controllers were critical in the TPCx-V. I don't think that came into play in the VM mark test, but they were definitely part of the TPCx-V benchmarks. 
So that's where the RAID controllers would make a difference, right? And in those tests, I think they're using PERC 11. So, you know, the newer PERC 12 controllers there, again we'd expect >> (indistinct) >> To see continued, you know, gains in newer benchmarks. That's what we'll be looking for over the next several months. >> Yeah. So I think if I've got my Dell nomenclature down, performance, no no, PowerEdge RAID Controller, is that right? >> Exactly, yeah, there you go. Right? >> With Broadcom, you know, powered by Broadcom. >> That's right. There you go. Yeah. Isn't the Dell naming scheme there PERC? >> Yeah, exactly, exactly. Back to your comment about power. So you've had a chance to take a pretty deep look at the latest stuff coming out. You're confident that- 'cause some of these servers are going to be more expensive than previous generation. Now a server is not a server is not a server, but some are awakening to the idea that there might be some sticker shock. You're confident that the bang for your buck, the bang for your kilowatt hour is actually going to be beneficial. We're actually making things better, faster, stronger, cheaper, more energy efficient. We're continuing on that curve? >> That's what I would expect to see, right. I mean, of course can't speak to to pricing without knowing, you know, where the dollars are going to land on the servers. But I would expect to see that because you're getting gains in a couple of ways. I mean, one, if the performance increases to the point where you can run more VMs, right? Get more performance out of your VMs and run more total VMs or more BDIs, then there's obviously a good, you know, payback on your investment there. And then as we were discussing earlier, just the power performance ratio, right? So if you're bringing down your power and cooling costs, if these machines are just more efficient overall, then you should see some gains there as well. So, you know, I think the key is looking at what's the total cost of ownership over, you know, a standard like a three-year period or something and what you're going to get out of it for your number of sessions, the performance for the sessions, and the overall efficiency of the machines. >> So just just to be clear with these Dell PowerEdge servers, you were able to validate world record performance. But this isn't, if you, if you look at CPU architecture, PCIe bus architecture, memory, you know, the class of memory, the class of RAID controller, the class of NIC. Those were not all state of the art in terms of at least what has been recently announced. Correct? >> Right. >> Because (indistinct) the PCI 4.0, So to your point- world records with that, you've got next-gen RAID controllers coming out, and NICs coming out. If the motherboard was PCIe 5, with commensurate memory, all of those things are getting better. >> Exactly, right. I mean you're, you're really you're just eliminating bandwidth constraints latency constraints, you know, all of that should be improved. NVMe, you know, just collectively all these things just open the doors, you know, letting more bandwidth through reducing all the latency. Those are, those are all pieces of the puzzle, right? That come together and it's all about finding the weakest link and eliminating it. And I think we're reaching the point where we're removing the biggest constraints from the systems. >> Okay. So I guess is it fair to summarize to say that with this infrastructure that you tested, you were able to set world records. 
This, during this year, I mean, over the next several months, things are just going to get faster and faster and faster and faster. >> That's what I would anticipate, exactly, right. If they're setting world records with these machines before some of the components are, you know, the absolute latest, it seems to me we're going to just see a continuing trend there, and more and more records should fall. So I'm really looking forward to seeing how that goes, 'cause it's already good and I think the return on investment is pretty good there. So I think it's only going to get better as these roll out. >> So let me ask you a question that's a little bit off topic. >> Okay. >> Kind of, you know, we see these gains, you know, we're all familiar with Moore's Law, we're familiar with, you know, the advancements in memory and bus architecture and everything else. We just covered SuperCompute 2022 in Dallas a couple of weeks ago. And it was fascinating talking to people about advances in AI that will be possible with new architectures. You know, most of these supercomputers that are running right now are n minus 1 or n minus 2 infrastructure, you know, they're, they're, they're PCI 3, right. And maybe two generations of processors old, because you don't just throw out a 100,000 CPU super computing environment every 18 months. It doesn't work that way. >> Exactly. >> Do you have an opinion on this question of the qualitative versus quantitative increase in computing moving forward? And, I mean, do you think that this new stuff that you're starting to do tests on is going to power a fundamental shift in computing? Or is it just going to be more consolidation, better power consumption? Do you think there's an inflection point coming? What do you think? >> That's a great question. That's a hard one to answer. I mean, it's probably a little bit of both, 'cause certainly there will be better consolidation, right? But I think that, you know, the systems, it works both ways. It just allows you to do more with less, right? And you can go either direction, you can do what you're doing now on fewer machines, you know, and get better value for it, or reduce your footprint. Or you can go the other way and say, wow, this lets us add more machines into the mix and take our our level of performance from here to here, right? So it just depends on what your focus is. Certainly with, with areas like, you know, HPC and AI and ML, having the ability to expand what you already are capable of by adding more machines that can do more is going to be your main concern. But if you're more like a small to medium sized business and the opportunity to do what you were doing on, on a much smaller footprint and for lower costs, that's really your goal, right? So I think you can use this in either direction and it should, should pay back in a lot of dividends. >> Yeah. Thanks for your thoughts. It's an interesting subject moving forward. You know, sometimes it's easy to get lost in the minutiae of the bits and bites and bobs of all the components we're studying, but they're powering something that that's going to effect effectively all of humanity as we move forward. So what else do we need to consider when it comes to what you've just validated in the virtualization testing? Anything else, anything we left out? 
>> I think we hit all the key points, or most of them it's, you know, really, it's just keeping in mind that it's all about the full system, the components not- you know, the processor is a obviously a key, but just removing blockages, right? Freeing up, getting rid of latency, improving bandwidth, all these things come to play. And then the power performance, as I said, I know I keep coming back to that but you know, we just, and a lot of what we work on, we just see that businesses, that's a really big concern for businesses and finding efficiency, right? And especially in an age of constrained budgets, that's a big deal. So, it's really important to have that power performance ratio. And that's one of the key things we saw that stood out to us in, in some of these benchmarks, so. >> Well, it's a big deal for me. >> It's all good. >> Yeah, I live in California and I know exactly how much I pay for a kilowatt hour of electricity. >> I bet, yeah. >> My friends in other places don't even know. So I totally understand the power constraint question. >> Yeah, it's not going to get better, so, anything you can do there, right? >> Yeah. Well Evan, this has been great. Thanks for sharing the results that Prowess has come up with, third party validation that, you know, even without the latest and greatest components in all categories, Dell PowerEdge servers are able to set world records. And I anticipate that those world records will be broken in 2023 and I expect that Prowess will be part of that process, So Thanks for that. For the rest of us- >> (indistinct) >> Here at theCUBE, I want to thank you for joining us. Stay tuned for continuing coverage of AMD's fourth generation EPYC launch, for myself and for Evan Touger. Thanks so much for joining us. (upbeat music)
Dilip Ramachandran and Juergen Zimmermann
(bright upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch, along with the way that Dell has integrated this technology into its PowerEdge server lines. We're in for an interesting conversation today. Today, I'm joined by Dilip Ramachandran, Senior Director of Marketing at AMD, and Juergen Zimmermann. Juergen is Principal SAP Solutions Performance Benchmarking Engineer at Dell. Welcome, gentlemen. >> Welcome. >> Thank you David, nice to be here. >> Nice to meet you too, welcome to theCUBE. You will officially be CUBE alumni after this. Dilip, let's start with you. What's this all about? Tell us about AMD's recent launch and the importance of it. >> Thanks, David. I'm excited to actually talk to you today, AMD, at our fourth generation EPYC launch last month in November. And as part of that fourth generation EPYC launch, we announced industry-leading performance based on 96 cores, based on Zen 4 architecture. And new interfaces, PCIe Gen 5, as well as DDR5. Incredible amount of memory bandwidth, memory capacity supported, and a whole lot of other features as well. So we announced this product, we launched it in November last month. And we've been closely working with Dell on a number of benchmarks that we'd love to talk to you more about today. >> So just for some context, when was the last release of this scale? So when was the third generation released? How long ago? >> The third generation EPYC was launched in Q1 of 2021. So it was almost 18 to 24 months ago. And since then we've made a tremendous jump, the fourth generation EPYC, in terms of number of cores. So third generation EPYC supported 64 cores, fourth generation EPYC supports 96 cores. And these are new cores, the Zen 4 cores, the fourth generation of Zen cores. So very high performance, new interfaces, and really world-class performance. >> Excellent. Well, we'll go into greater detail in a moment, but let's go to Juergen. Tell us about the testing that you've been involved with to kind of prove out the benefits of this new AMD architecture. >> Yeah, well, the testing is SAP Standard Performance benchmark, the SAP SD two tier. And this is more or less a industry standard benchmark that is used to size your service for the needs of SAP. Actually, SAP customers always ask the vendors about the SAP benchmark and the SAPS values of their service. >> And I should have asked you before, but give us a little bit of your background working with SAP. Have you been doing this for longer than a week? >> Yeah, yeah, definitely, I do this for about 20 years now. Started with Sun Microsystems, and interestingly in the year 2003, 2004, I started working with AMD service on SAP with Linux, and afterwards parted the SAP application to Solaris AMD, also with AMD. So I have a lot of tradition with SAP and AMD benchmarks, and doing this ever since then. >> So give us some more detail on the results of the recent testing, and if you can, tell us why we should care? >> (laughs) Okay, the recent results actually also surprised myself, they were so good. So I initially installed the benchmark kit, and couldn't believe that the server is just getting, or hitting idle by the numbers I saw. So I cranked up the numbers and reached results that are most likely double the last generation, so Zen 3 generation, and that even passed almost all 8-socket systems out there. So if you want to have the same SAP performance, you can just use 2-socket AMD server instead of any four or 8-socket servers out there. 
And this is a tremendous saving in energy. >> So you just mentioned savings in terms of power consumption, which is a huge consideration. What are the sort of end user results that this delivers in terms of real world performance? How is a human being at the end of a computer going to notice something like this? >> So actually the results are like that you get almost 150,000 users concurrently accessing the system, and get their results back from SAP within one second response time. >> 150,000 users, you said? >> 150,000 users in parallel. >> (laughs) Okay, that's amazing. And I think it's interesting to note that, and I'll probably say this a a couple of times. You just referenced third generation EPYC architecture, and there are a lot of folks out there who are two generations back. Not everyone is religiously updating every 18 months, and so for a fair number of SAP environments, this is an even more dramatic increase. Is that a fair thing to say? >> Yeah, I just looked up yesterday the numbers from generation one of EPYC, and this was at about 28,000 users. So we are five times the performance now, within four years. Yeah, great. >> So Dilip, let's dig a little more into the EPYC architecture, and I'm specifically also curious about... You mentioned PCIe Gen five, or 5.0 and all of the components that plug into that. You mentioned I think faster DDR. Talk about that. Talk about how all of the components work together to make when Dell comes out with a PowerEdge server, to make it so much more powerful. >> Absolutely. So just to spend a little bit more time on this particular benchmark, the SAP Sales and Distribution benchmark. It's a widely used benchmark in the industry to basically look at how do I get the most performance out of my system for a variety of SAP business suite applications. And we touched upon it earlier, right, we are able to beat a performance of 4-socket and 8-socket servers out there. And you know, it saves energy, it saves cost, better TCO for the data center. So we're really excited to be able to support more users in a single server and meeting all the other dual socket and 4-socket combinations out there. Now, how did we get there, right, is more the important question. So as part of our fourth generation EPYC, we obviously upgraded our CPU core to provide much better single third performance per core. And at the socket level, you know, when you're packing 96 cores, you need to be able to feed these cores, you know, from a memory standpoint. So what we did was we went to 12 channels of memory, and these are DDR5 memory channels. So obviously you get much better bandwidth, higher speed of the memory with DDR5, you know, starting at 4,800 megahertz. And you're also now able to have more channels to be able to send the data from the memory into the CPU subsystem, which is very critical to keep the CPUs busy and active, and get the performance out. So that's on the memory side. On the data side, you know, we do have PCIe Gen five, and any data oriented applications that take data either from the PCIe drives or the network cards that utilize Gen five that are available in the industry today, you can actually really get data into the system through the PCIe I/O, either again, through the disk, or through the net card as well. So those are other ways to actually also feed the CPU subsystem with data to be processed by the CPU complex. So we are, again, very excited to see all of this coming together, and as they say, proof's in the pudding. 
You know, Juergen talked about it. How over generation after generation we've increased the performance, and now with our fourth generation EPYC, we are absolutely leading world-class performance on the SAP Sales and Distribution benchmark. >> Dilip, I have another question for you, and this may be, it may be a bit of a PowerEdge and beyond question. What are you seeing, or what are you anticipating in terms of end user perception when they go to buy a new server? Obviously server is a very loose term, and they can be configured in a bunch of different ways. But is there a discussion about ROI and TCO that's particularly critical? Because people are going to ask, "Well, wait a minute. If it's more expensive than the last one that I bought, am I getting enough bang for my buck?" Is that going to be part of the conversation, especially around power and cooling and things like that? >> Yeah, absolutely. You know, every data center decision maker has to ask the question, "Why should I upgrade? Should I stay with legacy hardware, or should I go into the latest and greatest that AMD offers?" And the advantages that the new generation products bring is much better performance at much better energy consumption levels, as well as much better performance per dollar levels. So when you do the upgrade, you are actually getting, you know, savings in terms of performance per dollar, as well as saving in space because you can consolidate your work into fewer servers 'cause you have more cores. As we talked about, you have eight, you know. Typically you might do it on a four or 8-socket server which is really expensive. You can consolidate down to a 2-socket server which is much cheaper. As also for maintenance costs, it's much lower maintenance costs as well. All of this, performance, power, maintenance costs, all of that translate into better TCO, right. So lower all of these, high performance, lower power, and then lower maintenance costs, translate to much better TCO for the end user. And that's an important equation that all customers pay attention to. and you know, we love to work with them and demonstrate those TCO benefits to them. >> Juergen, talk to us more in general about what Dell does from a PowerEdge perspective to make sure that Dell is delivering the best infrastructure possible for SAP. In general, I mean, I assume that this is a big responsibility of yours, is making sure that the stuff runs properly and if not, fixing it. So tell us about that relationship between Dell and a SAP. >> Yeah, for Dell and SAP actually, we're more or less partners with SAP. We have people sitting in SAP's Linux lab, and working in cooperative with SAP, also with Linux partners like SUSE and Red Hat. And we are in constant exchange about what's new in Linux, what's new on our side. And we're all a big family here. >> So when the new architecture comes out and they send it to Juergen, the boys back at the plant as they say, or the factory to use Formula One terms, are are waiting with baited breath to hear what Juergen says about the results. So just kind of kind of recap again, you know, the specific benchmarks that you were running. Tell us about that again. >> Yeah, the specific benchmark is the SAP Sales and Distribution benchmark. And for SAP, this is the benchmark that needs to be tested, and it shows the performance of the whole system. 
So in contrast to benchmarks that only check if the CPU is running, very good, this test the whole system up from the network stack, from the storage stack, the memory, subsystem, and the OS running on the CPUs. >> Okay, which makes perfect sense, since Dell is delivering an integrated system and not just CPU technology. You know, on that subject, Dilip, do you have any insights into performance numbers that you're hearing about with Gen four EPYC for other database environments? >> Yeah, we have actually worked together with Dell on a variety of benchmarks, both on the latest fourth generation EPYC processors as well as the preceding one, the third generation EPYC processors. And published a bunch of world records on database, particularly I would say TPC-H, TPCx-V, as well as TPCx-HS and TPCx-IoT. So a number of TPC related benchmarks that really showcase performance for database and related applications. And we've collaborated very closely with Dell on these benchmarks and published a number of them already, and you know, a number of them are world records as well. So again, we're very excited to collaborate with Dell on the SAP Sales and Distribution benchmark, as well as other benchmarks that are related to database. >> Well, speaking of other benchmarks, here at theCUBE we're going to be talking to actually quite a few people, looking at this fourth generation EPYC launch from a whole bunch of different angles. You two gentlemen have shed light on some really good pieces of that puzzle. I want to thank you for being on theCUBE today. With that, I'd like to thank all of you for joining us here on theCUBE. Stay tuned for continuing CUBE coverage of AMD's fourth generation EPYC launch, and Dell PowerEdge strategy to leverage it.
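The user counts quoted in this segment, roughly 28,000 SAP SD benchmark users on a first-generation EPYC system and almost 150,000 on the fourth generation, make for a simple bit of arithmetic. The sketch below derives the generational multiple and the implied server count for a fixed user population; the population figure is a made-up example, and none of this is an official SAP benchmark result.

```python
"""Quick arithmetic on the SAP SD benchmark user counts quoted in the conversation.

The two user counts are the rounded figures mentioned above; everything derived
from them here is illustration only, not an official SAP benchmark result.
"""

gen1_users = 28_000    # "about 28,000 users" on first-gen EPYC, as quoted
gen4_users = 150_000   # "almost 150,000 users" on fourth-gen EPYC, as quoted

scaling = gen4_users / gen1_users
print(f"Generational scaling: {scaling:.1f}x more concurrent SD users per 2-socket server")

# Implied consolidation: servers needed to support a fixed (hypothetical) user population.
population = 300_000
gen1_needed = -(-population // gen1_users)   # ceiling division
gen4_needed = -(-population // gen4_users)
print(f"Servers for {population:,} users: {gen1_needed} first-gen vs {gen4_needed} fourth-gen")
```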
Kevin Depew | HPE ProLiant Gen11 – Trusted Security by Design
>> Hey everyone, welcome to theCUBE. Lisa Martin here with Kevin Depew, Senior Director of Future Server Architecture at HPE. Kevin, it's great to have you on the program. You're gonna be breaking down everything that's exciting and compelling about Gen 11. How are you today? >> Thanks Lisa, I'm doing great. >> Good, good, good. So let's talk about ProLiant Gen 11, the next generation of compute. I read some great stats on hpe.com. I saw that Gen 11 added 28 new world records while delivering up to 99% higher performance and 43% more energy efficiency than the previous version. That's amazing. Talk to me about Gen 11. What makes this update so compelling? >> Well, you talked about some of the stats regarding the performance and the power efficiency, and those are excellent. We partnered with AMD, and we've got excellent performance on these platforms. We have excellent power efficiency. But the advantages of this platform go beyond that. Today we're gonna talk a lot about cybersecurity, and we've got a lot of security capabilities in these platforms. We've built on top of the security capabilities that we've had generation over generation, and we've got some new exciting capabilities we'll be talking about. So whether it's the performance, whether it's power efficiency, whether it's security, all those capabilities are in this platform. Security is part of our DNA. We put it into the design from the very beginning, and we've partnered with AMD to deliver what we think is a very compelling story. >> The security piece is absolutely critical. We could have, you know, an entire separate conversation on the cybersecurity landscape and the changes there. But one of the things I also noticed in the material on Gen 11 is that HPE says its security is fundamental. What do you mean by that, and what's new that makes it so fundamental? >> Well, by saying it's fundamental, I mean security is a fundamental part of the platform. You need systems that are reliable. You need systems that have excellent performance. You need systems that have very good power efficiency. Those things you talked about before are all very important to have a good server, but security is a part that's absolutely critical as well. So security is one of the fundamental capabilities of the platform. As I mentioned, we built on top of capabilities like our silicon root of trust, which ensures that the firmware stack on these platforms is not compromised. Those continue in this platform and have been expanded on. We have our trusted supply chain, and we've expanded on that as well. We have a lot of security capabilities, our platform certificates, our IDevIDs. There's just a lot of security capabilities that are absolutely fundamental to these being a good solution because, as we said, security is fundamental. It's an absolutely critical part of these platforms. >> Absolutely, for companies in every industry. I wanna talk a little bit about one of the other things that HPE describes Gen 11 as being: uncompromising. I wanted to understand what that means and what the value add is in it for customers. >> Yeah. Well, uncompromising means we can't compromise on security. Security, to what I said before, is fundamental. It can't be compromised. You have to have security be strong on these platforms. So one of the capabilities we're specifically talking about when we talk about uncompromising is a capability called SPDM.
We've extended our silicon root of trust, which is one of our key technologies we've had since our Gen 10 platforms. We've extended that through something called SPDM. We saw a problem in the industry with the ability to authenticate option cards and other devices in the system. Silicon root of trust verified many pieces of firmware in the platform, but one piece that it wasn't verifying was the option cards. We knew we needed to solve this problem, and we knew we couldn't do it a hundred percent on our own because we needed to work with our partners. Whether it's a storage option card, a NIC, or even other devices in the future, we needed to make sure that we could verify that those were what they were meant to be, that they weren't maliciously compromised, and that we could authenticate them. So we worked with industry standards bodies to create the SPDM specification. What that allows us to do is authenticate the option cards in the systems. So that's one of the new capabilities that we've added in these platforms. We've gone beyond securing all of the things that the silicon root of trust secured in the past and extended that to the option cards and their firmware as well. So when we boot up one of these platforms and hand off to the OS and to the customer's software solution, they can rest assured that the platform is not compromised, that a bad guy has not gone in and changed things, and that includes a bad guy with physical access to the platform. So that's why we have uncompromising security in these platforms. >> Outstanding. That sounds like great work that's been done there, and giving customers that peace of mind where security is concerned is table stakes for everybody across the organization. Kevin, you mentioned partners. I know HPE is extending protection to the partner ecosystem. I wanted to get a little bit more info on that from you. >> Yeah, we've worked with our option card vendors and numerous partners across the industry to support SPDM. We were the ones who went to the industry standards bodies and said, we need to solve this problem. And we had agreement from everybody. Everybody agreed this is a problem that had to be solved. But to solve it, you've gotta have a partnership. We can't just do it on our own. There's a lot of things that we at HPE can solve on our own; this is not one of them. To be able to get a method where we could authenticate and trust the option cards in the system, we needed to work with our option card vendors. So that's something that we did. And we also used some capabilities that we worked on with some of our processor vendor partners as well. So working with partners across the industry, we were able to deliver SPDM. We know that an option card, whether it's a storage card or a NIC card, or GPUs in the future, may not be there from day one, but we know that those option cards are what they're intended to be. That matters because you could do an attack where you compromise the option card, you compromise the firmware in that option card, and option cards have the ability to read and write to memory using something called DMA. If those cards are running firmware that's been created by a bad guy, they can do a lot of very costly attacks. There's a lot of statistics that show just how costly cybersecurity attacks are. If option cards have been compromised, you can do some really bad things.
So this is how we can trust those option cards. And we had to partner with those partners in the industry to both define the spec, and both sides had to implement to that specification, so that we could deliver the solution we're delivering. >> HPE has such a strong partner ecosystem. You did a great job of articulating the value in this for customers from a security perspective. I know that you're also doing a lot of collaboration and work with AMD. Talk to me a little bit about that and the value in it for your joint customers. >> Yeah, absolutely. AMD is a longstanding partner. We actually started working with AMD about 20 years ago when we delivered our first AMD Opteron-based platform, the HP ProLiant DL585. So we've got a long engineering relationship with AMD, and we've been making products with AMD since they introduced their EPYC generation processors in 2017. That's when AMD really upped their security game. They created capabilities with their AMD Secure Processor, their Secure Encrypted Virtualization, and their memory encryption technologies. And we work with AMD long before platforms actually release. They come to us with their ideas and their designs, we collaborate with them on things we think are valuable, and when we see areas where they can do things better, we provide feedback. So we really have a partnership to make these processors better. It's not something where we just work with them for a short amount of time and deliver a product. We're working with them for years before those products come out. So that partnership allows both parties to create better platforms, 'cause we understand what they're capable of, and they understand what our needs are as a server provider. And so we help them make their processors better, and they help us make our products better. That extends to all areas, whether it's performance or power efficiency, but very importantly in what we're talking about here, security. They have got an excellent security story with all of their technologies. Again, memory encryption. They've got some exceptional technologies there. Their Secure Encrypted Virtualization to secure virtualized environments, those are all things that they excel at. And we take advantage of those in our designs. We make sure that those work with our servers as part of a solution. >> Sounds like a very deeply technically integrated and longstanding relationship that's really symbiotic for both sides. I wanted to get some information from you on the HPE Server Security Optimized Service. Talk to me about what that is. How does that help HPE help its customers get around some of those supply chain challenges that are persistent? >> Yeah, with our previous generation of products, we announced something called our HPE Trusted Supply Chain, but that was focused on the US market. With the solution for Gen 11, we've expanded that to other markets. It's available from factories other than the ones in the US, and it's available for shipping products to other geographies. So what that really is, is taking the HPE Trusted Supply Chain and expanding it to additional geographies throughout the world, which provides a big benefit for our non-US-based customers. What that is, is we're trying to make sure that the server that we ship out of our factories is indeed exactly what that customer is getting, to prevent any possibility of attack in the supply chain going from our factories to the customer.
And if there is an attack, we can detect it and the customer knows about it, so they won't deploy a system that's been compromised, 'cause there have been high-profile cases of supply chain attacks. We don't want to have that with our customers buying our ProLiant products. So we do things like enable UEFI Secure Boot, which is the ability to authenticate what's called a UEFI option ROM driver on option cards. That's enabled by default; normally that's not enabled by default. We enable the high security mode in our iLO product. We include our intrusion detection technology option, which is an optional feature, but it's standard when you buy one of the boxes with this trusted supply chain capability. So there's a lot of capabilities that get enabled at the factory. We also enable server configuration lock, which allows a customer to detect whether a bad guy modified anything in the platform while it transits from our factory to them. So what it allows a customer to do is get that platform and know that it is indeed what it is intended to be and that it hasn't been attacked, and we've now expanded that to many geographies throughout the world. >> Excellent. So much more coverage across the world, which is so incredibly important as cyber attacks continue to rise year over year, ransomware becomes a household word, and the ransoms get even more expensive, especially considering the cybersecurity skills gap. I'm just wondering, in what ways does everything that you've described with Gen 11 and the HPE partner ecosystem, with AMD for example, help customers get around that security skills gap? >> Well, the key thing there is we care about our customers' security. As I mentioned, security is in our DNA. We consider security in everything we do. Every update to firmware we make, when we do the hardware design, whatever we're doing, we're always considering, what could a bad guy do? What could a bad guy take advantage of? And we attempt to prevent it. And AMD does the same thing. You can look at all the technologies they have in their processors; they're making sure their processor is secure. We're making sure our platform is secure so the customer doesn't have to worry about it. So that's something where the customer can trust us, and they can trust AMD, so they know that that's not the area where they have to expend their bandwidth. They can spend their bandwidth on the security of other parts of the solution, knowing that the platform and the CPU are secure. And beyond that, we create features and capabilities that they can take advantage of. In the case of AMD, a lot of their capabilities are things that the software stack and the OS can take advantage of. We have capabilities on our side that they can take advantage of as well, whether it's server configuration lock or other features. We try to create features that are easy for them to use to make their environments more secure. So we're making it so they can trust the platform, they can trust the processor, and they don't have to worry about that. And then we have features and capabilities that let them solve some of the problems more easily. So we're trying to help them with that skills gap by making certain things easier and making certain things they don't even have to worry about. >> Right.
It sounds like allowing them to be much more strategic about the security skills that they do have. My last question for you, Kevin: is Gen 11 available now? Where can folks go to get their hands on it? >> So Gen 11 was announced earlier this month. The products will actually be shipping before the end of this year, before the end of 2022. And you can go to our website and find out all about our compute security. So all that information's available on our website. >> Awesome. Kevin, it's been a pleasure talking to you, unpacking Gen 11, the value in it, why security is fundamental, the uncompromising nature with which HPE and its partners have really updated the system, and the rest-of-world coverage that you're enabling. We appreciate your insights and your time, Kevin. >> Thank you very much, Lisa. Appreciate it. >> And we want to let you and the audience know: check out hpe.com/info/compute for more info on Gen 11. Thanks for watching.
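To ground Kevin's SPDM discussion from earlier in this segment, here is a minimal conceptual sketch of what device attestation boils down to: the platform checks that an option card's certificate chains to a root it already trusts, then checks that the firmware measurement the card reports is signed by that authenticated device and matches a known-good value. This is an illustration only, written with generic X.509 and RSA primitives from Python's `cryptography` package; the function names, the measurement format, and the allow-list are assumptions, not HPE's iLO implementation or the DMTF SPDM wire protocol.

```python
# Conceptual sketch of SPDM-style option card attestation.
# Hypothetical flow for illustration only; not HPE's implementation and not
# the DMTF SPDM wire protocol. Assumes RSA-signed certificates and a SHA-256
# firmware measurement reported and signed by the device.

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding


def verify_issued_by(cert: x509.Certificate, issuer: x509.Certificate) -> None:
    """Verify one link in the chain: `cert` was signed by `issuer`.

    Raises an exception if the signature does not check out.
    """
    issuer.public_key().verify(
        cert.signature,
        cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        cert.signature_hash_algorithm,
    )


def attest_option_card(root_pem: bytes, device_pem: bytes,
                       measurement: bytes, signature: bytes,
                       known_good_digests: set[str]) -> bool:
    """Trust the card only if all three checks pass:
    1. its device certificate chains to the vendor root we already trust,
    2. the reported firmware measurement is signed by that device, and
    3. the measurement matches a known-good firmware digest.
    """
    root = x509.load_pem_x509_certificate(root_pem)      # root provisioned as trusted
    device = x509.load_pem_x509_certificate(device_pem)  # leaf cert read from the card
    verify_issued_by(device, root)                       # raises if the chain is bad

    # The card signs the measurement with its device key; verify that signature.
    device.public_key().verify(
        signature, measurement, padding.PKCS1v15(), hashes.SHA256()
    )

    # Finally, the measured firmware digest must be one we recognize.
    return measurement.hex() in known_good_digests
```

In the actual protocol this exchange happens over SPDM request and response messages between the platform and the card, and a failed check is what would let the platform flag or refuse an untrusted device before handing off to the OS.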