Evan Touger, Prowess | Prowess Benchmark Testing Results for AMD EPYC Genoa on Dell Servers
(upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch. I've got a special guest with me today from Prowess Consulting. His name is Evan Touger, he's a senior technical writer with Prowess. Evan, welcome. >> Hi, great to be here. Thanks. >> So tell us a little bit about Prowess, what does Prowess do? >> Yeah, we're a consulting firm. We've been around for quite a few years, based in Bellevue, Washington. And we do quite a few projects with folks from Dell to a lot of other companies, and dive in. We have engineers, writers, production folks, so pretty much end-to-end work, doing research testing and writing, and diving into different technical topics. >> So you- in this case what we're going to be talking about is some validation studies that you've done, looking at Dell PowerEdge servers that happened to be integrating in fourth-gen EPYC processors from AMD. What were the specific workloads that you were focused on in this study? >> Yeah, this particular one was honing in on virtualization, right? You know, obviously it's pretty much ubiquitous in the industry, everybody works with virtualization in one way or another. So just getting optimal performance for virtualization was critical, or is critical for most businesses. So we just wanted to look a little deeper into, you know, how do companies evaluate that? What are they going to use to make the determination for virtualization performance as it relates to their workloads? So that led us to this study, where we looked at some benchmarks, and then went a little deeper under the hood to see what led to the results that we saw from those benchmarks. >> So when you say virtualization, does that include virtual desktop infrastructure or are we just talking about virtual machines in general? >> No, it can include both. 
We looked at VMs, thinking in terms of what about database performance when you're working in VMs, all the way through to VDI and companies like healthcare organizations and so forth, where it's common to roll out lots of virtual desktops, and performance is critical there as well. >> Okay, you alluded to, sort of, looking under the covers to see, you know, where these performance results were coming from. I assume what you're referencing is the idea that it's not just all about the CPU when you talk about a system. Am I correct in that assumption and- >> Yeah, absolutely. >> What can you tell us? >> Well, you know, for companies evaluating, there's quite a bit to consider, obviously. So they're looking at not just raw performance but power performance. So that was part of it, and then what makes up that- those factors, right? So certainly CPU is critical to that, but then other things come into play, like the RAID controllers. So we looked a little bit there. And then networking, of course, can be critical for configurations that are relying on good performance on their networks, both in terms of bandwidth and just reducing latency overall. So interconnects as well would be a big part of that. So with, with PCIe gen 5, or 5.0, pick your moniker. You know, in this- in the infrastructure game, we're often playing a game of whack-a-mole, looking for the bottlenecks, you know, chasing the bottlenecks. PCIe 5 opens up a lot of bandwidth for memory and things like RAID controllers and NICs. I mean, is the bottleneck now just our imagination, Evan, have we reached a point where there are no bottlenecks? What did you see when you ran these tests? What, you know, what were you able to stress to a point where it was saturated, if anything? >> Yeah. Well, first of all, these particular tests were ones where we looked at industry benchmarks, and we were examining in particular to see where world records were set.
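An editor's aside, not from the interview: the bandwidth jump behind the whack-a-mole question can be sketched with back-of-envelope math. The per-lane transfer rates (16 GT/s for PCIe 4.0, 32 GT/s for PCIe 5.0) and the 128b/130b encoding come from the PCIe specifications; real-world throughput lands somewhat lower once protocol overhead is counted.

```python
# Back-of-envelope PCIe bandwidth math for the gen 4 -> gen 5 jump discussed above.
# Transfer rates and 128b/130b encoding are per the PCIe specs; actual throughput
# will be lower once packet and protocol overhead are included.

def pcie_bandwidth_gbps(transfer_rate_gt: float, lanes: int = 16) -> float:
    """Usable bandwidth in GB/s for a PCIe link using 128b/130b encoding."""
    bits_per_second = transfer_rate_gt * 1e9 * (128 / 130)  # strip encoding overhead
    return bits_per_second / 8 * lanes / 1e9                # bits -> bytes, whole link

gen4_x16 = pcie_bandwidth_gbps(16.0)  # PCIe 4.0: 16 GT/s per lane
gen5_x16 = pcie_bandwidth_gbps(32.0)  # PCIe 5.0: 32 GT/s per lane

print(f"PCIe 4.0 x16 ~ {gen4_x16:.1f} GB/s")
print(f"PCIe 5.0 x16 ~ {gen5_x16:.1f} GB/s")
```

The raw pipe roughly doubles each generation, which is why the conversation keeps returning to whether the bottleneck simply moves somewhere else.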
And so we uncovered a few specific servers, PowerEdge servers that were pretty key there, or were leading in the category in a lot of areas. So that's what led us to then, okay, well why is that? What's in these servers, and what's responsible for that? So in a lot of cases we saw these results even with, you know, gen 4, PCIe gen 4. So there were situations where clearly there was benefit from faster interconnects, and especially NVMe for RAID, you know, for supporting NVMe and SSDs. But all of that just leads you to the understanding that it can only get better, right? So going from gen 4 to- if you're seeing great results on gen 4, then gen 5 is probably going to, you know, blow that away. >> And in this case, >> It'll be even better. >> In this case, gen 5 you're referencing PCIe? >> PCIe, right. Yeah, that's right. >> (indistinct) >> And then the same thing with EPYC actually holds true, some of the records, we saw records set for both 3rd and 4th gen with EPYC, so the same thing there. Anywhere there's a record set on the 3rd gen, you know, we're really looking forward to going back and seeing over the next few months which of those records fall and are broken by newer generation versions of these servers, once they actually ramp to the newer generation processors. You know, based on what we're seeing for what those processors can do, not only in- >> (indistinct) Go ahead. >> Sorry, just want to say, not only in terms of raw performance, but as I mentioned before, the power performance, 'cause they're very efficient, and that's a really critical consideration, right? I don't think you can overstate that for companies who have to consider expenditures on power and cooling and meeting sustainability goals and so forth. So that was really an important category in terms of what we looked at, was that power performance, not just raw performance.
>> Yeah, I want to get back to that, that's a really good point. We should probably give credit where credit is due. Which Dell PowerEdge servers are we talking about that were tested, and what did those interconnect components look like from a (indistinct) perspective? >> Yeah, so we focused primarily on a couple benchmarks that seemed most important for real world performance results for virtualization: TPCx-V and VMmark 3.x. The TPCx-V, that's where we saw the PowerEdge R7525 and R7515. They both had top scores in different categories there. That benchmark is great for looking at database workloads in particular, right? Running in virtualization settings. And then the VMmark 3.x was critical. We saw good, good results there for the R7525 and the R7515 as well as the R6525 in that one, and that included, sorry, just checking notes to see what- >> Yeah, no, no, no, no, (indistinct) >> Included results for power performance, as I mentioned earlier, that's where we could see that. So we saw this in a range of servers that included both 3rd gen AMD EPYC and newer 4th gen, as I mentioned. The RAID controllers were critical in the TPCx-V. I don't think that came into play in the VMmark test, but they were definitely part of the TPCx-V benchmarks. So that's where the RAID controllers would make a difference, right? And in those tests, I think they're using PERC 11. So, you know, with the newer PERC 12 controllers there, again we'd expect >> (indistinct) >> To see continued, you know, gains in newer benchmarks. That's what we'll be looking for over the next several months. >> Yeah. So I think if I've got my Dell nomenclature down, performance, no no, PowerEdge RAID Controller, is that right? >> Exactly, yeah, there you go. Right? >> With Broadcom, you know, powered by Broadcom. >> That's right. There you go. Yeah. Isn't the Dell naming scheme there PERC? >> Yeah, exactly, exactly. Back to your comment about power.
So you've had a chance to take a pretty deep look at the latest stuff coming out. You're confident that- 'cause some of these servers are going to be more expensive than the previous generation. Now a server is not a server is not a server, but some are awakening to the idea that there might be some sticker shock. You're confident that the bang for your buck, the bang for your kilowatt hour is actually going to be beneficial. We're actually making things better, faster, stronger, cheaper, more energy efficient. We're continuing on that curve? >> That's what I would expect to see, right. I mean, of course I can't speak to pricing without knowing, you know, where the dollars are going to land on the servers. But I would expect to see that because you're getting gains in a couple of ways. I mean, one, if the performance increases to the point where you can run more VMs, right? Get more performance out of your VMs and run more total VMs or more VDIs, then there's obviously a good, you know, payback on your investment there. And then as we were discussing earlier, just the power performance ratio, right? So if you're bringing down your power and cooling costs, if these machines are just more efficient overall, then you should see some gains there as well. So, you know, I think the key is looking at what's the total cost of ownership over, you know, a standard period like three years, and what you're going to get out of it for your number of sessions, the performance for the sessions, and the overall efficiency of the machines. >> So just to be clear, with these Dell PowerEdge servers, you were able to validate world record performance. But if you look at CPU architecture, PCIe bus architecture, memory, you know, the class of memory, the class of RAID controller, the class of NIC, those were not all state of the art in terms of at least what has been recently announced. Correct? >> Right.
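An editor's aside, not from the interview: the total-cost-of-ownership framing Evan sketches (purchase price plus three years of power and cooling, divided across the sessions a box can host) can be put into a few lines. Every dollar figure, wattage, and VM count below is a made-up placeholder, not Dell pricing or measured consumption.

```python
# Toy three-year TCO comparison along the lines described above. All inputs are
# hypothetical placeholders; cooling is modeled crudely as a fraction of power.

HOURS_PER_YEAR = 24 * 365

def three_year_tco(price_usd, avg_watts, usd_per_kwh=0.15, cooling_overhead=0.4):
    """Capex plus three years of electricity, with cooling as a power surcharge."""
    energy_kwh = avg_watts / 1000 * HOURS_PER_YEAR * 3
    return price_usd + energy_kwh * usd_per_kwh * (1 + cooling_overhead)

# Hypothetical: the newer server costs more and draws more power overall,
# but hosts enough additional VMs that the per-VM cost drops.
old_per_vm = three_year_tco(price_usd=20_000, avg_watts=600) / 60    # 60 VMs
new_per_vm = three_year_tco(price_usd=28_000, avg_watts=700) / 100   # 100 VMs

print(f"old per-VM 3yr TCO: ${old_per_vm:,.0f}")
print(f"new per-VM 3yr TCO: ${new_per_vm:,.0f}")
```

Under these assumed numbers the pricier box still wins per VM, which is the "bang for your kilowatt hour" argument in miniature.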
>> Because (indistinct) the PCI 4.0, So to your point- world records with that, you've got next-gen RAID controllers coming out, and NICs coming out. If the motherboard was PCIe 5, with commensurate memory, all of those things are getting better. >> Exactly, right. I mean you're, you're really you're just eliminating bandwidth constraints latency constraints, you know, all of that should be improved. NVMe, you know, just collectively all these things just open the doors, you know, letting more bandwidth through reducing all the latency. Those are, those are all pieces of the puzzle, right? That come together and it's all about finding the weakest link and eliminating it. And I think we're reaching the point where we're removing the biggest constraints from the systems. >> Okay. So I guess is it fair to summarize to say that with this infrastructure that you tested, you were able to set world records. This, during this year, I mean, over the next several months, things are just going to get faster and faster and faster and faster. >> That's what I would anticipate, exactly, right. If they're setting world records with these machines before some of the components are, you know, the absolute latest, it seems to me we're going to just see a continuing trend there, and more and more records should fall. So I'm really looking forward to seeing how that goes, 'cause it's already good and I think the return on investment is pretty good there. So I think it's only going to get better as these roll out. >> So let me ask you a question that's a little bit off topic. >> Okay. >> Kind of, you know, we see these gains, you know, we're all familiar with Moore's Law, we're familiar with, you know, the advancements in memory and bus architecture and everything else. We just covered SuperCompute 2022 in Dallas a couple of weeks ago. And it was fascinating talking to people about advances in AI that will be possible with new architectures. 
You know, most of these supercomputers that are running right now are n minus 1 or n minus 2 infrastructure, you know, they're, they're, they're PCI 3, right. And maybe two generations of processors old, because you don't just throw out a 100,000 CPU super computing environment every 18 months. It doesn't work that way. >> Exactly. >> Do you have an opinion on this question of the qualitative versus quantitative increase in computing moving forward? And, I mean, do you think that this new stuff that you're starting to do tests on is going to power a fundamental shift in computing? Or is it just going to be more consolidation, better power consumption? Do you think there's an inflection point coming? What do you think? >> That's a great question. That's a hard one to answer. I mean, it's probably a little bit of both, 'cause certainly there will be better consolidation, right? But I think that, you know, the systems, it works both ways. It just allows you to do more with less, right? And you can go either direction, you can do what you're doing now on fewer machines, you know, and get better value for it, or reduce your footprint. Or you can go the other way and say, wow, this lets us add more machines into the mix and take our our level of performance from here to here, right? So it just depends on what your focus is. Certainly with, with areas like, you know, HPC and AI and ML, having the ability to expand what you already are capable of by adding more machines that can do more is going to be your main concern. But if you're more like a small to medium sized business and the opportunity to do what you were doing on, on a much smaller footprint and for lower costs, that's really your goal, right? So I think you can use this in either direction and it should, should pay back in a lot of dividends. >> Yeah. Thanks for your thoughts. It's an interesting subject moving forward. 
You know, sometimes it's easy to get lost in the minutiae of the bits and bytes and bobs of all the components we're studying, but they're powering something that's going to affect, effectively, all of humanity as we move forward. So what else do we need to consider when it comes to what you've just validated in the virtualization testing? Anything else, anything we left out? >> I think we hit all the key points, or most of them. It's, you know, really, it's just keeping in mind that it's all about the full system, the components- you know, the processor is obviously a key, but just removing blockages, right? Freeing up, getting rid of latency, improving bandwidth, all these things come into play. And then the power performance, as I said, I know I keep coming back to that, but in a lot of what we work on, we just see that that's a really big concern for businesses, finding efficiency, right? And especially in an age of constrained budgets, that's a big deal. So it's really important to have that power performance ratio. And that's one of the key things we saw that stood out to us in some of these benchmarks, so. >> Well, it's a big deal for me. >> It's all good. >> Yeah, I live in California and I know exactly how much I pay for a kilowatt hour of electricity. >> I bet, yeah. >> My friends in other places don't even know. So I totally understand the power constraint question. >> Yeah, it's not going to get better, so, anything you can do there, right? >> Yeah. Well Evan, this has been great. Thanks for sharing the results that Prowess has come up with, third party validation that, you know, even without the latest and greatest components in all categories, Dell PowerEdge servers are able to set world records. And I anticipate that those world records will be broken in 2023, and I expect that Prowess will be part of that process. So thanks for that.
For the rest of us- >> (indistinct) >> Here at theCUBE, I want to thank you for joining us. Stay tuned for continuing coverage of AMD's fourth generation EPYC launch, for myself and for Evan Touger. Thanks so much for joining us. (upbeat music)
Anahad Dhillon, Dell EMC | CUBE Conversation, October 2021
(upbeat music) >> Welcome everybody to this CUBE Conversation. My name is Dave Vellante, and we're here to talk about Object storage and the momentum in the space, and what Dell Technologies is doing to compete in this market. I'm joined today by Anahad Dhillon, who's the Product Manager for Dell EMC's ECS and new ObjectScale products. Anahad, welcome to theCUBE, good to see you. >> Thank you so much Dave. We appreciate you having me and Dell (indistinct), thanks. >> It's always a pleasure to have you guys on, we dig into the products, talk about the trends, talk about what customers are doing. Anahad, before the Cloud, Object was this kind of niche, as we've seen. And you had simple get, put, it was a low cost bit bucket essentially, but that's changing. Tell us some of the trends in the Object storage market that you're observing, and how Dell Technology sees this space evolving in the future please. >> Absolutely, and you hit it right on, right? Historically, Object storage was considered this cheap and deep place, right? Customers would use this for their backup data, archive data, so cheap and deep. No longer the case, right? As you pointed out, the Object space is now maturing. It's a mature market, and we're seeing out there customers using Object for their primary data, for their business critical data. So we're seeing big data analytics type use cases. So it's no longer just cheap and deep, now your primary workloads and business critical workloads are being put on object storage. >> Yeah, I mean. >> And. >> Go ahead please. >> Yeah, I was going to say, it's not only the extent of the workloads being put on, we'll also see changes in how Object storage is being deployed. So now we're seeing a tighter integration with new deployment models where Object storage, or any storage in general, is being deployed. Our applications are being (indistinct), right?
So customers now want Object storage, or storage in general, being orchestrated like they would orchestrate their customer applications. Those are the few key trends that we're seeing out there today. >> So I want to dig into this a little bit with you 'cause you're right. It used to be, it was cheap and deep, it was slow, and it sometimes required application changes to accommodate. So you mentioned a few of the trends, Devs, everybody's trying to inject AI into their applications, the world has gone software defined. What are you doing to respond to all these changes in these trends? >> Absolutely, yeah. So we've been making tweaks to our object offering, the ECS, Elastic Cloud Storage, for a while. We started off tweaking the software itself, optimizing it for performance use cases. In 2020, early 2020, we actually introduced SSDs to our nodes. So customers were able to go in, leverage these SSDs for metadata caching, improving their performance quite a bit. We use these SSDs for metadata caching, so the impact on the performance improvement was focused on smaller reads and writes. What we did now is a game changer. We actually went ahead later in 2020, introduced an all flash appliance. So now there's the EXF900, an ECS all-flash appliance, and it's all NVMe based. So it's NVMe SSDs, and we leveraged NVMe over Fabrics for the back end. So we did it the right way. We didn't just go in and qualify an SSD based server and run object storage on it, we invested time and effort into supporting NVMe over Fabrics. So we could give you that performance at scale, right? Object is known for scale. We're not talking 10, 12 nodes here, we're talking hundreds of nodes. And to provide you that kind of performance, we went ahead. Now you've got an NVMe based offering, EXF900, that you can deploy with confidence, run your primary workloads that require high throughput and low latency. We also, come November 5th, are releasing our next gen SDS offering, right?
This takes the proven ECS code that our customers are familiar with, that provides the resiliency and the security that you guys expect from Dell. We're re-platforming it to run on Kubernetes and be orchestrated by Kubernetes. This is what we announced at VMworld 2021. If you guys haven't seen that, it's going to go on-demand for VMworld 2021, search for ObjectScale and you get a quick demo on that. With ObjectScale now, customers can quickly deploy enterprise grade Object storage on their existing environment, their existing infrastructure, infrastructure like VMware and infrastructure like OpenShift. I'll give you an example. So if you're a VMware shop and you've got vSphere clusters in your data center, with ObjectScale, you'll be able to quickly deploy your enterprise grade Object offering from within vSphere. Or if you are an OpenShift customer, right? If you've got OpenShift deployed in your data center and you're a Red Hat shop, you could easily go in, use that same infrastructure that your applications are running on, deploy ObjectScale on top of your OpenShift infrastructure, and make Object storage available to your customers. So you've got the enterprise grade ECS appliance for your high throughput, low latency use cases at scale, and you've got this software defined ObjectScale, which can deploy on your existing infrastructure, whether that's VMware or Red Hat OpenShift. >> Okay, I got a lot of follow up questions, but let me just go back to one of the earlier things you said. So Object was kind of cheap, deep and slow, but scaled. And so, your step one was metadata caching. Now of course, my understanding is with Object, the metadata and the data live within the object. So, maybe you separated that and made it high performance, but now you've taken the next step to bring in NVMe infrastructure to really blow away all the old sort of SCSI latency and all that stuff.
Maybe you can just educate us a little bit on that if you don't mind. >> Yeah, absolutely. Yeah, that was exactly the stepped approach that we took. Even though metadata is tightly integrated in the Object world, in order to read the actual data, you still got to get to the metadata first, right? So we would cache the metadata into SSDs, reducing that lookup that happens for that metadata, right? And that's why it gave you the performance benefit. But because it was just tied to metadata look-ups, the performance for larger objects stayed the same, because the actual data read was still happening from the hard drives, right? With the new EXF900, which is all NVMe based, we've optimized our ECS Object code leveraging NVMe: data sitting on NVMe drives, the internode connectivity, the communication is NVMe over Fabrics, so it's through and through NVMe. Now we're talking milliseconds of latency and thousands and thousands of transactions per second. >> Got it, okay. So this is really an inflection point for Objects. So these are pretty interesting times at Dell, you got the cloud expanding on prem, your company is building cloud-like capabilities to connect on-prem to the cloud, across clouds, you're going out to the edge. As it pertains to Object storage though, it sounds like you're taking a sort of a two product approach to your strategy. Why is that, and can you talk about the go-to market strategy in that regard? >> Absolutely, and yeah, good observation there. So yes and no, so we continue to invest in ECS. ECS continues to stay the product of choice when customers want that traditional appliance deployment model. This is a single hand to shake model, where everything from your hardware to your object solution software is provided by Dell. ECS continues to be the product where customers are looking for that high performance, fine-tuned appliance use case. ObjectScale comes into play when the needs are software defined.
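An editor's aside, not from the interview: the stepped approach Anahad describes can be illustrated with a toy latency model. Caching metadata on SSD removes one slow lookup, so small reads speed up a lot while large reads, still streaming off hard drives, barely change. The latency and throughput numbers are illustrative assumptions, not ECS measurements.

```python
# Toy model of SSD metadata caching on a hard-drive data tier. Illustrative
# numbers only: the point is the shape of the result, not the absolute values.

HDD_SEEK_MS = 8.0      # one access (metadata lookup or data seek) on spinning disk
SSD_LOOKUP_MS = 0.1    # metadata hit in the SSD cache
HDD_MB_PER_MS = 0.2    # sequential throughput of the HDD data tier

def read_latency_ms(object_mb: float, metadata_cached: bool) -> float:
    lookup = SSD_LOOKUP_MS if metadata_cached else HDD_SEEK_MS
    data_read = HDD_SEEK_MS + object_mb / HDD_MB_PER_MS  # data still lives on HDD
    return lookup + data_read

def speedup(object_mb: float) -> float:
    return read_latency_ms(object_mb, False) / read_latency_ms(object_mb, True)

print(f"small object (10 KB) speedup: {speedup(0.01):.2f}x")
print(f"large object (100 MB) speedup: {speedup(100.0):.2f}x")
```

Small objects nearly double in speed while 100 MB reads barely move, which matches the account above: the all-NVMe EXF900 was the step that moved the data path itself off the slow tier.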
When you need to deploy the storage solution on top of the same infrastructure that your applications run on, right? So yes, in the short-term, in the interim, it's a two product approach, with both products taking a very distinct use case. However, in the long-term, we're merging the two code streams. So in the long-term, if you're an ECS customer and you're running ECS, you will have an in-place data upgrade to ObjectScale. So we're not talking about forklift upgrades, we're not talking about adding additional servers and doing a data migration, it's a code upgrade. And I'll give you an example, today on ECS, we're at code version 3.6, right? So if you're a customer running ECS, ECS 3.X, in the future- and we've got a roadmap where 3.7 is coming out later this year. So from 3.X, customers will upgrade the code in place. Let's call it 4.0, right? And that brings them up to ObjectScale. So there's no nodes left behind, there's an in-place code upgrade from ECS to ObjectScale, merging the two code streams. In the long-term, single code; short-term, two products, each solving a very distinct use case. >> Okay, let me follow up, put on my customer hat. And I'm hearing that you can tell us with confidence that irrespective of whether a customer invested in ECS or ObjectScale, you're not going to put me into a dead-end. Every customer is going to have a path forward as long as their ECS code is up-to-date, is that correct? >> Absolutely, exactly, and very well put, yes. No nodes left behind, investment protection, whether you've got ECS today, or you want to invest into ECS or ObjectScale in the future, correct. >> Talk a little bit more about ObjectScale. I'm interested in kind of what's new there, what's special about this product, is there unique functionality that you're adding to the product? What differentiates it from other Object stores? >> Absolutely, my pleasure.
Yeah, so I'll start by reiterating that ObjectScale is built on that proven ECS code, right? It's the enterprise-grade reliability and security that our customers expect from Dell EMC, right? Now we're re-platforming ECS to allow ObjectScale to be Kubernetes native, right? So we're leveraging that microservices-based architecture, leveraging the native orchestration capabilities of Kubernetes, things like resource isolation or seamless (indistinct), I'm sorry, load balancing and things like that, right? So the in-built native capabilities of Kubernetes. ObjectScale is also built with scale in mind, right? So it delivers limitless scale. So you could start with terabytes and then go up to petabytes and beyond. So unlike other file system-based Object offerings, the ObjectScale software doesn't have a limit on your number of object stores, number of buckets, number of objects you store; it's limitless. As long as you can provide the hardware resources under the covers, the software itself is limitless. It allows our customers to start small, so you could start as small as three nodes and grow your environment as your business grows, right? Hundreds of nodes. With ObjectScale, you can deploy workloads at public cloud-like scale, but with the reliability and control of a private cloud, right? So it's in your own data center. And ObjectScale is S3 compliant, right? While delivering enterprise features like global replication and native multi-tenancy, fueling everything from dev/test sandboxes to globally distributed data, right? So you've got in-built ObjectScale replication that allows you to place your data anywhere you've got ObjectScale (indistinct). From edge to core to data center. >> Okay, so it fits into the Kubernetes world. I call it Kubernetes compatible. The key there is automation, because that's the whole point of containers, right?
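An editor's aside, not from the interview: "S3 compliant" means applications talk to ObjectScale through the S3 API's bucket/key model. The sketch below is a toy in-memory stand-in showing those semantics (flat keys, no real directories, list by prefix); the method names mirror S3 verbs, but this is not the ObjectScale or AWS SDK API.

```python
# Toy in-memory illustration of S3-style bucket/key semantics. Keys are flat
# strings; "folders" are just shared key prefixes surfaced by prefix listing.

class ToyObjectStore:
    def __init__(self):
        self.buckets = {}  # bucket name -> {key: bytes}

    def create_bucket(self, bucket: str) -> None:
        self.buckets.setdefault(bucket, {})

    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        self.buckets[bucket][key] = body

    def get_object(self, bucket: str, key: str) -> bytes:
        return self.buckets[bucket][key]

    def list_objects(self, bucket: str, prefix: str = ""):
        # S3 listings are lexicographically ordered and filtered by prefix.
        return sorted(k for k in self.buckets[bucket] if k.startswith(prefix))

store = ToyObjectStore()
store.create_bucket("vdi-profiles")
store.put_object("vdi-profiles", "users/alice/profile.dat", b"...")
store.put_object("vdi-profiles", "users/bob/profile.dat", b"...")
print(store.list_objects("vdi-profiles", prefix="users/alice/"))
```

Because the interface is just buckets, keys, and prefix listing, any S3-speaking application can target an S3-compliant store by swapping the endpoint, which is the practical payoff of the compliance claim above.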
It allows you to deploy as many apps as you need to, wherever you need to, in as many instances, and then do rolling updates, have the same security, same API, all that level of consistency. So that's really important. That's how modern apps are being developed. We're in a new age here. It's no longer about the machines, it's about infrastructure as code. So once ObjectScale is generally available, which I think is soon, I think it's this year, what should customers do, what's their next step? >> Absolutely, yeah, it's coming out November 2nd. Reach out to your Dell representatives, right? Get an in-depth demo on ObjectScale. Better yet, get a POC, right? Get a proof of concept, have it set up in your data center and play with it. You can also download the free, full featured community edition. We're going to have a community edition that's free up to 30 terabytes of usage, and it's full featured. Download that, play with it. If you like it, you can upgrade that free community edition to a licensed, paid version. >> And you said that's full featured. You're not neutering the community edition? >> Exactly, absolutely, it's full featured. >> Nice, that's a great strategy. >> We're confident, we're confident in what we're delivering, and we want you guys to play with it without having your money tied up. >> Nice, I mean, that's the model today. Gone are the days where you got to get new customers in a headlock. They want to try before they buy. So that's a great little feature. Anahad, thanks so much for joining us on theCUBE. Sounds like it's been a very busy year and it's going to continue to be so. Looking forward to seeing what's coming out with ECS and ObjectScale and seeing those two worlds come together, thank you. >> Yeah, absolutely, it was a pleasure. Thank you so much. >> All right, and thank you for watching this CUBE Conversation. This is Dave Vellante, we'll see you next time. (upbeat music)
SUMMARY :
and the momentum in the space. We appreciate you having me to have you guys on, Absolutely, and you of the workload being put in, So you mentioned a few So we could give you that to one of the earlier things you said. And that's why it gave you Why is that, and can you talk about So in the long-term, if And I'm hearing that you or ObjectScale in the future, correct. that you're adding to the product? that allows you to place your data because that's the whole Reach out to your Dell And you said that's full featured. it's full featured. and we want you guys to play with it Gone are the days where you Thank you so much. we'll see you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
November 5th | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Anahad Dhillon | PERSON | 0.99+ |
October 2021 | DATE | 0.99+ |
November 2nd | DATE | 0.99+ |
2020 | DATE | 0.99+ |
two products | QUANTITY | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Anahad | PERSON | 0.99+ |
ObjectScale | TITLE | 0.99+ |
VMware 2021 | TITLE | 0.99+ |
today | DATE | 0.99+ |
thousands | QUANTITY | 0.99+ |
vSphere | TITLE | 0.99+ |
both products | QUANTITY | 0.99+ |
two product | QUANTITY | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
early 2020 | DATE | 0.98+ |
OpenShift | TITLE | 0.98+ |
step one | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
hundreds of nodes | QUANTITY | 0.98+ |
two code streams | QUANTITY | 0.98+ |
ECS | TITLE | 0.97+ |
12 nodes | QUANTITY | 0.97+ |
single code | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
10 | QUANTITY | 0.96+ |
4.0 | OTHER | 0.96+ |
Red Hat OpenShift | TITLE | 0.95+ |
3.6 | OTHER | 0.95+ |
Dell Technology | ORGANIZATION | 0.94+ |
S3 | TITLE | 0.92+ |
Hundreds of notes | QUANTITY | 0.92+ |
two worlds | QUANTITY | 0.92+ |
EXF900 | COMMERCIAL_ITEM | 0.92+ |
up to 30 terabytes | QUANTITY | 0.91+ |
ObjectScale | ORGANIZATION | 0.91+ |
ECS 3.X | TITLE | 0.91+ |
petabytes | QUANTITY | 0.89+ |
VMware | TITLE | 0.89+ |
first | QUANTITY | 0.87+ |
3.X | TITLE | 0.87+ |
Dev Test Sandbox | TITLE | 0.87+ |
ECS | ORGANIZATION | 0.86+ |
Red Hat | TITLE | 0.84+ |
Anahad Dhillon, Dell EMC | CUBEConversation
(upbeat music) >> Welcome everybody to this CUBE Conversation. My name is Dave Vellante, and we're here to talk about Object storage, the momentum in the space, and what Dell Technologies is doing to compete in this market. I'm joined today by Anahad Dhillon, who's the Product Manager for Dell EMC's ECS and new ObjectScale products. Anahad, welcome to theCUBE, good to see you. >> Thank you so much, Dave. We appreciate you having me and Dell (indistinct), thanks. >> It's always a pleasure to have you guys on; we dig into the products, talk about the trends, talk about what customers are doing. Anahad, before the Cloud, Object was this kind of niche, as we've seen. You had simple get, put; it was a low-cost bit bucket, essentially, but that's changing. Tell us some of the trends in the Object storage market that you're observing, and how Dell Technologies sees this space evolving in the future, please. >> Absolutely, and you hit it right on, right? Historically, Object storage was considered this cheap and deep place, right? Customers would use it for their backup data, archive data, so cheap and deep. No longer the case, right? As you pointed out, the Object space is now maturing. It's a mature market, and we're seeing customers out there using Object for their primary data, for their business-critical data. So we're seeing big data analytics use cases. It's no longer just cheap and deep; now you've got primary workloads and business-critical workloads being put on Object storage. >> Yeah, I mean. >> And. >> Go ahead please. >> Yeah, I was going to say, it's not only the extent of the workloads being put in, we're also seeing changes in how Object storage is being deployed. So now we're seeing tighter integration with new deployment models, where Object storage, or any storage in general, is being deployed. Our applications are being (indistinct), right?
So customers now want Object storage, or storage in general, orchestrated the same way they orchestrate their applications. Those are a few of the key trends we're seeing out there today. >> So I want to dig into this a little bit with you, 'cause you're right. It used to be, it was cheap and deep, it was slow, and it sometimes required application changes to accommodate. So you mentioned a few of the trends: Devs, everybody's trying to inject AI into their applications, the world has gone software defined. What are you doing to respond to all these changes and these trends? >> Absolutely, yeah. So we've been making tweaks to our Object offering, ECS, Elastic Cloud Storage, for a while. We started off tweaking the software itself, optimizing it for performance use cases. In early 2020, we actually introduced SSDs to our nodes. So customers were able to go in and leverage these SSDs for metadata caching, improving their performance quite a bit. Because we used those SSDs for metadata caching, the performance improvement was focused on smaller reads and writes. What we did now is a game changer. We actually went ahead later in 2020 and introduced an all-flash appliance. So now the EXF900 is an ECS all-flash appliance, and it's all NVMe based. It's NVMe SSDs, and we leveraged NVMe over Fabrics for the back end. So we did it the right way. We didn't just go in, qualify an SSD-based server, and run Object storage on it; we invested time and effort into supporting NVMe over Fabrics so we could give you that performance at scale, right? Object is known for scale. We're not talking 10, 12 nodes here, we're talking hundreds of nodes. And to provide you that kind of performance, we went ahead. Now you've got an NVMe-based offering, the EXF900, that you can deploy with confidence and run your primary workloads that require high throughput and low latency. We're also, come November 5th, releasing our next-gen SDS offering, right?
This takes the proven ECS code that our customers are familiar with, which provides the resiliency and the security you expect from Dell. We're re-platforming it to run on Kubernetes and be orchestrated by Kubernetes. This is what we announced at VMworld 2021. If you haven't seen that, it's going on demand for VMworld 2021; search for ObjectScale and you'll get a quick demo. With ObjectScale, customers can quickly deploy enterprise-grade Object storage on their existing environment, their existing IT infrastructure, on infrastructure like VMware and OpenShift. I'll give you an example. Say you're a VMware shop and you've got vSphere clusters in your data center. With ObjectScale, you'll be able to quickly deploy your enterprise-grade Object offering from within vSphere. Or if you're an OpenShift customer, right? If you've got OpenShift deployed in your data center and you're a Red Hat shop, you could easily go in, use that same infrastructure your applications are running on, deploy ObjectScale on top of your OpenShift infrastructure, and make Object storage available to your customers. So you've got the enterprise-grade ECS appliance for your high-throughput, low-latency use cases at scale, and you've got this software-defined ObjectScale, which you can deploy on your existing infrastructure, whether that's VMware or Red Hat OpenShift. >> Okay, I've got a lot of follow-up questions, but let me just go back to one of the earlier things you said. So Object was kind of cheap, deep and slow, but it scaled. And so your step one was metadata caching. Now of course, my understanding is that with Object, the metadata lives with the data, within the object. So maybe you separated that and made it high performance, but now you've taken the next step and brought in NVMe infrastructure to really blow away all the old SCSI latency and all that stuff.
Maybe you can just educate us a little bit on that, if you don't mind. >> Yeah, absolutely. That was exactly the stepped approach we took. Even though metadata is tightly integrated in the Object world, in order to read the actual data you still have to get to the metadata first, right? So we would cache the metadata in SSDs, reducing that lookup that happens for the metadata, and that's what gave you the performance benefit. But because it was just tied to metadata lookups, the performance for larger objects stayed the same, because the actual data read was still happening from the hard drives, right? With the new EXF900, which is all NVMe based, we've optimized the ECS Object code for NVMe: the data sits on NVMe drives, and the interconnect, the communication, is NVMe over Fabrics, so it's NVMe through and through. Now we're talking milliseconds of latency and thousands and thousands of transactions per second. >> Got it, okay. So this is really an inflection point for Object. So these are pretty interesting times at Dell. You've got the cloud expanding on prem, your company is building cloud-like capabilities to connect on-prem to the cloud and across clouds, you're going out to the edge. As it pertains to Object storage, though, it sounds like you're taking a two-product approach to your strategy. Why is that, and can you talk about the go-to-market strategy in that regard? >> Absolutely, and yeah, good observation there. So yes and no. We continue to invest in ECS. ECS continues to be the product of choice when the customer wants that traditional appliance deployment model. That's a single-hand-to-shake model, where everything from your hardware to your software, the Object solution software, is all provided by Dell. ECS continues to be the product for customers looking for that high-performance, fine-tuned appliance use case. ObjectScale comes into play when the needs are software defined.
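As an aside, the metadata-caching step Anahad describes, fast media in front of the name-to-location lookup while the data read still goes to the slower tier, can be sketched in a few lines. This is a toy illustration only; the class and field names are invented and bear no relation to actual ECS internals:

```python
class ToyObjectStore:
    """Toy model of SSD metadata caching: the name-to-location lookup is
    cached on fast media, while the data read still hits the slow tier."""

    def __init__(self):
        self.metadata_slow = {}   # object name -> data location (on HDD)
        self.data_slow = {}       # data location -> payload (on HDD)
        self.metadata_cache = {}  # SSD-like cache for metadata only

    def put(self, name, payload):
        location = f"chunk-{len(self.data_slow)}"
        self.data_slow[location] = payload
        self.metadata_slow[name] = location
        self.metadata_cache[name] = location  # warm the cache on write

    def get(self, name):
        # Fast path: resolve name -> location from the cache when possible.
        location = self.metadata_cache.get(name)
        if location is None:
            location = self.metadata_slow[name]   # slow metadata lookup
            self.metadata_cache[name] = location  # cache for next time
        # The data read itself still comes from the slow tier, which is why
        # large-object performance was unchanged until the all-NVMe appliance.
        return self.data_slow[location]
```

This mirrors the point made in the interview: small reads and writes benefit most because the metadata round trip dominates their latency, while for large objects the payload transfer dominates, which is the gap the all-flash appliance closes.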
When you need to deploy the storage solution on top of the same infrastructure that your applications run on, right? So yes, in the short term, in the interim, it's a two-product approach, with each product taking a very distinct use case. However, in the long term, we're merging the two code streams. So in the long term, if you're an ECS customer running ECS, you will have an in-place data upgrade to ObjectScale. We're not talking about forklift upgrades, we're not talking about adding additional servers and doing a data migration; it's a code upgrade. I'll give you an example. Today ECS is at code version 3.6, right? And we've got a roadmap where 3.7 is coming out later this year. So if you're a customer running ECS 3.X, in the future you'll upgrade the code in place. Let's call it 4.0, right? And that brings you up to ObjectScale. So there are no nodes left behind; there's an in-place code upgrade from ECS to ObjectScale, merging the two code streams. In the long term, a single code stream; in the short term, two products, each solving a very distinct use case. >> Okay, let me follow up, put on my customer hat. I'm hearing that you can tell us with confidence that irrespective of whether a customer invested in ECS or ObjectScale, you're not going to put me into a dead end. Every customer is going to have a path forward as long as their ECS code is up to date, is that correct? >> Absolutely, exactly, and very well put, yes. No nodes left behind, investment protection, whether you've got ECS today or you want to invest in ECS or ObjectScale in the future, correct. >> Talk a little bit more about ObjectScale. I'm interested in what's new there, what's special about this product. Is there unique functionality that you're adding to the product? What differentiates it from other Object stores? >> Absolutely, my pleasure.
Yeah, so I'll start by reiterating that ObjectScale is built on that proven ECS code, right? It's the enterprise-grade reliability and security that our customers expect from Dell EMC. Now we're re-platforming ECS to allow ObjectScale to be Kubernetes native, right? So we're leveraging that microservices-based architecture, leveraging the native orchestration capabilities of Kubernetes, things like resource isolation or seamless (indistinct), I'm sorry, load balancing and things like that, right? The built-in native capabilities of Kubernetes. ObjectScale is also built with scale in mind, so it delivers limitless scale. You could start with terabytes and then go up to petabytes and beyond. Unlike other file-system-based Object offerings, ObjectScale software won't have a limit on your number of object stores, number of buckets, or number of objects you store; it's limitless. As long as you can provide the hardware resources under the covers, the software itself is limitless. It allows our customers to start small, as small as three nodes, and grow the environment as the business grows, right? Hundreds of nodes. With ObjectScale, you can deploy workloads at public-cloud-like scale, but with the reliability and control of a private cloud, right? It's in your own data center. And ObjectScale is S3 compliant, right? So while delivering enterprise features like global replication and native multi-tenancy, it's fueling everything from dev/test sandboxes to globally distributed data. You've got built-in ObjectScale replication that allows you to place your data anywhere you've got ObjectScale (indistinct), from edge to core to data center. >> Okay, so it fits into the Kubernetes world. I call it Kubernetes compatible. The key there is automation, because that's the whole point of containers, right?
It allows you to deploy as many apps as you need to, wherever you need to, in as many instances, and then do rolling updates, have the same security, same APIs, all that level of consistency. So that's really important. That's how modern apps are being developed. We're in a new age here. It's no longer about the machines, it's about infrastructure as code. So once ObjectScale is generally available, which I think is soon, I think it's this year, what should customers do, what's their next step? >> Absolutely, yeah, it's coming out November 2nd. Reach out to your Dell representatives, right? Get an in-depth demo on ObjectScale. Better yet, get a POC, right? Get a proof of concept, have it set up in your data center, and play with it. You can also download the free, full-featured community edition. We're going to have a community edition that's free up to 30 terabytes of usage, and it's full featured. Download that, play with it. If you like it, you can upgrade that free community edition to a licensed, paid version. >> And you said that's full featured. You're not neutering the community edition? >> Exactly, absolutely, it's full featured. >> Nice, that's a great strategy. >> We're confident, we're confident in what we're delivering, and we want you guys to play with it without having your money tied up. >> Nice, I mean, that's the model today. Gone are the days where you've got to get new customers in a headlock; they want to try before they buy. So that's a great little feature. Anahad, thanks so much for joining us on theCUBE. Sounds like it's been a very busy year, and it's going to continue to be so. Look forward to seeing what's coming out with ECS and ObjectScale and seeing those two worlds come together, thank you. >> Yeah, absolutely, it was a pleasure. Thank you so much. >> All right, and thank you for watching this CUBE Conversation. This is Dave Vellante, we'll see you next time. (upbeat music)
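Dave's closing point about automation and infrastructure as code boils down to declaring a desired state and letting the orchestrator converge on it. A minimal sketch of that reconcile idea follows; this is a generic illustration of the Kubernetes pattern, not ObjectScale code, and all names are invented:

```python
def reconcile(desired, actual):
    """Compute the actions needed to move `actual` (app -> running replicas)
    toward `desired` (app -> declared replicas), Kubernetes-style."""
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if want > have:
            actions.append(("scale-up", app, want - have))
        elif want < have:
            actions.append(("scale-down", app, have - want))
    for app, have in actual.items():
        # Anything running that is no longer declared gets removed.
        if app not in desired:
            actions.append(("delete", app, have))
    return actions

# A controller runs this in a loop, so a rolling update is just a change to
# `desired` that the loop converges on, instance by instance.
```

That reconcile loop is why "Kubernetes native" matters for a storage product: the same declarative mechanics that roll out the apps can roll out and scale the object store underneath them.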
Breaking Analysis: Pat Gelsinger Must Channel Andy Grove and Recreate Intel
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Much of the discussion around Intel's current challenges is focused on manufacturing issues and its ongoing market share skirmish with AMD. Of course, that's very understandable. But the core issue Intel faces is that it has lost the volume game forever. And in silicon, volume is king. As such, incoming CEO Pat Gelsinger faces some difficult decisions. I mean, on the one hand he could take some logical steps to shore up the company's execution, maybe outsource a portion of its manufacturing, make some incremental changes that would unquestionably please Wall Street and probably drive shareholder value when combined with the usual stock buybacks and dividends. On the other hand, Gelsinger could make much more dramatic moves, shedding Intel's vertically integrated heritage and transforming it into a leading designer of chips for the emerging multi-trillion dollar markets that are highly fragmented and generally referred to as the edge. We believe Intel has no choice. It must create a deep partnership, in our view, with a semiconductor manufacturer with aspirations to manufacture on US soil, and focus Intel's resources on design. Hello, everyone, and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis we'll put forth our prognosis for what Intel's future looks like and lay out what we think the company needs to do, not only to maintain its relevance but to regain the position it once held as perhaps the most revered company in tech. Let's start by looking at some of the fundamental factors that we've been tracking and that have shaped and are shaping Intel and our thinking around Intel today. First, it's really important to point out that new CEO Gelsinger is walking into a really difficult situation. Intel's ascendancy and dominance were created by PC volumes.
And by its development of an ecosystem that the company created around the x86 instruction set. In semiconductors, volume is everything. The player with the highest volumes has the lowest manufacturing costs, and the math around learning curves is very clear and compelling. It's based on Wright's law, named after Theodore P. Wright, an aeronautical engineer who discovered that for every cumulative doubling of units manufactured, costs fall by a constant percentage. Now, in semiconductor wafer manufacturing, that decline is roughly 22%. And when you consider the economics of manufacturing a next-generation technology, for example going from ten nanometers to seven nanometers, this becomes huge. The cost of making seven-nanometer tech, for example, is much higher relative to 10 nanometers, but if you can fit more circuits on a chip, your wafer costs can drop by 30% or even more. This learning curve benefit is why volume is so important. If the time it takes to double volume is elongated, then the learning curve benefit gets elongated as well, and you become less competitive from a cost standpoint. And that's exactly what is happening to Intel. You see, x86 PC volumes peaked in 2011, and that marked the beginning of the end of Intel's dominance from a manufacturing and cost standpoint. Ironically, HDD, hard disk drive, volumes peaked around the same time, and you're seeing a similar fundamental shift in that market relative to flash. Now, because Intel has a vertically integrated model, its designers are limited by the constraints of the manufacturing process. What used to be Intel's ace in the hole, its process manufacturing, has become a hindrance, frustrating Intel's chip designers and really ceding advantage to a number of competitors including AMD, ARM and Nvidia.
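Wright's law as described above reduces to a simple formula: after n cumulative doublings of volume, unit cost is the starting cost times (1 − learning rate)ⁿ. Here's a quick sketch using the roughly 22% wafer-manufacturing figure quoted in this episode; the specific cost and volume numbers in the example are illustrative only:

```python
import math


def wright_unit_cost(first_unit_cost, cumulative_units, learning_rate=0.22):
    """Wright's law: every cumulative doubling of units manufactured cuts
    unit cost by `learning_rate` (roughly 22% for semiconductor wafers)."""
    doublings = math.log2(cumulative_units)
    return first_unit_cost * (1.0 - learning_rate) ** doublings


# Eight times the cumulative volume is three doublings, so unit cost falls
# to 0.78 cubed, roughly 47% of where it started.
print(round(wright_unit_cost(100.0, 8), 2))
```

This is exactly why a slower doubling of volume is so punishing: a competitor who doubles cumulative volume twice while you double it once ends up a full learning-curve step ahead on cost.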
Now, during this time we've seen high-profile innovators adopting alternative processors: companies like Apple, which chose its own ARM-based design for the M1. Tesla is a fascinating case study where Intel was really not in the running. AWS, probably Intel's largest customer, is developing its own chips. It threw Intel a little bone at the recent re:Invent, announcing its use of Intel's Habana chips in practically the same sentence in which it talked about developing a similar chip that would provide even better price performance. And just last month it was reported that Microsoft, Intel's partner in the PC era, was developing its own ARM-based chips for the Surface PCs and for its servers. Intel's zenith was marked by those peak PC volumes that we talked about. Now, to stress this point, this chart shows x86 PC volumes over time. That red highlighted area shows the peak years. Volumes actually grew in 2020, in part due to COVID, which is not really reflected in this chart, but the volume game was lost for Intel. It has been widely reported that in 2005, as Apple was replacing IBM microprocessors with Intel processors for the Mac, Steve Jobs approached Intel and asked it to develop the chip for the iPhone. Intel passed, and the die was cast. Now, to the earlier point, PC markets are actually quite good if you're Dell. Here's some ETR data that shows Dell's laptop net score, a measure of spending momentum, for 2020 and into 2021. Dell's client business has been very good and profitable, and frankly, it's been a pleasant surprise. You know, PCs are doing well, and as you can see in this chart, Dell has momentum. There were approximately 275 million, maybe as high as 300 million, PC units shipped worldwide in 2020, up double digits by some estimates. However, ARM chip units shipped exceeded 20 billion worldwide last year. And it's not apples to apples; we're comparing x86-based PCs to ARM chips.
So this excludes x86 servers, but the wafer volume for ARM dwarfs that of x86, probably by a factor of 10 times. Back to Wright's law: how long is it going to take Intel to double wafer volumes? It's not going to happen. And trust me, Pat Gelsinger understands this dynamic probably better than anyone in the world, and certainly better than I do. And as you look out to the future, the story for Intel and its vertically integrated approach gets even tougher. This chart shows Wikibon's 2020 forecast for ARM-based compared to x86-based PCs. It also includes some other devices, but as you can see, what happens by the end of the decade is that ARM really starts to eat into x86. As we've seen with the M1 at Apple, ARM is competing in PCs and is in a much better position for emerging devices that support things like video and virtual reality systems. And we think it will even start to eat into the enterprise. So again, the volume game is over for Intel, period. They're never going to win it back. Well, you might ask, what about revenue? Intel still dominates in the data center, right? Well, yes, and that is much higher revenue per unit, but we still believe that revenue from ARM-based systems is going to surpass that of x86 by the end of the decade. ARM compute revenue is shown in the orange area in this chart, with x86 in the blue. This means to us that Intel's last moat is going to be its position in the data center. It has to protect that at all costs. Now, the market knows this. It knows something's wrong with Intel, and you can see that reflected in the valuations of semiconductor companies. This chart compares the trailing 12-month revenue and the market valuations for Intel, Nvidia, AMD and Qualcomm. And you can see that at a trailing 12-month multiple of revenue of about 3X, compared to about 22X for Nvidia and about 10X for AMD and Qualcomm, Intel is lagging behind in the street's view. And Intel, as you can see here, is now considered a cheap stock by many, you know.
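For reference, the multiple being compared here is simply market valuation divided by trailing-twelve-month revenue. A quick sketch; the input figures below are round placeholders for illustration, not the actual data behind the chart:

```python
def ttm_revenue_multiple(market_cap, ttm_revenue):
    """Valuation expressed as a multiple of trailing 12-month revenue."""
    return market_cap / ttm_revenue


# Placeholder figures: a $240B valuation on $80B of trailing revenue is a
# 3x multiple, while $320B on $16B of revenue would be a 20x multiple.
print(ttm_revenue_multiple(240e9, 80e9), ttm_revenue_multiple(320e9, 16e9))
```

The spread between a 3X and a 22X multiple is the market saying it expects very different growth trajectories from two companies, even before looking at absolute revenue.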
Here's a graph that shows the performance over the past 12 months compared to the NASDAQ, where you can see that major divergence. The NASDAQ has been powered in part by COVID, all the new tech, and the work from home. The stock reacted very well to the appointment of Gelsinger; that's no surprise. The question people are asking is, what's next for Intel? How will Pat turn the company's fortunes around? How long is it going to take? What moves can he and should he make? How will they be received by the market and, very importantly, internally within Intel's culture? These are big, chewy questions, and people are split on what should be done. I've heard everything from Pat should just clean up the execution issues, this is very workable, and not make any major strategic moves, all the way to Intel should do a hybrid outsourced model, to Intel should aggressively move out of manufacturing. Let me read some things from Barron's and some other media: Intel has fallen behind rivals and the rest of tech. Intel is replacing Bob Swan; investors are cheering the move. Intel would likely turn to Taiwan Semiconductor for chips; here's who benefits most. So let's take a look at some of the opinions inside these articles. The first one I'll pull out: Intel has indicated a willingness to try new things, and investors expect the company to announce a hybrid manufacturing approach in January. Quoting CEO Swan: what has changed is that we have much more flexibility in our designs, and with that type of design we have the ability to move things in and move things out. And that gives us a little more flexibility about what we will make and what we might take from the outside. So let's unpack that a little bit. Intel, we know, has a highly vertically integrated workflow from design through manufacturing production.
But to me, the designers are the artists, and the flexibility, you would think, would come from outsourcing manufacturing, giving designers the ability to take advantage of, say, seven-nanometer or five-nanometer process technologies versus having to wait for Intel to catch up. It used to be that Intel's process was the industry's best, and it could supercharge a design or even mask certain design challenges so that Intel could maintain its edge, but that's no longer the case. Here's a sentiment from an analyst, Daniel Donnelly. Donnelly is at Citi, and he's confident that Intel's decision to outsource more of its production won't result in the company divesting its entire manufacturing segment. He cited three reasons. One, it would take roughly three years to bring a chip to market. Two, Intel would have to share IP. And three, it would hurt Intel's profit margins; he said it would negatively impact gross margins by 10 points and would cause a 25% decline in EPS. Now, I don't know about this. To that I would say, one, Intel needs to reduce its current cycle time from design to production, from, let's say, the three to four years where it is today to at least under two years, maybe even less. Second, I would say, what good is intellectual property if it's not helping you win in the market? And three, I think profitability is nuanced. So here's another take, from a UBS analyst named Timothy Arcuri. He says, quote, we see no option but for Intel to aggressively pursue an outsourcing strategy. He wrote that Intel could be 80% outsourced by 2026, and that just by going to 50% outsourcing, the company would save $4 billion annually in CapEx, 25% of which would drop to free cash flow. So look, maybe Gelsinger has to sacrifice some gross margin and EPS for the time being.
Reduce the cost of goods sold by outsourcing manufacturing, lower CapEx, and fund innovation in design with free cash flow. Here's our take: Pat Gelsinger needs to look in the mirror and ask, what would Andy Grove do? You know Grove's famous quote that only the paranoid survive. Less well known are the words that preceded it: success breeds complacency, and complacency breeds failure. Intel, in our view, is headed on a path to a long, drawn-out failure if it doesn't act aggressively. It simply can't compete on cost as an integrated manufacturer because it doesn't have the volume. So what will Pat Gelsinger do? You know, we've probably done 30 Cube interviews with Pat, and I just don't think he's taking the job to make some incremental changes at Intel to get the stock price back up. Why would that excite Pat Gelsinger? Trends, markets, people, society: he's a dot connector, and he loves Intel deeply. He's a legend at the company. Here's what we strongly believe. We think Intel has to do a deal with TSMC, or maybe Samsung, perhaps some kind of joint venture or other innovative structure that both protects its IP and secures its future. You know, both of these manufacturers would love to have a stronger US presence. In markets where Intel has many manufacturing facilities, they may even be willing to take a loss to get this started and deeply partner with Intel for some period of time. This would allow Intel to better compete on a cost basis with AMD. It would protect its core data center revenue and allow it to fight the fight in PCs with better cost structures, maybe even gain some share that could account for, you know, another $10 billion on the top line. Intel should focus on reducing its cycle times and unleashing its designers to create new solutions. Let a manufacturing partner who has the learning curve advantages enable Intel designers to innovate and extend ecosystems into new markets.
Autonomous vehicles, factory floor use cases, military security, distributed cloud, the coming telco explosion with 5G, AI inferencing at the edge. Bite the bullet, give up on yesterday's playbook, and reinvent Intel for the next 50 years. That's what we'd like to see, and that's what we think Gelsinger will conclude when he channels his mentor. What do you think? Please comment on my LinkedIn posts. You can DM me at dvellante or email me at david.vellante@siliconangle.com. I publish weekly on wikibon.com and siliconangle.com. These episodes, remember, are also available as podcasts for your listening pleasure; just search Breaking Analysis podcast. Many thanks to my friend and colleague David Floyer, who contributed to this episode, who has done great work over the better part of the last decade, and who has really thought through some of the cost factors that we talked about today. Also don't forget to check out etr.plus for all the survey action. Thanks for watching this episode of Cube Insights powered by ETR. Be well, and we'll see you next time. (upbeat music)
SUMMARY :
This is Breaking Analysis and that marked the beginning
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Floyer | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Donnelly | PERSON | 0.99+ |
Andy Grove | PERSON | 0.99+ |
Qualcomm | ORGANIZATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Daniel Donnelly | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
2011 | DATE | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
January | DATE | 0.99+ |
UBS | ORGANIZATION | 0.99+ |
Steve Jobs | PERSON | 0.99+ |
Timothy Arcuri | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Gelsinger | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
25% | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
10 nanometers | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
Bob Swan | PERSON | 0.99+ |
10 times | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
ten nanometers | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
30% | QUANTITY | 0.99+ |
Pat | PERSON | 0.99+ |
three | QUANTITY | 0.99+ |
Grove | PERSON | 0.99+ |
12 month | QUANTITY | 0.99+ |
three reasons | QUANTITY | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
2005 | DATE | 0.99+ |
three years | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
Wright | PERSON | 0.99+ |
NASDAQ | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
2026 | DATE | 0.99+ |
AMT | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
10 points | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
$10 billion | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
TSM | ORGANIZATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
seven nanometers | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Mac | COMMERCIAL_ITEM | 0.99+ |
3 X | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
last month | DATE | 0.99+ |
last year | DATE | 0.99+ |
ARM | ORGANIZATION | 0.99+ |
CapEx | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
approximately 275 million | QUANTITY | 0.98+ |
five nanometer | QUANTITY | 0.98+ |
Parag Dave, Red Hat | AnsibleFest 2019
>> Narrator: Live from Atlanta, Georgia, it's theCUBE, covering Ansible Fest 2019. Brought to you by Red Hat. >> Welcome back, this is theCUBE's live coverage of Ansible Fest 2019, here in Atlanta, Georgia. I'm Stu Miniman, my co-host is John Furrier, and we're going to dig in and talk a bit about developers. Our guest on the program, Parag Dave, who is a senior principal product manager with Red Hat. Thank you so much for joining us. >> Glad to be here, thanks for having me. >> Alright, so configuration management, really maturing into an entire automation journey for customers today, let's get into it. Tell us a little bit about your role and what brings you to the event. >> Yeah, so I actually have a very deep background in automation. I started by doing workload automation, which is basically about how to help businesses do their processing. So, from processing an invoice, how do I create the flows to do that? And we saw the same thing, like automation was just kind of an operational thing and was brought on just to fulfill the business, make it faster, and next thing you know it grew like, I don't know, like wildfire. I mean it was amazing and we saw the growth, and people saw the value, people saw how easy it was to use. Now, I think that combination is kicking in. So, now I'm focusing more on developers and the dev tools we use at Red Hat, and it's the same thing. >> You know, Parag, when you look at IT, you know automation is not a new term. It's like we've been talking about this for decades. Talk to us a little bit about how it's different today, and you know, you talked about some of the roles that are involved here, how does Ansible end up being a developer tool? >> Yeah, you know, it's very interesting, because Ansible was never really targeted at developers, right? And in fact, automation was always considered like an operational thing. 
Well, now what has happened is, the entire landscape of IT in a company is available to be executed programmatically. Before, interfaces were only available for a few programs; everything else you had to kind of write your own programs to do. But now, with the advent of APIs, you know, with really rich CLIs, it's very easy to interact with anything, and not just with software: you can interact with your network devices, with your infrastructure, with your storage devices. So, all of a sudden, when everything became available, developers who were trying to create applications and needed environments to test, to integrate, saw that automation is a great way to create something that can be replicated and be consistent every time you run it. So, the need for consistency and replication drove developers to adopt Ansible. And, you know, because they had Ansible, we never marketed to developers, and then we see that, wow, they are really pulling it down, it's great. The whole infrastructure-as-code idea, which is one of the key pillars of DevOps, has become one of the key drivers for it, because now what you are seeing is the ability for developers to say, when I'm done with my coding and my application is ready for, say, a test environment or a staging environment, I can now provision everything I need, right from configuring my network devices to getting the infrastructure ready for it, run my tests, bring it down, and I can do all of that through code, right? So, that really drives the adoption for Ansible. >> And cloud scale has shown customers that scale, whether it's on-premises or cloud or edge, is really going to be a big factor in their architecture. The other thing that's interesting, and Stu and I were talking about this on our opening yesterday, is that you have the networking at the bottom of that stack moving up the stack, and you have the applications kind of wanting to move down the stack. 
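The "provision everything through code" workflow Parag describes can be sketched as a small Ansible playbook. This is a hypothetical, minimal example; the host group, package, paths, and port below are illustrative, not taken from the discussion:

```yaml
# A minimal, hypothetical playbook sketching infrastructure as code:
# stand up a disposable test environment in one declarative file.
# Host group, package names, paths, and port are illustrative only.
- name: Provision a disposable test environment
  hosts: staging
  become: true
  tasks:
    - name: Ensure the application runtime is installed
      ansible.builtin.package:
        name: python3
        state: present

    - name: Push the application's configuration from a template
      ansible.builtin.template:
        src: templates/app.conf.j2
        dest: /etc/myapp/app.conf
        mode: "0644"

    - name: Open the application port on the host firewall
      ansible.posix.firewalld:
        port: 8080/tcp
        state: enabled
        permanent: true
```

Because a playbook like this is declarative and idempotent, re-running it yields the same environment each time, which is exactly the consistency-and-replication property Parag says drew developers in.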
So, they're kind of meeting in the middle in this programmability in between them. You know, containers, Kubernetes, microservices, that's developing as a nice middle layer between those two worlds. So, the networks have to telegraph up data and also be programmable, and this is causing a lot of disruption and innovation. >> Parag: Absolutely. >> Your thoughts on this, 'cause it's DevSecOps versus DevOps, that's DevOps. This is now all coming together. >> Exactly, and what's happening is, what we are seeing with developers is that there's a lot more empowerment going on. You know, before there were a lot of silos, there were a lot of checks and balances in place that kind of made it hard to do things. It was, okay, developers, you write code, we will worry about all this. And now this whole blending has happened, and developers are being empowered to do it. And the empowerment is great, and with great power comes great responsibility. So, can you please make sure that, you know, what you're using is enterprise-grade, that you're not just doing things that break environments. So, once everybody became comfortable that, yes, by merging these things together we're actually not breaking things, you're actually increasing speed, 'cause what's the number one driver right now for organizations? It's speed with security, right? Can I achieve that business agility, so that by the time I need a feature developed, by the time I need a feature delivered in production, I can close that gap. I cannot have a long gap between that. So, we are seeing a lot of that happening. >> People love automation, they love AI. These are two areas where it's a no-brainer. When you have automation, you talk AI, yeah, bring it on, right? What does that mean? 
So, when you think about automation, the infrastructure is in the hands of the operators, but they also want to enable applications to do it themselves as well, hence the DevOps. Where is the automation focus? Because that's the number one question. How do I land, get the adoption, and then expand out across? This seems to be the formula that Ansible's kind of cracked the code on. The organic growth has been there, but now as a large enterprise comes in, I've got to get the developers using it and it's got to be operator-friendly. This seems to be the key, >> The balance has to be there. >> the key to the kingdom. >> Yeah, you're absolutely right. And so, when you look at it, what do developers want? Something that is frictionless to use, very quick, very easy, so that I don't have to spend a lot of time learning it, right? And we saw that with Ansible. It's the fact that it's so easy to use; most of everything is in YAML, which is very natural for developers, right? So, we see that from their perspective, they're very eager now, and they've been adopting it; if you look at the download stats, it tells you. Like, there's a lot of volume happening in terms of developers adopting it. What companies are now noticing is, wait, that's great, but now we have a lot of developers doing their own thing. So, there needs to be a way of bringing all this together, right? So, it's like if I have 20 teams in one line of business and each team tries to do things their own way, what I'm going to end up with is a lot of work that gets repeated, that's duplicated. So, that's what we are seeing with collections, for example. What Ansible is trying to bring to the table is, okay, how do I help you bring things under one umbrella? And how can I help you as a developer decide, wow, I've got like a hundred-plus extra roles I can use in Ansible. Well, which one do I pick? 
And you pick one, somebody else picks something else, somebody creates a playbook with, you know, one different thing in it versus yours. How do we get our hands around it? And I think that's where we are seeing that happen. >> Right, from an open source standpoint. I see Red Hat, Ansible doing great stuff, and for the folks in the ivory tower, the executive CXOs, they hear Ansible, glue layer, integration layer, and they go, wait a minute, isn't that Kubernetes? Isn't Kubernetes supposed to provide all this stuff? So, talk about where Ansible fits in the wave that's coming with Kubernetes. Pat Gelsinger at VMware thinks Kubernetes is going to be the dial-tone, it's going to be like the TCP/IP-like protocol, to use his words, but there's a relationship that Ansible has with those microservices that are coming. Can you explain that fit? >> You hit the nail on the head. Like, Kubernetes is, we call it the new operating system. It's like that's what everything runs on now, right? And it's very easy for us, you know, from a development perspective to say, great, I have my containers, I have my applications built, I can bring them up on demand, I don't have to worry about, you know, having the whole stack of an operating system delivered every time. So, Kubernetes has become like the de facto standard upon which things run. So, one of the concepts that has really caught a lot of momentum is the operator framework, right? Which was introduced with the later Kubernetes releases, the 3.x era. With the operator framework, it's very easy now for application teams, and there has been a great uptake from software vendors themselves: how do I give you my product that you can very easily deliver on Kubernetes as a container, but I'll give you enough configuration options that you can make it work the way you want to? So, we saw a lot of software vendors creating and delivering their products as operators. 
Now we are seeing that a lot of application developers themselves, for their own applications, want to create operators. It's a very easy way of actually getting your application deployed onto Kubernetes. So, the Ansible operator is one of the easiest ways of creating an operator. Now, there are other options, you can do a Golang operator, you can do Helm, but the Ansible operator has become extremely easy to get going with. It doesn't require additional tools on top of it, because with the Operator SDK, you know, you're going to use playbooks, which you're used to already, and you're going to use playbooks to execute your application workflows. So, we feel that developers are really going to use Ansible operators as a way to create their own operators, get them out there, and this is true for any Kubernetes world. So, there's nothing different about, you know, an Ansible operator versus any other operator. >> With no changes to Kubernetes, but Kubernetes obviously has the construct of microservices, which is literally non-user intervention. The apps take care of all provisioning of services. This is an automation requirement, this feeds into the automation theme, right? >> Exactly, and what this does for you is it helps you, like if you look at the operator framework, it goes all the way from basic deployment, which everybody's used to, like, okay, I want instantaneous deployment, it automatically just does it, automatically recognizes the reconfiguration changes I give it and redeploys a new instance the way it should. So, how do I automate that? How do I ensure that the operator that is actually running my application can set up its own private environment in Kubernetes, and then actually do it automatically when I say, okay, now go make one change to it? The Ansible operator allows you to do that, and it goes all the way through the life cycle, the full five phases of the life cycle that we have in the operator framework. The last one is about autopilot. 
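The wiring behind an Ansible operator that Parag describes is a small mapping file from the Operator SDK, conventionally called `watches.yaml`. This is a hypothetical sketch; the API group, kind, and playbook path are made-up examples:

```yaml
# watches.yaml: tells the Ansible operator which playbook to run when a
# custom resource of the given group/version/kind changes in the cluster.
# The group, kind, playbook path, and period below are illustrative.
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  playbook: playbooks/memcached.yml
  reconcilePeriod: 1m
```

Each time Kubernetes reports a change to a matching resource (or the reconcile period elapses), the operator re-runs the playbook to converge the application toward its declared state, using the same playbook skills developers already have, which is why no extra tooling is needed on top.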
So, autoscale, auto-remedy itself. Your application on Kubernetes, through Ansible, can now do all that, and you don't have to worry about coding at all. It's all provided to you because of the Ansible operator. >> Parag, in the demo this morning, I think it really resonated with the audience; it talked about some of the roles and how they worked together, and it was kind of, okay, the developers are on this side, and the developer's expectation is, oh, the infrastructure's not going to be ready, I'm not going to have what I need. Leave me alone, I'm going to play my video games until I can actually do my work, and then okay, I'll get it done and do my magic. Speak a little bit to how Ansible is helping to break through those silos and have developers fully collaborate and communicate with all their other team members, not just be off on their own. >> Oh yeah, that's a good point, you know. What is happening is, what Ansible brings to the table is a very prescriptive set of rules that you can actually incorporate into your developer flows. So, what developers are now saying is that I can't create an infrastructure configuration without actually having discussions with the infrastructure folks, and the network team will have to share with me what is the ideal configuration I should be using. So, the empowerment that Ansible brings to the table has enabled cross-team communications to happen. So, there is a prescriptive way of doing things, and you can turn this all into an automation and then set it up so that it gets triggered every time a developer makes a change. So, internally they do that. Now other teams come and say, hey, how are you doing this? Right, 'cause they need the same thing. Maybe your destinations are going to be different, obviously, but in the end the mechanism is the same, because you are under the same enterprise, right? 
So, you're going to have the same layer of network tools, same infrastructure tools. So, then teams start talking to each other. I was talking to a customer, and they were telling me that they started with four teams working independently, building their own Ansible playbooks and then talking to the admins, and next thing they know everybody had the full automation done and nobody knew about it. And now they're finding out, and they were saying, wow, I've got like hundreds of these teams doing this. So, A, I'm very happy, but B, now I would like these guys to talk to each other more and come up with a standard way of doing it. And going back to that collections concept, that's what's really going to help them. And we feel that with collections it's very similar to what we did with OperatorHub for OpenShift. It's where we have a certified set of collections, so that they're supported by Red Hat. We have partners who contribute theirs, and then those are supported by them, but we become a single source. So, as an enterprise you kind of have this way of saying, okay, now I can feel confident about what I'm going to let you deploy in my environment, and everybody's going to follow the same script, and so now I can open up the floodgates in my entire organization and go for it. >> Yeah, what about how people in the community get to learn from everyone else? When you talk about a platform, it should be that if I do something, not only can my organization learn from it, but potentially others can learn from it. That's kind of the value proposition of SaaS. >> Yes, and having the Galaxy offering out there, where we see so many users contributing, like we have close to a hundred thousand roles out there now, and that really brought the Ansible community together. It was already a strong community of contributors and everything. 
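The "single source" of collections Parag describes is typically expressed as a pinned requirements file that every team installs from. A hypothetical sketch; the collection names and version constraints are illustrative:

```yaml
# requirements.yml: one shared, pinned list of collections so every
# team in the enterprise installs the same certified content.
# Collection names and version constraints are examples only.
collections:
  - name: ansible.posix
    version: ">=1.3.0"
  - name: community.general
  - name: dellemc.openmanage   # e.g. a vendor-contributed, partner-supported collection
```

Installed with `ansible-galaxy collection install -r requirements.yml`, this gives each team the same vetted roles and modules instead of twenty teams each picking their own, which is the standardization problem the collections concept is meant to solve.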
By giving them a platform where they can have these discussions, where they can see what everybody else is doing, you will now see a lot more happening. Like today, I think Ansible is one of the top five GitHub projects in terms of the activity happening out there. I mean, the community is so widespread, it's incredible. They're driving this change, and it's a community made up of developers, a lot of them. And that's what's creating this amazing synergy between all the different organizations. So, we feel that Ansible is actually bringing a lot of us together, especially as more and more automation becomes prevalent in organizations. >> Alright, Parag, want to give you the final word. Ansible Fest 2019, final takeaways? >> No, this is great. This is my first one, I'd never been to one before, and just the energy, and just seeing what all the other partners are sharing, it's incredible. And like I said, with my background in automation, I love this, anything automation for me, I think that's just the way to go. >> John: Alright, well that's it. >> Stu: Thank you so much for sharing the developer angle with us. >> Thank you very much. >> For John Furrier, I'm Stu Miniman. Back to wrap-up from theCUBE's coverage of Ansible Fest 2019. Thanks for watching theCUBE. (intense music)
SUMMARY :
Brought to you by Red Hat. Thank you so much for joining us. and what brings you to the event. how do I create the flows to do that? but now the advent of API's, you know with really rich CLI's So, the networks have to telegraph up data that it's going to be you know, and it's got to be operator friendly. It's like the fact that it's so easy to use, and for the folks in the ivory tower, the executive CXO'S. So, one of the concepts that has really caught has the cons of the Microservices, It's all provided to you because of the Ansible operator. oh the infrastructure's not going to be ready, So, the empowerment that Ansible brings to the table That's kind of the value proposition of SaaS. it's the story is where you will now see Alright, Parag want to give you a final word, and I'd never been to one before and just the energy, Back to wrap-up from theCUBe's coverage
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Pat Gelsinger | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
20 teams | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Parag Dave | PERSON | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
each team | QUANTITY | 0.99+ |
Atlanta, Georgia | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
two areas | QUANTITY | 0.99+ |
one line | QUANTITY | 0.99+ |
Kubernetes | TITLE | 0.99+ |
five phases | QUANTITY | 0.98+ |
two worlds | QUANTITY | 0.98+ |
VMware | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.97+ |
first one | QUANTITY | 0.97+ |
four teams | QUANTITY | 0.97+ |
Ansible Fest 2019 | EVENT | 0.96+ |
theCUBE | ORGANIZATION | 0.95+ |
100 plus | TITLE | 0.95+ |
AnsibleFest | EVENT | 0.95+ |
today | DATE | 0.95+ |
single source | QUANTITY | 0.92+ |
Atlanta, Gerogia | LOCATION | 0.91+ |
DevSecOps | TITLE | 0.91+ |
Razor 3.x. | TITLE | 0.91+ |
Operator Hub | ORGANIZATION | 0.88+ |
playbooks | TITLE | 0.86+ |
one change | QUANTITY | 0.83+ |
hundred thousand | QUANTITY | 0.83+ |
one question | QUANTITY | 0.8+ |
this morning | DATE | 0.8+ |
SDK | TITLE | 0.8+ |
theCUBe | ORGANIZATION | 0.8+ |
DevOps | TITLE | 0.79+ |
Kubernetes | ORGANIZATION | 0.79+ |
top five | QUANTITY | 0.73+ |
decades | QUANTITY | 0.7+ |
Parag | ORGANIZATION | 0.62+ |
pillars | QUANTITY | 0.61+ |
devOps | TITLE | 0.6+ |