Anahad Dhillon, Dell EMC | CUBE Conversation, October 2021
(upbeat music) >> Welcome everybody to this CUBE Conversation. My name is Dave Vellante, and we're here to talk about Object storage, the momentum in the space, and what Dell Technologies is doing to compete in this market. I'm joined today by Anahad Dhillon, who's the Product Manager for Dell EMC's ECS and new ObjectScale products. Anahad, welcome to theCUBE, good to see you. >> Thank you so much Dave. We appreciate you having me and Dell (indistinct), thanks. >> It's always a pleasure to have you guys on; we dig into the products, talk about the trends, talk about what customers are doing. Anahad, before the Cloud, Object was kind of a niche, as we saw it. You had simple get, put; it was a low-cost bit bucket essentially, but that's changing. Tell us some of the trends in the Object storage market that you're observing, and how Dell Technologies sees this space evolving in the future, please. >> Absolutely, and you hit it right on, right? Historically, Object storage was considered this cheap and deep place, right? Customers would use it for their backup data, their archive data. So cheap and deep, no longer the case, right? As you pointed out, the Object space is now maturing. It's a mature market, and we're seeing customers out there using Object for their primary data, for their business critical data. So we're seeing big data analytics use cases. It's no longer just cheap and deep; now you've got primary workloads and business critical workloads being put on Object storage. >> Yeah, I mean. >> And. >> Go ahead please. >> Yeah, I was going to say, it's not only the extent of the workloads being put on it; we're also seeing changes in how Object storage is being deployed. Now we're seeing tighter integration with new deployment models, where Object storage, or any storage in general, is being deployed the way applications are being (indistinct), right? So customers now want Object storage, or storage in general, being orchestrated like they would orchestrate their applications. Those are a few key trends that we're seeing out there today. >> So I want to dig into this a little bit with you, 'cause you're right. It used to be cheap and deep, it was slow, and it sometimes required application changes to accommodate. So you mentioned a few of the trends: Devs, everybody's trying to inject AI into their applications, the world has gone software defined. What are you doing to respond to all these changes and these trends? >> Absolutely, yeah. So we've been making tweaks to our object offering, ECS, Elastic Cloud Storage, for a while. We started off tweaking the software itself, optimizing it for performance use cases. In early 2020, we actually introduced SSDs to our nodes, so customers were able to go in and leverage those SSDs for metadata caching, improving their performance quite a bit. Because we used the SSDs for metadata caching, the performance improvement was focused on smaller reads and writes. What we did next is a game changer. We actually went ahead later in 2020 and introduced an all-flash appliance. So now the EXF900, an ECS all-flash appliance, is all NVMe based. It's NVMe SSDs, and we leverage NVMe over Fabrics for the back end. And we did it the right way. We didn't just go in, qualify an SSD-based server, and run object storage on it; we invested time and effort into supporting NVMe over Fabrics so we could give you that performance at scale, right? Object is known for scale.
We're not talking 10 or 12 nodes here, we're talking hundreds of nodes. And to provide that kind of performance, we went ahead, and now you've got an NVMe-based offering, the EXF900, that you can deploy with confidence and run your primary workloads that require high throughput and low latency. We're also, come November 5th, releasing our next-gen SDS offering, right? This takes the proven ECS code that our customers are familiar with, that provides the resiliency and the security you guys expect from Dell, and re-platforms it to run on Kubernetes and be orchestrated by Kubernetes. This is what we announced at VMworld 2021. If you guys haven't seen that, it's available on-demand from VMworld 2021; search for ObjectScale and you'll get a quick demo. With ObjectScale now, customers can quickly deploy enterprise-grade Object storage on their existing environment, their existing infrastructure, infrastructure like VMware and infrastructure like OpenShift. I'll give you an example. If you're a VMware shop and you've got vSphere clusters in your data center, with ObjectScale you'll be able to quickly deploy your enterprise-grade Object offering from within vSphere. Or if you're an OpenShift customer, right? If you've got OpenShift deployed in your data center and you're a Red Hat shop, you could easily go in, use the same infrastructure that your applications are running on, deploy ObjectScale on top of your OpenShift infrastructure, and make Object storage available to your customers. So you've got the enterprise-grade ECS appliance for your high throughput, low latency use cases at scale, and you've got this software-defined ObjectScale, which you can deploy on your existing infrastructure, whether that's VMware or Red Hat OpenShift. >> Okay, I got a lot of follow up questions, but let me just go back to one of the earlier things you said. So Object was kind of cheap, deep and slow, but scaled. And so, your step one was metadata caching. Now of course, my understanding is with Object, the metadata and the data live within the object. So maybe you separated that and made it high performance, but now you've taken the next step to bring in NVMe infrastructure to really blow away all the old sort of SCSI latency and all that stuff. Maybe you can just educate us a little bit on that if you don't mind. >> Yeah, absolutely. Yeah, that was exactly the stepped approach that we took. Even though metadata is tightly integrated in the Object world, in order to read the actual data you still have to get to the metadata first, right? So we would cache the metadata on SSDs, reducing that lookup that happens for the metadata, and that's what gave you the performance benefit. But because it was just tied to metadata lookups, the performance for larger objects stayed the same, because the actual data read was still happening from the hard drives, right? With the new EXF900, which is all NVMe based, we've optimized our ECS Object code to leverage NVMe: the data is sitting on NVMe drives, and the internal connectivity, the communication, is NVMe over Fabrics, so it's NVMe through and through. Now we're talking milliseconds of latency and thousands and thousands of transactions per second. >> Got it, okay. So this is really an inflection point for Object. So these are pretty interesting times at Dell: you've got the cloud expanding on-prem, your company is building cloud-like capabilities to connect on-prem to the cloud and across clouds, and you're going out to the edge.
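To make the stepped approach above a bit more concrete, here is a toy sketch of why caching metadata on flash speeds up small operations while large-object reads stay bound by the backing drives. The class, timings, and object sizes are illustrative stand-ins, not ECS internals.

```python
import time

class ToyObjectStore:
    """Illustrative only: models metadata lookups and data reads as fixed delays."""

    def __init__(self):
        self._metadata_cache = {}          # stands in for the SSD metadata cache

    def _lookup_metadata(self, key):
        if key in self._metadata_cache:
            time.sleep(0.0001)             # cached lookup: flash speed
            return self._metadata_cache[key]
        time.sleep(0.005)                  # cold lookup: goes to the hard drives
        meta = {"size_mb": 256}            # made-up object size
        self._metadata_cache[key] = meta
        return meta

    def read(self, key):
        meta = self._lookup_metadata(key)
        # The payload read scales with object size; on a hybrid node it still
        # comes off spinning disk, so caching metadata alone doesn't help here.
        time.sleep(0.00005 * meta["size_mb"])
        return meta

store = ToyObjectStore()
for attempt in range(3):
    start = time.perf_counter()
    store.read("backup-0001")              # first read is cold, the rest hit the cache
    print(f"read {attempt + 1}: {time.perf_counter() - start:.4f}s")
```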
As it pertains to Object storage though, it sounds like you're taking a sort of a two product approach to your strategy. Why is that, and can you talk about the go-to-market strategy in that regard? >> Absolutely, and yeah, good observation there. So yes and no. We continue to invest in ECS. ECS continues to be the product of choice when a customer wants that traditional appliance deployment model. This is a single-hand-to-shake model where everything, from your hardware to the object solution software, is provided by Dell. ECS continues to be the product for customers looking for that high performance, fine-tuned appliance use case. ObjectScale comes into play when the needs are software defined, when you need to deploy the storage solution on top of the same infrastructure that your applications run on, right? So yes, in the short term, in the interim, it's a two product approach, with both products targeting very distinct use cases. However, in the long term, we're merging the two code streams. So in the long term, if you're an ECS customer and you're running ECS, you will have an in-place data upgrade to ObjectScale. We're not talking about forklift upgrades, we're not talking about adding additional servers and doing a data migration; it's a code upgrade. I'll give you an example: today on ECS, we're at code version 3.6, right? And we've got a roadmap where 3.7 is coming out later this year. So if you're a customer running ECS 3.x, in the future you'll upgrade the code in place to the next release, call it 4.0, and that brings you up to ObjectScale. So there are no nodes left behind; there's an in-place code upgrade from ECS to ObjectScale, merging the two code streams. In the long term, a single code base; in the short term, two products, each solving a very distinct use case. >> Okay, let me follow up, put on my customer hat. And I'm hearing that you can tell us with confidence that irrespective of whether a customer invested in ECS or ObjectScale, you're not going to put me into a dead-end. Every customer is going to have a path forward as long as their ECS code is up-to-date, is that correct? >> Absolutely, exactly, and very well put, yes. No nodes left behind, investment protection, whether you've got ECS today, or you want to invest in ECS or ObjectScale in the future, correct. >> Talk a little bit more about ObjectScale. I'm interested in kind of what's new there, what's special about this product. Is there unique functionality that you're adding to the product? What differentiates it from other Object stores? >> Absolutely, my pleasure. Yeah, so I'll start by reiterating that ObjectScale is built on that proven ECS code, right? It's the enterprise-grade reliability and security that our customers expect from Dell EMC, right? Now, we're re-platforming ECS to allow ObjectScale to be Kubernetes native, right? So we're leveraging that microservices-based architecture, leveraging the native orchestration capabilities of Kubernetes, things like resource isolation or seamless (indistinct), I'm sorry, load balancing and things like that, right? So the built-in native capabilities of Kubernetes. ObjectScale is also built with scale in mind, right? So it delivers limitless scale. You could start with terabytes and then go up to petabytes and beyond.
So unlike other file system-based Object offerings, the ObjectScale software doesn't put a limit on your number of object stores, number of buckets, or number of objects you store; it's limitless. As long as you can provide the hardware resources under the covers, the software itself is limitless. It allows our customers to start small, so you could start as small as three nodes and grow the environment as your business grows, right? Hundreds of nodes. With ObjectScale, you can deploy workloads at public-cloud-like scale, but with the reliability and control of a private cloud, right? It's in your own data center. And ObjectScale is S3 compliant, right? All while delivering enterprise features like global replication and native multi-tenancy, fueling everything from Dev Test sandboxes to globally distributed data, right? So you've got built-in ObjectScale replication that allows you to place your data anywhere you've got ObjectScale (indistinct), from edge to core to data center. >> Okay, so it fits into the Kubernetes world. I call it Kubernetes compatible. The key there is automation, because that's the whole point of containers, right? It allows you to deploy as many apps as you need to, wherever you need to, in as many instances, and then do rolling updates, have the same security, same API, all that level of consistency. So that's really important. That's how modern apps are being developed. We're in a new age here. It's no longer about the machines, it's about infrastructure as code. So once ObjectScale is generally available, which I think is soon, I think it's this year, what should customers do, what's their next step? >> Absolutely, yeah, it's coming out November 2nd. Reach out to your Dell representatives, right? Get an in-depth demo on ObjectScale. Better yet, get a POC, right? Get a proof of concept, have it set up in your data center and play with it. You can also download the free, full featured community edition. We're going to have a community edition that's free up to 30 terabytes of usage, and it's full featured. Download that, play with it. If you like it, you can upgrade that free community edition to the licensed, paid version. >> And you said that's full featured. You're not neutering the community edition? >> Exactly, absolutely, it's full featured. >> Nice, that's a great strategy. >> We're confident, we're confident in what we're delivering, and we want you guys to play with it without having your money tied up. >> Nice, I mean, that's the model today. Gone are the days where you've got to get new customers in a headlock; they want to try before they buy. So that's a great little feature. Anahad, thanks so much for joining us on theCUBE. Sounds like it's been a very busy year and it's going to continue to be so. Looking forward to seeing what's coming out with ECS and ObjectScale and seeing those two worlds come together, thank you. >> Yeah, absolutely, it was a pleasure. Thank you so much. >> All right, and thank you for watching this CUBE Conversation. This is Dave Vellante, we'll see you next time. (upbeat music)
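Since ECS and ObjectScale expose an S3-compatible API, standard S3 tooling should work against them. Here is a minimal sketch using boto3; the endpoint URL, credentials, and bucket name are placeholders to swap for your own environment.

```python
import boto3

# Placeholder endpoint and credentials: substitute the values for your own
# ECS/ObjectScale object endpoint, access key, and secret key.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectscale.example.internal",
    aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
)

bucket = "demo-bucket"
s3.create_bucket(Bucket=bucket)

# The same simple put/get calls that served "cheap and deep" archives also
# drive the primary and analytics workloads discussed above.
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"hello, object storage")
body = s3.get_object(Bucket=bucket, Key="hello.txt")["Body"].read()
print(body.decode())
```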
SUMMARY :
Dave Vellante talks with Anahad Dhillon of Dell EMC about the momentum in Object storage. Once a cheap-and-deep tier for backup and archive, Object is now carrying primary and business-critical workloads, driven by analytics and software-defined deployment models. Dell has responded by adding SSD metadata caching to ECS, launching the all-NVMe EXF900 appliance, and introducing ObjectScale, a Kubernetes-native, software-defined object store that deploys on vSphere or OpenShift. ECS and ObjectScale will continue as two products in the short term, with an in-place code upgrade merging the two code streams over time. ObjectScale becomes generally available November 2nd, with a free, full-featured community edition covering up to 30 terabytes.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
November 5th | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Anahad Dhillon | PERSON | 0.99+ |
October 2021 | DATE | 0.99+ |
November 2nd | DATE | 0.99+ |
2020 | DATE | 0.99+ |
two products | QUANTITY | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Anahad | PERSON | 0.99+ |
ObjectScale | TITLE | 0.99+ |
VMworld 2021 | TITLE | 0.99+ |
today | DATE | 0.99+ |
thousands | QUANTITY | 0.99+ |
vSphere | TITLE | 0.99+ |
both products | QUANTITY | 0.99+ |
two product | QUANTITY | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
early 2020 | DATE | 0.98+ |
OpenShift | TITLE | 0.98+ |
step one | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
hundreds of nodes | QUANTITY | 0.98+ |
two code streams | QUANTITY | 0.98+ |
ECS | TITLE | 0.97+ |
12 nodes | QUANTITY | 0.97+ |
single code | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
10 | QUANTITY | 0.96+ |
4.0 | OTHER | 0.96+ |
Red Hat OpenShift | TITLE | 0.95+ |
3.6 | OTHER | 0.95+ |
Dell Technology | ORGANIZATION | 0.94+ |
S3 | TITLE | 0.92+ |
Hundreds of nodes | QUANTITY | 0.92+ |
two worlds | QUANTITY | 0.92+ |
EXF900 | COMMERCIAL_ITEM | 0.92+ |
up to 30 terabytes | QUANTITY | 0.91+ |
ObjectScale | ORGANIZATION | 0.91+ |
ECS 3.X | TITLE | 0.91+ |
petabytes | QUANTITY | 0.89+ |
VMware | TITLE | 0.89+ |
first | QUANTITY | 0.87+ |
3.X | TITLE | 0.87+ |
Dev Test Sandbox | TITLE | 0.87+ |
ECS | ORGANIZATION | 0.86+ |
Red Hat | TITLE | 0.84+ |
Zongjie Diao, Cisco and Mike Bundy, Pure Storage | Cisco Live EU 2019
(bouncy music) >> Live from Barcelona, Spain, it's theCUBE, covering Cisco Live Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back everyone. Live here in Barcelona, it's theCUBE's exclusive coverage of Cisco Live 2019. I'm John Furrier, with Dave Vellante, my co-host for the week, and Stu Miniman, who's also here doing interviews. Our next two guests are Mike Bundy, Senior Director of the Global Cisco Alliance at Pure Storage, and Z, who's in charge of product strategy for Cisco. Welcome to theCUBE. Thanks for joining us. >> Thank you for having us here. >> You're welcome. >> Thank you. >> We're in the DevNet zone. It's packed with people learning real use cases, rolling up their sleeves. Talk about the Cisco-Pure relationship. How do you guys fit into all this? What's the alliance? >> You want to start? >> Sure. So, we have a partnership with Cisco, primarily around a solution called FlashStack in the converged infrastructure space. And most recently, we've evolved a new use case and application together for artificial intelligence: Z's business unit has just released a new platform that works with Cisco and NVIDIA to meet customer application needs, mainly in machine learning, but across all aspects of artificial intelligence. >> So AI is obviously a hot trend in machine learning, but today at Cisco, the big story was not about the data center so much as the data at the center of the value proposition, which spans the on-premises, IoT edge, and multiple clouds, so data now is everywhere. You've got to store it. It's going to be stored in the cloud, it's on-premise. So data at the center means a lot of things. You can program with it. It's got to be addressable. It has to be smart and aware and take advantage of the networking. So with all of that as the backdrop, what is the AI approach? How should people think about AI in the context of storing data and using data? Not just moving packets from point A to point B, but you're storing it, you're pulling it out, you're integrating it into applications. A lot of moving parts there. What's the-- >> Yeah, you've got a really good point here. When people think about machine learning, traditionally they just think about training. But we look at it as more than just training. It's the whole data pipeline that starts with collecting the data, storing the data, analyzing the data, training on the data, and then deploying it. And then putting the data back. So it's really a cycle. You need to consider how you actually collect the data from the edge, how you store it at the speed that you can, and feed the data to the training side. So when we work with Pure, we try to treat this as a whole data pipeline and think about the entire data movement and the storage needs that go with it. >> So we're in the DevNet zone and I'm looking at the machine learning with Python, ML libraries, (mumbles) Flow, Apache Spark, a lot of this data science type stuff. >> Yup. >> But increasingly, AI is a workload that's going mainstream. What are the trends that you guys are seeing in terms of traditional IT's involvement? Is it still sort of AI off on an island? What are you seeing there? >> So I'll take a stab at it. Really, every major company and industry that we work with has AI initiatives. It's the core of the future for their business.
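A rough sketch of the collect, store, analyze, train, deploy cycle that Z describes, using only the Python standard library. The local directory stands in for a shared object store such as FlashBlade or an S3 bucket, and the "model" is deliberately trivial.

```python
import json
import random
import statistics
from pathlib import Path

STORE = Path("object-store")   # stand-in for a shared data hub (e.g., an S3 bucket)
STORE.mkdir(exist_ok=True)

def collect():
    # Collect: pretend these are sensor readings arriving from the edge.
    return [{"sensor": i % 4, "value": random.gauss(20.0, 5.0)} for i in range(1000)]

def store(readings):
    # Store: land the raw data where both analytics and training can reach it.
    (STORE / "raw.json").write_text(json.dumps(readings))

def analyze():
    # Analyze: quick statistics to sanity-check the data before training.
    values = [r["value"] for r in json.loads((STORE / "raw.json").read_text())]
    return {"mean": statistics.mean(values), "stdev": statistics.stdev(values)}

def train(stats):
    # Train: a deliberately trivial "model" -- flag anything 2 sigma from the mean.
    return {"threshold": stats["mean"] + 2 * stats["stdev"]}

def deploy(model):
    # Deploy: write the artifact back to the same store so serving (and the
    # next training run) can pick it up -- the "put the data back" step.
    (STORE / "model.json").write_text(json.dumps(model))

if __name__ == "__main__":
    store(collect())
    deploy(train(analyze()))
    print("model deployed:", json.loads((STORE / "model.json").read_text()))
```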
What we're trying to do is partner with IT to get ahead of the large infrastructure demands that will come from those smaller, innovative projects that are in pilot mode, so that IT is a partner to the business and the data scientists rather than a laggard, which is sometimes the reputation IT gets. We want the infrastructure to be solid, a cloud-like experience for the data scientists, so they can worry more about the applications, the data, and what it means to the business, and less about the infrastructure. >> Okay. And so you guys are trying to simplify that infrastructure, whether it's converged infrastructure or other unifying approaches. Are you seeing the shift of that heavy lifting, of people now shifting resources to new workloads like AI? Maybe you could discuss what the trends are there? >> Yeah, absolutely. So I think AI started as more of a data science experiment. You'd see a couple of data scientists experimenting. Now it's really getting into the mainstream. More and more people are into that. And as, I apologize. >> Mike. >> Mike. >> Mike, can we restart that question? (all laughing) My deep apology. I need a GPU or something in my brain. I need to store that data better. >> You're on Fortnite. Go ahead. >> Yes, so as Mike said earlier on, it's not just the data scientists. It's actually an IT challenge as well, and I think what we're trying to do with Pure here is, you know that Cisco saying, "We're a bridge." We want to bridge the gap between the data scientists and IT, and make it not just AI as an experiment but AI at scale, at production level, ready to actually create real impact with the technology infrastructure that we can enable. >> Mike, talk about Pure's position. You guys have announced Pure in the cloud? >> Yes. >> You're seeing that software focus. Software is the key here. >> Absolutely. >> You're getting into a software model. AI and machine learning, all this we're talking about is software. Data is now available to be addressed and managed in that software life cycle. What is the role of the software for you guys, with converged infrastructure at the center of all the Cisco announcements? You were out on stage today with converged infrastructure to the edge. >> Yes, so, if you look at the platform that we built, it's what we refer to as the Data Hub. The Data Hub has a very tight synergy with all the applications you're referring to: Spark, TensorFlow, Caffe, et cetera. So we look at it as the next generation analytics platform, with a layer on top of all those applications, because that's going to really make the integration possible for the data scientists so they can go quicker and faster. What we're trying to do underneath that is use the Data Hub so that no matter what the size, whether it's small data, large data, transaction-based or more bulk data-warehouse-type applications, the Data Hub and the FlashBlade solution underneath handle all of that very differently, and probably more optimized and easier than traditional legacy infrastructures. Even traditional flash from some of our competitors, because we purpose-built the platform for this. We're not trying to go backwards in terms of technology. >> So I want to put both you guys on the spot for a question. We hear infrastructure as code, going on many, many years since theCUBE started nine years ago. Infrastructure as code, now it's here.
The network is programmable, the infrastructure is programmable, storage is programmable. When a customer or someone asks you, how are infrastructure, networks, and storage programmable, and what do I do? I used to provision storage, I've got servers, I'm going to the cloud. What do I do? How do I become AI-enabled so that I can program the infrastructure? How do you guys answer that question? >> So a lot of that comes down to the infrastructure management layer: how you use policy and the right infrastructure management to make the configuration you want. And I think one part of programmability is also flexibility. Instead of having just a fixed configuration, what we're doing with Pure here is really having that flexibility where you can pair Pure storage, different kinds of storage, with the different kinds of compute that we have. Whether we're talking 2RU or 4RU servers, that kind of compute power is different and can be matched with different storage, depending on what the customer use case is. So that flexibility feeds into the programmability that is managed by the infrastructure management layer. And we're extending that. Pure and Cisco's infrastructure management are actually tying together, so it's really a single pane of glass within which we can manage both Pure and Cisco. That's the programmability that we're talking about. >> Your customers get Pure storage, end-to-end manageability? >> With the Cisco compute, it's a single pane of glass. >> Okay. >> So where do I buy? I want to get started. What do you got for me? (laughing) >> It's pretty simple. It's three basic components: Cisco compute, a platform for machine learning that's powered by NVIDIA GPUs; Pure Storage FlashBlade, which is the Data Hub and storage component; and then network connectivity from the number one network provider in the world, Cisco. It's very simple. >> And it's a SKU, it's a solution? >> Yup, it's very simple. It's data-driven. It's not tied to a specific SKU. It's more flexible than that, so you have better optimization of the network. You don't buy a 1000 series X and then only use 50% of it. It's very customizable. >> Okay, so I can customize it for my, whatever, data science team or my IT workloads? >> Yes, and provision it for multi-purpose, the same way a service provider would if you're a large IT organization. >> The trend around breaking silos has been discussed heavily. Can you talk about multiple clouds, on-premise, cloud, and edge all coming together? How should companies think about their data architecture? Because silos are good for certain things, but to make multi-cloud work, and all this end-to-end and intent-based networking, and all the power of AI that's around the corner, you've got to have the data out there and it's got to be horizontally scalable, if you will. How do you break down those silos? What's your advice, is there a use case for an architecture? >> I think it's a classic example of how IT has evolved to not think in silos and to be multi-cloud. What we advocate is to have a data platform that spans the entire community, whether it's development, test, engineering, or production applications, and that runs holistically across the entire organization. That would include on-prem, and it would include integration with the cloud, because most companies now require that. So you can have different levels of high availability, or lower cost if your data needs to be archived.
So it's really building and thinking about the data as a platform across the company, and not just silos for various applications. >> So replication never goes away. >> Never goes away. (laughing) >> It's going to be around for a long, long time. >> Dev Test never goes away either. >> Your thoughts on this? >> Yeah, so adding on top of that, we believe your infrastructure should go where the data goes. You want to follow where the data is, and that's exactly why we want to partner with Pure here, because we see a lot of the data sitting today on very important infrastructure built by Pure Storage, and we want to make sure that we're not just building a silo box sitting there where you have to pour the data in all the time, but actually connecting our servers with Pure Storage in the most manageable way. And for IT, it's the same kind of management layer. You're not thinking, oh, I have to manage all these silo boxes, or the shadow IT that some data scientists would have under their desk. That's the last thing you want. >> And the other thing that came up in the keynote today, which we've been saying on theCUBE, and all the experts reaffirm, is that moving data costs money. You've got latency costs and also just the cost to move traffic around. So moving compute to the edge, or moving compute to the data, has been a big, hot trend. How has the compute equation changed? Because I've got storage. I'm not just moving packets around. I'm storing it, I'm moving it around. How does that change the compute? Does that put more emphasis on the compute? >> It's definitely putting a lot more emphasis on compute. I think it's about where you want compute to happen. You can pull all the data in and have it happen in a central place; that's fine if that's the way you want to manage it, if you've already simplified the data and want to bring it in. Or if you want to do it at the edge, near where the data source is, you can also do the cleaning there. So we want to make sure that, no matter how you want to manage it, we have the portfolio that can actually help you manage that. >> And it's alternative processors. You mentioned NVIDIA. >> Exactly. >> You guys are the first to do a deal with them. >> And other ways, too. You've got to take advantage of technology like Kubernetes, as an example, so you can move the containers where they need to be and have policy managers for the compute requirements and also the storage, so that you don't have contention or data integrity issues. Embracing those technologies in a multi-cloud world is very, very essential. >> Mike, I want to ask you a question around customer trends. What are you seeing as a pattern from a customer standpoint, as they prepare for AI and start re-factoring some of their IT and/or resources? Is there a certain use case that they set up with Pure, in terms of how they set up their storage? Is it different by customer? Is there a common trend that you see? >> Yeah, there are some commonalities. Take financial services, quant trading as an example. We have a number of customers that leverage our platform for that, because it's very time-sensitive, high-availability data. So really, I think the overall trend would be: step back, take a look at your data, and focus on, how can I correlate and organize that?
And really get it ready so that, whatever platform you use from a storage standpoint, you're thinking about all aspects of the data and getting it into a format, a form, where you can manage and catalog it, because that's kind of essential to the entire thing. >> It really highlights the key things that we've been saying in storage for a long time: high availability, integrity of the data, and now you've got application developers programming with data. With APIs, you're slinging APIs around like it's-- >> The way it should be. >> That's the way it should be. This is like Nirvana finally got here. How far along are we in the progress? How far? Are we early? Are we moving the needle? Where are the customers? >> You mean in terms of the partnership? >> Partnership, customer AI, in general. You guys, you've got storage, you've got networking and compute all working together. It has to be flexible, elastic, like the cloud. >> My feeling, and Mike can correct me or disagree with me, (laughing) is that right now, if we look at what all the analysts are saying and what we're saying, most companies, more than 50% of companies, have either deployed AI/ML or are considering a plan to deploy it. But having said that, we do see that we're still at a relatively early stage, because of the challenges of deploying AI at scale, where data scientists and IT are really working together. You need that level of security and that level of skill in infrastructure and software, and an evolving DevNet. So my feeling is we're still at a relatively early stage. >> Yeah, I think we are in the early adopter phase. We've had customers for the last two years that have really been driving this. We work with about seven of the autonomous-driving companies. But if you look at the data from Morgan Stanley and other analysts, there's about a $13 billion infrastructure build required for AI over the next three years, from 2019 to 2021, and that is probably 6X, 7X what it is today, so we haven't quite hit that bell curve yet. >> So people are doing their homework right now, setting up their architecture? >> It's the leaders. It's leaders in the industry, not the mainstream. >> Got it. >> And everybody else is going to close that gap, and that's where you guys come in, is helping them do that. >> That's scale. (talking over one another) >> That's what we built this platform with Cisco on. Really, FlashStack for AI is about scale, for the tens to twenties of petabytes of data that will be required for these applications. >> And it's a targeted solution for AI with all the integration pieces with Cisco built in? >> Yes. >> Great, awesome. We'll keep track of it. It's exciting. >> Awesome. >> It's cliche to say future-proof, but in this case it literally is preparing for the future. The bridge to the future, as the new saying at Cisco goes. >> Yes, absolutely. >> This is theCUBE's coverage, live in Barcelona. We'll be back with more live coverage after this short break. Thanks for watching. I'm John Furrier with Dave Vellante. Stay with us. (upbeat electronic music)
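Mike's earlier point about Kubernetes and policy managers, letting the scheduler place containers according to their compute requirements, can be sketched with the official Kubernetes Python client. The image name, namespace, labels, node selector, and GPU count below are made-up values for illustration, not part of the FlashStack solution itself.

```python
from kubernetes import client, config

config.load_kube_config()   # assumes a kubeconfig for the target cluster

# A training container that declares its compute requirements; the scheduler
# (the "policy manager" in this sketch) places it on a node that can satisfy them.
container = client.V1Container(
    name="trainer",
    image="registry.example.internal/ml/trainer:latest",   # made-up image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "8", "memory": "32Gi"},
        limits={"nvidia.com/gpu": "1"},                     # ask for one GPU
    ),
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "trainer"}),
    spec=client.V1PodSpec(
        containers=[container],
        node_selector={"accelerator": "nvidia-gpu"},        # illustrative label
    ),
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="trainer"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "trainer"}),
        template=template,
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="ml", body=deployment)
```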
SUMMARY :
John Furrier and Dave Vellante talk with Mike Bundy of Pure Storage and Z of Cisco at Cisco Live Europe 2019 about their FlashStack partnership and a new joint platform for artificial intelligence. They frame machine learning as a full data pipeline of collect, store, analyze, train, and deploy, and argue that IT needs to get ahead of AI infrastructure demands rather than leave projects as experiments under data scientists' desks. The FlashStack for AI solution combines Cisco compute with NVIDIA GPUs, Pure Storage FlashBlade as the data hub, and Cisco networking, and embraces Kubernetes and multi-cloud data platforms. Both see AI adoption in the early-adopter phase, with a large infrastructure build-out still ahead over the next several years.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mike | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dave Vallente | PERSON | 0.99+ |
Mike Bundy | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Barcelona | LOCATION | 0.99+ |
four hour | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
Zongjie Diao | PERSON | 0.99+ |
Morgan Stanley | ORGANIZATION | 0.99+ |
more than 50% | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
1000 series X | COMMERCIAL_ITEM | 0.99+ |
today | DATE | 0.99+ |
Pure | ORGANIZATION | 0.98+ |
7X | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Barcelona, Spain | LOCATION | 0.98+ |
both | QUANTITY | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
one thing | QUANTITY | 0.98+ |
6X | QUANTITY | 0.98+ |
nine years ago | DATE | 0.98+ |
NVIDIA | ORGANIZATION | 0.97+ |
Global Cisco Alliance | ORGANIZATION | 0.97+ |
Flash | TITLE | 0.97+ |
two guests | QUANTITY | 0.96+ |
Appache Spark | TITLE | 0.96+ |
2019-2021 | DATE | 0.96+ |
Nirvana | ORGANIZATION | 0.96+ |
Flow | TITLE | 0.93+ |
$13 billion | QUANTITY | 0.93+ |
FlashBlade | COMMERCIAL_ITEM | 0.91+ |
Fortnite | TITLE | 0.91+ |
Z | PERSON | 0.9+ |
Data Hub | TITLE | 0.9+ |
Europe | LOCATION | 0.9+ |
Spark | TITLE | 0.89+ |
three basic components | QUANTITY | 0.88+ |
ML Library | TITLE | 0.88+ |
tens and twenties of petabytes of data | QUANTITY | 0.88+ |
about seven of the automated car-driving companies | QUANTITY | 0.84+ |
last two years | DATE | 0.83+ |
Cisco Live 2019 | EVENT | 0.82+ |
two hour | QUANTITY | 0.81+ |
Cisco | EVENT | 0.8+ |
FlashStack | TITLE | 0.79+ |
single pane of | QUANTITY | 0.78+ |
single pane of glass | QUANTITY | 0.77+ |
Dev Test | TITLE | 0.77+ |
about | QUANTITY | 0.74+ |
Cisco Pure | ORGANIZATION | 0.73+ |
next three years | DATE | 0.72+ |
Kubernetes | TITLE | 0.69+ |
FlashBlade | TITLE | 0.65+ |
DevNet | TITLE | 0.65+ |