Jon Siegal, Dell Technologies & Dave McGraw, VMware | CUBE Conversation
(bright music)
>> Hello, and welcome to this CUBE conversation. I'm John Furrier, your host of theCUBE, here in Palo Alto, California. It's a hybrid world — we're still doing remote interviews. Of course, events are coming back in person, but more importantly, conversations continue. We've got two great guests here: Jon Siegal, SVP ISG Marketing at Dell Technologies, and Dave McGraw, office of the CTO at VMware. Gentlemen, great to see you. Dell Technologies and VMware, moving forward with a great partnership. Thanks for coming on.
>> Great to be back.
>> Yeah, hi, John, thanks for having us.
>> You know, the world's coming back to kind of real life. The Omicron variant is out there, but people say it's not going to be as bad as we think, and it looks like events are happening. But more importantly, cloud native, cloud operations is definitely forcing lots of great new things — new innovations on-premises and at the Edge. A lot of new things happening at Dell and VMware; both have been working together for a long time now. VMware, a separate company — we'll get to that in a second — but let's get to the partnership. What's new, what's changed with the relationship?
>> Yeah, so I mean, just to kick that off, and certainly Dave can chime in, but I think in a word, you know, John, nothing changes in terms of the customer's perspective. I mean, in many ways our joint relationship has never been stronger. We've put a ton of investment into both joint engineering innovation and joint go-to-market over the last several years. And we've really been making what was our vision a couple of years ago a reality, and we only expect that to continue. And I think much of the reason we expect that to continue is because we have a shared vision of this distributed, multi-cloud, you know, cloud native, modern app environment that customers want to drive.
>> Yeah, and John, I would add that we've been building platforms together for the last five years; a great example is VxRail. You know, it's a market-leading technology that we've co-engineered together. And now it's a platform that we're actually building out use cases on top of, whether it's multi-cloud solutions, whether it's private and hybrid cloud, or including Tanzu for developer environments. You know, we're using the investments we made, and then we're layering in and building more value into those investments together. And we've put agreements in place, by the way — you know, multi-year agreements around commercial arrangements and partnering together, as well as our technology collaboration. So we feel really confident about the future, and that's what we're communicating to our customer base.
>> Yeah, indeed — just go ahead, sorry, John.
>> No, good.
>> I was going to say, just to build on that, as he said — when I say not much changes, I mean, VMware has always been an open ecosystem partner, right? With its OEM vendors out there. And I think the difference here is Dell has made a strategic choice and a decision to make a significant investment in joint innovation, joint engineering, joint testing for VMware environments. And so I think a lot of this comes down to the commitment and focus that we've already made. You mentioned VxRail, which is a fantastic example where we at Dell have invested our own IP. You know, HCI system software — that's sort of the secret ingredient, the secret sauce, that delivers that single-click, you know, automated lifecycle management experience.
And we're investing lots of dollars in test labs just to ensure that customers always have that, you know, that seamless experience.
>> You know, one of the benefits of doing theCUBE for 11 years now — it's just been that long — both EMC World and Dell World back in the day were among our first events. We've watched you guys together over the years. One of the things that strikes me as consistently the same is this focus on end to end, but also modularity, but also interoperability, and kind of componentizing the solution — not to oversimplify it, but this is kind of the big discussion right now, as cloud scale, horizontal scale, with cloud resources being put into the development stream, where modern applications now are clearly using cloud native operations. That doesn't mean it's just cloud. I mean, it's cloud everywhere, but it's distributed computing. So this is kind of the original vision if you go back even five years or more. You guys have been working on this. This is kind of an important inflection point, because now it's well known that the modern application is going to have to be programmable under the hood — meaning everything's going to be scaling — and the rise of superclouds or new Edge technologies, which is coming fast. This is the new normal. This is not something we were talking about mainstream five years ago, but you guys have been working on this kind of simplicity, solutions-based approach. What's your reaction?
>> That's right, John, I'll tell you — you might remember, at VMworld a couple of years ago we announced Project Monterey. And this was really a redefining architecture, not only for core data centers, but also for cloud and Edge environments. And so it's leveraging technology — you know, data processing units, also known as SmartNICs. You know, we're essentially redefining what that infrastructure looks like, making it more efficient, more performant, depending on the use case. So we've been partnering very closely with Dell to develop that technology, and it's going to really transform what you see at the Edge and what you also see in core data centers going forward.
>> Yeah, and there are so many of those. I mean, I think Monterey is a great example of one that we continue to invest in. I think there's also NVMe over TCP — another, if you will, key ingredient to how customers are going to essentially get the performance they need out of the infrastructure going forward. And so we were proud to be a partner there; at the most recent VMworld we announced, you know, the ability to essentially automate the integration of NVMe over TCP with Dell EMC systems integrated with vSphere. And that's a great example as well, right? I think there are countless.
>> John: Yeah.
>> And I'll tell you, we are so excited to see what Dell has done in the storage business with PowerStore X, where they've integrated vSphere ESXi into a storage array. And, you know, that creates all kinds of opportunities going forward for better integration, and really for plug and play of, you know, the storage technology into cloud infrastructure.
>> What's interesting about what you guys are talking about is: remember the old DevOps movement — infrastructure as code? Okay, that became DevSecOps. That's a big part of Tanzu and security. Now it's all about devs, right?
So now devs have all that built in, and now the operations are the big conversation, because one of the things we pointed out on theCUBE recently is that, you know, VMware has owned the IT operations world, in our opinion, for a long, long time. Dell has owned the enterprise for a very long time in terms of infrastructure solutions. The operational efficiency of hybrid cloud is really kind of the gateway to multi-cloud. This has been a big part of IT transformation. Can you guys share how you're working together to make that flexibility to transform from the old IT to the new IT? And what are some of the things you're seeing with your customers that can give them a map of how to do this?
>> Yeah, so I would say, you know, one area in particular where we're really coming together is around APEX, right? From an as-a-service perspective. I think what APEX is really doing is unifying much of what you just described. It's taking as-a-service, it's taking multi-cloud, it's taking cloud native development, if you will, and modern app development. And we partner together to ensure that's a consistent experience for customers. And we have a number of new APEX cloud services that keep that in mind and that are built on joint innovations — with, frankly, VxRail at the bottom of that, as we said earlier. So for customers that are looking to get, you know, out of managing infrastructure altogether — which we're seeing more and more now — we recently announced APEX Cloud Services with VMware Cloud, you know, which is again a joint solution that'll be available soon. And it's one that is managed by Dell, but, you know, it gives customers that simplicity and scale of the public cloud, but certainly that control, security, and performance, if you will, that they prefer to have in the private cloud.
>> Yeah, and I think because, you know, the APEX Cloud Service is designed with VMware Cloud, you have a capability that drives consistency and portability of workloads for customers. So they don't have to re-skill and retrain to be able to manage the environment. They also are not locked in to any particular solution. They have this ability to move workloads depending on what their needs are — economics, performance, you know, logistics requirements — and they can react accordingly as they digitize their business going forward.
>> It's interesting, you guys are talking about this demand — in a way, addressing this demand for as a service, which is, you know, it can be one cloud or multiple clouds, but it's really more of an abstraction layer of what you deploy, to essentially create that connective tissue between what's existing, what's new, and how to make it all work together to, again, satisfy the developer, 'cause the new apps are coming, right? They want more, and data is coming into them. So is this the as-a-service focus — is that what's happening?
>> Yes, absolutely, yeah. The as-a-service focus is, you know, at the end of the day, about how we're going to really simplify this. We've been on this journey now for at least a year, with much more to go. And VMware has been a key partner here, you know, on that journey. So, a number of cloud services — we've had APEX Hybrid Cloud and APEX Private Cloud, you know, out there for some time. In fact, that's where we're getting a lot of the traction right now, and this new offering that's going to come out soon, that we just mentioned with VMware Cloud, is just going to build on that.
>> And VMware is a supercloud, isn't it, Dave? Because you guys would be considered a supercloud by our new definition — you can sit on Amazon, and you also have other clouds too, so your customers can operate on any cloud.
>> Our view is that, you know, in a multi-cloud future, for customers to be able to be on-premises with a, you know, APEX service, to be able to be operating in a colo, to be able to operate in one of many different hyperscalers — providing that consistency and flexibility is going to be key. And I think also, you mentioned Tanzu earlier, John. You know, being able to have the customer have choice around whether they're operating with VMs and containers is really key as well. So, you know, what Dell has done with APEX is they've set up, again, another platform that we can just provide our SaaS offerings to, very simply and easily, and deliver that value to customers in a consistent fashion going forward.
>> You know, I just love the term supercloud. Actually, I called them sub-clouds, but Dave Vellante called them superclouds. But the idea is that you can have all the superpower of the cloud capabilities, but it's also distributed clouds, right? Where you have Edge, you've got the Core, and the notion of a cloud isn't like one place — it's distributed computing. This is what the world now realizes. Again, we've talked about it on theCUBE many times. So let's discuss this whole Core-to-Edge dynamic, because if everything's cloudified, if you will — cloud operations — you've got devs and ops kind of working together with security, all that good stuff. Now you have almost a seamless environment where code can run anywhere and data should traverse anywhere, but the idea of an Edge changes dramatically, and certainly with 5G. So can you guys tie that Edge computing story together — how are Dell and VMware addressing this massive growth at the Edge?
>> Yeah, I would say, you know, first and foremost, we are seeing a major shift. As you mentioned, the data being generated at the Edge — I think Michael Dell has actually gone on record talking about it as the next frontier, right? It's especially happening because we're seeing all these smart monitoring capabilities, IoT, right? At almost any endpoint now — retail, traffic lights, manufacturing floors, you name it. I think anywhere data is being acted upon to generate critical insights, right? That's considered an Edge now, and we're expecting to see — IDC has already gone on record as saying — 50% of new infrastructure out there will be deployed at the Edge in the next couple of years. And it's a different world, right? I mean, in terms of what's needed and what the challenges are, there's certainly a lack of specialized technical resources typically at the Edge; there's typically a scaling issue — how do you manage all those distributed endpoints and do so successfully? And how do you allay any concerns around security as well? So, you know, once again, we've had a very collaborative approach when it comes to working on challenges like Edge, and, you know — common theme here — VxRail, which is a leading, you know, joint HCI offering in the market, is the foundation of many of our Edge offerings out there today. The new satellite nodes that we announced just a few months ago extend VxRail's, you know, value proposition to the Edge using a single-node deployment.
And it's really perfect for customers that don't have that local technical expertise or specialized resources. And it still has cyber resilience built right in.
>> And John, just to follow up on that real quick, before Dave chimes in: on the Edge, compute has been a huge issue, and I've talked with you guys about this too. You guys have the compute, you have the integrated systems now. Any update there on what VxRail is doing differently, or other Edge power — (John laughs) PowerEdge sounds familiar? We need some more power at the Edge. So what's new there?
>> Well, you know, first of all, we had new PowerEdge platforms, of course, come out this past year, and, you know, we're building on that. I mean, the latest VxRail, of course, leverages the power of PowerEdge. Yeah, lots of good naming, right? PowerEdge.
>> John: I love that.
>> And, you know, it's at the heart of much of what we're doing. We're taking a lot of our capabilities that have been our IP, like the streaming data platform, which enables streaming video and real-time analytics, and running that on a VxRail or PowerEdge platform. You know, we're doing the same thing on the manufacturing side. We're working with partners that have IoT Edge platforms, you know, and running those on VxRail and PowerEdge. So we are very much taking the idea here that, yes, you're right, with our rich resources of infrastructure, both with PowerEdge and VxRail, you know, we're building on that, but also working with partners like VMware and others to collaborate on integrated solutions for the Edge. And so we're seeing really good uptake so far.
>> Dave, what's your take on the Dell Edge with VMware? Because automation is a big theme, and not moving data across the internet — that's obviously huge. And you've got to have that operational stability there.
>> Absolutely, and, you know, to your point, being able to do the processing at the Edge and move results around, versus moving massive amounts of data around, is really key to the future going forward. And, you know, we've taken an approach with Dell where we're working with customers, we're having detailed conversations, really using a "tiger team" approach around the use cases — manufacturing and retail being two of the real key focuses, healthcare another one — where we're understanding customer requirements, both today and where they want to go. And, you know, so it's about distributed computing, certainly at the Edge. Dell is coming out with some great new platforms that we're integrating our software with. At the same time, we have technology in SD-WAN and SASE that becomes part of that solution as well, with VeloCloud. And we're developing a global network of points of presence that really will help support distributed application environments and Edge-native application environments, working with Dell going forward.
>> That's great stuff. The final question is: what's next? I want to tee that up by bringing up what you kind of made me think of there, Dave, and this is key: supply chain, on both hardware and software — talking about security. So when you say those things you're talking about in terms of functionality, the question is security, right? Both hardware and software supply chain, with open source, with automation. I mean, this is a big discussion. How do you guys react to that — what's next?
>> Yeah, I can tell you from a central engineering perspective, you know, we're looking at security, compliance, and privacy every day, and we're working closely with Dell — in fact, we're in the middle of meetings today in this area. And, you know, I look at a few key areas of investment that we're making collectively together. One is in the area of end-to-end encryption of data: for virtualized or containerized environments, being able to have end-to-end encryption and manage, in a very efficient way, the keys, and maintain the data compression and deduplication capabilities for customers — you know, for efficiency and cost purposes — while being very secure. The second area we're working closely on is Zero Trust. You know, being able to develop Zero Trust infrastructure across Edge, to Core, to colo, to cloud, and making sure that, you know, we have reference designs available to customers, with procedures, policies, and best practices, to be able to drive Zero Trust environments.
>> John, what you're (indistinct) is huge, and you guys literally could be the keys to the kingdom, pun intended. You guys are doing a lot of great security at the Edge too, whether the traffic stays at the Edge or goes across the network.
>> That's right, and like you said, it's been a joint focus and initiative across much of our portfolio for quite a while now. And I think, you know, you asked what's next, and I think the sky's the limit right now. I mean, we've got the shared vision, right? I think at the end of the day, you know, we've shared a number of joint initiatives that are ongoing right now — Project Monterey, obviously our integration with Tanzu and a number of solutions we have there, work around APEX, et cetera. I think we have complementary capabilities. You mentioned, you know, areas like supply chain, areas like security, and I think these are all things that we both do well together. And the thing I will say that I think is probably most key to us sustaining this great execution together is our collaborative cultures. I think, you know, there's something to be said for what we've built, you know, over these last several years around these collaborative cultures — working together on joint roadmaps and focusing on, at the end of the day, solving our customers' biggest challenges, whatever those may be, you know? And behind us we have the greatest supply chains, you know, services, support, and innovation engines. But I think, you know, the passion of our groups working together is going to be key to us going forward.
>> Well, great stuff — moving forward together with Dell Technologies and VMware. Dave, thanks for coming on. Jon, great to see you. Thanks for sharing insight. Great CUBE conversation — we talked encryption, and we've spoken about Edge and supply chain as well. Great stuff, great conversation. Thanks for coming on.
>> Thank you.
>> Thank you so much, John.
>> Okay, this is theCUBE conversation. I'm John Furrier, with theCUBE. You're watching CUBE coverage. Thank you so much for watching.
(bright music)
Renen Hallak & David Floyer | CUBE Conversation 2021
(upbeat music)
>> In 2010 Wikibon predicted that the all-flash data center was coming. The forecast at the time was that flash memory consumer volumes would drive prices of enterprise flash down faster than those of high-spin-speed hard disks, and by mid-decade buyers would opt for flash over 15K HDD for virtually all active data. That call was pretty much dead on, and the percentage of flash in the data center continues to accelerate faster than that of spinning disk. Now, the analyst that made this forecast was David Floyer, and he's with me today, along with Renen Hallak, who is the founder and CEO of Vast Data. And they're going to discuss these trends and what they mean for the future of data and the data center. Gentlemen, welcome to the program. Thanks for coming on.
>> Great to be here.
>> Thank you for having me.
>> You're very welcome. Now David, let's start with you. You've been looking at this for over a decade and, you know, frankly, your predictions have caused some friction in the marketplace. But where do you see things today?
>> Well, what I was forecasting was based on the fact that the key driver in any technology is volume; volume reduces the cost over time, and the volume comes from the consumers. So flash has been driven over the years initially by the iPod in 2006 — the Nano — where Steve Jobs did a great job with Samsung in introducing large volumes of flash, and then the iPhone in 2008. And since then, all of mobile has been flash, and mobile has been taking a greater and greater percentage share. To begin with, the PC dropped; but now over 90% of PCs are using flash when they're delivered. So flash has taken over the consumer market very aggressively, and that has driven down the cost of flash much, much faster than the declining market of HDD.
>> Okay, and now, so Renen, I wonder if we could come to you. I want you to talk about the innovations that you're doing, but before we get there, talk about why you started Vast.
>> Sure, so it was five years ago, and it was basically the kill of the hard drive. I think what David is saying resonates very, very well. In fact, if you look at our original presentation for Vast Data, it showed flash and tape — there was no hard drive in the middle. And we said: 10 years from now — and this was five years ago, so even the dates match up pretty well — we're not going to have hard drives anymore. Any piece of information that needs to be accessible at all will be on flash, and anything that is dormant and never gets read will be on tape.
>> So, okay. So we're entering this kind of new phase now, which is being driven by QLC. David, maybe you could give us a quick what-is-QLC. Just give us a bumper sticker there.
>> There's 3D NAND, which is the thing that's growing very, very fast, and it's growing on several dimensions. One dimension is the number of layers. Another dimension is the size of each of those cells. And the third dimension is the number of bits per cell — QLC is four bits per cell. So those three dimensions have all been improving, and the result is that on a wafer that you create, more and more data can be stored — on the whole wafer, and on the chips that come from that wafer. And so QLC is the latest set of 3D NAND flash that's coming off the lines at the moment.
>> Okay, so my understanding is that there are new architectures entering the data center space that can take advantage of QLC — enter Vast.
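To make David's three scaling dimensions concrete, here is a minimal arithmetic sketch. The generation labels, layer counts, and bits-per-cell figures below are illustrative round numbers, not actual fab roadmap data; the point is simply that density multiplies across the dimensions he lists.

```python
# Relative 3D NAND density scales roughly with layers x bits_per_cell
# (cell geometry is the third dimension; held constant here for simplicity).
generations = [
    # (label, layers, bits_per_cell) -- hypothetical round numbers
    ("TLC, 64-layer",   64, 3),
    ("TLC, 96-layer",   96, 3),
    ("QLC, 96-layer",   96, 4),
    ("QLC, 176-layer", 176, 4),
]

base = generations[0][1] * generations[0][2]  # baseline: 64-layer TLC
for label, layers, bits in generations:
    density = layers * bits
    print(f"{label}: {density / base:.2f}x baseline bits per unit of wafer area")
```

Each step compounds, which is why consumer-volume QLC keeps pulling the enterprise flash cost curve down faster than HDD's.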
>> Renen, that's a nice setup for you. And maybe before we get into the architecture, can you talk a little bit more about the company? I mean, maybe not everybody's familiar with Vast — you shared why you started it, but what can you tell us about the business performance? Any metrics you can share would be great.
>> Sure, so the company, as I said, is five years old — about 170, 180 people today. We started selling product just around two years ago and have just hit $150 million in run rate. That's with eight salespeople. And so, as you can imagine, there's a lot of demand for flash all the way down the stack, in the way that David predicted.
>> Wow, okay. So you've got pretty comfortable — I think you've got product-market fit, right? And now you're going to scale. I would imagine you're going to go after escape velocity and you're going to build your moat. Now part of that — I mean, a lot of that — is product, right? Product and sales: those are the two golden pillars. But David, when you think back to your early forecast last decade, it was really about block storage. That was really what was under attack. You know, Fusion-io kind of got it started with Facebook; they were trying to solve their SQL database performance problems. And then we saw Pure Storage — they hit escape velocity. They drove a truck through EMC's Symmetrix HDD-based install base, which precipitated the acquisition of XtremIO by EMC — something Renen knows a little bit about, having led development of the product. But flash was late to the NAS party, guys. Renen, let me start with you. Why is that? And what is the relevance of QLC in that regard?
>> The way storage has always been, it looks like a pyramid: you have your block devices up at the top and then your NAS underneath, and today you have object down at the bottom of that pyramid. The pyramid basically represents capacity, and the Y axis is price-performance. And so if you could only serve a small subset of the capacity, you would go for block — and that is the subset that needed high performance. But as you go to QLC — and PLC will soon follow — the price of all-flash systems goes down to a point where it can compete at the lower ends of that pyramid, and the capacity grows to a point where there's enough flash to support those workloads. And so now, with QLC and a lot of innovation that goes with it, it makes sense to build an all-flash NAS and object store.
>> Yeah, okay. And David, you and I have talked about the volumes, and Renen sort of just alluded to that — the higher volumes of NAS — not to mention the fact that NAS is hard. You know, files are difficult. But that's another piece of the equation here, isn't it?
>> Absolutely. NAS is difficult. It's at very large scale — we're talking about petabytes of data. You're talking about very important data, and you're talking about data which is at the moment very difficult to manage. It takes a lot of people to manage it, it takes a lot of resources, and it takes up a lot, a lot of space as well. So all of those issues with NAS — and complexity is probably the single biggest problem.
>> So maybe we could geek out a little bit here. You guys go at it. Renen, talk about the Vast architecture. I presume it was built from the ground up for flash, since you were trying to kill HDD. What else do we need to know?
>> It was built for flash. It was also built for XPoint, which is a new technology that came out from Intel and Micron about three years ago.
XPoint is basically another level of persistent media, above flash and below RAM. But what we really set out to do is, as I said, to kill the hard drive, and for that what you need is to get to price parity. And of course, flash and hard drives are not at price parity today; as David said, they probably will be a few years from now. And so we wanted to jumpstart that, to accelerate it. So we spent a lot of time building a new type of architecture, with a lot of new metadata structures and algorithms on top, to bring that effective price down to a point where it's competitive today. And in fact, two years ago, the way we did it was by going out to talk to these vendors — Intel with 3D XPoint and QLC flash, Mellanox with NVMe over Fabrics and very fast Ethernet networks — and we took those building blocks and we thought: how can we use these to build a completely different type of architecture that doesn't just take flash one level down the stack, but actually allows us to break that pyramid, to collapse it down, and to build a single system that is as fast as your fastest all-flash block device, or faster, but as affordable as your hard drive based archives. And once that happens, you don't need to think about storage anymore. You have a single system that's big enough and cheap enough to throw everything at it, and it's fast enough that everything is accessible at sub-millisecond latencies. The way the architecture is built is pretty much the opposite of the way scale-out storage has been done. It's not based on shared-nothing — the way XtremIO was, the way Isilon is, the way Hadoop and the Google File System are. We're basing it on a concept called Disaggregated Shared Everything. What that means is that we have the media on one set of devices and the logic running in containers — just software — and you can scale each of those independently. So you can scale capacity independently from performance, and you have this shared metadata space that all of the containers can see. So the containers don't actually have to talk to each other in the synchronous path. That means it's much more scalable — you can go up to hundreds of thousands of nodes rather than just a few dozen. It's much more resilient — you can have all of them fail and you still haven't lost any data. And it's much easier to use, to David's point about complexity.
>> Thank you for that. And you mentioned up front that you not only built for flash, but built for XPoint. So you're using XPoint today. It's interesting — there has always been this sort of debate about XPoint. It's less expensive than RAM, or maybe I got that wrong, but it's persistent.
>> It is.
>> Okay, but it's more expensive than flash. And it was sort of thought of as a fence-sitter 'cause it didn't have the volume, but you're using it today successfully. That's interesting.
>> We're using it to offset the deficiencies of the low-cost flash. The nice thing about QLC and PLC is that you get the same levels of read performance as you would from high-end flash; the only difference between high-cost and low-cost flash today is in write cycles and write performance. And so XPoint helps us offset both of those. We use it as a large write buffer, and we use it as a large metadata store. And that allows us not just to arrange the information in a very large persistent write buffer before we need to place it on the low-cost flash.
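Here is a toy sketch of the write-buffering pattern Renen is describing: land small incoming writes in fast persistent memory, then flush them to low-cost flash as one large sequential stripe. This is a generic illustration of the technique, assuming an invented stripe size and class names — not VAST's actual implementation.

```python
# Toy write buffer: absorb small random writes in fast persistent memory
# (playing the role of XPoint), then flush one large sequential stripe to
# QLC flash. Large sequential writes are what low-cost flash tolerates well;
# small random writes are what wears it out.

STRIPE_BYTES = 8 * 1024 * 1024  # hypothetical flush threshold (8 MiB)

class WriteBuffer:
    def __init__(self, flush_fn):
        self.pending = []         # buffered (offset, data) pairs
        self.size = 0
        self.flush_fn = flush_fn  # writes one big stripe to backing flash

    def write(self, offset, data):
        self.pending.append((offset, data))
        self.size += len(data)
        if self.size >= STRIPE_BYTES:
            self.flush()

    def flush(self):
        if self.pending:
            # one large sequential write instead of thousands of small ones
            self.flush_fn(sorted(self.pending))
            self.pending, self.size = [], 0

buf = WriteBuffer(lambda stripe: print(f"flushed {len(stripe)} small writes as one stripe"))
for i in range(4096):                      # 4,096 x 4 KiB = 16 MiB of small writes
    buf.write(offset=i * 4096, data=b"x" * 4096)
```

Run as-is, this coalesces 4,096 tiny writes into two large flushes; the same idea — plus metadata kept in byte-addressable persistent memory — is what lets a system write QLC gently enough to guarantee its lifespan.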
XPoint also allows us to develop new types of metadata structures and algorithms that let us make better use of the low-cost flash and bring the effective price down even lower than the raw capacity cost.
>> Very cool. David, what are your thoughts on the architecture? Give us kind of the independent perspective.
>> I think it's a brilliant architecture. I'd like to just go one step down, on the network side of things. The whole use of NVMe over Fabrics allows the users — all of the servers — to get at any data across this whole network, directly. So you've got great performance right away, across the stack. And the other thing is that by using RDMA for NAS, you're able, if you need to, to get down to the data in microseconds. So overall that's a thousand times faster than any HDD system could manage. So this architecture really allows an any-to-any, simple, single level of storage, which is so much easier to think about, architect, use, or manage — it's just so much simpler.
>> If you had — I mean, I don't know if there's an answer to this question, but if you had to pick one thing, Renen, that you really were dogmatic about and bet on from an architectural standpoint, what would that be?
>> I think what we bet on in the early days is the fact that the pyramid doesn't work anymore, and that tiering doesn't work anymore. In fact, we stole Johnson and Johnson's tagline: No More Tears. Only it's not spelled the same way — for us it's "tiers." The reason for that is not because of storage; it's because of the applications. As we move more and more to applications that are machine-based — and machines are now not just generating the data, they're also reading the data, analyzing it, and providing insights for humans to consume — the workloads change dramatically. And the one thing that we saw is that you can't choose which pieces of information need to be accessible anymore. These new algorithms, especially around AI and machine learning and deep learning, need fast access to the entirety of the dataset, and they want to read it over and over and over again in order to generate those insights. And so that was the driving force behind us building this new type of architecture. And we see it every single day when we talk to customers — how the old architectures simply break down in the face of these new applications.
>> Very cool. Speaking of customers, I wonder if you could talk about use cases, customers, you know, in this NAS arena — maybe you could add some color there.
>> Sure. Our customers are large in data. We start at half a petabyte and grow into the exabyte range. The system likes to be big; as it grows, it grows super-linearly. If you have 100 nodes or 1,000 nodes, you get more than 10X in performance, in capacity efficiency, in resilience, et cetera. And so that's where we thrive. And those workloads today are mainly analytics workloads, although not entirely. If you look at it geographically, we have a lot of life sciences in Boston — research institutes, medical imaging, genomics; universities and pharmaceutical companies here in New York; a lot of financials — hedge funds analyzing everything from satellite imagery to trade data to Twitter feeds; out in California, a lot of AI and autonomous-driving vehicles, as well as media and entertainment — both generation of films, like animation, and content distribution are being done on top of Vast.
>> Great, thank you. And David, when you look at the forecasts you've made over the years, I imagine they match nicely with your assumptions. And so, okay, I get that, but not everybody agrees, David. I mean, certainly the HDD guys don't agree — they're obviously fighting to hang on to their awesome 50-year run — but as well, there are others doing hybrids and the like, and they kind of challenge your assumptions. And you don't have a dog in this fight; we just want the truth and try to do our best to report it. But let me start with this. One of the things I've seen is that you're comparing deduped and compressed flash with raw HDD. Is that true or false?
>> In terms of the fundamentals of the forecast, et cetera, it's false. What I'm taking is the Newegg price — and I did it this morning. I looked up a two-terabyte disk drive, a NAS disk drive; I think it was $54. And if you look at the cost of NAND for two terabytes, it's about $200. So it's a four-to-one ratio.
>> So,
>> So, and that's coming down from what people saw last year, which was five or six, and every year that ratio has been coming down.
>> So there's still a cost delta — HDD is still cheaper. So Renen, I wonder — one of the other things that Floyer has said is that because of the advantages of flash — not only performance but also data sharing, et cetera, which really drives other factors like TCO — it doesn't have to be at parity in order for customers to consume it. I certainly saw that on my laptop: I could have got more storage, and it could have been cheaper per bit, but I took the flash. I mean, no problem — that was an intelligence test. But what are you seeing from customers? And by the way, Floyer I think is forecasting that by what, 2026, there will actually be a raw-to-raw crossover — so then it's game over. But what are you seeing in terms of what customers are telling you, or any evidence you have that it doesn't have to be at parity — that customers actually get more value from flash even if it's more expensive? What are you seeing?
>> Yeah, in the enterprise space, customers aren't buying raw flash; they're buying storage systems. And so even if the raw numbers — flash versus hard drive — are still not there, there are a lot of things that can be done at the system level to equalize the two. In fact, a lot of our IP is based on that: we are taking flash, which today is, as David said, more expensive than hard drives, but at the system level it doesn't remain more expensive. And the reason for that is that storage systems waste space. They waste it on metadata; they waste it on redundancy. We built our new metadata structures such that everything lives in XPoint and is so much smaller, because XPoint is accessible at byte-level granularity. We built our erasure codes in a way where you can sustain 10, 20, 30 drive failures, but you only pay 2 or 3% in overhead. We built our data reduction mechanisms such that they can reduce data even if the application has already compressed it and already de-duplicated it. And so there's a lot of innovation that can happen at the software level, as part of this new disaggregated shared everything architecture, that allows us to bridge that cost gap today, without having customers do fancy TCO calculations. And of course, as prices of flash continue declining over the next few years, all of those advantages remain, and it will just widen the gap between hard drives and flash.
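The erasure-coding claim is easy to sanity-check with arithmetic: protection overhead is just parity strips divided by total strips, so very wide stripes buy many tolerated failures for a few percent of capacity. The stripe widths below are hypothetical, chosen only to illustrate the math Renen cites — not VAST's actual geometry.

```python
# Parity overhead for a d+p erasure-coded stripe: p / (d + p).
# Wider stripes amortize the same failure tolerance across more data strips.
examples = [
    # (data_strips, parity_strips) -- hypothetical stripe geometries
    (8,   2),   # a narrow, RAID-6-style stripe
    (96,  4),
    (500, 20),  # survive 20 simultaneous drive failures
]
for d, p in examples:
    overhead = 100 * p / (d + p)
    print(f"{d}+{p}: tolerates {p} failures at {overhead:.1f}% capacity overhead")
```

The trade-off is that very wide stripes are only practical when every node can reach every drive — which is exactly what the disaggregated, shared-everything design buys.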
And there really is no advantage to hard drives once the price thing is solved.
>> So thank you. So David, the other thing I've seen around these forecasts is the comment that you can't really data-reduce hard disk effectively — and I understand why: the overhead. And of course, in flash you can use all kinds of data reduction techniques and not affect performance, or it's not even noticeable — like the cloud guys do it upstream; others do it upstream. What's your comment on that?
>> Yes, if you take sequential data and you do a lot of work upfront, you can write it out in very big blocks, and that's a perfectly good way of doing it sequentially. The challenge for the HDD people is that if they go for that sort of sequential type of application, the cheapest way of doing it is to use tape — which comes back to the discussion that the two things that are going to remain are tape and flash. So that part of the HDD market, in my assertion, will go towards tape and tape libraries, and those are serving very well at the moment.
>> Yeah, I mean, it's just — the economics of tape are really attractive. I just feel like — I've said this many times — the marketing of tape is lacking. I'd like to see better thinking around how it could play, 'cause I think customers have this perception of tape, but there's actually a lot of value there. I want to carry on—
>> Small point there. Yeah, I mean, there's an opportunity, in the same way that Vast have created an architecture for flash: there's an opportunity out there for the tape people, with flash, to make an architecture that allows you to take that workload and really lower the price enormously.
>> You've called it Flape.
>> Flape, yes.
>> There are some interesting metadata opportunities there, but we won't go into that. And then David, I want to ask you about NAND shortages. We saw this in 2016 and 2017, and a lot of people are saying there's a NAND shortage again. So is that a flaw in your forecast? You're assuming prices of flash continue to come down faster than those of HDD, but shortages of NAND could be problematic. What do you say to that?
>> Well, I've looked at that in some detail, and one of the big, important things is what's happening in the flash market. YMTC, a Chinese company, has introduced a lot more volume into the market. They're making 100,000 wafers a month this year — that's around 6 to 8% of the NAND market. As a result, Samsung, Micron, Intel, Hynix — they're all increasing their volumes of NAND; they're all investing. So I don't see that NAND itself is going to be a problem. There is certainly a shortage of processor chips, which drive the intelligence in the NAND itself, but that's a problem for everybody. That's a problem for cars; it's a problem for disk drives.
>> You could argue that's going to create an oversupply, potentially. Let's not go there. But you know what, at the end of the day it comes back to the customer and all this stuff. It's interesting — I love talking about the architecture, but it's really all about customer value. And so, Renen, I want you to sort of close there. What should customers be paying attention to? And what should observers of Vast Data really watch as indicators of progress for you guys — milestones and things in the market that we should be paying attention to? But start with the customers. What's your advice to them?
>> Sure. For any customer that I talk to, I always ask the same thing.
Imagine where you'll be five years from now, because you're making an investment now that is at least five years long. In our case, we guarantee the lifespan of the devices for a decade, such that you know it's going to be there for you — and imagine what is going to happen over those next five years. What we're seeing in most customers is that they have a lot of dormant data, and with the advances in analytics and AI, they want to make use of that data. They want to turn it from a cost center to a profit center, to gain insight from that data and improve their business based on the information they have, the same way the hyperscalers are doing. In order to do that, you need one thing: fast access to all of that information. Once you have that, you have the foundation to step into this next-generation world where you can actually make money off of your information. And the best way to get very, very fast access to all of your information is to put it on fast media like flash and XPoint. If I can give one example: hedge funds. Hedge funds do a lot of back-testing on Vast. And what makes sense for them is to test as far back as they possibly can, but because of storage limitations, they can't do that. The other thing that's important to them is to have a real-time experience — to be able to run those simulations in a few minutes, not as a batch process overnight — but because of storage limitations, they can't do that either. The third thing is, if you have many different applications and many different users on the same system, they usually step on each other's toes. And so the Vast architecture solves those three problems. It allows a lot of information, very fast access and fast processing, and an amazing quality of service, where different users of the system don't even notice that somebody else is accessing the same piece of information. And so hedge funds is one example, but any one of these verticals that makes use of a lot of information will benefit from this architecture and this system. And if it doesn't cost any more, there's really no reason to delay this transition to all-flash.
>> Excellent — very clear thinking. Thanks for laying that out. And what about the things we should watch — how should we judge you? What are the things that we should watch?
>> I think the most important way to judge us is to look at customer adoption, and what we're seeing — and what we're showing investors — is a very high net dollar retention number. What that means is basically: a customer buys a piece of kit today; how much more will they buy over the next year, over the next two years? And we're seeing them buy more than three times more within a year of the initial purchase, and we see more than 90% of them buying more within that first year. And that to me indicates that we're solving a real problem and that they're making strategic decisions to stop buying any other type of storage system and to just put everything on Vast. Over the next few years we're going to expand beyond just storage services and provide a full stack for these AI applications. We'll expand into other areas of infrastructure and develop the best possible vertically integrated system to allow those new applications to thrive.
>> Nice, yeah. I think investors love that lifetime value story — if you can get above 3X of the customer acquisition cost, an IPO is on the way. Guys, hey, thanks so much for coming on theCUBE. We had a great conversation, and I really appreciate your time.
>> Thank you.
>> Thank you.
>> All right, thanks for watching, everybody. This is Dave Vellante for theCUBE. We'll see you next time.
(gentle music)
Pradeep Sindhu, Fungible | CUBE Conversation
>> As I've said many times on theCUBE, for years — decades, even — we've marched to the cadence of Moore's law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology; rather, it's the combination of data, applying machine intelligence, and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, over the last several years alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia for certain applications like gaming and crypto mining, and more recently machine learning. But in the middle of the last decade we saw early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCUBE. Great to see you.
>> Thank you, Dave, and thank you for having me.
>> You're very welcome. So okay, my first question is: don't CPUs and GPUs process data already? Why do we need a DPU?
>> That is a natural question to ask. CPUs have been around in one form or another for almost 55, maybe 60 years — this is when general purpose computing was invented — and essentially all CPUs went to the x86 architecture, by and large; of course, other architectures are used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that the architecture of general purpose CPUs has been refined heavily by some of the smartest people on the planet. And for the longest time, improvements — you referred to Moore's law, which is really the improvement of the price-performance of silicon over time — that, combined with architectural improvements, was the thing that was pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to get very much more; you're not going to squeeze more blood out of that stone from the general purpose computer architecture. What has also happened over the last decade is that Moore's law — which is essentially the doubling of the number of transistors on a chip — has slowed down considerably, to the point where you're only getting maybe 10, 20% improvements every generation in the speed of the transistor, if that. And what's also happening is that the spacing between successive generations of technology is actually increasing, from two, two and a half years, to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well-recognized, and we have to understand that they apply not just to general purpose CPUs; they also apply to GPUs. Now, general purpose CPUs do one kind of computation — they're really general, and they can do lots and lots of different things. It is actually a very, very powerful engine. The problem is it's not powerful enough to handle all computations. So this is why you ended up having a different kind of processor, called the GPU, which specializes in executing vector floating-point arithmetic operations much, much better than a CPU — maybe 20, 30, 40 times better.
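The CPU-versus-GPU gap Pradeep describes comes from data parallelism: a GPU (or even a CPU's SIMD unit) applies one operation across many elements at once instead of one at a time. Here is a minimal sketch of that contrast, using NumPy's vectorized operations as a stand-in; the 20-40x figure above is Pradeep's, and real speedups depend entirely on the hardware and workload.

```python
# Scalar loop vs. one vectorized operation over the same arrays.
# The vectorized form exposes the data parallelism that GPUs exploit.
import time
import numpy as np

n = 2_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

t0 = time.perf_counter()
out = np.empty_like(a)
for i in range(n):            # one element per step, like naive scalar code
    out[i] = a[i] * b[i]
scalar_s = time.perf_counter() - t0

t0 = time.perf_counter()
out_vec = a * b               # one bulk, data-parallel operation
vector_s = time.perf_counter() - t0

print(f"scalar: {scalar_s:.3f}s  vectorized: {vector_s:.5f}s  "
      f"speedup ~{scalar_s / vector_s:.0f}x")
```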
Well, GPUs have now been around for probably 15, 20 years, mostly addressing graphics computations, but recently — in the last decade or so — they have been used heavily for AI and analytics computations. So now the question is: well, why do you need another specialized engine called the DPU? Well, I started down this journey almost eight years ago, and I recognized — I was still at Juniper Networks, which is another company that I founded — I recognized that in the data center, as the workload changes to addressing larger and larger corpuses of data, number one, and as people use scale-out as the standard technique for building applications, number two, what happens is that the amount of east-west traffic increases greatly. And what happens is that you now have a new type of workload which is coming, and today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what a data-centric workload is.
>> Well, I wonder if I could interrupt you for a second.
>> Of course.
>> Because I want those examples, and I want you to tie it into the cloud, 'cause that's kind of the topic that we're talking about today, and how you see that evolving. I mean, it's a key question that we're trying to answer in this program. Of course, early cloud was about infrastructure — a little compute, a little storage, a little networking — and now we have to get, to your point, to all this data in the cloud. And we're seeing, by the way, the definition of cloud expand into this distributed — or, I think a term you use is disaggregated — network of computers. So you're a technology visionary, and I wonder how you see that evolving, and then please work in your examples of that critical workload, that data-centric workload.
>> Absolutely happy to do that. So if you look at the architecture of our cloud data centers, the single most important invention was scale-out of identical or near-identical servers, all connected to a standard IP Ethernet network. That's the architecture. Now, the building blocks of this architecture are Ethernet switches, which make up the network — IP Ethernet switches — and then the servers, all built using general purpose x86 CPUs, with DRAM, with SSDs, with hard drives, all connected inside the server to the CPU. Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how do you build very large scale infrastructure using general purpose compute. But what this architecture did is create a compute-centric architecture, and the reason it's compute-centric is that if you open this server node, what you see is a connection to the network — typically with a simple network interface card — and then you have CPUs which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the IO workload — what we call the data-centric workload. And so when you connect SSDs and hard drives and GPUs and everything to the CPU, as well as to the network, you can now imagine the CPU is doing two functions: it's running the applications, but it's also playing traffic cop for the IO. So every IO has to go through the CPU, and you're executing instructions — typically in the operating system — and you're interrupting the CPU many, many millions of times a second. Now, general purpose CPUs and their architecture were never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently.
So it's critical that in this new architecture, where there's a lot of data and a lot of east-west traffic, the percentage of workload which is data-centric has gone from maybe one to 2% to 30 to 40%. I'll give you some numbers which are absolutely stunning. If you go back to say 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 15 megahertz; the network was running at three megabits per second. Today the network runs at 100 gigabits per second, and the CPU clock speed of a single core is about 2.3 to 3 gigahertz. So you've seen that there's a 600X change in the ratio of IO to compute, just on raw clock speed. Now, you can tell me that, hey, typical CPUs have lots and lots of cores, but even when you factor that in, there's been close to two orders of magnitude change in the amount of IO to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. And the DPU actually solves two fundamental problems in cloud data centers. And these are fundamental; there's no escaping it. No amount of clever marketing is going to get around these problems. Problem number one is that in a compute-centric cloud architecture, the interactions between server nodes are very inefficient. That's number one, problem number one. Problem number two is that these data-centric computations, and I'll give you those four examples, the network stack, the storage stack, the virtualization stack, and the security stack, those four examples are executed very inefficiently by CPUs. Needless to say that if you try to execute these on GPUs, you will run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems. And you don't solve them by just taking older architectures off the shelf and applying them to these problems, because this is what people have been doing for the last 40 years. So what we did was we created this new microprocessor that we call the DPU from the ground up. It's a clean sheet design, and it solves those two problems fundamentally. >> So I want to get into that. And I just want to stop you for a second and ask you a basic question, which is, if I understand it correctly, if I just took the traditional scale-out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct? >> That is correct. And the workloads that we have today are very data-heavy. You take AI for example, you take analytics for example; it's well known that for AI training, the larger the corpus of relevant data that you're training on, the better the result. So you can imagine where this is going to go. >> Right. >> Especially when people have figured out a formula that, hey, the more data I collect, I can use those insights to make money- >> Yeah, this is why I wanted to talk to you, because the last 10 years we've been collecting all this data. Now, I want to bring in some other data that you actually shared with me beforehand, some market trends that you guys cited in your research. And the first thing people said is they want to improve their infrastructure, and they want to do that by moving to the cloud. And there was a security angle there as well.
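The 1987-versus-today comparison above reduces to simple arithmetic. A sketch using only the figures quoted in the conversation; the exact multiplier depends on which baseline you pick, so treat the "600X" as an order-of-magnitude claim rather than a precise value.

```python
cpu_1987_hz   = 15e6    # 15 MHz PC, as quoted
net_1987_bps  = 3e6     # 3 Mb/s network, as quoted
cpu_today_hz  = 2.3e9   # ~2.3 GHz single core, as quoted
net_today_bps = 100e9   # 100 Gb/s network, as quoted

ratio_then = net_1987_bps / cpu_1987_hz     # 0.2  (network slower than CPU)
ratio_now  = net_today_bps / cpu_today_hz   # ~43  (network faster than CPU)

print(f"IO-to-compute ratio shift: ~{ratio_now / ratio_then:.0f}x")
# ~217x on these raw numbers; starting instead from the "30 times slower"
# framing gives ~1300x. Either way the shift is in the hundreds, and still
# close to two orders of magnitude once multi-core is factored in.
```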
That's a whole other topic we could discuss. The other stat that jumped out at me: 80% of the customers that you surveyed said they'll be augmenting their x86 CPUs with alternative processing technology. So that's sort of, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture. >> Sure. >> And how you've approached this. You've clearly laid out that x86 is not going to solve this problem, and even GPUs are not going to solve the problem. >> They're not going to solve the problem. >> So help us understand the architecture and how you do solve this problem. >> I'll be very happy to. Remember I used this term traffic cop. I used this term very specifically, but first let me define what I mean by a data-centric computation, because that's the essence of the problem we're solving. Remember I said two problems. One is we execute data-centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently. And the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first let's look at the data-centric piece. For a workload to qualify as being data-centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything special yet. Secondly, this workload is heavily multiplexed, in that there are many, many computations happening concurrently, thousands of them, okay? That's number two, a lot of multiplexing. Number three is that this workload is stateful. In other words, you can't process packets out of order; you have to do them in order, because you're terminating network sessions. And the last one is that when you look at the actual computation, the ratio of IO to arithmetic is medium to high. When you put all four of them together, you actually have a data-centric workload, right? And this workload is terrible for general purpose CPUs. Not only does the general purpose CPU not execute it properly, the application that is running on the CPU also suffers, because data-centric workloads are interfering workloads. So unless you design specifically for them, you're going to be in trouble. So what did we do? Well, our architecture consists of very heavily multi-threaded general purpose CPUs combined with very heavily threaded specific accelerators. I'll give you examples of some of those accelerators: DMA accelerators, erasure coding accelerators, compression accelerators, crypto accelerators, and lookup accelerators. These are just some. These are functions that, if you do not specialize, you're not going to execute efficiently. But you cannot just put accelerators in there; these accelerators have to be multi-threaded too. We have something like 1,000 different threads inside our DPU to address these many, many computations that are happening concurrently, and handle them efficiently. Now, the thing that is very important to understand is that even with the number of transistors available, and I know that we have hundreds of billions of transistors on a chip, the problem is that those transistors are used very inefficiently today in the architecture of a CPU or a GPU. What we have done is we've improved the efficiency of those transistors by 30 times, okay?
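Restating the four criteria above as code makes the definition concrete. The field names and thresholds here are hypothetical, just a mechanical rendering of the list, not anything from Fungible.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    arrives_as_packets: bool   # 1. work arrives over the network as packets
    concurrent_contexts: int   # 2. heavily multiplexed, thousands at once
    stateful_in_order: bool    # 3. stateful: packets can't be processed out of order
    io_to_arithmetic: float    # 4. ratio of IO operations to arithmetic

def is_data_centric(w: Workload) -> bool:
    return (w.arrives_as_packets
            and w.concurrent_contexts >= 1_000
            and w.stateful_in_order
            and w.io_to_arithmetic >= 0.5)  # "medium to high": assumed cutoff

# e.g. a storage stack terminating thousands of sessions concurrently:
print(is_data_centric(Workload(True, 5_000, True, 0.8)))   # True
# a long-running matrix computation fails nearly every test:
print(is_data_centric(Workload(False, 4, False, 0.01)))    # False
```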
>> So you can use the real estate much more effectively? >> Much more effectively, because we were not trying to solve a general purpose computing problem. If you do that, you end up in the same bucket where general purpose CPUs are today. We were trying to solve the specific problem of data-centric computations and of improving node-to-node efficiency. So let me go to point number two, because that's equally important. In a scale-out architecture, the whole idea is that I have many, many nodes and they're connected over a high performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. The question is why? Well, the reason is that if I tried to run them faster than that, you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today. There is only one solution, which is to use TCP. Well, TCP is well known; it's part of the TCP/IP suite. But TCP was never designed to handle the latencies and speeds inside a data center. It's a wonderful protocol, but it was invented 43 years ago now. >> Yeah, very reliable and tested and proven. It's got a good track record, but you're right. >> Very good track record; unfortunately it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP, how would you apply it to the data center? That's what we've done with what we call FCP, which is a fabric control protocol, which we intend to open. We intend to publish the standards and make it open. And when you do that, and you embed FCP in hardware on top of this standard IP Ethernet network, you end up with the ability to run very large-scale networks where the utilization of the network is 90 to 95%, not 20 to 25%. >> Wow, okay. >> And you end up solving problems of congestion at the same time. Now, why is this important today? That's all geek speak so far. The reason this stuff is important is that such a network allows you to disaggregate, pool, and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side, storage on the other side. And increasingly even things like DRAM want to be disaggregated. Well, if I put everything inside a general purpose server, the problem is that those resources get stranded because they're stuck behind a CPU. Once you disaggregate those resources, and we're saying hyper disaggregate, the hyper simply means that you can disaggregate almost all the resources. >> And then you're going to reaggregate them, right? I mean, that's obviously- >> Exactly, and the network is the key in helping that. >> Okay. >> So the reason the company is called Fungible is because we are able to disaggregate, virtualize, and then pool those resources. The scale-out companies, the large ones, AWS, Google, et cetera, have been doing this disaggregation and pooling for some time, but because they've been using a compute-centric architecture, their disaggregation is not nearly as efficient as we can make it. They're off by about a factor of three. When you look at enterprise companies, they are off by another factor of four, because the utilization in the enterprise is typically around 8% of overall infrastructure. The utilization in the cloud, for AWS, and GCP, and Microsoft, is closer to 35 to 40%.
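The utilization figures quoted here translate directly into effective capacity. A quick sketch using the numbers as stated in the conversation, plus one hypothetical target for a well-pooled environment:

```python
def effective_gain(current_util: float, target_util: float) -> float:
    """How much more useful work the same infrastructure delivers
    if utilization moves from current_util to target_util."""
    return target_util / current_util

print(effective_gain(0.25, 0.95))  # network at 25% -> FCP's claimed 95%: ~3.8x
print(effective_gain(0.08, 0.35))  # enterprise ~8% -> hyperscaler ~35%: ~4.4x
print(effective_gain(0.08, 0.60))  # hypothetical well-pooled target: 7.5x
```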
So there is a factor of almost four to eight which you can gain by disaggregating and pooling. >> Okay, so I want to interrupt you again. These hyperscalers are smart. They have a lot of engineers, and we've seen them, you're right, using a lot of general purpose, but we've seen them make moves toward GPUs and embrace things like Arm. So I know you can't name names, but you would think that, with all the data that's in the cloud, again, our topic today, you would think the hyperscalers are all over this. >> Well, the hyperscalers recognize that the problems we have articulated are important ones, and they're trying to solve them with the resources and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has their own legacy now. They've been around for 10, 15 years, and so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point. >> They have technical debt, you mean? (laughs) >> I'm not going to say they have technical debt, but they have a certain way of doing things, and they are in love with the compute-centric way of doing things. And eventually it will be understood that you need a third element called the DPU to address these problems. Now, of course, you've heard the term SmartNIC. >> Yeah, right. >> Or your listeners must've heard that term. Well, a SmartNIC is not a DPU. What a SmartNIC is, is simply taking general purpose Arm cores, putting the network interface and a PCI interface on them, integrating them all on the same chip, and separating them from the CPU. So this does solve a problem. It solves the problem of the data-centric workload interfering with the application workload. Good job. But it does not address the architectural problem of how to execute data-centric workloads efficiently. >> Yeah, so it reminds me of, and I understand what you're saying, I was going to ask you about SmartNICs. It's almost like a bridge or a band-aid. >> Band-aid? >> It almost reminds me of throwing high-performance flash storage onto a disk system that was designed for spinning disk. It gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy, but we've seen this in computing for a long time. >> Yeah, this analogy is close. Okay, so let's take a hyperscaler X, okay? We won't name names. You find that half my CPUs are twiddling their thumbs because they're executing this data-centric workload. Well, what are you going to do? All your code is written in C++ on x86. Well, the easiest thing to do is to separate out the cores that run this workload. Put it on a different processor, let's say Arm, simply because x86 licenses are not available for people to build their own CPUs, so Arm was available. So they put a bunch of Arm cores down, they stick a PCI Express and a network interface on it, and you port that code from x86 to Arm. Not difficult to do, and it does get you results. And by the way, if, for example, this hyperscaler X, shall we call them, is able to remove 20% of the workload from general purpose CPUs, that's worth billions of dollars. So of course you're going to do that. It requires relatively little innovation other than to port code from one place to another place. >> Pradeep, that's what I'm saying. I mean, I would think, again, the hyperscalers, why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together.
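A rough sense of why removing even 20% of the workload "is worth billions" at hyperscale. The fleet size and per-server cost below are hypothetical placeholders, not anything from the conversation:

```python
fleet_servers    = 2_000_000   # assumed hyperscaler fleet size
cost_per_server  = 8_000       # assumed amortized cost per server, USD
offload_fraction = 0.20        # share of CPU cycles freed by moving IO work off x86

reclaimed = fleet_servers * cost_per_server * offload_fraction
print(f"~${reclaimed / 1e9:.1f}B of compute capacity reclaimed")   # ~$3.2B
```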
That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see, when Microsoft Azure and AWS both announced, I think, that they now depreciate servers over five years instead of four, it dropped like a billion dollars to their bottom line. But why not just work directly with you guys? I mean, it seems the logical play. >> Some of them are working with us, so it's not to say that they're not working with us. All of the hyperscalers recognize that the technology we're building is fundamental, that we have something really special, and moreover it's fully programmable. So the whole trick is, you can actually build a lump of hardware that is fixed function, but the difficulty is that the place where the DPU sits, which is on the boundary of a server and the network, literally on that boundary, is a place where the functionality needs to be programmable. And so the whole trick is how do you come up with an architecture where the functionality is programmable, but it is also very high speed for this particular set of applications. So the analogy with GPUs is nearly perfect, because Nvidia in particular invented CUDA, which is the programming language for GPUs. It made them easy to use, made them fully programmable, without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture, and we've made it very easy to program. And these computations that I talked about, security, virtualization, storage, and then network: those four are quintessential examples of data center workloads, and they're not going away. In fact, they're becoming more, and more, and more important over time. >> I'm very excited for you guys, and really appreciate it, Pradeep. We'll have you back, because I really want to get into some of the secret sauce. You talked about these accelerators, erasure coding and crypto accelerators, and I want to understand that. I know there's NVMe in here; there's a lot of hardware and software and intellectual property. But we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of, I like this term, a massive disaggregated network. >> Hyper disaggregated. >> Hyper disaggregated, even better. And I would say this, and then I've got to go, but what got us here the last decade is not the same as what's going to take us through the next decade. >> That's correct. >> Pradeep, thanks so much for coming on theCUBE. It's been a great conversation. >> Thank-you for having me, Dave; it's really a pleasure to speak with you and get the message of Fungible out there. >> Yeah, I promise we'll have you back. And keep it right there, everybody; we've got more great content coming your way on theCUBE on cloud. This is Dave Vellante. Stay right there. >> Thank-you, Dave.
IBM and Brocade: Architecting Storage Solutions for an Uncertain Future | CUBE Conversation
>> Narrator: From theCUBE studios in Palo Alto and in Boston, connecting with our leaders all around the world, this is a CUBE conversation. >> Welcome to theCUBE and this special IBM Brocade panel. I'm Lisa Martin, and I have the great opportunity here to sit down for the next 20 minutes with three gentlemen. Please welcome Brian Sherman, a distinguished engineer from IBM. Brian, great to have you joining us. >> Thanks for having me. >> And Matt Key here, FlashSystem SME from IBM. Matt, happy Friday. >> Happy Friday, Lisa. Thanks for having us. >> Our pleasure. And AJ, customer solutions from Brocade, is here. AJ, welcome. >> Thanks for having me along. >> AJ, we're going to stick with you. IBM and Brocade have had a very long, you said about 22 year, strategic partnership. There's some new news; in terms of the evolution of that, talk to us about what's going on with Brocade and IBM, and what is new in the storage industry. >> Yeah, so the newest thing for us at the moment is that IBM, just in mid-October, launched our Gen seven platforms. Think about the stresses that are going on in IT environments: this is our attempt to keep pace with the performance levels that the IBM teams are now putting into their storage environments, the all-flash data centers, and the new technologies around non-volatile memory express. So that's really what's driving this, along with the desire to say, "You know what, people aren't allowed to be in the data center." And so if they can't be in the data center, then the fabrics actually have to be able to figure out what's going on and basically provide a lot of the automation pieces, something we're referring to as the autonomous SAN. >> And we're going to dig into NVMe over fabrics in a second, but I do want to continue with you, AJ, in terms of industries: financial services, healthcare, airlines, who are the biggest users, with the biggest need? >> Pretty much across the board. If you look at the Global 2000 as an example, something on the order of 96, 97% of the Global 2000 make use of fibre channel environments in portions of their world. It generally tends to be a lot of the high end financial guys, a lot of the pharmaceutical guys, the automotive, the telcos; pretty much, if the data matters, and it's something that's critical, whether we talk about payment card information or healthcare environments, data that absolutely has to be retained, has to get there, has to perform, then it's this combination that we're bringing together today, around the new storage elements and the functionality they have, and then our ability in the fabric. So the concept of a 64 gig environment helps us basically not be the bottleneck for the application demands, 'cause one thing I can promise you after 40 years in this industry is the software guys always figure out how to consume all the performance that the hardware guys put on the shelf, right? Every single time. >> Well, there's a gauntlet thrown down there. Matt, let's go to you; I want to get IBM's perspective on this. Again, as we said, a 22 year strategic partnership. As we look at things like not being able to get into the data center during these unprecedented times, and also the need to be able to remove some of those bottlenecks, how does IBM view this? >> Yeah, totally. It's certainly a case of raising the bar, right?
So we have to, as a vendor, continue to evolve in terms of performance, in terms of capacity, cost density, escalating simplicity, because it's not just a case of not being able to touch the racks; there are also fewer people to adjust the racks, right? Our operational density continues to have to evolve: raising the bar on the network, still saturating those line rates, and providing a cost efficiency that gets us to a utilization that raises the bar on our per-admin ratio, not just talking about 200, 300 terabytes per admin, but going beyond petabyte scale per admin. And we can't do that unless people have access to the data. We have to provide the resiliency, we have to provide the simplicity of presentation and automation from our side, and then this collaboration that we do with our network brethren like Brocade here keeps the network out of the discussion when it comes to who dropped the ball next. So we truly appreciate this Gen seven launch that they're doing; we're happy to come in and fill that pipe on the flash side for them. >> Excellent. And Brian, as a distinguished engineer, let me get your perspective on the evolution of the technology over this 22 year partnership. >> Thanks, Lisa. It certainly has been a longstanding, great relationship, a great partnership, all the way from inventing joint things, to developing, to testing and deploying the different technologies through the course of time. And it's come to where we are today, like AJ talked about: being able to sustain what the applications require in this always-on type of environment, and, as Matt said, bringing together the density and operational simplicity to make that happen. We have to make it easier from the storage side for operations to manage this volume of data that we have coming, and our due diligence is to serve the data up as fast as we can, and as resiliently as we can. >> And sticking with you, Brian, that simplicity is key, because as we know, as we get more and more advances in technology, the IT environment is only becoming more complex. So truly enabling organizations in any industry to simplify is absolute table stakes. >> Yeah, it definitely is. And that's core to what we're focused on: how do we make the storage environment simple? Historically, we, and the industry as a whole, have had an entry-level product, midrange-level products, and high-end-level products. And earlier this year we said, enough of that: it's one product portfolio. It's the same software stack; it's just small, medium and large in terms of the appliances that get delivered. Again, building on what Matt said from a density perspective, we can have a petabyte of storage, before any compression or data reduction, in a 2U enclosure. So from an overall administration perspective, again, it's one software stack, one automation stack, one way to do point-in-time copies and replication. We're focused on making that as simple for operations as we possibly can. >> I think we'd all take a little bit of that right now. Matt, let's go back to you, and then AJ, you too; let's talk a little bit more and dig into the IBM storage arrays. I mean, we're talking about advances in flash, and we're talking about NVMe as a forcing function for applications to change and evolve with the storage.
Matt, give us your thoughts on that. >> We saw a monumental leap in how we deliver simplicity with our arrays, but also in the technology within the arrays. About nine months ago, in February, we launched the latest generation of NAND technology, and with that came a story of simplicity. One of the value props we've been happily negating is storage-level tiering. We can still say, "Hey, we support the idea of going down to nearline SAS and enterprise disk and different flavors of solid state," whether it's tier one short usage, tier zero high performance and high usage, all the way up to storage class memory. While we support those technologies and the automated tiering, the elegance of this latest generation technology is that it essentially homogenizes the environment: we're able to deliver that petabyte-per-rack-unit ratio that Brian was mentioning, and deliver an all-tier-zero solution that doesn't have to go through the woes of software-managed data reduction or any kind of software-managed tiering just to be always fast and always available, with a 100% data availability guarantee that we offer through a technology called HyperSwap. It really highlights what we take from that simplicity story, by going that extra mile and meeting the market in technology refresh. I mean, if you say the word IBM over the Thanksgiving table, you're kind of thinking Big Blue, big mainframe, old iron stuff, but I'm very happy to say that over in distributed systems we are in fact leading this pack by multiple months. It's not just that we announced sooner; it's actually delivering the solution on-prem nine, 10 months prior to anybody else. And that gets us into new density flavors, gets us into new efficiency offerings. It's not just talk about, "Hey, I can do petabyte scale in a couple of rack units." With the likes of Brocade, that actually equates to a terabyte per second in a floor tile. What's that do for your analytics story? And the fact that we're now leveraging NVMe to undercut the value prop of spinning disk in your HPC analytics environments by 5X, that's huge. So now let's take nearline SAS off the table for anything where the performance of the data actually matters. So on the simplicity side, what we're doing now is taking the flash we've been deriving from the Texas Memory Systems acquisition eight years ago and integrating that into essentially industry-proven software solutions that we deliver with Spectrum Virtualize. That appliance form factor has been absolutely monumental for us in distributed systems. >> And thanks for giving us a topic to discuss at our socially distant Thanksgiving tables; we'll talk about IBM. AJ, over to you. Lots of advances here, also in such dynamic times. I want to get Brocade's perspective on how you're taking advantage of these latest technologies with IBM, and also, from a customer's perspective, what are they feeling and really being able to embrace and utilize of that simplicity that Matt talked about? >> So there are a couple of things that fall into that, to be honest, one of which is that, similar to what you heard Brian describe across the IBM portfolio, for storage in our SAN infrastructure it's a single operating system up and down the line.
So from the most entry-level platform we have to the largest platform we have, it's a single software up and down, a single management environment up and down, and it's also intended to be extremely reliable and extremely performant. Because here's part of the challenge: when Matt's talking about multiple petabytes in a 2U rack height, the conversation you want to flip on its head is, okay, exactly how many virtual machines and how many applications are you going to be driving out of that? Because it's going to be thousands, between six and 10,000 potentially, right? So imagine if you have some little hiccup in the connectivity to the data store for 6,000 to 10,000 applications. That's not the kind of thing people get forgiving about when we're all home like this. When your healthcare, your finance, your entertainment, when everything is coming to you across the network and remotely, and it's all application driven, the one thing you want to make sure of is that the network doesn't hiccup, because humans have a lot of really good characteristics; patience would not be one of those. And so you want to make sure that everything is in fact in play and running, and that's one of the things we work very hard with our friends at IBM to make sure of: that the kinds of analytics Matt was just describing are things that you can readily get done. Speed is the new currency of business is a phrase you hear, a quote from Marc Benioff at Salesforce, right? And he's right: if you can get intelligence out of the data you've been collecting, that's really cool. But one of the other flip sides of people not being able to be in the data center, and, to Matt's point, not as many people around either, is: how are humans fast enough? Honestly, when you look at the performance of the platforms these folks are putting up, how is human response time going to be good enough? We all sort of have this headset of a network operations center, where you've got a couple dozen people in a half-lit room staring at massive screens waiting for something to pop. Okay, if the first time a red light pops the human begins the investigation, at what point is that going to be good enough? And so our argument for the autonomy piece of what we're doing in the fabrics is: you can't wait on the humans. You need to augment them. I get that people still want to be in charge, and that's good; humans are still smarter than the silicon. We're not as repeatable, but we're still so far smarter about it. And so we need to be able to do that measurement. We need to be able to figure out what normal looks like. We need to be able to highlight to the storage platform and to the application admins when things go sideways, because the demand from the applications isn't going to slow down. The demands from your environment, whether you want to think about taking the next steps with not just your home entertainment systems but augmented reality for learning, right, virtual reality environments for kids: how do you make them feel like they're part and parcel of the classroom, for as long as we have to continue living in a modified world, and perhaps past it, right? If you can take a grade school from your local area and give them a virtual walkthrough of the Louvre, where everybody's got a perfect view and it all looks incredibly real to them, those are cool things, right?
Those are cool applications, right? If you can figure out a new vaccine faster, not a bad thing. If we can model better, not a bad thing. So we need to enable those things; we need to not be the bottleneck. And you get Matt and Brian over an adult beverage at some point and ask them about the cycle time for the silicon they're playing with. We've never had Moore's law applied to external storage before; never in the history of external storage has that been true, until now. And so their cycle times, Matt, right? >> Yeah, you struck a nerve there, AJ, 'cause it's pretty simple for us to follow the linear increase in capacity and computational horsepower, right? We just ride the x86 bandwagon, ride the silicon bandwagon. But what we have to do in order to maintain the simplicity story is to follow the more important one, the resiliency factor. Because as we increase the capacity, as we increase the amount of data each admin is responsible for, we have to logarithmically increase the resiliency of these boxes, because we're talking about petabyte-scale systems hosting nearly 10,000 virtual machines in a 2U form factor. I need to be able to accommodate that to make sure things don't blip. I need resilient networks, with redundancy of access. I need protection schemes at every single layer of the stack. And so we're quite happy to be able to provide that as we leapfrog the industry, going to literally three times the competitive density that you see out there in other distributed systems that are still bound by the commercial offerings. And hey, we also have to own that risk from the vendor side: we make these things RAID six protection scheme equivalent from a drive standpoint, and active-active from controllers everywhere, to be able to supply the performance and consistency of that service even through the bad situations. >> And to that point, one of the things that you talked about that's interesting to me, that I'd like you to highlight, is your recovery times, because bad things will happen. And you guys do something very, very different about that. That's critical to a lot of my customers, because they know that Murphy will show up one day. So, I mean, 'cause it happens, so then what? >> Well, speaking of that "then what," Brian, I want to go over to you. You mentioned, Matt mentioned, resiliency. And if we think of the situation that we're in, in 2020, many companies are used to DR and BC plans for natural disasters and pandemics. So as we look at the shift, and the volume of ransomware that's going up, one ransomware attack every 11 seconds this year right now, Brian, what's the change that businesses need to make from cybersecurity to cyber resiliency? >> Yeah, it's a good point, and I try to hammer that home with our clients: you're used to having your business continuity and disaster recovery, but this whole cyber resiliency thing is a completely separate practice that we have to set up and think about, going through the same thought process that you did for your DR. What are you going to do? What are you going to pretest? How are you going to test it? How are you going to detect whether or not you've got ransomware? So I spend a lot of time with our clients on that theme: you have to think about and build your cyber resiliency plan, 'cause it's going to happen.
It's not like a DR plan, where it's a pure insurance policy. Like you said, every 11 seconds there's an event that takes place; it's going to be a when, not an if. So we have to work with our customers to put a practice in place for cyber resiliency, and then we spend a lot of discussion on, okay, what does that mean for my critical applications, from restore time to backup immutability. What do we need for those types of services? In terms of quick restore, which are my tier zero applications that I need to get back as fast as possible, and which other ones can I stick out on tape, or virtual tape, and do things like that. So again, there's a wide range of technology that we have available in the portfolio for helping our clients with cyber resiliency. And then we try to distinguish cyber resiliency versus cybersecurity: how do we help keep everybody out, from a cybersecurity view, and then, once something gets through, which is a bad thing, what can we do from the cyber resiliency side, from a storage perspective, to help our folks recover? >> Well, and that's the point that you're making, Brian: now it's not a matter of could this happen to us; it's going to, and how much can we tolerate? Ultimately we have to be able to recover and restore that data. And when you talk about ransomware, we go to people as the weakest link in security; AJ talked about that. There's the people, and there's probably quite a bit of lack of patience going on right now. But I want to go back over to you, AJ, to look, from a data center perspective and these storage solutions, at being able to utilize things that help the people: AI and machine learning. You talked about AR and VR. Talk to me a little bit more about that as you see it, say in the next 12 months or so, as we move forward with these trends, these new solutions that are simplified. >> Yeah, so a couple of things around that, one of which is the iteration of technology: the storage platforms, the silicon they're making use of. Matt, I think you told me 14 months is roughly the silicon cycle that you guys are seeing, right? So performance levels are going to continue to go up, the speeds are going to continue to go up, and the scale is going to continue to shift. And one of the things that does for a lot of the application owners is it lets them think broader; it lets them think bigger. And I wish I could tell you that I knew what the next big application was going to be, but then we'd be having a conversation about which island in the Pacific I was going to be retiring to. But they're going to come, and they're going to consume this performance, because if you look at the applications you're dealing with in your everyday life, they continue to get broader, and the scope of them continues to scale out. There's things that we do: I saw, I think it was an MIT development recently, where, originally doing it for Alzheimer's and dementia, they're talking about being able to use the microphones in your smartphone to listen to the way you cough and use that as a predictor for people who have COVID but are not symptomatic yet, asymptomatic COVID people, right? So when we start talking about where this kind of technology can go and where it can lead us, there's sort of this unending possibility for it.
But what that rides on, in part, is that the infrastructure has to be extremely sound. The foundation has to be there. We have to have the resilience, the reliability, and one of the points that Brian was just making is extremely key. We talk about disaster tolerance and business continuance: business continuance is, how do you recover? Cyber resilience is the same conversation. You have the protection side of it, here's my defenses, and then: what happens when they actually get in? And let's be honest, humans are frequently that weak link, for a variety of behaviors that humans have. And so when that happens, where's the software in the storage that tells you, "Hey, wait, there's an odd traffic behavior here, where data is being copied at rates and to locations that are not normal." And so that's part of what we're doing in our side of the automation: how do you know what normal looks like? And once you know what normal looks like, you can figure out where the outliers are. And that's one of the things people use a lot for trying to determine whether or not ransomware is going on: hey, this is a traffic pattern that's new, this is a traffic pattern that's different. Are they doing this because they're copying the dataset from here to here and encrypting it as they go? 'Cause that's one of the challenges you've got to watch for. So I think you're going to see a lot of advancement in the application space, and not just the MIT stuff, which is great. Or I may have misspoken, maybe it was Johns Hopkins, and I apologize to the Johns Hopkins folks if so, but that kind of scenario. There's no knowing what they can make use of in terms of the data sets, because we're gathering so much data. The internet of things is an overused phrase, but the sheer volume of data being generated outside of the data center, yet manipulated, analyzed and stored internally, 'cause you've got to have it someplace secure, right? And that's one of the things we look at from our side: we've got to be as close to unbreakable as we can be. And then when things do break, we have to figure out exactly what happened as rapidly as possible, and then the recovery cycle as well. >> Excellent. And Matt, I want to finish with you. We just have a few seconds left, but as AJ was talking about this massive evolution in applications, for example, when we talk about simplicity, and we talk about resiliency and being able to recover when something happens, how do these new technologies that we've been unpacking today help the admin folks deal with all of the dynamics that are happening today? >> Yeah, so I think the biggest drop-the-mic thing we can say right now is that we're delivering 100% tier zero in NVMe, without data reduction value props on top of it, at a cost that undercuts off-prem S3 storage. So if you look at what you can do from an off-prem solution for air gap and for cyber resiliency, you can put your data somewhere else, but it's going to take however long to transfer that data back on-prem to get back to your recovery point. With the economics that we're delivering right now in distributed systems, hey, on your DR side, your copies of data do not have to wait for that off-prem bandwidth to restore. You can literally restore in place.
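A quick illustration of why restore-in-place matters at this scale. The dataset size and WAN bandwidth below are assumed values, purely to show the order of magnitude:

```python
dataset_tb = 500    # assumed size of the data to recover
wan_gbps   = 10     # assumed off-prem restore bandwidth

seconds = (dataset_tb * 8e12) / (wan_gbps * 1e9)
print(f"pulling {dataset_tb} TB back over a {wan_gbps} Gb/s link: "
      f"~{seconds / 86400:.1f} days")   # ~4.6 days
# versus reverting to an on-array point-in-time copy, which is
# near-immediate because no data has to cross the wire.
```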
And you couple that with all of the technology on the software side that integrates with it, and I get incremental point-in-time recovery, whether it's on the primary side or the DR side, wherever. The fact that we get to approach this from a cost-value angle means I can naturally absorb a lot of the cyber resiliency value in it too. And because it all gets the same orchestrated capabilities, regardless of big, small, medium, all that stuff, it's the same skill set. I don't need to learn new platforms or new solutions to provide cyber resiliency; it's just part of my day-to-day activity, because fundamentally all of us have to wear that cyber resiliency hat. Our job as a vendor is to make that simple, make it cost elegant, and provide an essentially homogeneous solution overall. So hey, as your business grows, your risk gets averted, and your recovery needs also get absorbed by your incumbent solutions and architecture. So it's pretty cool stuff that we're doing, right? >> It is pretty cool. And I'd say a lot of folks would say that's the Nirvana, but I think the message the three of you have given in the last 20 minutes or so is that, with IBM and Brocade together, this is a reality. You guys are a cornucopia of knowledge. Brian, Matt, AJ, thank you so much for joining me on this panel; I really enjoyed our conversation. >> Thank you. >> Thank you again, Lisa. >> My pleasure. For my guests, I'm Lisa Martin. You've been watching this IBM Brocade panel on theCUBE.
Krishna Doddapaneni, VP, Software Engineering, Pensando | Future Proof Your Enterprise 2020
>> From theCUBE studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, welcome back, I'm Stu Miniman, and this is a CUBE conversation digging in with Pensando, talking about what they're doing to help people, really bringing some of the networking ideals to cloud native environments, both in the cloud and in the data center. Happy to welcome to the program Krishna Doddapaneni; he is the Vice President of Software at Pensando. Krishna, thanks so much for joining us. >> Thank you so much for having me. >> Alright, so Krishna, the Pensando team is, you know, very well known in the industry for innovation, especially in the networking world. Give us a little bit about your background, specifically how long you've been part of this team, and, you know, you and the team's story.
There was this new innovation came out that I wish I had add that when I did this last time. So we do, a generation. Oh, wow. Talking about, you know, distributed architectures or, you know, well, over a decade spent a long time now, uh, in many ways I feel edge computing is just, you know, the latest discussion of this, but when it comes to, and you know, you've got software, uh, under, under your purview, um, what are some of the things that are available for that might not have been, you know, in your toolkit, you know, five years ago. Yeah. So the growth of open source software has been very helpful for us because we baked scale-out microservices. This controller, like the last time I don't, when we were building that, you know, we had to build our own consensus algorithm. >>We had to build our own dishwasher database for metrics and humans and logs. So right now, uh, we, I mean, we have, because of open source thing, we leverage CD elastic influx in all this open source technologies that you hear, uh, uh, since we want to leverage the Kubernetes ecosystem. No, that helped us a lot at the same time, if you think about it. Right. But even the software, which is not open source, close source thing, I'm maturing. Um, I mean, if you talk about SDN, you know, seven APS bank, it was like, you know, the end versions of doing off SDN, but now the industry standard is an ADPN, um, which is one of the core pieces of what we do we do as Dean solution with DVA. Um, so, you know, it's more of, you know, the industry's coming to a place where, you know, these are the standards and this is open source software that you could leverage and quickly innovate compared to building all of this from scratch, which will be a big effort for us stocked up, uh, to succeed and build it in time for your customer success. >>Yeah. And Krishna, I, you know, you talk about open forum, not only in the software, the hardware standards. Okay. Think about things, the open compute or the proliferation of, you know, GPS and, uh, everything along that, how was that impact? I did. So, I mean, it's a good thing you're talking about. For example, we were, we are looking in the future and OCP card, but I do know it's a good thing that SEP card goes into a HP server. It goes into a Dell software. Um, so pretty much, you know, we, we want to, I mean, see our goal is to enable this platform, uh, that what we built in, you know, all the use cases that customer could think of. Right. So in that way, hardware, standardization is a good thing for the industry. Um, and then same thing, if you go in how we program the AC, you know, we at about standards of this people, programming, it's an industry consortium led by a few people. >>Um, we want to make sure that, you know, we follow the standards for the customer who's coming in, uh, who wants to program it., it's good to have a standards based thing rather than doing something completely proprietary at the same time you're enabling innovations. And then those innovations here to push it back to the open source. That's what we trying to do with before. Yeah. Excellent. I've had some, some real good conversations about before. Um, and, and the way, uh, and Tondo is, is leveraging that, that may be a little bit differently. You know, you talk about standards and open source, oftentimes it's like, well, is there a differentiator there, there are certain parts of the ecosystem that you say, well, kind of been commodified. Mmm. 
Obviously you're taking a lot of different technologies, putting them together, uh, help, help share the uniqueness. Okay. And Tondo what differentiates, what you're doing from what was available in the market or that I couldn't just cobbled together, uh, you know, a bunch of open source hardware and software together. >>Yeah. I mean, if you look at a technologist, I think the networking that both of us are very familiar with that. If you want to build an SDN solution, or you can take a, well yes. Or you can use exhibit six and, you know, take some much in Silicon and cobble it together. But the problem is you will not get the performance and bandwidth that you're looking for. Okay. So let's say, you know, uh, if you want a high PPS solution or you want a high CPS solution, because the number of connections are going for your IOT use case or Fiji use case, right. If you, uh, to get that with an open source thing, without any assist, uh, from a domain specific processor, your performance will be low. So that is the, I mean, that's once an enterprise in the cloud use case state, as you know, you're trying to pack as many BMCs containers in one set of word, because, you know, you get charged. >>I mean, the customer, uh, the other customers make money based on that. Right? So you want to offload all of those things into a domain specific processor that what we've built, which we call the TSC, which will, um, which we'll, you know, do all the services at pretty much no cost to accept a six. I mean, it's to six, you'll be using zero cycles, a photo doing, you know, features like security groups or VPCs, or VPN, uh, or encryption or storage virtualization. Right. That's where that value comes in. I mean, if you count the TCO model using bunch of x86 codes or in a bunch of arm or AMD codes compared to what we do. Mmm. A TCO model works out great for our customers. I mean, that's why, you know, there's so much interest in a product. Excellent. I'm proud of you. Glad you brought up customers, Christina. >>One of the challenges I have seen over the years with networking is it tends to be, you know, a completely separate language that we speak there, you know, a lot of acronyms and protocols and, uh, you know, not necessarily passable to people outside of the silo of networking. I think back then, you know, SDN, uh, you know, people on the outside would be like, that stands for still does nothing, right? Like networking, uh, you know, mumbo jumbo there for people outside of networking. You know what I think about, you know, if I was going to the C suite of an enterprise customer, um, they don't necessarily care about those networking protocols. They care about the, you know, the business results and the product Liberty. How, how do you help explain what pen Sandow does to those that aren't, you know, steeped in the network, because the way I look at it, right? >>What is customer looking? But yeah, you're writing who doesn't need, what in cap you use customer is looking for is operational simplicity. And then he wants looking for security. They, it, you know, and if you look at it sometimes, you know, both like in orthogonal, if you make it very highly secure, but you make it like and does an operational procedure before you deploy a workload that doesn't work for the customer because in operational complexity increases tremendously. Right? So it, we are coming in, um, is that we want to simplify this for the customer. You know, this is a very simple way to deploy policies. 
>> Excellent. Glad you brought up customers, Krishna. One of the challenges I have seen over the years with networking is that it tends to be a completely separate language that we speak: a lot of acronyms and protocols, not necessarily parsable to people outside the silo of networking. I think back to SDN; people on the outside would say it stands for 'still does nothing.' There's a lot of networking mumbo jumbo for people outside the discipline. The way I look at it, if I were going to the C-suite of an enterprise customer, they don't necessarily care about those networking protocols. They care about the business results and the product delivery. How do you explain what Pensando does to those who aren't steeped in the network?
>> What is the customer looking for? They aren't asking what encap you use. The customer is looking for operational simplicity, and they're looking for security. And if you look at it, sometimes those two are orthogonal: if you make something highly secure but add a dozen operational procedures before you can deploy a workload, that doesn't work for the customer, because the operational complexity increases tremendously. Where we come in is that we want to simplify this for the customer. There's a very simple way to deploy policies, and a simple way to deploy your networking infrastructure. The way we do it, we don't care what your physical network is, in some sense, because we sit close to the server, and that's a very good advantage. We apply the policies before the packet even leaves the server, so you know it's a fully secure environment. And you don't manage each one individually: we have a product called PSM that manages all these services from a central place. It's easy to operationalize the fabric, whether you're talking about upgrades or about deploying new services. It's all driven with REST APIs, and you can have a GUI, so you can do it from a single place. That's where the customer's value is, rather than talking about encaps or exactly how packets get routed port to port. That's not the main thing they wake up thinking about. They wake up thinking: do I have a security risk? And how easy is it for me to deploy new services or bring up a new data center?
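Since the transcript only says the fabric is driven with REST APIs, here is a hedged sketch of what central, API-driven policy push looks like in general. The endpoint path, payload shape, and token are hypothetical placeholders invented for illustration, not PSM's documented API:

```python
import requests

PSM = "https://psm.example.com"   # hypothetical controller address
TOKEN = "example-token"           # hypothetical auth token

# One policy object, defined once, enforced at every server's card by the
# central manager -- the operator never touches individual devices.
policy = {
    "name": "allow-web-to-db",
    "rules": [
        {"from": "tier/web", "to": "tier/db", "port": 5432, "action": "permit"},
        {"from": "any",      "to": "tier/db", "action": "deny"},
    ],
}

resp = requests.post(
    f"{PSM}/api/v1/security-policies",  # hypothetical endpoint
    json=policy,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("policy accepted by central manager")
```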
>> Okay. Krishna, you're also spanning a few different worlds with your product: the traditional enterprise data center, the hyperscale public cloud, and edge sites come to mind. Those involve very different skill sets for management and different types of deployments, and you intend to play in all of those environments. So talk a little bit about that, please: how you do that, and where you sit in that overall discussion.
>> The number one rule inside the company is that we are driven by customers; customer success is our success. Given that, what we try to do is build a platform that is programmable, starting from P4, which we talked about earlier, but that is also pluggable from a software point of view. When we build software, our cloud customers who use the DSC drive it from their own controllers through the same set of REST and gRPC APIs that the DSC provides. When we ship the same platform to enterprise customers, we build our own controller, and it uses the same DSC APIs. So we fully leverage what we do for enterprise customers and for cloud customers; we don't try to reinvent the wheel. And if you look at the highest-level constructs from a network perspective, what are you trying to do? You're trying to provide connectivity, isolation, and security. All these constructs are encapsulated in APIs, mostly cloud-like APIs, and those APIs are used by cloud customers and enterprise customers alike. The software is built so that any layer can be removed and any layer can be added, because it isn't monolithic. We don't want multiple different offerings for different customers; then we would not scale. So the idea when we started the software architecture was to make it pluggable and programmable enough that if a customer says, 'I don't want this piece of it,' they can put a third-party piece in and still integrate at a common layer using the same APIs.
>> Well, you know, Krishna, I have a little bit of appreciation for some of the hard work your team has been doing: a couple of years in stealth, then really accelerating from the announcement coming out of stealth at the end of 2019 to GA with a major OEM in HPE in just about half a year. Definitely a lot of work that needed to be done. Which brings us to: what are you most proud of from the work your team is doing? We don't need to hear any major horror stories, but there are always potholes and challenges that often stay hidden behind the curtain.
>> Personally, I'm most proud of the team that we've built. Obviously our executives have a good track record of disrupting the market multiple times, but I'm most proud of the team because the team isn't just worried about the technology. They are senior technologists and great leaders, but they're also worried about the customer problem. It's always about getting the right mix: execution combined with technology is how you succeed, and that is what I'm most proud of. We have a team with tech leads running all these projects independently, and we release almost every week across all our customers. Being a small company, doing that is pretty challenging. But we came up with methodologies where we fully believe in automation. Everything is automated, and whenever we release software we run the full automation suite, so we are confident the customer is getting good quality code. It's not as if we cooked something up and they just need to be ready to upgrade. I think that's the key if you want to succeed in this day and age: developing features at the velocity you want while still supporting all these customers at the same time.
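As a rough illustration of the 'nothing ships unless the full suite passes' discipline Krishna describes, here is a minimal release-gate sketch. The test command is a generic stand-in, not Pensando's actual pipeline:

```python
import subprocess
import sys

def release_gate() -> int:
    """Run the full regression suite; block the release on any failure."""
    result = subprocess.run(
        ["pytest", "tests/", "-q"],  # stand-in for the real automation suite
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print("release BLOCKED -- regression failures:\n", result.stdout)
    else:
        print("all regressions green -- release may proceed")
    return result.returncode

if __name__ == "__main__":
    sys.exit(release_gate())
```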
>> Okay. Well, congratulations on that, Krishna. All right, final question. Give us a little bit of guidance going forward. Often when we see a company, we try to say, 'Oh, this is what the company does.' You've got a very flexible architecture and a lot of different types of solutions. What kinds of markets or services might we be looking at from you down the road a little ways?
>> I think we have a long journey. We have a platform right now, and we are already shipping: the platform is shipping with a storage provider, we are integrating with the premier public clouds, and in the enterprise market we have already deployed a distributed firewall; some customers have deployed it as an east-west firewall. So if you take this platform, it can be extended to add all the services that you see in data centers and clouds. But primarily we are driven by customer priorities. Where we'll go is to add more edge services and more storage features, and there is also initial interest from the service provider market, what we can do for 5G and IoT, because we have a flexible platform and we know how to apply it to new applications. That's probably where we'll go next.
>> All right. Krishna Doddapaneni, Vice President of Software at Pensando, thank you so much for joining us.
>> Thank you. It was great talking to you.
>> All right, be sure to check out theCUBE.net, where you can find lots more interviews. I'm Stu Miniman, and thank you for watching theCUBE.
Derek Dicker, Micron | Micron Insight 2019
>> Live from San Francisco, it's theCUBE, covering Micron Insight 2019. Brought to you by Micron.
>> Welcome back to Pier 27 in San Francisco. I'm your host, Dave Vellante, with my cohost David Floyer, and this is theCUBE, the leader in live tech coverage. This is our live coverage of Micron Insight 2019; we were here last year talking about some of the big picture trends. Derek Dicker is here; he's the general manager and vice president of the storage business unit at Micron. Great to see you again.
>> Thank you so much for having me here.
>> So we talk about the superpowers a lot: cloud, data, AI, and these new workloads that are coming in. I was talking to David earlier in our kickoff about how real AI is, and it feels like it's real. It's not just a bunch of vendor industry hype, and it comes in a lot of different forms. Derek, what are you seeing in terms of the new workloads and the big trends in artificial intelligence?
>> Just on the front end, you guys are absolutely right: the role of artificial intelligence in the world is absolutely transformational. I was sitting in a meeting in the last couple of days, and somebody walked through a storyline that I have to share with you; it's a perfect example of why this is becoming mainstream. In Southern California, at a children's hospital, there was a set of parents with a baby just a few days old, and this baby was going through seizures. No one could figure out what it was, and during the seizures the child's brain activity was zero; there was no brain activity whatsoever. They performed a CT scan, found nothing; checked for infections, found nothing. Can you imagine being a parent sitting there dealing with your child in that situation? You feel hopeless. But this particular institution is very much on the bleeding edge; they've been investing in personalized medicine. What they were able to do was extract a sample of blood, and within a matter of minutes run an algorithm that could sift through 5 million genetic variants to find a potential match for a genetic variant that existed within this child. They found one present in 0.01% of the population: a tiny, tiny match, less than a needle in a haystack. And they were able to translate that insight into a treatment, and the treatment wasn't invasive. It didn't involve surgery; it involved supplements, providing the child just the nutrients he needed to combat that genetic variant. All of this was enabled through technology and artificial intelligence in general. And a big part of the show we're here at today is to talk about the industry coming together and discussing the great advances happening in that domain.
>> It's super exciting to see something that touches that close to our lives. I love that story, and that's why I love this event. Obviously Micron makes memory, DRAM, NAND, et cetera, but this event is all about connecting to the impacts on our lives. I used to ask a lot: when will machines be able to make better diagnoses than doctors? A lot of people say they already can, but the real answer is that it's really about the augmentation.
Machines helping doctors get to that very small probability, that 0.01%, and being able to act on it. That's really how AI is affecting our lives every day.
>> Wholeheartedly agree, and that's a big part of our mission: to transform how the world uses information to enrich life. That's the heart and soul of what you just described. And we're super excited about what we see happening in storage as a result of this. One of the things we've noticed as we've engaged with a broad host of customers in the industry is that there's a lot of focus on artificial intelligence workloads being handled in memory: memory bandwidth, and larger amounts of memory being required. If you look at systems of today versus systems of tomorrow, based on the types of workloads evolving from machine learning, the need for DRAM is growing dramatically, by multiple factors. But what nobody ever talks about, or rarely talks about, is what's going on in the storage subsystem. One of the biggest challenges we've found is this: if you look at AI workloads going back to 2014, the storage bandwidth required was a few megabytes per second, call it tens of megabytes. But year over year it has grown until we're now exceeding a gigabyte, even two gigabytes per second, of bandwidth required out of the storage subsystem. Forget the memory: the storage is being used as a cache that gets flushed. And once you get into a case where you actually want to do more work on a given asset, which of course everybody wants from a TCO perspective, you need super high performance and capability. One of the things we uncovered was that by delivering an SSD, our 9300 drive, we could balance both the read and the write throughput at three gigabytes per second. What that allows is more than the almost sequential pattern you can imagine: load a bunch of data into a training machine, the machine processes it, comes back with a result, load more data in. By having a balanced read and write model, your ingest goes faster: while you're working on one sequence, you can be ingesting more data into the system, and it creates an overall efficiency. It's these kinds of things that I think provide a great opportunity for innovation in the storage domain for these types of workloads.
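The efficiency claim is easy to see with back-of-the-envelope numbers. A minimal sketch with assumed batch sizes and bandwidths (not Micron's published figures), comparing serial load-then-train against ingest overlapped with training on a drive whose read and write bandwidth are balanced:

```python
# Illustrative only: why balanced read/write bandwidth speeds up AI pipelines.
batches = 10
batch_gb = 500                 # assumed training batch size
train_time_per_batch_s = 300   # assumed compute time per batch
write_gbps = 3.0               # balanced drive: ingest (writes) runs at full speed
ingest_time_per_batch_s = batch_gb / write_gbps

# Serial: finish training a batch, then ingest the next one.
serial = batches * (train_time_per_batch_s + ingest_time_per_batch_s)

# Overlapped: ingest batch N+1 while batch N trains; each step costs
# whichever of the two phases is slower.
overlapped = ingest_time_per_batch_s + batches * max(
    train_time_per_batch_s, ingest_time_per_batch_s
)

print(f"serial pipeline:     {serial:,.0f} s")
print(f"overlapped pipeline: {overlapped:,.0f} s")
```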
You see that happening today when we talked a little bit about maybe a teaser for what's coming a little later at, at our event, um, in some of the broader areas in the market, we're talking about how fabrics attach storage and infrastructure. And interestingly enough, where people are innovating quite a bit right now is around using the NBME infrastructure over fabrics themselves, which allows for shared storage across a network as opposed to just within a given server there. >>There's some fantastic companies that are out there that are actually delivering both software stacks and hardware accelerators to take advantage of existing NBME SSDs. But the protocol itself gets preserved. But then they can share these SSDs over a network, which takes a scenario where before you were locked with your storage stranded within a server and now you can actually distribute more broad. It's amazing difference, isn't it at that potential of looking at data over as broad an area as you want to. Absolutely. And being able to address it directly and having it done with standards and then having it done with low enough latency such that you aren't feeling severely disadvantaged, taking that SSD out of a box and making it available across a broad network. So you guys have a huge observation space. Uh, you sell storage to the enterprise, you sell storage to the cloud everywhere. >>I want to ask you about the macro because when you look at the traditional storage suppliers, you know, some of them are struggling right now. There aren't many guys that are really growing and gaining share because the cloud is eating away at that. You guys sell to the cloud. So that's fine. Moving, you know, arms dealer, whoever wins it may the best man win. Um, but, but at the same time, customers have ingested so much all flash. It's giving them head room and so they're like, Hey, I'm good for awhile. I used to have this spinning disc. I'd throw spinning disc at it at the problem till I said, give me performance headroom. That has changed. Now we certainly expect a couple of things that that will catch up and there'll be another step function. But there's also elasticity. Yes. Uh, you saw for instance, pure storage last quarter said, wow, hit the price dropped so fast, it actually hurt our revenues. >>And you'd say, well, wait a minute. If the price drops, we want people to buy more. There's no question that they will. It just didn't happen fast enough from the quarter. All of these interesting rip currents going on. I wonder what you're seeing in terms of the overall macro. Yeah. It's actually a fantastic question. If you go back in time and you look at the number of sequential quarters, when we had ASP decreases across the industry, it was more than six. And the duration from peak to trough on the spot markets was high double digit percentages. Not many markets go through that type of a transition. But as you suggested, there's this notion of elasticity that exists, which is once the price gets below a certain threshold, all of a sudden new markets open up. And we're seeing that happen today. We're seeing that happen in the client space. >>So, so these devices actually, they're going through this transition where companies are actually saying, you know what, we're going to design out the hard drive cages for all platforms across our portfolio going into the future. That's happening now. 
And it's happening largely because these price points are enabling that, that situation and the enterprise a similar nature in terms of average capacities and drives being deployed over time. So it's, I told you, I think the last time we saw John, I told just one of the most exciting times to be in the memory and storage industry. I'll hold true to that today. I, I'm super excited about it, but I just bought a new laptop and, and you know, I have, you know, a half a half a terabyte today and they said for 200 bucks you can get a terabyte. Yes. And so I said, Oh wow, I could take everything from 1983 and bring it, bring it over. >>Yeah. Interestingly, it was back ordered, you know, so I think, wow, it am I the only one, but this is going to happen. I mean, everybody's going to have, you know, make the price lower. Boom. They'll buy more. We, we, we believe that to be the case for the foreseeable future. Okay. Do you see yourself going in more into the capacity market as well with SSTs and I mean, this, this, this drop, let's do big opportunity or, yeah. Actually, you know, one of the areas that we feel particularly privileged to be able to, to engage in is the, the use of QLC technology, right. You know, quad level solar for bits per cell technology. We've integrated this into a family of, uh, of SSDs for the enterprise, or interestingly enough, we have an opportunity to displace hard drives at an even faster rate because the core capability of the products are more power efficient. >>They've got equal to, or better performance than existing hard drives. And when you look at the TCO across a Reed intensive workloads, it's actually, it's a no brainer to go replace those HDD workloads in the client space. There's segments of the market where we're seeing QLC to play today for higher, higher capacity value segments. And then there's another segment for performance. So it's actually each segment is opening up in a more dramatic way. So the last question, I know you got some announcements today. They haven't hit the wire yet, but what, can you show us a little leg, Derrick? What can you tell us? So I, I'll, I'll give you this much. The, um, the market today, if you go look in the enterprise segment is essentially NBME and SATA and SAS. And if you look at MDME in 20 2019 essential wearing crossover on a gigabyte basis, right? >>And it's gonna grow. It's gonna continue to grow. I mentioned earlier the 9,300 product that we use for machine learning, AI workloads, super high performance. There's a segment of the market that we haven't announced products in today that is a, a a mainstream portion of that market that looks very, very interesting to us. In addition, we can never forget that transitions in the enterprise take a really long time, right, and Sada is going to be around for a long time. It may be 15% of the market and 10% out a few years, but our customers are being very clear. We're going to continue to ship Satta for an extended period of time. The beautiful thing about about micron is we have wonderful 96 layer technology. There's a need in the market and both of the segments I described, and that's about as much as I can give you, I don't bet against data. Derek, thanks very much for coming on. Thank you guys so much. You're welcome. There's a lot of facts. Keep it right there, buddy. We'll be back at micron insight 2019 from San Francisco. You're watching the cube.
Prakash Darji, PureStorage | CUBEConversation, May 2018
Welcome to theCUBE studios here in Palo Alto. I'm John Furrier, your host of theCUBE. We're here for a special news conversation with Prakash Darji, general manager of the FlashArray business at Pure Storage. Some exciting cloud news for Pure Storage. Great to see you, Prakash; thanks for coming in.
>> Thanks for having me.
>> So you've got some big news, and I'm excited by this, because I've been ranting and raving about how cloud native has been impacting the enterprise. It's pretty well documented that everyone's moving to cloud operations. You're announcing a kind of historic milestone for Pure Storage: you've been doing great on the storage side since inception, we've been covering you since then, and now, as you continue to grow, you have a new offering that's in the cloud. This is new for you. Talk about this announcement; what does it mean? You're an on-premises storage company that's done great, grown, gone public, and now, with cloud growth, you have a cloud offering. What's going on?
>> Well, interestingly, people were looking at storage for performance, cost, and reliability reasons, the three holy grails everyone expects out of storage. We added a fourth dimension, simplicity: storage didn't need to be hard, and that's been the brand of Pure. And as we took a look, there was a fifth dimension we realized was somewhat missing. While we made things simple, we didn't have the agility that public cloud offered. Public cloud brings this instant-available-capacity agility model, but do you have to trade off on the other dimensions: performance, cost, reliability, or simplicity? Our goal in bringing customer value was to avoid trade-offs; why should you have to trade off on any of those dimensions? And the second piece was: why do you have to choose? Why choose between on-premises or public cloud, and if you make the wrong choice, how do you have the freedom to move? So the problem set we were trying to address was unification across all those dimensions, the onboarding of agility, and, frankly, the ability to avoid having to choose, so people can use the best of what's available, where.
>> I think you nailed something important that I want to dig into along with the why. This notion of trade-offs is an old IT philosophy: I've got to trade this off to get that, whether it's compute and stability, or flexibility and agility. With cloud and cloud operations, the operating model now is that you choose, as you said, and cloud operations on premises and in the cloud have to look the same. This is what we're hearing from CEOs, practitioners, and cloud architects: they're re-architecting their enterprises now, because the three mainstays of IT, storage, networking, and compute, never go away; they just change. This is a critical, fundamental piece of the architecture of IT operations. So why now? Was it customer demand? Was it a natural progression for you? Explain the why-now.
>> Well, I'll start not with storage, compute, and networking, but with what they're used for. Fundamentally, the world is using those three dimensions for one of two things: building applications or building automation. Those are the two major trends in the industry.
Now, if you are running an application, today you would primarily choose: am I running it on premises or in the public cloud? As the journey emerged, public cloud probably started with the introduction of AWS around fifteen years ago, and initially everyone was enamored; everything was going there. Then people settled down to 'some things will go here and some things will go there.' But we believe that's a middle state. What people are actually trying to do is deliver applications that solve problems, and we believe the future is the hybrid application. What is a hybrid application today? If you've got an on-premises finance system, shouldn't you be able to use AI algorithms from Google's cloud to book journal entries for the month-end close? That doesn't mean the whole application needs to sit in platform-as-a-service; you should be able to use the best capabilities of what's available, where. In the same way, anyone selling anything with Salesforce CRM today needs to ensure that what they've sold is booked in a finance system, which could be an SAP finance system on premises. So what is the app? It's an app without borders now, and these are modern hybrid applications. Bringing that down to compute, storage, and networking, and actually delivering it in a consistent, operational way, is difficult. It's different across your application architectures, it's different across your management, even across your consumption and how you bill, capex versus opex. But the big difference is at the storage layer, because the application architecture on premises relies on your storage for reliability, while in the cloud they've moved that reliability characteristic to the middle tier: you shard and build scale-out distributed applications because you can't rely on the same characteristics out of your storage. We saw this as an opportunity to bring the two worlds together. We call it the cloud divide.
>> Talk about the cloud divide. I think that's important, because one of the things we hear from a lot of end-user customers, your customers and others, is that their challenge is to focus on the outcomes they want, the applications that drive value, not the infrastructure; they have to create infrastructure to enable that. What is this cloud divide when it comes to storage? What did you discover, what were the key pain points, and what were customers telling you?
>> The cloud divide, coming back to it, is that how you deal with applications, how you deal with management, and how you deal with storage differ between the enterprise and the cloud. We like to say the enterprise is not very cloudy, meaning you don't have instant available capacity, and the cloud is not very enterprisey. What do we mean by enterprise? How it works with the rest of my landscape, what the APIs are, what the reliability characteristics are; performance and cost characteristics are also different. So if you want to adopt public cloud, you have to take a hard left: you were going down one road and you have to choose a different path. And once you choose that hard left, you're stuck on that road. It's a one-way road.
What we're trying to do is say: what if you could bridge these environments? Let's dig into the application architecture side of the cloud divide. People are mostly using scale-up or scale-out application architectures, and then deciding between VMs and containers; those are the common application development paradigms. What if you could use either one anywhere? If you look at what VMware is doing with VMware Cloud, and what Kubernetes is doing across on-premises and cloud, there is now a unification happening at the application architecture layer. Across management: what if you could have a consistent API and a single pane of glass for how you manage your applications? That's emerging. But as we looked around, no one was unifying the storage paradigm, and that was actually the hardest part. We believed that to unify the storage paradigm you have to build a data-centric architecture, and that's what we've been focused on doing. We introduced our concept of the data-centric architecture a year ago, and we're now extending that concept to the public cloud.
>> What I like about what you're doing here, and I want to get your thoughts on this because I think it's the big trend, is this. You've been a great storage provider since inception, and hardware has been a rack-and-stack enterprise paradigm: enterprises have gear, they protect it, they secure it. But now, with public cloud becoming more secure and more mainstream, and with the DevOps application environment developing, you mentioned VMs, containers, Kubernetes, you have an operating model that's changing. You're doing software here: you're not shipping boxes to Amazon; they have their own storage, S3 and the other services. You're extending the software component of your business. Take a minute to explain, for people who might not know, the extent of the software business at Pure, and specifically the cloud piece. It's not hardware versus software, and it works with on-premises. Talk about that dynamic of software in the cloud and the impact on the on-premises piece.
>> I'll rewind a little bit. Pure has been known as this all-flash company, but if you unwind that, what I realized when I took a look, as I joined Pure about six months ago, is that the unique skill Pure has is software engineering: getting the best out of any infrastructure you give it. The medium happened to be flash initially, so what we built with our DirectFlash and NVMe work and a lot of the advancements in our software was about dealing with the flash medium. But the core skill and IP we have is software development to get the best out of a medium. Now we've introduced another medium, and that medium is infrastructure as a service. We treat it as another medium, and we believe we're uniquely qualified to get the most out of it, which is the cloud.
>> All right, so I want to get into the infrastructure piece. You're well known for data infrastructure, and flash storage has been a great place to store data on premises.
When you get into the cloud, you call this 'cloud data services,' and when I hear that, what pops into my mind is: more is coming. You need to store data somewhere, you have to manage that data for applications, hybrid applications, you need to protect that data, make it available, make it recoverable. All the same things that were done with data on storage in the past have to happen at a whole new level. Describe what cloud data services means: what does it mean to you at Pure, and what does it mean to your customers?
>> If you back up a little to where we started, a lot of our initial customers were SaaS companies, and what we delivered to them was what we called cloud data infrastructure. That cloud data infrastructure allowed some of the largest SaaS companies, consumer and enterprise, to build their SaaS applications on Pure; companies like ServiceNow and Workday. What was missing was how you get that same value in infrastructure-as-a-service environments: AWS, Azure, and GCP. What we realized was that the consistency model was not the same and the APIs were not the same, so you had to choose. Our cloud data services are a set of services that give you, for example, the same block storage you had on premises in the public cloud, with the same API. And from a management and operations standpoint we have Pure1, a cloud data management solution where you can see all of your arrays, all of your data, wherever it sits, because, as you said, data is growing. It was interesting: when we first built the software internally, we'd go into Pure1 and see all these storage volumes and not know which were on premises and which were cloud, because our software is the same. We actually had to do some engineering to make them look different, coloring the cloud volumes differently, because we had started from the place of driving consistency. Then we extended cloud data services further: not only can you run in either place, but how do you extend that to data protection? Today, on premises, people have workflows for backup and data protection, and originally those workflows could have been disk to disk to tape to truck. We see a more modern way, flash to flash to cloud, where your primary mission-critical applications live on flash, with one hop to flash for backup. People have looked at backup as an insurance policy, but what's really important when something goes wrong is how quickly you can recover, so providing flash as that second medium matters. Then there's a third step for cost optimization, leveraging the public cloud and S3. That lets us drive a consistency model where the same workflow runs on premises or in the public cloud.
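As an illustration of the flash-to-flash-to-cloud idea, here is a minimal scheduling sketch. The three helper functions are deliberately left as stubs with invented names; they stand in for whatever array and cloud calls a real deployment would use. This is the shape of the workflow, not Pure's actual API:

```python
import datetime

def snapshot_volume(volume: str) -> str:
    """Stub: take a space-efficient snapshot on the primary flash array."""
    return f"{volume}.snap-{datetime.date.today()}"

def replicate_to_backup_flash(snap: str) -> None:
    """Stub: one fast hop to a secondary flash tier, keeping restores quick."""
    print(f"replicated {snap} to backup flash")

def offload_to_object_store(snap: str, age_days: int, cutoff: int = 30) -> None:
    """Stub: age older snapshots out to cheap S3-style object storage."""
    if age_days > cutoff:
        print(f"offloaded {snap} to object store")

# Tier 1 -> tier 2 -> tier 3: primary flash, backup flash, cloud object store.
snap = snapshot_volume("finance-db")
replicate_to_backup_flash(snap)              # recovery stays fast (flash to flash)
offload_to_object_store(snap, age_days=45)   # cost optimization (to cloud)
```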
>> So let me put you on the spot about consistency. If I'm a Pure customer running Pure on premises, using all the management in Pure1 and all this other great stuff, and I want to use the cloud, does my job change at all? Does it look the same, as a dashboard into the storage and the data? Because I want persistent data, I want AI, and I want analytics now, and there's a lot of goodness out there in the cloud, SageMaker, TensorFlow on the AI side. What changes for me, or does anything change, and how do you solve that problem? Because what I don't want is to have to hire developers to go do an integration with Amazon, Azure, and Google Cloud. I want a single, consistent environment.
>> We do provide that. This is a journey, because you need to ensure that your data consistency and management across all of those environments, AWS, Azure, Google, and on premises, is the same. We're introducing our solution, cloud data services, on Amazon first, but we're planning to extend it to the Azure and Google environments.
>> So take Amazon. If I say, hey, I want to use some of that cloud, I just go to Amazon?
>> It's fully extensible: there's a Pure CloudFormation template on Amazon, and you just go in, it's there, you pick it up, choose it, use it. What can differ is the platform services at a higher layer, because some of the PaaS services you mentioned are Amazon-specific.
>> So if you start using PaaS services, it could affect your application development architecture. But the good news is that if your goal is to use the best of what's available, where, then with the evolutions in VMware Cloud and Kubernetes combined with your cloud data services, you can put together exactly that. That's the application side; you're providing a consistent layer for the data and the storage, so those hybrid apps can run across premises and cloud and the data takes care of itself.
>> Absolutely. And what's great about it is we've learned some things along the way. We've been getting the best out of the flash medium by enhancing performance characteristics and efficiency characteristics for cost optimization, and we can bring some of those same value propositions to the Amazon world. If you need to aggregate IOPS, we can do that. If you need to drive efficiency, we have techniques around thin provisioning. That opens up more use cases for customers and lets them add more policy-based behavior to their applications. It makes data programmable.
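Thin provisioning is one of those efficiency techniques that's easy to quantify. A small illustration with made-up numbers (capacities and prices are assumptions for the sake of arithmetic, not Pure or AWS figures):

```python
# Illustrative thin-provisioning math: pay for written data, not promised data.
volumes = 200
provisioned_tb_each = 1.0   # what each application thinks it has
actually_written = 0.30     # assumed: only 30% of provisioned space is written
cost_per_tb_month = 100.0   # assumed blended $/TB-month for backing capacity

thick_tb = volumes * provisioned_tb_each
thin_tb = thick_tb * actually_written

print(f"thick-provisioned capacity: {thick_tb:.0f} TB "
      f"-> ${thick_tb * cost_per_tb_month:,.0f}/month")
print(f"thin-provisioned capacity:  {thin_tb:.0f} TB "
      f"-> ${thin_tb * cost_per_tb_month:,.0f}/month")
print(f"savings: {1 - thin_tb / thick_tb:.0%}")
```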
It's interesting: one customer we were speaking to as part of our alpha usage is an online education company; they do curriculum development. They brought this use case to us: they have an app they've built for their curriculum on Amazon, and they want to take a lot of snapshots. One of the technologies we have is space-saving snapshots, so they said it would be great to use our Cloud Block Store data service on Amazon that way. But then they thought about it: every time they develop a new curriculum, they have to send a snapshot out to a different location and site. What we could do is set up our hardware direct-attached to Amazon, because our software is the same, and with our active synchronous replication technology we can synchronously replicate between the public cloud and that privately hosted, direct-attached version. Then they can do work there, or even take snapshots from there, and use those space-saving snapshots to reduce their overall cost profile.
>> That's a great example of being cloudified: more options. It brings up the question of competition. This is your first move into the cloud with this operating model, which is consistent between Pure on premises and the cloud, with the agility for applications and all that goodness. How do you stand versus the competition?
>> When we take a look at what's been going on, a lot of people wanted to check the box on cloud: throw something out there and see how people use it. As we've done this market introduction, we've been very careful about that, because Pure has a certain brand reputation: when we say we're going to deliver certain characteristics, we deliver those characteristics. And we didn't want to lose the value propositions of simplicity and agility. So as we launched this, we didn't just throw it out there to see what happens. We did it with the deliberate intent of providing agility as a characteristic people could use, delivered with the same simplicity they've come to know and love from Pure. Those are the principles we're focused on. When we look at the competition, they've thrown their software out there, but we don't see that it's been broadly adopted, and there are still the trade-offs of choosing on premises or public cloud. They're stuck in the divide, with different operating models on each side, and our goal is to bridge it.
>> So those guys are stuck on the divide.
>> Yeah. And if you think about the hybrid applications we see the world moving to, think about it this way: the world is evolving to more and more application-to-application integration. Gone are the days of one monolithic application doing everything, so application-to-application integration is growing exponentially. Now, if you need to make a production-to-dev-test copy, do you do it for one app, or for the entire set of apps you treat as one entity, because they're all connected? Otherwise you have to decide: okay, I'm snapshotting this one, then I choose this one, then that one. There's now a need to consolidate a lot of application workloads and treat their management and operations as a single entity. So hybrid apps are actually making you rethink how you deal with the management of compute, networking, and storage.
>> Yeah, I think that's a great example.
I totally agree that application-to-application integration is going to happen at a much accelerated rate, and it changes the role of data. The role of data is central to that: as in your earlier example, if you're doing a financial app and you want to use some AI from a cloud over here, the best tool for the job needs to integrate seamlessly, and storage should be part of that conversation, not just a place where data sits. That's what you're doing with this announcement. So I'll ask you the final question, because it's exciting news. You've cloudified it; you've bridged the divide, the cloud storage divide. What's the bottom line for this announcement? What's the impact on Pure customers, and what does it mean for prospects who aren't yet customers?
>> I'll give it to you from each perspective. For our existing customers, this adds the agility tool set to the bag of tricks they've got, and it does it in a way where they can get that instant available capacity and start benchmarking across both environments without having to re-architect, because the APIs are the same. For net-new customers and prospects, it's interesting: as we speak to customers, we find people are at different points in their public cloud education journey. Some are already using the public cloud, and as we've discussed this with them, they say it could improve on certain characteristics; they have performance challenges, cost challenges, reliability or manageability challenges. We find the most educated customers are the ones who have already leaped: they've jumped in the pool, and now they realize the water's cold and they need something. And there's another set of customers who haven't jumped in yet. For them, right now you have to make a choice, between multiple public clouds and between cloud and on-premises. What we're doing is de-risking that choice: letting them get the best of what's available, where, and, most importantly, ensuring that if one of the other options evolves or matures into a better fit, they have the ability to move.
>> I think also, the focus we hear from practitioners is that they're investing more and more of their time and energy in building applications, hybrid applications as you're calling them, ones that run in the cloud or on premises but solve a problem. They want to shift their resources and attention away from mundane storage-admin maintenance problems and make the storage invisible to them: my apps are productive, my developers are programming, and the storage resources are invisible and never a headache. That's kind of what you're getting at here, the sleep-better-at-night kind of thing.
>> Well, take Kubernetes, for example. A lot of application developers are using it, but storage is not necessarily transparent there. Six months ago we introduced Pure Service Orchestrator, which made storage transparent: you have a block, file, or object interface, and you just call and use storage. Spin it up, use it as you need, and let it go; you shouldn't have to phone someone to go create a volume.
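To make the 'just call and use storage' point concrete, here is a minimal sketch of a developer requesting a volume through standard Kubernetes machinery, using the official Kubernetes Python client. The storage class name `pure-block` is an assumption for illustration; whatever class the orchestrator exposes in a given cluster is what you'd reference:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster and kubeconfig

# The developer declares what they need; the orchestrator watching the
# cluster carves the volume out of the backing arrays automatically.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="pure-block",  # assumed class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
print("claim submitted -- no storage ticket required")
```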
So you need that transparency and elasticity, and we've been focused on delivering it. As application development modernizes, we can provide that: it's always on, it always works, it's globally consistent, it's shared, and it's easy to manage from wherever you sit.
>> Prakash, thanks for coming in and sharing the news on the new hybrid cloud applications hitting the market, and of course on having the right solutions, with cloud data services available from Pure Storage. I'm here with Prakash Darji, general manager of the FlashArray business at Pure Storage. This is a special CUBE Conversation. I'm John Furrier; thanks for watching.
Alan Stearn, Cisco | VeeamON 2018
>> Narrator: Live from Chicago, Illinois, it's theCUBE, covering VeeamON 2018. Brought to you by Veeam.
>> Dave: Welcome back to VeeamON 2018. You're watching theCUBE, the leader in live tech coverage. We go out to the events and extract the signal from the noise. My name is Dave Vellante, and I'm here with my cohost, Stu Miniman. This is our second year at VeeamON, #VeeamON. Alan Stern is here; he's a technical solutions architect at Cisco. Alan, thanks for coming to theCUBE.
>> Alan: Great to be here. It's a real honor and privilege, so I'm excited.
>> Dave: It's a great show. It's smallish, not as big as Cisco Live, which is coming up next month, but it's clean, it's focused. Let's start with your role at Cisco as a solutions architect. What's your focus?
>> Alan: My focus is really on three areas of technology: data protection being one of them, software-defined storage, or object storage, and then the Hadoop ecosystem. I work with our sales teams to help them understand how the technology is relevant to Cisco as a solutions partner, and also with the partners, to help them understand how working with Cisco benefits all of us in helping our customers reach solutions that benefit their enterprise.
>> Dave: So your job is catalyst and technical expert: you identify workloads and use cases, and figure out how to take Cisco products and services, point them there, and add the most value for customers. That's really your job.
>> Alan: To some degree, yeah. In a lot of these solutions, this is an area where our executive team has said, 'Hey, this is something we can go help our customers with,' and then it's handed down to my team, and my job is to make it happen, along with a lot of other people.
>> Dave: So let's look at these. Data protection is obviously relevant at VeeamON. What role does Cisco play in the data protection matrix?
>> Alan: Cisco provides an optimal platform for great partners like Veeam to land these backups. It's critical. It's funny, we often talk about backup, when what we should be talking about is restore, 'cause nobody backs up just for the sake of backing up. How do I restore quickly? Having that backup on premises, on an optimized platform where Cisco has done all of the integration work to make sure everything is going to work, is critical to the customer's success, because as we know, maintenance windows and downtime are a thing of the past. They don't exist anymore. We live in an always-on enterprise, and that's really where folks like Veeam are focused.
>> Dave: For you younger people out there, we used to talk about planned downtime, which is just... what? Why would anybody plan for downtime? It's ridiculous.
>> Stu: Alan, can we unpack that a little? I think back to when the data center group at Cisco launched UCS: the memory it had was really geared for virtualization, and I can see why Veeam and Cisco would work well together, because there's some unique architecture there. UCS has been on the market a few years now. What's the differentiation? Maybe bring us inside some of the engineering work that happens between Cisco and Veeam in some of these spaces.
>> Alan: We take our engineers and lock them in a lab with Veeam engineers, and they go in and deploy the solution and turn all the various nerd knobs to get the platform optimized.
Primarily we talk about our S3260, which in a 4U space holds about 672 terabytes of storage. They optimize it and then publish a document that goes with it. We call them Cisco Validated Designs, or CVDs. And these designs allow the customer to deploy the solution without having to go through the hit-or-miss of "what happens when I turn this nerd knob or that nerd knob, or alter this network configuration or that one," and to get the best performance in the shortest possible time. >> Those CVDs are critical, and the field knows them, they trust them. Can you speak a bit to the presence that you have with Veeam in your price book, what that means, to kind of take that out to the broad Cisco ecosystem? >> Yeah, and it's more than just having it on the price list. It's the integrated support, so that the customer knows that if there's a problem, they're not going to end up in a finger-pointing situation of Cisco saying "Call Veeam" or Veeam saying "Call Cisco." They have a solution, and we're in lockstep, so that there aren't going to be problems. The CVD ensures that problems are kept to a minimum. Cisco has fantastic support, Veeam has great support. They were talking this morning about the Net Promoter Score being 73, which is unbelievably good. So in the event that there is a problem, they know they're going to get to resolution incredibly quickly, and they're going to get their environment restored as quickly as possible. >> So when I think about the three areas of your focus--data protection, object storage, and the Hadoop ecosystem--there's definitely intersection amongst those. We talked a little bit about data protection. The object store piece, the whole software-defined thing, is a trend that's taking off; we were talking earlier about some of the trade-offs of software-defined. Bill Philbin was saying, "Well, if I go out and put it together myself, when there's a problem, I've got to fix it myself." So there's a trade-off there. I don't know if you watch Silicon Valley, Stu, but the box. Sometimes it's nice to have an appliance. What are you seeing in terms of the trend toward software-defined? What's driving that? Is it choice, is it flexibility? What are the trade-offs? >> It's a couple of things. The biggest thing that's driving it is just the explosion of data. Data that's born in the cloud-- it's probably pretty good to store that with one of the cloud providers. But data that's born in your data center, or that is extremely proprietary and sensitive-- customers are increasingly looking to say, "You know what, I want to keep that onsite." And that's in addition to the regulatory issues that we're going to see with GDPR and others. So they want to keep it on site, but they like the idea of the ease of use of the cloud and the nature of object storage, and the cost-- the cost model for object storage is great. I take an x86-based server like UCS and I overlay storage software that's going to give me that resiliency through erasure coding or replication. And now I've got a cost model that looks a lot like the cloud, but it's on premises. So that also allows me-- I'm putting archival data there, I can store it cheaply and bring it back quickly. Because the one challenge with the cloud is that my connectivity to my cloud provider is finite. >> Just a quick follow-up on that: I know Scality's a partner; are there other options for object storage? >> Sure, both Scality and SwiftStack are on our global price list, like Veeam.
We also work with some other folks, like IBM Cloud Object Storage and Cohesity, which sort of fits in the in-between space, and we're doing some initial work with Cloudian. >> Think about the Hadoop ecosystem. That brings in new challenges. I mean, a lot of Hadoop is basically a software-defined file system, and it's also distributed--the idea of bringing five megabytes of computing to a petabyte of data. So it's leave the data where it is. That brings new challenges with regard to architectures and protecting that data. Talk about that a little bit. >> The issue with Hadoop is data has gravity. Moving lots of data around is really inefficient. That's where MapReduce was born: the data is already there, so I don't have to move it across the network to process it. Data protection was sort of an afterthought. You do have replication of data, but that was really for locality, not so much for data protection. >> Or recovery, to your earlier point. >> But even with all of that, the network is still critical. Without sounding like an advertisement for Cisco, we're really the only server provider that thought about the network as we were building the servers and designing the entire ecosystem. Nobody else can do that. Nobody has that expertise. And a number of hardware features that we have in the products give us that advantage, like the Cisco Virtual Interface Card. >> That's a fair point--you mentioned your heritage, so of course that's where you started. One of the things we've talked about on theCUBE a lot is that Flash changed everything. We used to just use spinning disk to persist, and we certainly didn't use it for performance. We did unnatural acts to try to get performance going. So, in many respects, Flash exposed some of the challenges with network performance. How has that affected the market, the technology, and Cisco's business? >> We're in this period of a shift on Flash. Because if you think about it, at the end of the day, the Flash is still sitting on a PCI bus; it's probably iSCSI with a SATA interface. >> You've got the horrible storage stack. >> We've moved the bottleneck away from the disk drive itself, now to the bus. Now we're going to solve a lot of that with NVMe, and then it will come to the network. But the network's already ahead of that. We have 10 gig, 40 gig, and we're going to see 100 gig Ethernet. So we're in pretty good shape to survive, and really flourish, as the storage improves in performance. >> We know with compute, the bottlenecks just move. You know, I think this morning you said Whack-a-Mole. Thinking about the next progression in the Whack-a-Mole, what is the next bottleneck? Is it the latency to the cloud? I mean, if it's not the network--because it sounds like you're prepared for NVMe--is getting outside the data center the next bottleneck? >> I think that's always going to be the bottleneck. I use analogies like roads. Think about a roadway: inside my network, it's sort of the superhighway, but then once I go off, I'm on a connector road. And gigabit Ethernet, multi-gigabit--some folks will have fiber in the metropolitan area, but at some point they're going to hit that bottleneck. And so it becomes increasingly important to manage the data properly, so that you're not moving the data around unnecessarily. >> I wonder if we could talk a little bit about the cloud here. At the Veeam show, we're talking about going beyond just data center virtualization. Talking about a multi-cloud world.
I had the opportunity to go to Cisco Live Barcelona and interview Rowan Trollope. He talked heavily about Cisco's software strategy and living in that multi-cloud world. Maybe help connect the dots for us as to how Cisco and Veeam go beyond the data center, and where Cisco lives beyond that. >> So beyond the data center, we really believe the multi-cloud world is where it's going to happen--whether the cloud is on-prem, off-prem, multiple providers, software and servers, all of those things--and both Cisco and Veeam are committed to giving that consistent performance, availability, and security. Veeam, obviously, is an expert at the data management and data availability. Cisco, we're going to provide the application availability and performance through AppDynamics; we have our security portfolio in order to protect the data in the cloud; and then there are the virtualized networking features that are there to, again, ensure that the network policy is consistent whether you're on prem, in Cloud A, Cloud B, or the cloud yet to be developed. >> So let's come back to backup, which is the first of the three that we talked about. What's Cisco's point of view, your point of view, on how that's evolving? Veeam started out as a virtualization specialist generally, but specifically for VMware. Now we've got messaging around the digital economy, multi-cloud, hyper-availability, etc. What does that mean from a customer's standpoint? How is it evolving? >> Well, it's evolving in ways we couldn't have imagined. Everything is connected now, and that data--that's the value. The data that the customer has is their crown jewels. What Veeam has done really well is--yeah, they started off as a small virtualization player, but as they've seen the market grow and evolve, they've made adaptations to really be able to expand and stay with their customers as their needs have morphed and changed. And in many ways, that's similar to Cisco. We didn't start in the server space; we saw an opportunity to do something that nobody else was doing, to make sure the network was robust and well-built and the system was well managed, and that's when we entered the space. So I think it's two companies that understand that consistency is critical and availability is critical, and we've both evolved with our customers as the markets and demands of the business have changed. >> Last question: what are some of the biggest challenges you're working on with customers that get you excited, where you say, "Alright, I'm really going to attack this one"? Give me some color on that. >> I think the biggest challenge we're seeing today is that a lot of customers' infrastructure, because of budgets, hasn't been able to evolve fast enough, and they have legacy platforms, and legacy availability software on those platforms, that they've got to migrate off of. So helping them determine which platform is going to be best, which platform is going to let them scale the way they need, and then which software package is going to give them all the tools and features that they need--that's exciting, because you're making sure that that company is going to be around tomorrow. >> Well, that's a great point.
And we've been talking all day, Stu, about some of the research that we did at Wikibon the day before, which quantified that a Fortune 1000 company leaves between one and a half and two billion dollars on the table over a three-to-four-year period because of poorly architected or non-modern infrastructure, and poorly architected availability and backup and recovery procedures. It's a hard problem, because you can't just snap your fingers and modernize, and the CFO's going, "How are we going to pay for this?" We've got this risk, this threat. We're sort of losing soft dollars, but at the end of the day, they actually do affect the bottom line. Do you agree that-- I said last question, I lied. Do you agree that CXOs are becoming aware of this problem and, ideally, will start to fund it? >> Absolutely, because, as we talked about earlier, the days of planned downtime are gone. Let a CXO have a minute of downtime and look at the amount of lost revenue it causes, and suddenly you've got their attention. >> Great point. Alan, we've got to run. Thanks very much for coming on theCUBE. >> My pleasure. Great to meet you both. >> Thanks for watching, everybody. This is theCUBE, live from VeeamON 2018 in Chicago. We'll be right back.
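To ground the object storage cost point Alan makes above--resiliency through erasure coding rather than full replication--here is a minimal sketch of the raw-capacity math. The scheme parameters (3-way replication, an 8+3 erasure code) are illustrative assumptions, not figures from the interview:

```python
# Back-of-the-envelope comparison of raw-capacity overhead for the two
# resiliency schemes mentioned: replication vs. erasure coding.
# The 3x and 8+3 parameters are illustrative assumptions.

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per usable byte with N-way replication."""
    return float(copies)

def erasure_coding_overhead(data_shards: int, parity_shards: int) -> float:
    """Raw bytes stored per usable byte with a k+m erasure code."""
    return (data_shards + parity_shards) / data_shards

usable_pb = 1.0  # one petabyte of usable archival data

for label, factor in [
    ("3-way replication", replication_overhead(3)),
    ("8+3 erasure coding", erasure_coding_overhead(8, 3)),
]:
    print(f"{label}: {usable_pb * factor:.2f} PB raw for {usable_pb} PB usable")
    # 3-way replication: 3.00 PB raw; 8+3 erasure coding: 1.38 PB raw
```

Erasure coding tolerates the loss of any three shards while storing less than half the raw capacity of triple replication, which is a big part of why the on-premises cost model "looks a lot like the cloud."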
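And to put rough numbers on the closing exchange about a single minute of downtime, a back-of-the-envelope sketch, assuming revenue accrues evenly around the clock. The five-billion-dollar annual revenue figure is hypothetical, not a number from the interview or the Wikibon research:

```python
# Minimal sketch: lost revenue per minute of downtime, assuming revenue
# accrues evenly around the clock. The revenue input is hypothetical.

def downtime_cost(annual_revenue: float, minutes_down: float) -> float:
    """Lost revenue for the given downtime window."""
    minutes_per_year = 365 * 24 * 60  # 525,600
    return annual_revenue * (minutes_down / minutes_per_year)

# e.g., a $5B-revenue enterprise losing one minute:
print(f"${downtime_cost(5e9, 1):,.0f} lost per minute")  # ~$9,513
```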
SUMMARY :
Dave Vellante and Stu Miniman interview Alan Stearn, technical solutions architect at Cisco, at VeeamON 2018 in Chicago. Stearn describes his focus on data protection, object storage, and the Hadoop ecosystem, and explains how Cisco and Veeam co-engineer and validate joint solutions such as the S3260-based backup platform through Cisco Validated Designs, backed by integrated support from both companies. The conversation covers the economics of on-premises object storage, data gravity in Hadoop, how Flash and NVMe shift performance bottlenecks toward the network, Cisco's multi-cloud strategy with AppDynamics and its security portfolio, and the business cost of downtime and legacy infrastructure.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Alan | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Bill Philbin | PERSON | 0.99+ |
Alan Stern | PERSON | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
10 gig | QUANTITY | 0.99+ |
Rowan Trollope | PERSON | 0.99+ |
100 gig | QUANTITY | 0.99+ |
Alan Stearn | PERSON | 0.99+ |
two companies | QUANTITY | 0.99+ |
40 gig | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Chicago | LOCATION | 0.99+ |
five megabytes | QUANTITY | 0.99+ |
WikiBon | ORGANIZATION | 0.99+ |
second year | QUANTITY | 0.99+ |
Chicago, Illinois | LOCATION | 0.99+ |
73 | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
GDPR | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
four year | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Stu | PERSON | 0.98+ |
UCS | ORGANIZATION | 0.98+ |
VeeamOn | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Hadoop | TITLE | 0.98+ |
CVD | ORGANIZATION | 0.97+ |
three | QUANTITY | 0.97+ |
about 672 terabytes | QUANTITY | 0.96+ |
S3260 | COMMERCIAL_ITEM | 0.95+ |
Scality | ORGANIZATION | 0.95+ |
this morning | DATE | 0.94+ |
next month | DATE | 0.94+ |
tomorrow | DATE | 0.94+ |
few years ago | DATE | 0.94+ |
one challenge | QUANTITY | 0.94+ |
VeeamON 2018 | EVENT | 0.93+ |
Flash | TITLE | 0.92+ |
2 billion dollars | QUANTITY | 0.91+ |
Veeam | PERSON | 0.9+ |
Cloud B | TITLE | 0.9+ |
one and a half | QUANTITY | 0.89+ |
Cloud A | TITLE | 0.88+ |