Power Panel: Does Hardware Still Matter?

(upbeat music) >> The ascendancy of cloud and SaaS has shone new light on how organizations think about, pay for, and value hardware. Once sought-after skills for practitioners with expertise in hardware troubleshooting, configuring ports, tuning storage arrays, and maximizing server utilization have been superseded by demand for cloud architects, DevOps pros, and developers with expertise in microservices, containers, application development, and the like. Even a company like Dell, the largest hardware company in enterprise tech, touts that it has more software engineers than those working in hardware. It begs the question: is hardware going the way of COBOL? Well, not likely. Software has to run on something, but the labor needed to deploy, troubleshoot, and manage hardware infrastructure is shifting. At the same time, we've seen the value flow also shifting in hardware. Once a world dominated by x86 processors, value is flowing to alternatives like Nvidia and Arm-based designs. Moreover, other componentry like NICs, accelerators, and storage controllers is becoming more advanced, integrated, and increasingly important. The question is, does it matter? And if so, why does it matter and to whom? What does it mean to customers, workloads, OEMs, and the broader society? Hello and welcome to this week's Wikibon theCUBE Insights powered by ETR. In this Breaking Analysis, we've organized a special power panel of industry analysts and experts to address the question, does hardware still matter? Allow me to introduce the panel. Bob O'Donnell is president and chief analyst at TECHnalysis Research. Zeus Kerravala is the founder and principal analyst at ZK Research. David Nicholson is a CTO and tech expert. Keith Townsend is CEO and founder of CTO Advisor. And Marc Staimer is the chief dragon slayer at Dragon Slayer Consulting and oftentimes a Wikibon contributor. Guys, welcome to theCUBE. Thanks so much for spending some time here. >> Good to be here. >> Thanks. >> Thanks for having us. >> Okay, before we get into it, I just want to bring up some data from ETR. This is a survey that ETR does every quarter, of about 1,200 to 1,500 CIOs and IT buyers, and I'm showing a subset of the taxonomy here. This is an XY axis, and the vertical axis is something called net score. That's a measure of spending momentum. It's essentially the percentage of customers spending more on a particular area minus those spending less; you subtract the lesses from the mores and you get a net score. The horizontal axis is pervasion in the data set; sometimes they call it market share. It's not like IDC market share, it's just activity in the data set as a percentage of the total. That red 40% line, anything over that, is considered highly elevated. And for the past, I don't know, eight to 12 quarters, the big four have been AI and machine learning, containers, RPA, and cloud. And cloud of course is very impressive, because not only is it elevated on the vertical axis, but it's also very highly pervasive on the horizontal. So what I've done is highlighted in red the historical hardware sectors. Server, storage, networking, and even PCs, despite work from home, are depressed in relative terms. And of course, data center colocation services. Okay, so you're seeing, obviously, hardware is not... People don't have the spending momentum today that they used to.
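(Aside: the net score arithmetic Dave describes reduces to one line of math. Here is a minimal sketch in Python, using made-up survey percentages rather than ETR's actual data:)

    # Net score = percent of respondents spending more on a category,
    # minus the percent spending less (hypothetical numbers).
    responses = {"more": 54, "flat": 32, "less": 14}   # percent of respondents
    net_score = responses["more"] - responses["less"]  # 54 - 14 = 40
    print(f"Net score: {net_score}%")  # at or above the 40% line counts as highly elevated
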
They've got other priorities, et cetera. But I want to start and go kind of around the horn with each of you: what is the number one trend that each of you sees in hardware, and why does it matter? Bob O'Donnell, can you please start us off? >> Sure, Dave. So look, I mean, hardware is incredibly important, and one comment I'll make first on that slide is let's not forget that even though hardware may not be growing, the amount of money spent on it continues to be very, very high. It's just a little bit more stable; it's not as subject to big jumps as we certainly see in other software areas. But look, the important thing that's happening in hardware is the diversification of the types of chip architectures we're seeing, and how and where they're being deployed, right? You referred to this in your opening. We've moved from a world of x86 CPUs from Intel and AMD to things like, obviously, GPUs, DPUs. We've got VPUs for, you know, computer vision processing. We've got AI-dedicated accelerators, we've got all kinds of other network acceleration tools and AI-powered tools. There's an incredible diversification of these chip architectures, and that's been happening for a while, but now we're seeing them more widely deployed, and it's being done that way because workloads are evolving. The kinds of workloads that we're seeing in some of these software areas require different types of compute engines than we've traditionally had. The other thing is (coughs), excuse me, the power requirements, based on where geographically that compute happens, are also evolving. This whole notion of the edge, which I'm sure we'll get into in a little bit more detail later, is driven by the fact that where the compute actually sits, closer in theory to the edge and to edge devices, depending on your definition, changes the power requirements. It changes the kind of connectivity that connects the applications to those edge devices. So all of those things are being impacted by this growing diversity in chip architectures, and that's a very long-term trend that I think we're going to continue to see play out through this decade and well into the 2030s as well. >> Excellent, great, great points. Thank you, Bob. Zeus, up next, please. >> Yeah, and I think the other thing to remember when you look at this chart, too, is that through the pandemic and the work-from-home period, a lot of companies put their office modernization projects on hold, and you heard that echoed from really all the network manufacturers anyway. They always had projects underway to upgrade networks; they put 'em on hold. Now that people are starting to come back to the office, they're looking at that again. So we might see some change there, but Bob's right, the sizes of those markets are quite a bit different. I think the other big trend here is that the hardware companies, at least in the areas I look at, networking, now understand that it's the combination of hardware and software and silicon working together that creates that optimum type of performance and experience, right? So some things are best done in silicon, like data forwarding and things like that. Historically, when you look at the way network devices were built, you did everything in hardware. You configured it in hardware, it did all the data forwarding for you, and it did all the management. And that's been decoupled now. So more and more of the control element has been placed in software.
A lot of the high-performance things, encryption and, as I mentioned, data forwarding, packet analysis, stuff like that, are still done in hardware, but not everything is. And so it's a combination of the two. I think, for the people that work with the equipment as well, there's been more of a shift to understanding how to work with software. And this is a mistake I think the industry made for a while: we had everybody convinced they had to become a programmer. It's really more about being a software power user. Can you pull things out of software, through API calls and things like that? But I think the big frame here is, Dave, it's a combination of hardware and software working together that really makes a difference. And how much you invest in hardware versus software kind of depends on the performance requirements you have. I'll talk about that later, but that's really the big shift that's happened here. It's the vendors that figured out how to optimize performance by leveraging the best of all of those. >> Excellent. You guys both brought up some really good themes that we can tap into. Dave Nicholson, please. >> Yeah, so just kind of picking up where Bob started off. Not only are we seeing the rise of a variety of CPU designs, but I think increasingly the connectivity that's involved, from a hardware perspective, from a kind of server or system design perspective, has become increasingly important. I think we'll get a chance to look at this in more depth a little bit later, but when you look at what happens on the motherboard, we're not in so much of a CPU-centric world anymore. Various application environments have various demands, and you can meet them by using a variety of components. And it's extremely significant when you start looking down at the component level. It's really important that you optimize around those components. So I guess my summary would be, I think we are moving out of the CPU-centric hardware model into more of a connectivity-centric model. We can talk more about that later. >> Yeah, great. And thank you, David. And Keith Townsend, I'm really interested in your perspectives on this. I mean, for years you worked in a data center surrounded by hardware. Now that we have the software-defined data center, please chime in here. >> Well, you know, I'm going to dig deeper into that software-defined data center nature of what's happening with hardware. Hardware is meeting software: infrastructure as code is a thing. What does that code look like? We're still trying to figure that out. But in serving up these capabilities that the previous analysts have brought up, how do I ensure that I can get the level of services needed for the applications that I need, whether they're legacy traditional data center workloads, AI/ML workloads, or workloads at the edge? How do I codify that and consume it as a service? And hardware vendors are figuring this out. HPE, with the big push into GreenLake as a service. Dell now with APEX, taking what we need, these bare-bones components, moving it forward with DDR5, CXL, et cetera, and surfacing that as code or as services. This is a very tough problem as we transition from consuming a hardware-based configuration to this infrastructure-as-code paradigm shift. >> Yeah, programmable infrastructure, really attacking that sort of labor discussion that we were having earlier. Okay, last but not least, Marc Staimer, please. >> Thanks, Dave. My peers raised really good points.
I agree with most of them, but I'm going to disagree with the title of this session, which is, does hardware matter? It absolutely matters. You can't run software on thin air. You can't run it in an ephemeral cloud, although there's the theoretical cloud, and that's a different issue. The cloud has kind of changed everything. And from a market perspective, in the 40-plus years I've been in this business, I've seen this perception that hardware has to go down in price every year. And part of that was driven by Moore's law. And we're coming to, let's say, a lag or an end of Moore's law, depending on who you talk to. So we're not doubling our transistors in a chip every 18 to 24 months, and as a result of that, there's been a higher emphasis on software. From a market perception, there's no penalty. The market doesn't put the same pressure on software to reduce its cost every year that it puts on hardware, which is kind of bass-ackwards when you think about it. Hardware costs are fixed. Software costs tend to be very low. It's kind of a weird thing that we do in the market. And what's changing is we're now starting to treat hardware like software, from an OPEX versus CapEx perspective. So yes, hardware matters, and we'll talk about that more at length. >> You know, I want to follow up on that, and I wonder if you guys have a thought on this. Bob O'Donnell, you and I have talked about this a little bit. Marc, you just pointed out that Moore's law could be waning. Pat Gelsinger, recently at Intel's investor meeting, promised that Moore's law is alive and well. And the point I made in Breaking Analysis was, okay, great. Pat said doubling transistors every 18 to 24 months; let's say that Intel can do that, even though we know it's waning somewhat. Look at the M1 Ultra from Apple (chuckles). In about 15 months they increased transistor density on their package by 6X. So to your earlier point, Bob, we have these alternative processors that are really changing things. And to Dave Nicholson's point, there's a whole lot of supporting components as well. Do you have a comment on that, Bob? >> Yeah, I mean, it's a great point, Dave. And one thing to bear in mind as well: not only are we seeing a diversity of these different chip architectures and different types of components, as a number of us have raised, but the other big point, and I think it was Keith that mentioned it, is CXL, and interconnect on the chip itself is dramatically changing things. A lot of the more interesting advances that are going to continue to drive Moore's law forward, in terms of the way we think about performance, if perhaps not the number of transistors per se, are the interconnects that become available. You're seeing the development of chiplets or tiles, people use different names, but the idea is you can have different components being put together, eventually, in sort of a Lego-block style. And what that's also going to allow, not only is it going to give interesting performance possibilities because of the faster interconnects, so you can have shared memory between things, which for big workloads like AI with huge data sets can make a huge difference versus talking to memory over a network connection, for example, but you're also going to see more diversity in the types of solutions that can be built. So we're going to see even more choices in hardware from a silicon perspective, because you'll be able to piece together different elements.
And oh, by the way, the other benefit of that is we've reached a point in chip architectures where not everything benefits from being smaller. We've been so focused and so obsessed, when it comes to Moore's law, with the size of each individual transistor, and yes, for certain architecture types, CPUs and GPUs in particular, that's absolutely true. But we've already hit the point where things like RF for 5G and Wi-Fi and other wireless technologies, and a whole bunch of other things, actually don't get any better with a smaller transistor size. They actually get worse. So the beauty of these chiplet architectures is you can actually combine different chip manufacturing sizes. You hear about four nanometer and five nanometer along with 14 nanometer on a single chip, each one optimized for its specific application, yet together they can give you the best of all worlds. And so we're just at the very beginning of that era, which I think is going to drive a ton of innovation. Again, it gets back to my comment about different types of devices located in geographically different places: at the edge, in the data center, in a private cloud versus a public cloud. All of those things are going to be impacted, and there will be a lot more options because of this silicon diversity and this interconnect diversity that we're just starting to see. >> Yeah, and David Nicholson's got a graphic on that that we're going to show later. Before we do that, I want to introduce some data, and I actually want to ask Keith to comment on it before we go on. This next slide is some data from ETR that shows the percent of customers that cited difficulty procuring hardware. And you can see the red is they had significant issues, and it's most pronounced in laptops and networking hardware on the far right-hand side, but virtually all categories, firewalls, peripherals, servers, storage, are having moderately difficult procurement issues; that's the pinkish area, with the red showing significant challenges. So Keith, what are you seeing with your customers in the hardware supply chains and bottlenecks? We're seeing it with automobiles and appliances too, so these semiconductor challenges go beyond IT. What's been the impact on the buyer community and society, and do you have any sense as to when it will subside? >> You know, I was just asked this question yesterday, and I'm feeling the pain. Kind of a side project within CTO Advisor: we built a hybrid infrastructure, a traditional IT data center, where we're walking with the traditional customer and modernizing that data center. So it was, you know, kind of a snapshot in time, 2016, 2017: 10 gigabit, Arista switches, some older Dell R730xds, you know, speeds and feeds. And we said we would modernize that with the latest Intel stack and connect it to the public cloud, and then the pandemic hit, and we are experiencing a lot of the same challenges. I thought we'd easily migrate from 10 gig networking to the 25 gig networking path that customers are on. The 10 gig network switches that I bought used are now double the price, because you can't get legacy 10 gig network switches; all of the manufacturers are focusing capacity on the more profitable 25 gig. Even the 25 gig switches, and we're focused on networking right now, are hard to procure. We're talking about nine to 12 months or more of lead time. So we're seeing customers adjust by adopting cloud.
But if you remember, early on in the pandemic Microsoft Azure kind of gated customers that didn't have a capacity agreement. So customers are keeping an eye on that. There's a desire to abstract away from the underlying vendor, to be able to control or provision your IT services in the way that we do with VMware vSphere or some other virtualization technology, where it doesn't matter who can get me the hardware; whoever can get me the hardware, because it's critically impacting projects and timelines. >> So that's a great setup for you, Zeus, with what Keith mentioned earlier, the software-defined data center, software-defined networking, and cloud. Do you see a day where networking hardware is commoditized and it's all about the software, or are we there already? >> No, we're not there already, and I don't see that really happening any time in the near future. I do think it's changed, though. And just to be clear, I mean, when you look at that data, this is saying customers have had problems procuring the equipment, right? And there's not a network vendor out there... I've talked to Norman Rice at Extreme, and I've talked to the folks at Cisco and Arista about this, and they all said they could have had blowout quarters had they had the inventory to ship. So it's not like customers aren't buying this anymore, right? I do think, though, when it comes to networking, the network has certainly changed some, because there are a lot more controls, as I mentioned before, that you can do in software. And I think customers need to start thinking about the types of hardware they buy, where they're going to use it, and what its purpose is. Because I've talked to customers that have tried to run software on commodity hardware where the performance requirements are very high, and it's bogged down, right? It just doesn't have the horsepower to run it. And even when you do that, you have to start thinking about the components you use, the NICs you buy. I've talked to customers that have simply gone through the process of replacing a NIC card in a commodity box and had some performance problems, and things like that. So if agility is more important than performance, then by all means try running software on commodity hardware. I think that works in some cases. If performance, though, is more important, that's when you need that kind of turnkey hardware system. And I've actually seen more and more customers reverting back to that model. In fact, when you talk to even some startups today about how they come to market, they're delivering things more on appliances, because that's what customers want. And so there's this kind of pendulum between agility and performance. If performance absolutely matters, that's when you need to buy these kinds of turnkey, prebuilt hardware systems; if agility matters more, that's when you can go more to software. But the underlying hardware still does matter. So will we ever have a day where you can just run it on whatever hardware? Maybe, but I'll long be retired by that point, so I don't care. >> Well, you bring up a good point, Zeus. And I remember the early days of cloud, the narrative was, oh, the cloud vendors don't use EMC storage, they just run on commodity storage. And then of course, lo and behold, they trot out James Hamilton to talk about all the custom hardware they were building. And you saw Google and Microsoft follow suit.
>> Well, (indistinct) been calling for this forever, right? I mean, all the way back to the turn of the century, we were calling for the commoditization of hardware, and it's never really happened. As long as you can drive innovation into it, customers will always lean towards the innovation cycles, 'cause they get more features faster and things like that. And so the vendors have done a good job of keeping that cycle up, but it'll be a long time before that changes. >> Yeah, and that's why you see companies like Pure Storage, a storage company with 69% gross margins. All right, I want to jump ahead. We're going to bring up slide four. I want to go back to something that Bob O'Donnell was talking about, the sort of supporting act, the diversity of silicon. We've marched to the cadence of Moore's law for decades. We've asked, is Moore's law dead? We say it's moderating. Dave Nicholson, you want to talk about those supporting components, and you shared with us a slide on that shift. You call it a shift from a processor-centric world to a connectivity-centric world. What do you mean by that? Let's bring up slide four and you can talk to it. >> Yeah, yeah. So first, I want to echo the sentiment that the answer to the question, does hardware matter, is of course it matters. Maybe the real question should be, should you care about it? And the answer to that is, it depends on who you are. If you're an end user using an application on your mobile device, maybe you don't care how the architecture is put together; you just care that the service is delivered. But as you back away from that and get closer and closer to the source, someone needs to care about the hardware, and it should matter. Why? Because essentially what hardware is doing is consuming electricity and dollars, and the more efficiently you can configure hardware, the more bang you're going to get for your buck. So it's not only a quantitative question in terms of how much you can deliver, but it also ends up being a qualitative change, as capabilities allow for things we couldn't do before because we just didn't have the aggregate horsepower to do them. So this chart actually comes out of some performance tests that were done; it happens to be Dell servers with Broadcom components. And the point here was to peel off the top of the server and look at what's in that server, starting with the PCIe interconnect: PCIe Gen 3, Gen 4, moving forward. What are the effects of the interconnect on application performance, translating into new orders per minute processed per dollar, et cetera, et cetera? If you look at the advances in CPU architecture mapped against the advances in interconnect and storage subsystem performance, you can see that CPU architecture is sort of lagging behind, in a way. And Bob mentioned this idea of tiling and all of the different ways to get around that. When we do performance testing, we can actually peg CPUs just running the performance tests, without any actual database environments working. So right now we're at this sort of imbalance point, where you have to make sure you design things properly to get the most bang per kilowatt-hour of power, per dollar of input. So the key thing this is highlighting, as a very specific example: you take a card that's designed as a Gen 3 PCIe device and plug it into a Gen 4 slot. Now the card is the bottleneck. You plug a Gen 4 card into a Gen 4 slot.
Now the Gen 4 slot is the bottleneck. So we're constantly chasing these bottlenecks. Someone has to be focused on that; from an architectural perspective, it's critically important. So there's no question that it matters, but of course, various people in this food chain won't care where it comes from. I guess a good analogy might be, where does our food come from? If I get a steak, it's a pink thing wrapped in plastic, right? Well, there are a lot of inputs that a lot of people have to care about to get that to me. Do I care about all of those things? No. Are they important? They're critically important. >> So, okay, what does this all mean to customers? What I'm hearing from you is that balancing a system is becoming more complicated. And I've kind of been waiting for this day for a long time, because as we all know, the bottleneck was always the spinning disk, the last mechanical device. So people who wrote software knew that when they were doing IO, the disk had to go and do stuff, and so they were doing other things in the software. And now, with all these new interconnects and flash and the like, you can do atomic writes. So that opens up new software possibilities, and you combine that with alternative processors. But what's the so-what on this for the customer, and the application impact? Can anybody address that? >> Yeah, let me address that for a moment. I want to leverage some of the things that Bob said, Keith said, Zeus said, and David said. I'm a bit of a contrarian on some of this. For example, on the chip side: as the chips get smaller, 14 nanometer, 10 nanometer, five nanometer, soon three nanometer, we talk about more cores, but the biggest problem on the chip is the interconnect in the chip, 'cause the wires get smaller. People don't realize, in 2004 the latency on those wires in the chips was 80 picoseconds. Today it's 1,300 picoseconds. That's on the chip. This is why chips are not getting faster. So we may be seeing a bit of a slowdown in Moore's law. But even as we kind of conquer that, you still have the interconnect problem, and the interconnect problem goes beyond the chip. It goes within the system, composable architectures. It goes to the point Keith made: ultimately you need a hybrid, because what we're seeing, what I'm seeing in talking to customers, is that the biggest issue they have is moving data. Whether it be in a chip, in a system, in a data center, or between data centers, moving data is now the biggest gating item in performance. So if you want to move it from, let's say, your transactional database to your machine learning, the bottleneck is moving the data. And so when you look at it from a distributed environment, now you've got to move the compute to the data. The only way to get around these bottlenecks today is to spend less time trying to move the data and more time taking the compute, the software running on hardware, closer to the data. Go ahead. >> So is this what you mean, when Nicholson was talking about a shift from a processor-centric world to a connectivity-centric world? You're talking about moving the bits across all the different components, and the processor, you're saying, is essentially becoming the bottleneck, or the memory, I guess.
Can we move the compute, the processing closer to the data? Because if we keep them separate and this has been a trend now where people are moving processing away from it. It's like the edge. I think it was Zeus or David. You were talking about the edge earlier. As you look at the edge, who defines the edge, right? Is the edge a closet or is it a sensor? If it's a sensor, how do you do AI at the edge? When you don't have enough power, you don't have enough computable. People were inventing chips to do that. To do all that at the edge, to do AI within the sensor, instead of moving the data to a data center or a cloud to do the processing. Because the lag in latency is always limited by speed of light. How fast can you move the electrons? And all this interconnecting, all the processing, and all the improvement we're seeing in the PCIE bus from three, to four, to five, to CXL, to a higher bandwidth on the network. And that's all great but none of that deals with the speed of light latency. And that's an-- Go ahead. >> You know Marc, no, I just want to just because what you're referring to could be looked at at a macro level, which I think is what you're describing. You can also look at it at a more micro level from a systems design perspective, right? I'm going to be the resident knuckle dragging hardware guy on the panel today. But it's exactly right. You moving compute closer to data includes concepts like peripheral cards that have built in intelligence, right? So again, in some of this testing that I'm referring to, we saw dramatic improvements when you basically took the horsepower instead of using the CPU horsepower for the like IO. Now you have essentially offload engines in the form of storage controllers, rate controllers, of course, for ethernet NICs, smart NICs. And so when you can have these sort of offload engines and we've gone through these waves over time. People think, well, wait a minute, raid controller and NVMe? You know, flash storage devices. Does that make sense? It turns out it does. Why? Because you're actually at a micro level doing exactly what you're referring to. You're bringing compute closer to the data. Now, closer to the data meaning closer to the data storage subsystem. It doesn't solve the macro issue that you're referring to but it is important. Again, going back to this idea of system design optimization, always chasing the bottleneck, plugging the holes. Someone needs to do that in this value chain in order to get the best value for every kilowatt hour of power and every dollar. >> Yeah. >> Well this whole drive performance has created some really interesting architectural designs, right? Like Nickelson, the rise of the DPU right? Brings more processing power into systems that already had a lot of processing power. There's also been some really interesting, you know, kind of innovation in the area of systems architecture too. If you look at the way Nvidia goes to market, their drive kit is a prebuilt piece of hardware, you know, optimized for self-driving cars, right? They partnered with Pure Storage and ARISTA to build that AI-ready infrastructure. I remember when I talked to Charlie Giancarlo, the CEO of Pure about when the three companies rolled that out. He said, "Look, if you're going to do AI, "you need good store. "You need fast storage, fast processor and fast network." And so for customers to be able to put that together themselves was very, very difficult. There's a lot of software that needs tuning as well. 
So the three companies partnered to create a fully integrated turnkey hardware system with a bunch of optimized software that runs on it. And in that case, in some ways, the hardware was leading the software innovation. And so the variety of different architectures we have today around hardware has really exploded, and I think it's part of what Bob brought up at the beginning about the different chip designs. >> Yeah, Bob talked about that earlier. Bob, I mean, most AI today is modeling, you know, and a lot of that's done in the cloud, and it looks, from my standpoint anyway, like the future is going to be a lot of AI inferencing at the edge. And that's a radically different architecture, Bob, isn't it? >> It is. It's a completely different architecture. And just to follow up on a couple of points, excellent conversation, guys. Dave talked about system architecture, and really that's what this boils down to, right? But it's looking at architecture at every level. I was talking about the individual different components and the new interconnect methods. There's this new thing called UCIe, a universal chiplet interconnect; I forget exactly what it stands for, but it's a mechanism for doing chiplet architectures. But then again, you have to take it up to the system level, 'cause it's all fine and good if you have this SoC that's tuned and optimized, but it has to talk to the rest of the system. And that's where you see other issues, and you've seen things like CXL and other interconnect standards. Nobody likes to talk about interconnect 'cause it's really wonky and really technical and not that sexy, but at the end of the day it's incredibly important. Exactly to the other points that were being raised, like Marc raised, for example, about getting that compute closer to where the data is, that's where, again, a diversity of chip architectures helps. And exactly to your last comment there, Dave, putting that ability in an edge device is really at the cutting edge of what we're seeing in semiconductor design. Maybe it's an FPGA, maybe it's a dedicated AI chip; it's another kind of chip architecture that's being created to do that inferencing at the edge. Because again, the cost and the challenges of moving lots of data, whether it be from, say, a smartphone to a cloud-based application, or from a private network to a cloud, or any other permutation we can think of, really matter. And the other thing is, we're tackling bigger problems. So architecturally, not even just within a system, but when we think about DPUs and the sort of east-west data center movement conversation that we hear Nvidia and others talk about, it's about combining multiple sets of these systems to function together more efficiently, again with even bigger sets of data. So it really is about tackling where the processing is needed, and having the interconnect and the ability to get the data you need to the right place at the right time. And because those needs are diversifying, we're just going to continue to see an explosion of different choices and options, which is going to make hardware even more essential, I would argue, than it is today. And so I think what we're going to see is not only does hardware matter, it's going to matter even more in the future than it does now. >> Great, yeah. Great discussion, guys. I want to bring Keith back into the conversation here.
Keith, if your main expertise in tech is provisioning LUNs, you probably want to look for another job. So clearly hardware matters, but with software-defined everything, do people with hardware expertise matter, outside of, for instance, component manufacturers or cloud companies? I mean, VMware certainly changed the dynamic in servers. Dell just spun off its most profitable asset in VMware, so it obviously thinks hardware can stand alone. How does an enterprise architect view the shift to software-defined and hyperscale cloud, and how do you see the shifting demand for skills in enterprise IT? >> So I love the question, and I'll take a different view of it. If you're a data analyst and your primary value-add is that you do ETL transformation... I talked to a CDO, a chief data officer, of a midsize bank a little while ago. He said 80% of his data scientists' time is spent on ETL. Super not value-add. He wants his data scientists to do data science work. Chances are, if your only value is that you do LUN provisioning, then you probably don't have a job now. The technologies have gotten much more intelligent. We want to give infrastructure pros the opportunity to shine, and I think the software-defined nature and the automation that we're seeing vendors undertake, whether it's Dell, HP, Lenovo, take your pick, or Pure Storage, NetApp, that are doing the automation and the ML needed so that these practitioners don't spend 80% of their time doing LUN provisioning, let them focus on their true expertise, which is ensuring that data is stored, data is retrievable, data is protected, et cetera. I think the shift is to focus on the part of the job that ensures that, no matter where the data's at... because my data is spread across the enterprise, hybrid, different types. You know, Dave, you talk about the supercloud a lot. If my data is in the supercloud, protecting and securing that data becomes much more complicated than when it was me just procuring or provisioning LUNs. So when you ask where the shift should be, it's to focus on the real value, which is making sure that customers can access data, can recover data, can get data at the performance levels they need, within the price point they need, and where they need it. One last point about this interconnectivity: I have this vision, and I think we all do, of composable infrastructure. This idea that scale-out does not solve every problem. The cloud can give me infinite scale-out, but sometimes I just need a single OS with 64 terabytes of RAM and 204 GPUs or GPU instances, and that single OS does not exist today. The opportunity is to create composable infrastructure so that we solve a lot of these problems that simply don't scale out. >> You know, wow, so many interesting points there. I just interviewed Zhamak Dehghani, the creator of data mesh, last week, and she made a really interesting point. She said, "Think about it: we have separate stacks. We have an application stack, and we have a data pipeline stack. The transaction systems, the transaction database, we extract data from that," to your point, "we ETL it in, you know, it takes forever, and then we have this separate sort of data stack." If we're going to inject more intelligence and data and AI into applications, those two stacks, her contention is, have to come together.
And when you think about supercloud, bringing compute to data, that was what Hadoop was supposed to be. It ended up all sort of going into a central location, but it's almost a rhetorical question. I mean, it seems that that necessitates new thinking around hardware architectures, as kind of everything becomes the edge. And the other point is, to your point, Keith, it's really hard to secure that. So when you think about offloads, right, you've heard the stats, Nvidia talks about it, Broadcom talks about it, that 25 to 30% of CPU cycles are wasted on doing things like storage offloads, or networking, or security. It seems like, and maybe Zeus you have a comment on this, it seems like new architectures need to come together to support all of that stuff that Keith and I just discussed. >> Yeah, and by the way, I do want to address, Keith, the question you just asked. It's the point I made at the beginning too, that engineers do need to be more software-centric, right? They do need to have better software skills. In fact, I remember talking to Cisco about this last year; when they surveyed their engineer base, only about a third of 'em had ever made an API call, which kind of shows the big skillset change that has to come. But on the point of architectures, I think the big change here is edge, because it brings in distributed compute models. Historically, when you think about compute, even with multi-cloud, we never really had multi-cloud. We'd use multiple centralized clouds, but compute was always centralized, right? It was in a branch office, in a data center, in a cloud. What edge creates is the rise of distributed computing, where we'll have an application that actually accesses different resources at different edge locations. And I think, Marc, you were talking about this: the edge could be in your IoT device, it could be your campus edge, it could be the cellular edge, it could be your car, right? And so we need to start thinkin' about how our applications interact with all those different parts of that edge ecosystem to create a single experience. A lot of consumer apps largely work that way. If you think of an app like Uber, right, it pulls in information from all kinds of different edge applications, edge services, and it creates a pretty cool experience. We're just starting to get to that point in the business world now. There are a lot of security implications and things like that, but I do think it drives more architectural decisions to be made about how I deploy what data where, and where I do my processing, where I do my AI, and things like that. It actually makes the world more complicated. In some ways we can do so much more with it, but I think it does drive us more towards turnkey systems, at least initially, in order to ensure performance and security. >> Right. Marc, I wanted to go to you. You had indicated to me that you wanted to chat about this a little bit. You've written quite a bit about the integration of hardware and software. You know, we've watched Oracle's move from buying Sun and then basically using that in a highly differentiated approach, engineered systems. What's your take on all that? I know you also have some thoughts on the shift from CapEx to OPEX; chime in on that. >> Sure. When you look at it, there are advantages to having one vendor who has the software and hardware.
They can synergistically make them work together in ways that you can't on a commodity basis, where you own the software and somebody else has the hardware. I'll give you an example: Oracle. As you talked about with their Exadata platform, they literally are leveraging microcode in the Intel chips, and now in AMD chips, and all the way down to Optane. They make their database servers work with Optane memory, PMM, in their storage systems; not NVMe SSDs, PMM, I'm talking about the cards themselves. So there are advantages you can take advantage of if you own the stack, as you were putting out earlier, Dave, both the software and the hardware. Okay, that's great. But on the other side of that, it tends to give you better performance, but it tends to cost a little more. On the commodity side, it costs less but you get less performance. As Zeus said earlier, it depends on where you're running your application, how much performance you need, and what kind of performance you need. One of the issues about moving to the edge, and I'll get to the OPEX-CapEx question in a second, is what kind of processing do you need? If you're running in a CCTV camera on top of a traffic light, how much power do you have? How much cooling do you have to run this? And more importantly, do you have to take the data you're getting, move it somewhere else to get processed, and have the information sent back? I mean, there are companies out there like BrainChip that have developed AI chips that can run on the sensor without a CPU, without any additional memory. So there's innovation going on to deal with this question of data movement. There are companies out there like Tachyum that are combining GPUs, CPUs, and DPUs in a single chip. Think of it as super-composable architecture. They're looking at being able to do more in less. On the OPEX and CapEx issue-- >> Hold that thought, hold that thought on the OPEX-CapEx, 'cause we're running out of time, and maybe you can wrap on that. I just wanted to pick up on something you said about integrated hardware and software. I mean, other than the fact that Michael Dell unlocked whatever $40 billion for himself and Silver Lake, I was always a fan of a spin-in with VMware, Dell basically becoming the Oracle of hardware. Now, I know it would've been a nightmare for the ecosystem, and culturally they probably would've had a VMware brain drain, but does anybody have any thoughts on that as a sort of thought exercise? I was always a fan of that on paper. >> I've got to eat a little crow here. I did not like the Dell-VMware acquisition for the industry in general, and I think it hurt the industry in general; HPE and Cisco walked away a little bit from that VMware relationship. But when I talked to customers, they loved it. I've got to be honest, they absolutely loved the integration. The VxRail, VxRack solution exploded. Nutanix became kind of an afterthought when it came to competing. So that spin-in, when we talk about the ability to innovate and the ability to create solutions that you simply can't create because you don't have the full stack, Dell was well positioned to do that with a potential spin-in of VMware. >> Yeah, we're going to be-- Go ahead, please. >> Yeah, in fact, I think you're right, Keith. It was terrible for the industry, great for Dell.
And I remember talking to Chad Sakac when he was running VCE, which became VxRack and VxRail. Their ability to stay in lockstep with what VMware was doing... What was the number one workload running on hyperconverged forever? It was VMware. So their ability to remain in lockstep with VMware gave them a huge competitive advantage, and Dell came out of nowhere in the hyperconverged market and just started taking share because of that relationship. So from a Dell perspective, I thought it gave them a pretty big advantage that they didn't really exploit across their other properties, right? Networking and servers and things like that, which they could have, given the dominance that VMware had. From an industry perspective, though, I do think it's better to have them decoupled. >> I agree. I mean, I think they could have dominated in supercloud, and maybe they would've become the next Oracle, where everybody hates 'em but they kick ass. But guys, we've got to wrap up here, and so what I'm going to do is go in reverse order this time. Big takeaways from this conversation today, which, guys, by the way, I can't thank you enough for, phenomenal insights. Big takeaways, any final thoughts, any research that you're working on that you want to highlight, or what you're looking for in the future. Try to keep it brief. We'll go in reverse order. Maybe Marc, you could start us off, please. >> Sure. On the research front, I'm working on a total cost of ownership comparison of an integrated database, analytics, and machine learning system versus separate services. On the other aspect that I wanted to chat about real quickly, OPEX versus CapEx: the cloud changed the market perception of hardware in the sense that you can use hardware, or buy hardware, like you do software. As you use it, you pay for what you use, in arrears. The good thing about that is you're only paying for what you use, period. You're not paying for what you don't use. I mean, it's compute time, everything else. The bad side is you have no predictability in your bill. It's elastic, but every user I've talked to says every month it's different, and from a budgeting perspective it's very hard to set up your budget year to year, and it's causing a lot of nightmares. From a CapEx perspective, you have no more CapEx if you're using that kind of usage-based system, but you lose a certain amount of control as well. So ultimately those are some of the issues. But my biggest takeaway from this is that the biggest issue right now for everybody I talk to, in some shape or form, comes down to data movement, whether it be the ETL that you talked about, Keith, or other aspects: moving it between hybrid locations, moving it within a system, moving it within a chip. All of those are key issues. >> Great, thank you. Okay, CTO Advisor, give us your final thoughts. >> All right. Really, really great commentary. Again, I'm going to point back to us taking the walk that our customers are taking, which is trying to do this conversion of an all on-premises data center to a hybrid. I have this hard-earned philosophy that enterprise IT is additive: when we add a service, we rarely subtract a service. So the landscape and surface area of what we support has to grow, and our research focuses on taking that walk.
We are taking a monolithic application, decomposing it into containers, putting that in a public cloud, connecting that back to the private data center, and telling that story, walking that walk with our customers. This has been a super enlightening panel. >> Yeah, thank you. A real, real different world coming. David Nicholson, please. >> You know, it really hearkens back to the beginning of the conversation. You talked about momentum in the direction of cloud. I'm sort of spending my time under the hood, getting grease under my fingernails, focusing on where the lion's share of spend will still be in coming years, which is on-prem, and then of course, obviously, data center infrastructure for cloud. But really diving under the covers and helping folks understand the ramifications of movement between generations of CPU architecture. I know we all know Sapphire Rapids got pushed into the future. When's the next Intel release coming? Who knows? We think, you know, in 2023. There have been a lot of people standing by, from a practitioner's standpoint, asking, well, what do I do between now and then? Does it make sense to upgrade bits and pieces of hardware, or go from a last generation to a current generation, when we know the next generation is coming? And so I've been very, very focused on looking at how these connectivity components, like RAID controllers and NICs, I know it's not as sexy as talking about cloud, but just how these components completely change the game and can actually justify movement from, say, a 14th-generation architecture to a 15th-generation architecture today, even though gen 16 is coming, let's say, 12 months from now. So that's where I am. Keep my phone number in the Rolodex. I literally reference the Rolodex intentionally, because like I said, I'm in there under the hood, and it's not as sexy. But yeah, that's what I'm focused on, Dave. >> Well, you know, to paraphrase, maybe a derivative paraphrase of Larry Ellison's rant on what is cloud: it's operating systems and databases, et cetera. RAID controllers and NICs live inside of clouds. All right. You know, one of the reasons I love working with you guys is 'cause you have such a wide observation space, and Zeus Kerravala, you of all people, you know you have your fingers in a lot of pies. So give us your final thoughts. >> Yeah, I'm not as propeller-heady as my chip counterparts here. (all laugh) So, you know, I look at the world a little differently, and a lot of the research I'm doing now is on the impact that distributed computing has on customer and employee experiences, right? You talk to every business, and the experiences they deliver to their customers are really differentiating how they go to market. And so they're looking at these different ways of serving up data and analytics and things like that in different places. And I think this is going to have a really profound impact on enterprise IT architecture. We're putting more data, more compute, in more places, all the way down to little micro edges and retailers and things like that. And so we need the variety. Historically, if you think back to when I was in IT, you know, pre-Y2K, we didn't have a lot of choice in things, right? We had a server that was rack-mount or standup, right? And there wasn't a whole lot of difference in choice. But today we can deploy these really high-performance compute systems on little blades inside servers, or inside, you know, autonomous vehicles and things. I think the world from here gets...
You know, just the choice of what we have, and the way hardware and software work together, is really going to, I think, change the way we do things. We're already seeing that, like I said, in the consumer world, right? There are so many things you can do from, you know, a smart home perspective, natural language processing, stuff like that. And it's starting to hit businesses now. So just wait and watch the next five years. >> Yeah, totally. The computing power at the edge is just going to be mind-blowing. >> It's unbelievable what you can do at the edge. >> Yeah, yeah. Hey, Z, I just want to say that we know you're not a propeller head, and I, for one, would like to thank you for having your master's thesis hanging on the wall behind you, 'cause we know that you studied basket weaving. >> I was actually a physics and math major, so. >> Good man. Another math major. All right, Bob O'Donnell, you're going to bring us home. I mean, we've seen the importance of semiconductors and silicon in our everyday lives, but your last thoughts, please. >> Sure. And just to clarify, by the way, I was a great books major, and this was actually my final paper. And so I was like philosophy and all that kind of stuff and literature, but I still somehow got into tech. Look, it's been a great conversation, and I want to pick up a little bit on a comment Zeus made, which is that it's the combination of the hardware and the software coming together, and the manner in which that needs to happen, that I think is critically important. And the other thing is, because of the diversity of the chip architectures and all those different pieces and elements, it's going to be how software tools evolve to adapt to that new world. So I look at things like what Intel's trying to do with oneAPI, what Nvidia has done with CUDA, what other platform companies are doing to create tools that allow them to leverage the hardware but also embrace the variety of hardware that is there. And as those software development environments and tools evolve to take advantage of these new capabilities, that's going to open up a lot of interesting opportunities that can leverage all these new chip architectures, all these new interconnects, all these new system architectures, and figure out ways to make that all happen. I think that's going to be critically important. And then finally, I'll mention that the research I'm currently working on is on private 5G: how companies are thinking about deploying private 5G, and the potential for edge applications for it. So I'm doing a survey of several hundred US companies as we speak, and really looking forward to getting that done in the next couple of weeks. >> Yeah, look forward to that. Guys, again, thank you so much. Outstanding conversation. Anybody going to be at Dell Tech World in a couple of weeks? Bob's going to be there. Dave Nicholson. Well, drinks on me, and guys, I really can't thank you enough for the insights and your participation today. Really appreciate it. Okay, and thank you for watching this special power panel episode of theCUBE Insights powered by ETR. Remember, we publish each week on SiliconANGLE.com and Wikibon.com, and all these episodes are available as podcasts. DM me or any of these guys; I'm @DVellante. You can email me at David.Vellante@siliconangle.com. Check out etr.ai for all the data. This is Dave Vellante. We'll see you next time. (upbeat music)

Published: April 25, 2022


Richard Hummel, Netscout | Threat Report Episode 1


 

>>Kicking things off for NetScout's latest threat intelligence report, I'm Lisa Martin with Richard Hummel, manager of threat intelligence at NetScout. We're going to be talking about DDoS for hire — it's a free-for-all. Richard, welcome to the program. >> Thanks for having me. Lisa, it's always a pleasure to do interviews with you here on theCUBE. >> Likewise. So, Richard, the dark web is a dangerous place. We know that adversaries own and operate DDoS-for-hire platforms and botnets to launch everything from free tests to high-powered multi-vector attacks. What did you find? What kinds of attacks are being launched on the dark web? >> Sadly, any and every type of attack. And I think you put it eloquently: it's free. A little while ago, I got a question from a media journalist I was talking to, who asked, what is the average cost of a DDoS attack? My gut reaction was maybe 10 or 20 USD. I even asked another reporter later on, what do you think it costs? And he came out with two or three hundred USD. So that was kind of the expectation. Well, just because of that question, I brought up my lab and said, you know what, I'm just going to sleuth a little bit. So I started logging in and looking at these underground platforms, and I spent time on 19 of the hundreds out there — there's a website that lists something like three or 400 of these things — but I just chose the top 19. When I started looking at them, every platform I evaluated had some form of free attacks you could launch. These are the typical four or five attacks like NTP, CLDAP, and DNS amplification — the more routine types of attacks we see in the DDoS threat landscape — and they're free. Then it scales from there. You have $5 entry fees for trials. You have a week trial. You can go all the way up to 6,500 USD, where the adversary purports to launch a one terabit per second attack at that cost. There's another one that says, hey, we have 150,000 botnet nodes — pay us $2,500 and you can launch from this platform. And they also have customization. They have these little sliders: you can go in and say, I have five targets, I want to launch 10 attacks at once, I want it to last this many minutes, these are the vectors I want to use — and it just tells you, here's what you've got to pay. Now, it used to be that you needed a crypto wallet to even launch a DDoS attack. That's no longer the case. Second, it used to be cryptocurrency only. Now they take PayPal, they take wire transfers, they do Western Union transfers. So yeah, that barrier to entry doesn't exist anymore. >> Wow. The evolution of DDoS attacks, the low barrier to entry, the customization. You mentioned that you researched the top 19 validated DDoS-for-hire services, and you captured the types of attacks, the reported number of users, and the costs to launch. What are some of the things that really stuck out to you? >> I think the biggest outlier I saw was the sheer number of attack types these platforms purport to launch — that, combined with one other metric that I'll tell you about in just a minute. When I started adding these up, I came out with a list of something like 450 different line items. That's taking the attack types from all 19 of these platforms and putting them into a spreadsheet. And when I got rid of the duplicates and looked at each one of these to see, did they call it this, and then this one called it that — there were still 200 different types of attacks.
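(For the curious, the deduplication Hummel describes is simple to sketch. Here's a minimal illustration in Python — the platform names, vector names, and alias table below are invented for the example; this is not NETSCOUT's actual dataset or tooling.)

```python
# Minimal sketch: collapsing attack-type names collected from multiple
# DDoS-for-hire platforms into a deduplicated list. All names below are
# invented for illustration only.
from collections import defaultdict
import re

listings = {
    "platform-a": ["NTP Amplification", "DNS-Amplification", "CLDAP reflection"],
    "platform-b": ["ntp amplification", "dns amplification", "HTTP flood"],
    "platform-c": ["NTP_AMP", "CLDAP Reflection", "http-flood"],
}

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and collapse common aliases."""
    key = re.sub(r"[^a-z0-9]+", " ", name.lower()).strip()
    aliases = {"ntp amp": "ntp amplification"}  # assumed alias table
    return aliases.get(key, key)

distinct = defaultdict(set)  # normalized vector -> platforms offering it
for platform, vectors in listings.items():
    for v in vectors:
        distinct[normalize(v)].add(platform)

print(sum(len(v) for v in listings.values()), "raw line items")
print(len(distinct), "distinct attack types")
```

The point of the normalization step is that "NTP_AMP" and "ntp amplification" should count once — which is how 450 raw line items collapse to roughly 200 distinct attack types.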
And these attacks are not just your typical volumetric things or your typical botnet-related things. I mean, they're going after applications. They're going after CAPTCHA pages. They're going after website-based anti-DDoS protections. They're going after specific games — Grand Theft Auto, Counter-Strike, all of these — and they have specific attacks designed to overwhelm those layers. You can actually see in some of the news or update boxes on their platforms that they post rolling updates, similar to what you'd see with a Microsoft update: here's what changed. They'll list, oh, we added this CAPTCHA bypass, or we tweaked this bypass, or, guess what, we added a new server and now you have more power to launch bigger attacks. The other thing that really surprised me was the sheer number of users and attacks they purport to have and to have launched. Across these 19 platforms, I counted over 1 million registered users. Now, it could be that the same users are registered across multiple platforms, so maybe that's a little redundant — but a million, across 19. And then the attacks — just whatever they showed on their platforms. I don't know what time segment that covers; it could be all time, it could be a snapshot. But 19 of several hundred of these things: more than 10 million attacks. Now, in 2020 we saw 10 million attacks for the whole year; in 2021 we saw 9.7 million. So you can see it: we're not seeing the whole breadth of the threat landscape — we see probably about a third of the world's internet traffic. And if what they say is true, there are a lot more attacks out there than even we talk about. >> A lot more attacks than are even uncovered. That's shocking. The evolution of DDoS is also quite shocking. One of the things I noticed in the first-half 2021 threat intelligence report that NetScout published was that some of the underground services offer blacklists or delisting services to prevent attacks. And I thought, that sounds like a good thing — but what does it really mean? >> So actually, when we were writing the last threat report, a colleague of mine, Roland Dobbins, talked about this. He said, hey, I saw these quasi-legal organizations that talk about listing you — and then turn around and sell those lists. So I started researching it a bit, and it turns out these organizations purport to be VPN services. They also say, we offer these kinds of block lists, we offer this VPN service — but we are also collecting your IP address. And if you don't want us to resell that to somebody else, or if you want us to add you to the list so people can't attack you based on what they're seeing on the VPN, then you pay us money, in different tiers. You can say block me for a week, or block me for a lifetime, across all these different platforms. Not all of them — probably four of the 19 I looked at had this service. Now, as a user, I'm not going to go to every single DDoS-for-hire platform. I'm not going to purchase the VPN from every single one of these.
I'm not going to go and add myself to their denylists across all of these things — that's way too much work, for one, and the cost is going to be in the thousands, if not tens of thousands, as you start adding all of these together. So they purport to do something good, and in turn they take your information and sell it. What's worse, they actually tie your username, your handle, or your gamer tag to that IP address. >> And so now you have this full list of IPs with gamer tags. An adversary that has no qualms or scruples about launching DDoS attacks can then purchase that list. And guess what — this gamer over here with this gamer tag always beats me, I don't want to face him anymore. So any time I see him in a match, I'll go over to this DDoS-for-hire platform and launch an attack against him, try to knock him offline. That's the kind of shady business practice we're seeing in the underground forums. >> Well, I knew that wasn't good — I knew you'd give me the skinny on what that was. So another thing I was wondering about: despite the incredible diversity of these platforms that you talked about, the majority of attack types you saw are recognized and mitigated by standard defensive practices. Is that another bad disguised as good? >> No, in this case it is very much good. As far as I've seen, there's not a single DDoS attack type from a booter/stressor service to date that you can't mitigate using preparation and typical DDoS mitigation and protection systems. Even the bandwidth, the throughput — what some people call the size or speed of attacks — we don't really see anything in the terabit-per-second range from these services. Now, they'll boast about the capability to do X number of packets per second, or this size of attack. Some of them will even say, hey, pay us this money and we'll give you a one terabit per second attack. To date, in the four years I've been at NetScout — and even some of my colleagues who've been around this space for decades — we have yet to see an attack sourced from one of these DDoS-for-hire platforms exceed one terabit per second in bandwidth or volume. So they might talk a big game, they might boast, but oftentimes it's smoke and mirrors — a way to get people onto their platforms to purchase things. If I had to pick an average size for these booter/stressor attacks on the high end, I'd say around 150 to 200 gigabits per second. To a small organization that might seem huge, but to a service provider it's probably a drop in the bucket, and they can absorb that across their network even without top-of-the-line mitigation services. So just have something in place: understand how adversaries launch these attacks, what the attack vectors are — do some research. We have this portal called Omnis Threat Horizon, where you can go in, pick your industry segment and your country, and just look to see: are there attacks against people like me, in my country?
And so, understanding that you are a target of attacks — and it's not an if, it's a when — you can then say, okay, I probably need provisions in place up to this threshold. Sure, there will be attacks that exceed it, but at least you're doing due diligence to have some measure of protection, understanding that these are the typical kinds of attacks you can expect. >> Yeah, that due diligence is key. Richard, thanks for joining me to talk about DDoS for hire — a lot of interesting things uncovered there. In a moment, Richard and I will be back to talk about the rise of server-class botnet armies.
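(A back-of-the-envelope version of that due-diligence exercise, using the 150–200 Gbps high-end booter figure from the conversation above. The provisioned-capacity number is an invented example, not a recommendation.)

```python
# Due-diligence headroom check, using the ballpark quoted in the interview:
# booter attacks on the high end run around 150-200 Gbps. The provisioned
# figure below is an invented example for illustration.
TYPICAL_BOOTER_GBPS = 200          # high-end estimate quoted above
provisioned_mitigation_gbps = 300  # assumption: what your provider absorbs

headroom = provisioned_mitigation_gbps - TYPICAL_BOOTER_GBPS
print(f"Headroom over a high-end booter attack: {headroom} Gbps")
if headroom < 0:
    print("Provision more scrubbing capacity, or expect saturation.")
```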

Published Date : Mar 22 2022


Richard Hummel, Netscout Episode 2


 

>>Kicking things off, I'm Lisa Martin with Richard Hummel, manager of threat intelligence at NetScout. In this segment we're going to be talking about the rise of server-class botnet armies. Richard, good to see you. >> Again, Lisa, as always. >> Likewise. So, botnet armies — it sounds a bit ominous, especially given the current global climate. Now, the first botnets came in the early 1990s. Those were comprised of servers, followed over the years by PCs and then IoT botnets. But recently, in the second half of 2021, what have you seen with respect to botnets and these armies? >> Yeah, I think it's important to look at the history. Where did we come from? How did we get here? What kicked off this phenomenon of botnets, specifically DDoS-related botnets? Botnets have existed for a long time — Lisa, you mentioned the nineties. Then we move into the two-thousands, with IoT devices entering the scene. Around 2013 you start to hear more about these IoT botnets and their surge, but it wasn't until 2016, when the Mirai code was publicly released, that we all heard about the Dyn attacks — record-breaking at the time. Oh man, they launched this 600 gigabit per second attack using an IoT botnet, and the world is on fire and everything's going to burn down. That was the feeling at the time. Little did we know that IoT-based botnets typically have limits. The reason is that an IoT device itself doesn't have a whole lot of processing capability, and often they sit in home networks that don't have high bandwidth or throughput. Now, that is changing, right? The world is adopting 5G — and even 4G, using mobile hotspots — and with IoT devices being directly connected to 5G networks, you're talking about much more bandwidth and throughput. But they're still limited by what the device itself can do; an IoT device on its own probably can't generate much throughput. But what happens if you can compromise really high-powered devices — routers, or even server-grade routers, or servers themselves sitting in data centers? So enter what we saw in the second half of the year. I think a lot of us heard about the recent attacks, with the Meris botnet taking down notable websites. Meris is a little bit different because it uses what's called HTTP pipelining. Essentially, the botnet — which today sits on MikroTik routers, compromised using an old vulnerability from 2018 — will queue up a bunch of HTTP requests on its nodes and then release the gate. All of these requests essentially flood a web server, and the server just can't handle it. Maybe it processes the first few thousand, but eventually it slows down and then completely chokes. That's how the attack works. Now, the Meris botnet leverages these MikroTik routers — again, via a vulnerability from 2018. What was notable about that vulnerability is that you could force the router itself to give you the username and password. And even if you patch those routers, unless you explicitly change the usernames and passwords, those credentials persist through the patch.
>>And so enter a new botnet, called Dvinis, that takes advantage of that same existing vulnerability but leverages those persisted credentials to compromise routers. So now you have two botnets operating on these MikroTik routers, which often sit in high-bandwidth, high-throughput networks, able to launch really fast, potent attacks. Now, on to the third one. This is a version of Mirai that has been forked and now uses an exploit against GitLab servers to compromise server-grade hardware. So if it wasn't bad enough that you have these high-powered routers, now you're talking about a server that might have a 10 gig interface. What happens if you get a hundred, or even a thousand, of these launching a really fast attack? So yes, it's the rise of a server-class botnet army — and "army," I think, is very apt here. Often we think about botnets and we used to use the term zombies, or zombie networks. You never really hear that much lately, because a zombie basically just exists — it's out there and doesn't do anything until it's activated. But in the DDoS world, these botnets are typically always active. So I don't really consider them zombies, because they're always brute-forcing, always trying to propagate, and they do this automatically. A lot of the time when we see these connections coming into things like our honeypots, it's Mirai, or Satori, Lucifer, Gafgyt, XorDDoS — I could go on. There are a lot of these IoT botnets out there, but more and more they're turning toward higher-powered hardware and servers to up the potency of their attacks. >> Let's talk about speed for a second. You mentioned the new server-class Mirai botnets. One of the things the report uncovered was that online criminals were able to very quickly employ them to launch attacks that were pretty vicious. Why were they able to do that so quickly? >> The ecosystem in the criminal underground is so fast, so rapid. They have no red tape. Look at it from a defensive standpoint: new hardware or software rolls out, a new patch rolls out — what do we have to do? We go through a process of validating it, testing it against our network, figuring out whether it's going to tip anything over. Maybe we deploy first to a staging environment. Then we get executive sign-off. We check it against industry benchmarks. And even for critical patches, it can take us months to roll them out. Adversaries have none of that — no oversight. A new vulnerability comes out, a new capability, a new exploit, and the very next day we're seeing it in Metasploit modules. A couple of days later we're seeing it in Mirai and various other IoT flavors of malware. These guys have super fast, rapid adoption of new things, with zero overhead. So they can put this into practice very, very quickly — not just in bots, but even in DDoS-for-hire platforms, which start using these novel attack vectors very soon after they've been uncovered or revealed. >> No overhead, no red tape.
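(One defensive idea implied by the pipelining description above is simply watching per-source request rates. Here's a toy sliding-window sketch in Python; the window size and threshold are invented for illustration and are not any product's real defaults.)

```python
# Toy illustration: flag sources whose request rate in a sliding window
# exceeds a cap -- the kind of signal that helps catch request floods like
# the HTTP-pipelining technique described above. Numbers are invented.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0
MAX_REQUESTS_PER_WINDOW = 100  # assumption: tune to your real traffic

recent = defaultdict(deque)  # source ip -> timestamps of recent requests

def is_flooding(src_ip, now=None):
    """Record one request and report whether the source exceeds the cap."""
    now = time.monotonic() if now is None else now
    q = recent[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS_PER_WINDOW

# A burst of pipelined requests from one source trips the check.
for i in range(150):
    flagged = is_flooding("203.0.113.7", now=i * 0.001)
print("flagged" if flagged else "ok")
```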
That must be nice. Another thing I noticed in the report: in the second half of 2021, NetScout saw the first known terabit-class, direct-path DDoS attack. Terabit class — what's the significance of that? >> The significance is that, as I said, with IoT, achieving those levels is very difficult, because IoT devices can't generate that amount of bandwidth. But with these botnets sitting on segments of the internet that have one gig or even 10 gig of capacity, there's the power to generate enough traffic to reach those volumes. It's something we've never seen before. Even going all the way back to the Dyn attacks with IoT and Mirai, we were talking about hundreds of thousands of devices contributing to that 600 gigabit per second range — a lot by those standards. And I'd say we probably have more botnets existing today, but they're more fragmented: you might have 30,000 nodes over here, 50,000 over there, maybe a hundred thousand somewhere else. A lot of these botnets are a little smaller. But now, if you can get 10,000 routers in one particular botnet, each with the capacity to do one gig — we're talking massive amounts of traffic. That's the evolution we're seeing, and I think the advent of 5G across the world is going to make this exponentially worse in terms of what botnets are capable of launching.
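(The arithmetic behind that point is worth seeing once. The per-device uplink figures below are rough assumptions for illustration; real numbers vary widely by device and network.)

```python
# Worked version of the arithmetic above: why 10,000 compromised routers
# are scarier than 100,000 IoT gadgets. Per-device uplinks are assumptions.
iot_botnet    = {"nodes": 100_000, "uplink_gbps": 0.005}  # ~5 Mbps each
router_botnet = {"nodes": 10_000,  "uplink_gbps": 1.0}    # 1 Gbps each

for name, bot in [("IoT", iot_botnet), ("server-class", router_botnet)]:
    total_tbps = bot["nodes"] * bot["uplink_gbps"] / 1000
    print(f"{name}: {bot['nodes']:,} nodes -> {total_tbps:.1f} Tbps theoretical peak")
```

Ten thousand one-gig routers out-muscle a hundred thousand consumer gadgets by an order of magnitude — the whole "server-class" story in one line.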
And for me, that's pretty much all I get because I love tech. And if you see this now I've got four monitors, plus my laptop and all kinds of stuff here on my desktop. But when I get a new device on Christmas morning, it's not my first instinct or gut reaction to get online and change my default using passwords, or to make sure it's patched or to update it. Now, sometimes those are being forced now, which is awesome. We need to do more of that, but it's not your first reaction, but we know that as soon as an IOT device goes online, you have about five minutes at most before you start getting inundated with, through forcing attempts. And so, yeah, the, the global work from home has really changed how we need to think about security and how organizations and enterprises really should consider how they secure those at-home devices versus being inside the enterprise. >>A lot to think about Richard. And if you're not thinking about it first on Christmas day, then I certainly am not thinking about it. Thanks so much for talking to us about what you guys uncovered with respect to that armies. A lot of interesting evolution there, and the fact that there's no red tape. Wow. What an environment in a moment, Richard and I are going to be back to talk about the vertical industries where attackers zeroed in for DDoSs attacks. You're watching the cube, the leader in tech enterprise coverage.

Published Date : Mar 22 2022


Bob Thome, Tim Chien & Subban Raghunathan, Oracle


 

>>Earlier this week, Oracle announced the new X9M generation of Exadata platforms for its Cloud@Customer and legacy on-prem deployments, and the company made some enhancements to its Zero Data Loss Recovery Appliance, ZDLRA — something we've covered quite often since its announcement. We had a video exclusive with Juan Loaiza, the executive vice president of mission-critical database technologies at Oracle; we did that on the day of the announcement and got his take on it. And I asked Oracle, hey, can we get some subject matter experts, some technical gurus, to dig deeper and get more details on the architecture? Because we want to better understand some of the performance claims that Oracle is making. With me today are Subban, who's the vice president of product management for the Exadata database machine; Bob Thome, the vice president of product management for Exadata Cloud@Customer; and Tim Chien, the senior director of product management for ZDLRA. Folks, welcome to this power panel, and welcome to theCUBE. >> Thank you, Dave. >> Subban, can we start with you? Juan and I talked about the X9M that Oracle just launched a couple of days ago. Maybe you could give us a recap of what we need to know — I'm especially interested in the big numbers once more, so we can understand the claims you're making around this announcement, and then we can dig into that. >> Absolutely, very excited to do that. In a nutshell, we have the world's fastest database machine for both OLTP and analytics, and we made it even faster. Not just simply faster: for OLTP we made it 70% faster, and we took the OLTP IOPS all the way up to 27.6 million read IOPS — and mind you, this is being measured at the SQL layer. For analytics we did pretty much the same thing, an 87% increase, and we broke through the one terabyte per second barrier. Absolutely phenomenal stuff. Now, while all those numbers by themselves are fascinating, here's something even more fascinating to my mind: 80% of the product development work for Exadata X9M was done during COVID, which means all of us were remote. What that meant was extreme levels of teamwork between the development teams, manufacturing teams, procurement teams, software teams — the works. Everybody coming together as one to deliver this product. Kudos to everybody who touched it in one way or another; I'm extremely proud of it. >> Thank you for making that point. And I'm laughing because it's the same boast for mission-critical OLTP performance — you had the world record, and now you're adding on top of that. Okay, but there are customers that still want to build their own — they're trying to build their own Exadata. What they do is buy their own servers, storage, and networking components. When I talk to them, they'll say they want to maintain their independence, they don't want to get locked into Oracle. Or maybe they believe it's cheaper — maybe they're focused on the CapEx, the CFO has them in a headlock. Or sometimes they say they want a platform that can support horizontal apps, maybe non-Oracle stuff. Or maybe they're just trying to preserve their jobs, I don't know. But why shouldn't these customers roll their own, and why can't they get similar results using standard off-the-shelf technologies? >> Great question.
It's going to require a somewhat involved answer, but let's just look at the statistics to begin with. Oracle's Exadata was first productized and delivered to the market in 2008, and at that point in time we already had industry leadership across a number of metrics. Today we are at the 11th generation of Exadata, and we are way ahead of the competition — like 50X faster, 100X faster. We're talking orders of magnitude. How did we achieve this? I think the answer to your question lies in what we're doing at the engineering level to make these magical numbers come to fruition. First, it starts with the hardware. Oracle has its own hardware server design team, where we embed capabilities for performance, reliability, security, and scalability down at the hardware level. The database, which is a user-level process, talks to the hardware directly. The only reason we can do this is because we own the source code for pretty much everything in between, starting with the database, going into the operating system, the hypervisor and, as I just mentioned, the hardware — and we also work at the firmware level across this entire stack. The key to making Exadata the best Oracle Database machine lies in that engineering, where we take the operating system and make it fit like tongue and groove with the hardware, and then do the same with the database. And because we have deep insight into the workloads running at any given point in time on the compute side of Exadata, we can do micromanagement at the software layers of how traffic flows through the entire system — things like prioritizing OLTP transactions on a specific queue on the RDMA over Converged Ethernet fabric, doing Smart Scan, using the compute elements in the storage tier to offload SQL processing, taking the columnar formats of data and extending them into flash. A whole bunch of things we've been doing over the last 12 years, because we have this deep engineering. You can try to cobble together a system that sort of looks like an Exadata — it's got a network, a storage tier, a compute tier — but you're not going to achieve anything close to what we're doing. And the biggest deal in my mind, apart from the performance and the high availability, is the security, because we test the stack top to bottom. When you're trying to build your own best-of-breed kind of stuff, you can't do that, because you depend on the server vendor to do something, HP to do something else, Dell to do something else, a Brocade switch to do something else — it's not possible. We can do this. We've done it, we've proven it, we've delivered it for over a decade. End of story, as far as I'm concerned. >> I mean, that's fine — remember when Oracle purchased Sun? I know a big part of that purchase was to get Java, but I remember saying at the time it was a brilliant acquisition. I was looking at it from a financial standpoint — I think you paid seven and a half billion for it — and when Safra was able to get back to sort of pre-acquisition margins, you got the Oracle uplift in terms of revenue multiples. From that standpoint it was a no-brainer. But the other thing is, back in the Unix days, HP plus Oracle was like the standard in terms of all the benchmarks and performance. Even then — and I'm sure you worked closely with HP — it was on you to get the stuff to work together, to make sure it would recover according to your standards; you couldn't actually do the deep engineering you just described. Now, earlier, Subban, you stated that with X9M you get OLTP read IOPS at 27.6 million, and 19 microseconds latency — pretty impressive numbers, and you kind of just breezed past them. But how are you measuring these numbers versus other performance claims from your competitors? Are you stacking the deck? Can you share that with us? >> Sure. So as I hinted, we measure at the SQL layer. This is not some kind of IOmeter run or a micro-benchmark looking at just a flash subsystem or just a persistent memory subsystem. This is measured at the compute tier, running an entire set of transactions: how many times can you finish that, right? That's how it's measured. Now, most people cannot measure it like that, because of the number of disparate vendors involved in their particular solution. You've got servers from vendor A, storage from vendor B, the storage network from vendor C, the operating system from vendor D. How do you tune all of these things on your own? You cannot, right? There are only certain bells and whistles and knobs available for you to tune. So that's how we measure: the 19 microseconds is at the SQL layer. What that means is that a real-world customer running a real-world workload is guaranteed to get that kind of latency. None of the other suppliers can make that claim. This is real-world capability. Now, let's take a look at that 19 microseconds. We boast that we're an order of magnitude, two orders of magnitude, faster than everybody else when it comes to latency. And one thing about this: while it looks like magic, the magic is really grounded in deep engineering and deep physics. The way we implement it is, first of all, we put the persistent memory tier in the storage, so it's shared across all of the database instances running on the compute tier. Then we have this ultra-fast, hundred-gigabit Ethernet, RDMA over Converged Ethernet fabric. With this, at the hardware level, between two network interface cards resident on that fabric, we create paths that enable high-priority, low-latency communication between any two endpoints on the fabric. And given that we implemented persistent memory in the storage tier — sitting on the memory bus of the processor there — we can perform a remote direct memory access operation from the compute tier to memory address spaces in the persistent memory of the storage tier, without the involvement of the operating system on either end. No context switches, no OS processing latencies, none of that. It's hardware-to-hardware communication, with security built in, which is immutable — all of this is built into the hardware itself, with no software involved. You perform a read, and the data comes back in 19 microseconds. Boom. End of story.
>>Yeah. So that's key to my next topic, which is security — because if you're not getting the OS involved... very often, if I as an attacker can get access to the OS, I can get privileges and really take advantage of that. But before I go there: Oracle talks about how a huge percentage of the Fortune 100 companies run their mission-critical workloads on Exadata. So it's important not only to those companies but to the consumers they serve — me, right? I'm going to my ATM or swiping my credit card. And Juan mentioned that you use a layered security model. I sort of inferred, anyway, that having this stuff in hardware, without requiring access to the OS, actually contributes to better security. Can you describe this in a bit more detail? >> So yeah, what Juan was talking about is this layered security — said differently, it is defense in depth, and that's been our mantra and philosophy for several years now. What does that entail? As I mentioned earlier, we design our own servers. We do this for performance; we also do it for security. We've got a number of features built into the hardware to ensure we have immutable areas of firmware. Let me give you an example: if you take an Oracle x86 server — just a standard x86 server, not even in the form of an Exadata system — even with superuser privileges on top of the operating system, you cannot modify the BIOS as a user or superuser. That has to be done through the system management network. So we put gates and protection modes right into the hardware itself. Now, of course, the security of that hardware goes all the way back to the fact that we own the design. We have a global supply chain, but we make sure that supply chain is protected and monitored, and we also protect the last mile of the supply chain: we can detect if there's been any tampering with the firmware while the hardware shipped from our factory to the customer's dock. We know something's been tampered with the moment it comes up at the customer site. So that's the hardware. Now take the operating system: Oracle Linux. We own the entire source code, and what ships on Exadata is the Unbreakable Enterprise Kernel. The kernel and the operating system itself have been slimmed down, eliminating all unnecessary packages from the bundle we deliver on Exadata. Let's put some real numbers on that: a standard Linux distribution has about 5,000-plus packages — print servers, web servers, a whole bunch of stuff you're never going to use on Exadata. Why ship those? The moment you ship more than you need, you increase the attack surface. On Exadata there are only 701 packages. Compare that: 5,413 packages on a standard Linux, 701 on Exadata. We reduced the attack surface. Another aspect: we do our own STIG and SCAP benchmarking. Take a standard Linux and run that SCAP benchmark and you'll get about a 30% pass score; on Exadata it's 90-plus percent — which means we're doing the heavy lifting of the operating system security checks before it even leaves the factory. And then you layer on the Oracle Database: transparent data encryption and all kinds of protection capabilities — data redaction, per-user authentication, logging and tracking, being able to determine who accessed the system and when. So it's basically: defend at every single layer. And then, of course, there's the customer's responsibility — it doesn't stop at getting this highly secure environment. They have to do their own job of securing their network perimeter, controlling who has physical access to the system, and everything else. It's a joint responsibility. As you mentioned, you as a consumer going to an ATM and withdrawing money — you withdraw 200, you don't want to see 5,000 deducted from your account. All of this is made possible with Exadata and the amount of security focus we have on the system.
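(The package-count comparison suggests an audit any team can run on its own builds. Here is a minimal sketch, assuming an RPM-based host such as Oracle Linux; the baseline file name is hypothetical — you would maintain your own approved list — and an SCAP scanner like OpenSCAP would cover the STIG side.)

```python
# Minimal attack-surface audit: count installed packages and flag anything
# outside an approved baseline. Assumes an RPM-based host; the baseline
# file is a hypothetical name you'd maintain yourself.
import subprocess

def installed_packages():
    out = subprocess.run(
        ["rpm", "-qa", "--qf", "%{NAME}\n"],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())

def audit(baseline_path="approved-packages.txt"):
    with open(baseline_path) as f:
        approved = {line.strip() for line in f if line.strip()}
    installed = installed_packages()
    extras = installed - approved
    print(f"{len(installed)} installed, {len(extras)} outside the baseline")
    for name in sorted(extras)[:20]:
        print("  unexpected:", name)

if __name__ == "__main__":
    audit()
```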
These industries move very, very slowly, and customers are content to, and in many cases required to, retain complete control of their data. They'll be running with that data under their control, in their data centers, for the foreseeable future. >> I've got another question, if I can take a little tangent, because the other thing I hear from the on-prem, don't-own-a-cloud folks is that it's actually cheaper to run on-prem, because they're getting better at automation, et cetera. And you get the exact opposite from the cloud guys; they roll their eyes: are you kidding me? It's way cheaper to run it in the cloud. Which is more cost-effective? Is it one of those "it depends," Bob? >> You know, the great thing about numbers is that you can twist them to show anything you want. Have spreadsheet, I can sell you on anything. I think there are customers who look at it and say on-premises is cheaper, and there are customers who look at it and say the cloud is cheaper. There are a lot of ways you may incur savings in the cloud, and a lot of it has to do with cloud economics: the ability to pay for what you're using, and only what you're using. If you size something for your peak workload on-prem, you probably put a little bit of a buffer in it, right? And if you size everything for that, you're going to find that you're paying that much all the time; you're paying for peak workload all the time. With the cloud, of course, we support scaling up and scaling down. You're paying for what you use, and you can scale up and scale down. That's where the big savings is. Now, there are also additional savings that come from the cloud vendor saying, we manage that infrastructure for you, you no longer have to worry about it. We have a lot of automation for things you probably used to spend hours and hours, or years, scripting yourselves, and we have UIs that make ad hoc tasks as simple as point and click, which eliminates errors. It's often difficult to put a cost on those things, but I think the more enlightened customers can put a cost on all of them. So the people saying it's cheaper to run on-prem either have a very stable workload that never changes, in an environment that never changes, or, more likely, they just haven't thought through all the hidden costs out there.
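Bob's sizing argument lends itself to a quick back-of-envelope check: an on-prem system sized for its busiest hour is paid for around the clock, while pay-per-use billing follows the actual demand curve. A minimal sketch in Python; every number in it is a hypothetical assumption, not an Oracle figure.

```python
# Illustrative comparison: paying for peak capacity 24x7 (classic
# on-prem sizing) versus paying only for what you use (cloud scaling).
# The workload shape and unit cost below are hypothetical assumptions.

HOURS_PER_MONTH = 24 * 30
PEAK_UNITS = 100                 # capacity sized for the busiest hour
UNIT_COST_PER_HOUR = 0.10        # hypothetical cost per capacity unit

# Hypothetical usage: 8 busy hours/day at peak, 16 quiet hours at 20%.
busy_unit_hours = 8 * 30 * PEAK_UNITS
quiet_unit_hours = 16 * 30 * PEAK_UNITS * 0.20
used_unit_hours = busy_unit_hours + quiet_unit_hours

sized_for_peak = HOURS_PER_MONTH * PEAK_UNITS * UNIT_COST_PER_HOUR
pay_per_use = used_unit_hours * UNIT_COST_PER_HOUR

print(f"Sized-for-peak cost: ${sized_for_peak:,.0f}/month")
print(f"Pay-per-use cost:    ${pay_per_use:,.0f}/month")
```

The spikier the workload, the wider the gap, which is the point about peak buffers; a perfectly flat workload would make the two numbers converge.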
>> All right, thank you for that. By the way, you've got some new features in Cloud@Customer. What are those? Do I have to upgrade to X9M to get them? >> All right. So we're always introducing new features for Cloud@Customer, but two significant things we've rolled out recently are operator access control and elastic storage expansion. As we discussed, many organizations are using Exadata Cloud@Customer because they're attracted to the cloud economics and the operational benefits, but they're required by regulations to retain control and visibility of their data, as well as of any infrastructure that sits inside their data center. With operator access control enabled, cloud operations staff members must request access to a customer system. The customer's IT team grants a designated person specific access, to a specific component, for a specific period of time, with specific privileges, and they can then view and audit the operator's actions in real time. If they see something they don't like, hey, what's this guy doing, it looks like he's stealing my data, boom: they can kill that operator's access, the session, the connections, everything, right away. This gives everyone, especially customers that need to regulate remote access to their infrastructure, the confidence they need to use the Exadata Cloud@Customer service. The other new thing is elastic storage expansion. Customers can add storage servers to their system, either at initial deployment or after the fact, and this provides two important benefits. The first is that they can right-size their configuration: if they need only the minimum compute capacity, they don't need the maximum number of storage servers to get that capacity. They don't have to subscribe to a fixed shape, as we used to offer, with hundreds of unnecessary database cores just to get the storage capacity; they can select a smaller system and then incrementally add storage. The second benefit is key for many customers: if you run out of storage, guess what, you can add more, and when you're out of storage that's really important. Now, to the last part of your question: do you need a new Exadata Cloud@Customer X9M system to get these features? No, they're available for all Gen 2 Exadata Cloud@Customer systems. That's really one of the best things about cloud: the service you subscribe to today just keeps getting better, and unless there's some technical limitation, which is rare, most new features are available even for the oldest Cloud@Customer systems.
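The request-grant-revoke flow Bob describes is essentially a time-boxed, least-privilege grant. Below is a toy model of that control, with invented names throughout; it illustrates the concept only and is not Oracle's actual operator access control interface.

```python
# Toy model of time-boxed operator access: a grant names who, what,
# which privileges, and for how long, and it can be killed instantly.
# Illustrative only, not Oracle's actual operator access control API.
import time

class AccessGrant:
    def __init__(self, operator, component, privileges, duration_s):
        self.operator = operator
        self.component = component
        self.privileges = set(privileges)
        self.expires_at = time.time() + duration_s
        self.revoked = False

    def is_active(self) -> bool:
        return not self.revoked and time.time() < self.expires_at

    def revoke(self):
        # The "boom": the customer kills access, session, connections.
        self.revoked = True

grant = AccessGrant("ops-123", "storage-server-4", {"read-logs"}, 3600)
print(grant.is_active())   # True: inside the approved one-hour window
grant.revoke()
print(grant.is_active())   # False: access is gone immediately
```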
>> Cool. My last question for you, Bob, is another one on security. Obviously, again, we talked to Susan about this; it's a big deal. How can customer data be secure in the cloud if somebody other than your own vetted employees is managing the underlying infrastructure? Is that a concern you hear a lot, and how do you handle it? >> You know, it's only natural, because a lot of these customers have big security teams, and it's their job to be concerned about that kind of stuff. Security, however, is one of the biggest but least appreciated benefits of cloud. Cloud vendors such as Oracle hire the best and brightest security experts to ensure that their clouds are secure, something that only the largest customers can afford to do. If you're a small shop, you're not going to be able to hire that kind of expertise, so you're better off being in the cloud. Customers running in the Oracle cloud can also use Oracle's Data Safe tool, which we provide, and which basically lets you inspect your databases and ensure that everything is locked down and your data is secure. But your question is actually a little bit different: it was about potential internal threats to a company's data, given that the cloud vendor's employees, not the customer's, have access to the infrastructure that sits beneath the databases. And really, the first and most important thing we do to protect customers' data is that we encrypt the database by default. Subin listed a whole laundry list of things, but that's the one thing I want to point out: we encrypt your database. Yes, it sits on our infrastructure, and yes, our operations people can see those data files sitting on the infrastructure, but guess what, they can't see the data. The data is encrypted; all they see is a big encrypted blob, so they can't access the data themselves. And as you'd expect, we have very tight controls over operations access to the infrastructure: staff must securely log in using mechanisms built to prevent unauthorized access, all access is logged, and suspicious activities are investigated. But that still may not be enough for some customers, especially the regulated industries I mentioned earlier, and that's why we offer operator access control. As I mentioned, it gives customers complete control over access to the infrastructure: the when, the what ops can do, and for how long. Customers can monitor in real time, and if they see something they don't like, they stop it immediately. Lastly, I just want to mention Oracle's Data Vault feature. This prevents administrators from accessing data, protecting it from rogue operators and rogue operations, whether from Oracle or from the customer's own IT staff. This database option, Database Vault, is included when running a license-included service on Exadata Cloud@Customer, so you basically get it with the service. >> Got it.
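Bob's "big encrypted blob" point is easy to make concrete: encrypt a block with a key the operator never holds, and the bytes sitting on the infrastructure are opaque. A toy sketch using AES-GCM from the cryptography package; it illustrates encryption at rest in general, not Oracle TDE itself.

```python
# What "all they see is a big encrypted blob" means in practice: anyone
# who can read the files but does not hold the key sees only ciphertext.
# Toy model of encryption at rest, not Oracle's TDE implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # held on the customer side
nonce = os.urandom(12)

block = b"acct=1042 balance=3550.00"        # a pretend database block
blob = AESGCM(key).encrypt(nonce, block, None)

print(blob.hex())                              # opaque bytes on disk
print(AESGCM(key).decrypt(nonce, blob, None))  # readable only with key
```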
>> Bob Thome, thank you so much; we've got a lot to unpack there, but we're going to give you a break now and go to Tim, Tim Chien. Zero Data Loss Recovery Appliance: we always love that name. We think the big guy named it, but nobody will tell us. We've been talking about security, and there's been a lot of news around ransomware attacks, in every industry around the globe. Any knucklehead with a high school diploma can become a ransomware attacker: go on the dark web, get ransomware as a service, put a stick in, take a piece of the vig, and hopefully get arrested. When you think about the database, how do you deal with the ransomware challenge? >> Yeah, Dave, that's an extremely important and timely question, and we're hearing it from our customers. We'll just be talking about HA and backup strategies, and ransomware comes up more and more. The unfortunate thing is that these ransoms actually get paid, in the hope of regaining the ability to access the data. What that tells me is that today's recovery solutions and processes are not sufficient to get these systems back in a reliable and timely manner, so you have to pay the ransom just to have a hope of getting the data back. Now, for databases, this can have a huge impact, because we're talking about transactional workloads: even a compromise of just a few minutes, a blip, can affect hundreds or even thousands of transactions. That can literally represent hundreds of lost orders for a big manufacturing company, or millions of dollars' worth of financial transactions in a bank. That's why protecting databases at the transaction level is especially critical for ransomware, and that's a huge contrast to traditional backup approaches. >> So how do you approach that? What do you do specifically for ransomware protection for the database? >> Yeah, so we have the Zero Data Loss Recovery Appliance, for which we just announced the X9M generation. It is really the only solution in the market that offers that transaction-level protection, allowing all transactions to be recovered with zero RPO. Zero, again. And this is only possible because Oracle has a very innovative and unique technology called real-time redo, which captures all the transactional changes from the databases on the appliance, where they are then stored as well. Moreover, the appliance validates all these backups and redo: you want to make sure that you can recover them after you've sent them, right? So it's not just a file-level integrity check on a file system; it's actual database-level validation that the Oracle blocks and the redo I mentioned can be restored and recovered as a usable database. Any kind of malicious attack on or modification of that backup data, in transit or even stored on the appliance, would be immediately detected and reported by that validation. This allows administrators to take action, such as removing that system from the network, and it's a huge leap in terms of what customers can get today. The last thing I want to point out is what we call our cyber vault deployment. A lot of customers in the industry are creating air-gapped environments: a separate location where their backup copies are stored, physically network-separated from the production systems, which prevents ransomware from infiltrating that last good copy of backups. You can deploy Recovery Appliance in a cyber vault and have it synchronize at random times, when the network is available, to keep it in sync. So that, combined with our transaction-level, zero-data-loss validation, is a nice package, and really a game changer in protecting and recovering your databases from modern-day cyber threats.
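Tim's point that a blip of a few minutes can cost thousands of transactions is just arithmetic: the exposure is roughly the transaction rate times the time back to the last recoverable point. A small sketch below; the transaction rate is a hypothetical assumption.

```python
# Quantifying recovery point objective (RPO): everything committed
# since the last recoverable point is at risk. Rates are hypothetical.

TPS = 50   # assumed transactions per second for a busy database

def transactions_at_risk(rpo_seconds: float) -> int:
    """Transactions committed since the last recoverable point."""
    return int(TPS * rpo_seconds)

print("Nightly backups (RPO ~24h):", transactions_at_risk(24 * 3600))
print("Hourly backups  (RPO ~1h): ", transactions_at_risk(3600))
print("Real-time redo  (RPO ~0):  ", transactions_at_risk(0))
```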
>> Okay, great. Thank you for clarifying that air gap piece, because there was some confusion about that. Every data protection and backup company that I know of has a ransomware solution; it's like the hottest topic going. You've got newer players in recovery and backup like Rubrik and Cohesity, and they've raised a ton of dough. Dell has solutions, HPE just acquired Zerto to deal with this problem, IBM has stuff, Veeam seems to be doing pretty well, and Veritas has a range of recovery solutions. They're all out there. What's your take on these players and their strategy, and how do you differentiate? >> Yeah, it's a pretty crowded market, like you said. I think the first thing you really have to keep in mind and understand is that these new, up-and-coming vendors started in the copy data management space, what we call CDM, and they're not traditional backup and recovery designs. The purpose of CDM products is to provide fast point-in-time copies for test/dev, non-production use, and that's a valid problem that needs a solution. So you create a one-time copy, then you create snapshots after applying incremental changes to that copy, and a snapshot can be quickly restored and presented as if it were a fully populated file system. This is all done through the underlying storage block pointers. All of this sounds really cool and modern, right? It's new and up-and-coming, and lots of people in the market are doing it. Well, it's really not that modern, because storage snapshot technologies have been around for years. What these new vendors have been doing is essentially repackaging old technology for backup and recovery use cases, with an easier-to-use automation interface wrapped around it. >> Yeah. So you mentioned copy data management: Actifio, last year. They started that whole space, from what I recall, and at one point they were valued at more than a billion dollars. They were acquired by Google, and as I say, they kind of created the category. So fast-forward, nine months, a year, whatever it's been: do you see that Google Actifio offering in customer engagements? Is that something you run into? >> We really don't. It was popular and well known some years ago, but we don't hear about it anymore. After the acquisition, if you look at all the collateral and the marketing, they are really a CDM and backup solution exclusively for Google Cloud use cases; they're not being positioned for on-premises or any use cases outside of Google Cloud. That leaves 90-plus percent of the market not addressable by Actifio, so we really don't see them in any of our engagements at this time. >> I want to come back and push a little bit on some of the tech that you said is really not that modern. They certainly position it as modern, and a lot of the engineers building these new backup and recovery capabilities came from the hyperscalers. Whether it's copy data management or the quote-unquote modern backup and recovery, it's kind of a data management, nice all-in-one solution that seems pretty compelling. How does Recovery Appliance specifically stack up? A lot of people think it's a niche product for really high-end use cases. Is that fair? How do you see it, Tim? >> Yeah. So I think it's so important to understand, again, that the fundamental use of this technology is to create data copies for test/dev, and that's really different from operational backup and recovery, in which you must have the ability to do full and point-in-time recovery in any production outage or DR situation. And then, more importantly, after you recover and your applications are back in business, performance must continue to meet service levels as before.
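The pointer-based snapshots Tim describes can be modeled in a few lines: a snapshot copies the table of block pointers, not the blocks, which is why creating one is instant, and why anything presented from it still lives on the backup storage. A toy model, not any specific vendor's implementation.

```python
# Toy model of pointer-based snapshots used by CDM products: a snapshot
# duplicates the block-pointer table, not the data blocks themselves,
# so creating one is instant. Illustrative only.

volume = {0: "block-A", 1: "block-B", 2: "block-C"}   # block id -> data

def snapshot(vol: dict) -> dict:
    """An 'instant' snapshot is just a copy of the pointer table."""
    return dict(vol)

snap = snapshot(volume)
volume[1] = "block-B-v2"     # production writes land in new blocks

print("production:", volume)
print("snapshot:  ", snap)   # still references the old blocks
```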
And when you look at a CDM product and you restore a snapshot, what happens when the application is brought up on that restored snapshot? Your production application is now running on read-writable snapshots sitting on backup storage. Remember, they don't restore all the data back to production-level storage; they restore it as a snapshot onto their own storage. And so you have a huge difference in performance when running applications on that instantly recovered, if you will, database. To meet true operational requirements, you have to fully restore the files to production storage, period. Recovery Appliance was first and foremost designed to accomplish this: it's an operational recovery solution. We accomplish that, as I mentioned, with real-time transaction protection, and we have incremental-forever backup strategies, so you're sending just the changes every day, and we can create virtual full backups that are quickly, and fully, restored at 24 terabytes an hour. We validate and document that performance very clearly on our website, and of course we provide continuous recovery validation for all the backups stored on the system. So it's a very nice, complete solution, and it scales to meet your demands: hundreds, even thousands, of databases. These CDM products might seem great, and they work well for a few databases, but when you put a real enterprise load on them, hundreds of databases, we've seen plenty of cases where they just buckle; they can't handle that kind of load at that scale. And this matters because customers read the marketing and the collateral: hey, instant recovery, why wouldn't I want that? Well, it's not as good as it sounds. So we have to educate them about exactly what that means for database backup and recovery use cases, which are not handled well by those products. >> I know I'm way over; I had a lot of questions on this announcement, and I was going to let you go, Tim, but you just mentioned something that gives me one more question, if I may. You talked about supporting hundreds, even thousands, of databases, and petabytes. Do you have real-world use cases that actually leverage the appliance in these types of environments? Where does it really shine? >> Yeah, let me give you two quick ones. We have a company, Energy Transfer, the major natural gas and pipeline operator in the U.S., so a big part of our country's critical infrastructure services. We know ransomware and these kinds of threats are very much viable; we saw the Colonial Pipeline incident, attacks on critical services. Energy Transfer was running lots of databases, and their legacy backup environment just couldn't keep up with their enterprise needs: they had backups taking well over a day and restores taking several hours, so they had problems and couldn't meet their SLAs. They moved to the Recovery Appliance, and now they're seeing backups complete, with that incremental forever, in just 15 minutes, which is like a 48-times improvement in backup time, and they're seeing restores complete in about 30 minutes versus several hours. So it's a huge difference for them, and they also get that nice recovery validation and monitoring by the system: they know the health of their enterprise at their fingertips.
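A toy sketch of the incremental-forever idea Tim outlined: after one full backup, only changed blocks are shipped each day, a virtual full for any day is synthesized from the base plus incrementals, and the synthesized image is validated as restorable. The data layout here is invented for illustration and is not the appliance's actual format.

```python
# Sketch of incremental-forever backups with synthesized virtual fulls.
# Invented toy format; the real appliance validates at the Oracle block
# and redo level, which this stand-in checksum only gestures at.
import hashlib

base = {0: b"jan", 1: b"feb", 2: b"mar"}                # level-0 backup
incrementals = [{1: b"feb2"}, {2: b"mar2", 3: b"apr"}]  # daily changes

def virtual_full(day: int) -> dict:
    """Merge the base image with all incrementals up to `day`."""
    image = dict(base)
    for inc in incrementals[:day]:
        image.update(inc)
    return image

def checksum(image: dict) -> str:
    """Stand-in for validating that the image is restorable."""
    h = hashlib.sha256()
    for block_id in sorted(image):
        h.update(image[block_id])
    return h.hexdigest()[:12]

day2 = virtual_full(2)
print("virtual full, day 2:", day2)
print("validation checksum:", checksum(day2))
```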
The second quick one is a global financial services customer. They have over 10,000 databases globally, and they really couldn't find a solution other than a throw-more-hardware kind of approach to fix their backups, which didn't fix the failures and the issues. So they moved to the Recovery Appliance, and they saw their failed backup rates go down dramatically. They saw four times better backup and restore performance, and they have a very nice centralized way to monitor and manage the system: a real-time view, if you will, of data protection health for their entire environment. They can show this to executive management and auditing teams, which is great for compliance reporting. And they now have north of 50-plus Recovery Appliances deployed across their global enterprise. >> Love it. Thank you for that. Guys, great power panel. We have a lot of Oracle customers in our community, and the best way to help them is for me to ask you a bunch of questions and get the experts to answer. So I wonder if you could bring us home; maybe just give us the top takeaways you want customers, and our audience, to remember from this announcement. >> Sure. I want to pick up from where Tim left off and talk about a real customer use case, hot off the press. One of the largest banks in the United States decided that they needed to do a performance software update on 3,000 of their database instances, spanning 68 Exadata clusters. A massive undertaking, right? They finished the entire task in three hours. Three hours to update 3,000 databases across 68 Exadata clusters. Talk about availability: try doing this on any other infrastructure; no one is going to be able to achieve it. So that's on the availability front. We are engineering in all the aspects of database management, performance, security, and availability, with redundancy at every single level. It's all part of the design philosophy and how we're engineering this product, and as far as we're concerned, the journey is forever. We're just going to continue down this path of increasing performance and increasing the security of the infrastructure as well as of the Oracle Database, and keep going. While these have been great results that we've delivered with Exadata X9M, the journey is on. And to our customers: the biggest advantage you're going to get from the kind of performance metrics we're driving with Exadata is consolidation. Consolidate more; move more database instances onto the Exadata platform, gain the benefits of that consolidation, reduce your operational expenses, your capital expenses, and your management expenses. Bring all of those down, and your total cost of ownership is guaranteed to go down. Those are my key takeaways, Dave. >> Guys, you've been really generous with your time. Subin, Bob, Tim, I appreciate you taking my questions and your willingness to go toe-to-toe. Really, thanks for your time. >> You're welcome, David. Thank you. >> Thank you.
>> And thank you for watching this video exclusive from theCUBE. This is Dave Vellante, and we'll see you next time. Be well.

Published Date : Oct 4 2021


Jerome Lecat and Chris Tinker | CUBE Conversation 2021


 

>> And welcome to this Cube Conversation. I'm John Furrier, host of theCUBE, here in Palo Alto, California. We've got two great remote guests to talk about some big news hitting with Scality and Hewlett Packard Enterprise: Jerome Lecat, CEO of Scality, and Chris Tinker, Distinguished Technologist from HPE, Hewlett Packard Enterprise. Jerome, Chris, great to see you both. Cube alumni from the original gangster days, as we say, back when we started almost 11 years ago. Great to see you both. >> It's great to be back. >> Good to see you, John. >> So, really compelling news around this next-generation storage, cloud-native solution. It's really an impact on the next gen, I call it next-gen DevOps meets the modern application world, something we've been covering heavily. There's some big news here around Scality and HPE offering a pretty amazing product. You guys introduced essentially the next-gen piece of it, ARTESCA, which we'll get into in a second. This is a game-changing announcement: you guys announced an evolution continuing, I think it's more of a revolution, but storage is kind of an abstraction layer evolving to this app-centric world. So talk about this environment we're in, and we'll get to the announcement, which is object store for modern workloads, but this whole shift is happening, Jerome. This is a game changer for storage, and customers are going to be deploying workloads. >> Yeah. Scality, really, I mean, I personally really started working on Scality more than 10 years ago, close to 15 now, and if we think about it, the cloud has really revolutionized IT, and within the cloud we really see layers and layers of technology. It all started around 2006 with Amazon and Google finding ways to do, initially, what was consumer IT at very large scale, at very low cost, with incredible reliability, and then slowly it crept into the enterprise. At the very beginning, I would say that everyone was kind of wizards, trying things and coupling technologies together, and to some degree we were some of the first wizards doing this. But we're now close to 15 years later, and there's a lot of knowledge, a lot of experience, a lot of tools, and this is really a new generation. Call it cloud native, call it next gen, whatever, but there is now enough experience in the world, both at the development level and at the infrastructure level, to deliver truly distributed, automated systems that run on industry-standard servers. Obviously good-quality servers deliver a better service than others, but there is now enough knowledge for this to truly go to scale, and call this cloud, or call this cloud native. Really the core concept here is to deliver scalable IT at very low cost, with a very high level of reliability, all based on software. We've participated in this evolution, but we feel that the breadth of what's coming is at a new level, and it was time for us to think, develop, and launch a new product that's specifically adapted to that. And Chris, I'll let you comment on this, because customers, or some of them, you can add the customer view to that. >> Well, you know, you're right. I've been, like you, in this industry for, well, a long time: 20, 21 years at HPE in engineering. And look at how the landscape has changed in how we're doing scale-out, software-defined storage for particular workloads, and where the catalyst has evolved.
Take analytics: what was once done only in the three-letter acronyms and in massively scale-out POSIX-namespace file systems, parallel file systems, has now encroached into the enterprise world, where the enterprise needed a way to simplify operations. How do I bring about an application that can run in the public cloud, or on-premises, or hybrid? How do I take a workload off my stack in a way that aligns the actual cost to the analytics I'm going to be doing, the workload I'm going to be doing, bridge those gaps, and spin this up while simplifying operations? And if you're familiar with these parallel file systems, which, by the way, we actually have in our portfolio, I do engineer those, they have their own unique challenges. But in the world of enterprise, where customers are looking to simplify operations and then take advantage of new application and analytic workloads, whether it be Spark or whatever it might be, right, if I want to spin up MongoDB, or maybe an Elastic search capability, how do I take those technologies and embrace a modern scale-out storage stack that does it without breaking the bank, but also provides simple operations? And that's why we looked to object storage capabilities: because it brings us this massive parallelization. Thank you. >> Well, before we get into the product, I want to just touch on one thing, Jerome, that you mentioned, and Chris, you brought up: the DevOps piece. Next gen, next level, whatever term you use, it is cloud native. Cloud native has proven that DevOps, infrastructure as code, is not only legit, it's being operationalized in all enterprises. Add security in there and you have DevSecOps. This is the reality, and hybrid cloud in particular has pretty much been the consensus: that is the standard, or de facto standard, whatever you want to call it. That's happening. Multi-cloud is on the horizon. So these new workloads have these new architectural changes: cloud, on-premises, and edge. This is the number one story, and the number one challenge all enterprises are now working on: how do I build the architecture for the cloud, on-premises, and edge? This is forcing the DevOps teams to flex and build new apps. Can you guys talk about that particular trend, and is that relevant here? >> Yeah. I'd talk about it as really storage anywhere and cloud anywhere, and really the key concept is edge to core to cloud. I mean, we all understand now that the edge will host a lot of data, and the edge is many different things. It's obviously a smartphone, whatever that is, but it's also factories, it's production, it's moving machinery, trains, planes, satellites. That's all the edge. Cars, obviously. And a lot of data will be both produced and processed there. But from the edge, you will want to be able to send that data for analysis, for backup, for logging, to a core. And that core could be regional, maybe not one core for the whole planet, but maybe one per region, per state in the US. And then from there, you will also want to push some of the data to the cloud. One of the things that we see more and more is that the DR data center, the disaster recovery, is not another physical data center; it's actually the cloud, and that's a very efficient, very cost-efficient infrastructure.
So really, it's changing the paradigm of how you think about storage, because you need to integrate these three layers in a consistent approach, especially around the topic of security. You want the data to be secure all along the way, and the data is not just data: it's the data plus who can access it, who can modify it, and what conditions allow modification or automatic erasure. In some cases it's super important that data be automatically erased after 10 years, and all of this needs to be enforced from edge to core to cloud. So that's one of the aspects. Another aspect that resonates for me with what you said is a word you didn't say, but it's actually crucial to this whole revolution: Kubernetes. Kubernetes is now a mature technology, and it's just the next level of automated operations for distributed systems, which we didn't have five or 10 years ago, and that is so powerful that it's going to allow application developers to develop much faster systems that can be distributed, again, edge to core to cloud, because it's an underlying technology that spans the three layers. >> Chris, your thoughts. Hybrid cloud: I've been having conversations with the HPE folks for, gosh, years and years on hybrid cloud. It's now here. >> Well, you know, and it's exciting. Whether it be enterprise virtualization, that scale-out, general-purpose virtualization workload, or analytic workloads, we know data protection is paramount to all of this, and orchestration is paramount. If you look at DevSecOps, absolutely: securing the actual data, the digital asset, is paramount. And if you look at how we do this, look at the investments we're making, and at the collaborative platform development that goes into our partnership with Scality: we're providing an integral aspect of everything we do. Whether we're bringing in Ezmeral, which is our software-defined orchestration, look at its control plane controlling Kubernetes, being able to actually control the Kubernetes clusters and the backing store for all the analytics we just talked about. Or whether it be a web-scale workload that traditionally used a POSIX namespace and has now been modernized to take advantage of newer technologies, running on NVMe burst buffers, or 100-gig networks, with Slingshot networks at 200 and 400 gigabit, looking at how we actually get the analytics, the workload, to the CPU and have it attached to the data at rest. Where is the data? How do we land the data, and how do we align, essentially, the locality of the actual asset to the compute? This is where we can leverage, whether it be Azure or Google or name your favorite hyperscaler, those technologies, leveraging the actual persistent store. And this is where Scality, with its object store capability, has been an industry trendsetter, setting the landscape of how to provide an object store on-premises and hybrid cloud, running into public cloud, but able to facilitate data mobility and tie it back to an application.
And this is where a lot of things have changed in the world of analytics: the newer technologies coming onto the market have taken advantage of this particular protocol, S3, so they can do web-scale, massively parallel, concurrent workloads. >> You know what, let's get into the announcement. I love cool and relevant products, and I think this hits the mark. Scality, you guys have ARTESCA, which was just announced, and obviously we reported on it: a lightweight, true enterprise-grade object store software for Kubernetes. This is the announcement. Jerome, tell us about it. What's the big deal? Cool and relevant? Come on, this is cool. All right, tell us. >> I'm super excited. I'm not sure you can see it on screen, but I'm super, super excited. You know, we introduced the RING 11 years ago, and this is our biggest announcement of the past 11 years, so yes, do pay attention. After looking at all these trends and understanding where we see the future going, we decided that it was time to start from a blank page: there's not one line of code that's the same as the previous-generation product. They will both coexist, they both have space in the market, and ARTESCA was specifically designed for this cloud-native era. What we see is that people want something lightweight, especially because it has to go to the edge. They still want the enterprise grade that Scality is known for, and it has to be modern. What we really mean by modern is that we see object storage now becoming the primary storage for more and more applications, and so we have to be able to deliver the performance that primary storage expects. This idea of Scality serving as primary storage is actually not completely new: when we launched the RING 10 years ago, the first application we supported was consumer email, for which we were, and still are today, the primary store. So we know what it is to be the primary store; we know what level of reliability you need to hit; we know what latency means, and that latency is different from throughput: you really need to optimize both. And I think that still today we're the only object storage company that protects data through both replication and erasure coding, because we understand that for some data replication is faster, while erasure coding is better for larger files, where raw latency matters less. So we've been bringing all that experience, but really rethinking the product for this new generation that is here now. And so we're truly excited. Let me say a little bit more about the product. It's software, Scality is a software company, and that's why we love to partner with HPE, who produces amazing servers. You know, for the record and for history, the very first deployment of Scality in 2010 was on HP servers, so this is a long love story here. And to come back to ARTESCA: it's lightweight in the sense that it's easy to use. We can start small, from just one server or one VM instance, I mean really small, and grow infinitely. The fact that we can start small doesn't mean we limited the technology; you can go from one to many. And it's containerized, completely Kubernetes-compatible, Kubernetes-orchestrated, and it will deploy on many Kubernetes distributions: we're talking obviously with Ezmeral, we're also talking with Tanzu and the other Kubernetes distributions, and it will also be able to run in the cloud.
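Since the product is described as Kubernetes-orchestrated, its services would surface as ordinary pods that standard tooling can inspect. A hypothetical check using the Kubernetes Python client follows; the "artesca" namespace is an assumed name for illustration, not a documented default.

```python
# Hypothetical check of a Kubernetes-orchestrated object store: list
# the pods backing the service with the standard Kubernetes client.
# The "artesca" namespace is an assumption made for this sketch.
from kubernetes import client, config

config.load_kube_config()        # uses your current kubeconfig context
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="artesca").items:
    print(pod.metadata.name, pod.status.phase)
```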
Now, I'm not sure there will be many true production deployments of ARTESCA in the cloud, because you already have really good object storage from the cloud providers, but when you're developing something and you want to test it, just doing it in the cloud is very practical, so you'll be able to deploy ARTESCA on cloud Kubernetes distributions. And it's modern object storage in the sense that it's application-centric. A lot of our work is actually validating that our storage is fit for each specific application, making sure we understand the requirements of the application so that we can guide our customers on how to deploy. It's really designed to be the primary storage for these new workloads. >> The big part of the news is your relationship with Hewlett Packard Enterprise: some exclusivity here as part of this announcement. You mentioned the relationship goes back many, many years; we've covered your relationship in the past. Chris, also, you know, we cover HPE like a blanket. This is big news for HPE as well. What is the relationship? Talk about this exclusivity. Could you share about the partnership and the exclusivity piece? >> Well, the partnership expands into the pan-HPE portfolio. Look, we made a massive investment in edge and IoT devices. So how do we align the cost to the demand? Our customers come to us wanting to look at, think about what we're doing with GreenLake, a consumption-based model: they want to consume the asset without having to make a capital outlay out of the gate. Number two, look at how you deploy technology to real demand; it depends on the scale, right? In a lot of these web-scale, scale-out technologies, putting them on a diet is challenging, meaning, how skinny can you get it? Getting it down into the 50-terabyte range, and then the complexities of those technologies as you take a day-one implementation and scale it out over multiple iterations of quarters: the growth becomes a challenge. So working with Scality, we believe we've cracked this nut. We figured out, number one, how to start small without limiting customers' ability to scale out incrementally, or dramatically, depending on the quarter, the month, whatever the workload is, and how to align it so they can consume it. So now, whether it be on our Edgeline products or our DL products, and, to what Jerome was talking about earlier, we ship a server every few seconds, so that won't be a problem, and then of course into our density-optimized compute with the Apollo products. This is where our two companies have worked on an exclusivity in which the Scality software runs on the HPE ecosystem, and we can of course provide our customers the ability to consume it through our GreenLake financial models or through a CapEx purchase. >> Awesome. So, Jerome and Chris: who's the customer here? Obviously there's an exclusive period. Talk about the target customer. And how do customers get the product? How do they get the software? And how does this exclusivity with HPE fit into it?
>> Yeah. So there are really three types of customers, and we've worked a lot with a design company called Use Design to optimize the user interface for each of them. We really thought about each customer role and providing each of them the best product. The first type of customer is application owners deploying an application that requires object storage in the back end. They typically want a simple object store for one application, and they want it to be simple and to work, I mean, yesterday. They don't want complexity; they just want an object store that works, and they want to be able to start as small as they start with their application. Often it's one department, maybe a small deployment. Applications like backup, Veeam or Rubrik; or analytics like Splunk SmartStore or Vertica; or file systems now available as software: CTERA, for example, does a really great departmental NAS that works very well with an object store in the back end, and in high-performance computing the WekaIO file system is an amazing file system. We also have vertical applications like Broadpeak, for example, which provides origin and VOD software for broadcasters. All these applications require an object store in the back end, and you just need a simple, high-performance object store that works well, and ARTESCA is perfect for that. The second type of people we think will be interested in ARTESCA are essentially developers who are currently building cloud-native, Kubernetes-orchestrated applications. As part of their development stack, it's getting better and better, when you're developing a cloud-native application, to target object storage rather than NFS as your persistence layer. Just think about generations of technologies: NFS and file systems were great 25 years ago, amazing technology, but now, when you want to develop a distributed, scalable application, object storage is a better fit, because it's of the same generation. So, same thing: when developing something, they need an object store they can develop on, so they want it very lightweight, but they also want a product that their enterprise, or their customers, will be able to rely on for years and years, and ARTESCA is really great for that. The third type of customer is the architects, including security architects, who are designing systems where they're going to have 50 factories, 1,000 planes, a million cars with some local storage, which they'll want to replicate to the core and possibly also to the cloud. As they design these really new-generation workloads, incredibly distributed but with local storage, ARTESCA is really great for that.
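For the first type of customer Jerome describes, an application that just needs an object store in the back end, the integration surface is the S3 API. A minimal sketch with boto3 against a made-up on-premises endpoint; the endpoint URL, credentials, and bucket are placeholders, not real ARTESCA defaults.

```python
# Minimal S3-API interaction with an S3-compatible on-prem object
# store. Endpoint, credentials, and bucket are hypothetical values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://artesca.example.local",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(Bucket="backups", Key="db/full-2021-04-28.dmp", Body=b"...")
obj = s3.get_object(Bucket="backups", Key="db/full-2021-04-28.dmp")
print(obj["Body"].read())
```

This handful of calls is why backup and analytics applications can swap object stores with little code change: the application targets the protocol, not the appliance.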
Um bringing about multi Tennessee, bringing about the fact that, you know, if you look at like a local racial coding, uh one of the things that they're bringing to it so that we can get down into the deal 3 25. So with the exclusivity, uh you actually get choice and that choice comes into our entire portfolio, whether it be the edge line platform, the D. L 3:25 a.m. B. Processing stack or the intel deal three eighties or whether whether it be the Apollo's or Alexa, there's there's so many ample choices there that facilitates this and it just allows us to align those two strategies >>awesome. And I think the kubernetes pieces really relevant because, you know, I've been interviewing folks practitioners um and kubernetes is very much maturing fast. It's definitely the centerpiece of the cloud native, both below the line, if you will under the hood for the, for the infrastructure and then for apps, um they want to program on top of it. That's critical. I mean, jeremy, this is like this is the future. >>Yeah. And if you don't mind, like to come back for a minute on the exclusive with HP. So we did a six month exclusive and the very reason we could do this is because HP has suffered such wrath of server portfolio and so we can go from, you know, really simple, very cheap, you know, HDD on the L 3 80 means a machine that retails for a few $4. I mean it's really like Temple System 50 terabyte. Uh we can have the dl 3 25. That uh piece mentioned there is really a powerhouse. All envy any uh slash uh all the storage is envy any uh very fast processors or uh you know, dance large large system like the Apollo 4500. So it's a very large breath of portfolio. We support the whole portfolio and we work together on this. So I want to say that you know, one of the reasons I want to send kudos to HP for for the breath of the silver lining rio as mentioned, um Jessica can be ordered from either company, hand in hand together. So anyway you'll see both of us uh and our field is working incredibly well together. >>We'll just on that point, I think just for clarification, uh was this co design by scalability and H P E. Because chris you mentioned, you know, the configuration of your systems. Can you guys quickly talk about the design, co design >>from from from the code base? The software entirely designed and developed by security from a testing and performance. So this really was a joint work with HP providing both hardware and manpower so that we could accelerate the testing phase. >>You know, chris H P E has just been doing such a great job of really focused on this. And you know, I've been Governor for years before it was fashionable the idea of apps working no matter where it lives. Public Cloud data center Edge, you mentioned. Edge line has been around for a while. You know, apps centric, developer friendly cloud first has been an H P E. Kind of guiding first principle for many, many years. >>But it has and you know, you know as our our ceo internal areas cited by 2022 everything will be able to be consumed as a service in our portfolio. Uh And then this stack allows us the simplicity and the consume ability of the technology and degranulation of it allows us to simplify the installation, simplify the actual deployment bringing into a cloud ecosystem. But more importantly for the end customer, they simply get an enterprise quality product running on identity optimized stack that they can consume through a orchestrated simplistic interface. 
That's that's cos that's what they're warning for today is where they come to me and asked hey how do I need a, I've got this new app new project and you know it goes back to who's actually coming, it's no longer the I. T. People who are actually coming to us, it's the lines of business. It's it's that entire dimension of business owners coming to us going this is my challenge and how can you HP help us And we rely on our breath of technology but also a breath of partners to come together and are of course reality is hand in hand and are collaborative business unit are collaborative storage product engineering group that actually brought this market. So we're very excited about this solution >>chris thanks for that input. Great insight, Jerome, congratulations on a great partnership with H. P. E. Obviously um great joint customer base congratulations on the product release here. Big moving the ball down the field as they say new functionality, clouds cloud native object store, phenomenal um So wrap wrap wrap up the interview. Tell us your vision for scalability in the future of storage. >>Yeah. Yeah I start I mean skeleton is going to be an amazing leader is already um but yeah so you know I have three themes that I think will govern how storage is going and obviously um Mark Andrews had said it software is everywhere and software is eating the world so definitely that's going to be true in the data center in storage in particular. Uh But the free trends that are more specific. First of all I think that security performance and agility is now basic expectation. It's not you know, it's not like an additional feature. It's just the best table, stakes, security performance and a job. Um The second thing is and we've talked about it during this conversation is edged to go you need to think your platform with Edge Co and cloud. You know you don't want to have separate systems separate design interface point for edge and then think about corn and think about clouds and then think about the divers. All this needs to be integrated in the design. And the third thing that I see as a major trend for the next 10 years is that a sovereignty uh more and more. You need to think about where is the data residing? What are the legal challenges? What is the level of protection against who are you protected? What what is your independence uh strategy? How do you keep as a company being independent from the people? You need to be independent. And I mean I say companies, but this is also true for public services. So these these for me are the three big trends. I do believe that uh software find distributed architecture are necessary for these tracks. But you also need to think about being truly enterprise grade. And there has been one of our focus with the design of a fresca. How do we combine a lot with product With all of the security requirements and that our sovereignty requirements that we expect to have in the next 10 years? >>That's awesome. Congratulations on the news scale. D Artois ca the big release with HP exclusive um, for six months, chris tucker, distinguished engineer at H P E. Great to ceo, jeremy, katz, ceo sexuality. Great to see you as well. Congratulations on the big news. I'm john for the cube. Thanks for watching. >>Mhm. >>Yeah.

Published Date : Apr 28 2021



Jerome Lecat, Scality and Chris Tinker, HPE | CUBE Conversation


 

(uplifting music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCube here in Palo Alto, California. We've got two great remote guests to talk about some big news hitting with Scality and Hewlett Packard Enterprise: Jerome Lecat, CEO of Scality, and Chris Tinker, Distinguished Technologist from HPE, Hewlett Packard Enterprise. Jerome, Chris, great to see you both, Cube alumni from the original gangster days, as we'd say back then, when we started almost 11 years ago. Great to see you both. >> It's great to be back. >> Good to see you John. >> So, really compelling news around kind of this next generation storage, cloud native solution. It really has an impact on the next gen, I call it next gen: DevOps meets application, the modern application world, and something we've been covering heavily. There's some big news here around Scality and HPE offering a pretty amazing product. You guys introduced essentially the next gen piece of it, Artesca, which we'll get into in a second. This is a game-changing announcement you've made; this is an evolution continuing, I think it's more of a revolution, but I think, you know, storage is kind of the abstraction layer of evolution to this app centric world. So talk about this environment we're in, and we'll get to the announcement, which is object store for modern workloads, but this whole shift is happening, Jerome. This is a game changer for storage, and customers are going to be deploying workloads. >> Yeah. Scality really, I mean, I personally really started working on Scality more than 10 years ago, close to 15 now. And if we think about it, I mean, the cloud has really revolutionized IT. And within the cloud, we really see layers and layers of technology. I mean, it all started around 2006 with Amazon and Google and Facebook finding ways to do, initially, what was consumer IT at very large scale, at very low cost, with incredible reliability, and then it slowly crept into the enterprise. And at the very beginning, I would say that everyone was kind of wizards, trying things and really coupling technologies together. And to some degree, we were some of the first wizards doing this, but we're now close to 15 years later, and there's a lot of knowledge and a lot of experience, a lot of tools. And this is really a new generation. I'll call it cloud native, or you can call it next gen, whatever, but there is now enough experience in the world, both at the development level and at the infrastructure level, to deliver truly distributed, automated systems that run on industry standard servers. Obviously good quality servers deliver a better service than others, but there is now enough knowledge for this to truly go at scale. And call this cloud, or call this cloud native, really the core concept here is to deliver scalable IT at very low cost, at a very high level of reliability, all based on software. We've participated in this motion, but we feel that now the breadth of what's coming is at a new level, and it was time for us to think, develop and launch a new product that's specifically adapted to that. And Chris, I will let you comment on this, because you see the customers, or some of them, so you can add the customer angle. >> Well, you know, you're right. You know, like you, I've been in this industry for, well, a long time, 20, 21 years at HPE in engineering. And look, the actual landscape has changed with how we're doing scale-out, software-defined storage for particular workloads.
And where the catalyst has evolved here is in analytics. Normally, what was only done in the three-letter acronyms and massively scale-out, parallel-namespace file systems, parallel file systems, that application space has encroached into the enterprise world, where the enterprise world needed a way to actually take a look at how do I simplify the operations? How do I actually be able to bring about an application that can run in the public cloud, or on premise, or hybrid? How do I look at a workload-optimized stack that aligns the actual cost to the actual analytics that I'm going to be doing, the workload that I'm going to be doing, and be able to bridge those gaps, be able to spin this up and simplify operations? And you know, if you are familiar with these parallel file systems, which by the way we actually have in our stack, I do engineer those, they have their own unique challenges. But in the world of enterprise, where customers are looking to simplify operations and then take advantage of new application, analytic workloads, whether it be smart (indistinct) or whatever it might be, right, I mean, if I want to spin up a MongoDB, or maybe, you know, an Elasticsearch capability, how do I actually take those technologies and embrace a modern scale-out storage stack without breaking the bank, but also providing simple operations? And that's why we look for object storage capabilities, because it brings us this massive parallelization. Back to you, John. >> Well, before we get into the product, I want to just touch on one thing, Jerome, you mentioned, and Chris, you brought up: the DevOps piece, next gen, next level, whatever term you use. It is cloud native. Cloud native has proven that DevOps, infrastructure as code, is not only legit, it's being operationalized in all enterprises. Add security in there, you have DevSecOps; this is the reality, and hybrid cloud in particular has been pretty much the consensus: it's the standard, or de facto standard, whatever you want to call it, and that's happening. Multicloud is on the horizon. So these new workloads have these new architectural changes: cloud, on premises and edge. This is the number one story, and the number one challenge all enterprises are now working on: how do I build the architecture for the cloud, on premises and edge? This is forcing the DevOps team to flex and build new apps. Can you guys talk about that particular trend, and is that relevant here? >> Yeah, I now talk about really storage anywhere and cloud anywhere, and really the key concept is edge to core to cloud. I mean, we all understand now that the edge will host a lot of data, and the edge is many different things. I mean, it's obviously a smartphone, whatever that is, but it's also factories, it's also production, it's also, you know, moving machinery, trains, planes, satellites; that's all the edge, cars obviously. And a lot of data will be both produced and processed there. But from the edge, you will want to be able to send the data for analysis, for backup, for logging to a core, and that core could be regional, maybe not, you know, one core for the whole planet, but maybe one per corporate region, per state in the U.S. And then from there, you will also want to push some of the data to the public cloud. One of the things that we see more and more is that the D.R. center, the disaster recovery center, is not another physical data center.
It's actually the cloud, and that's a very efficient infrastructure, very cost efficient, especially. So really, it's changing the paradigm of how you think about storage, because you really need to integrate these three layers in a consistent approach, especially around the topic of security, because you want the data to be secure all along the way. And data is not just data: it's the data, and who can access the data, who can modify the data, what are the conditions that allow modification or automatic erasure of the data. In some cases, it's super important that the data is automatically erased after 10 years, and all this needs to be transported from edge to core to cloud. So that's one of the aspects. Another aspect that resonates for me with what you said is a word you didn't say, but it's actually crucial to this whole revolution: it's Kubernetes. I mean, Kubernetes is now a mature technology, and it's just, you know, the next level of automated operations for distributed systems, which we didn't have 5 or 10 years ago. And that is so powerful that it's going to allow application developers to develop much faster systems that can be distributed, again, edge to core to cloud, because it's going to be an underlying technology that spans the three layers. >> Chris, your thoughts? Hybrid cloud: I've been having questions with the HPE folks for, gosh, years and years on hybrid cloud, and now it's here. >> Right (chuckles). Well, you know, it's exciting, and the lay of the land is, right, whether it be enterprise virtualization, that is, scale-out general purpose virtualization workloads, or whether it be analytic workloads, data protection is paramount to all of this, orchestration is paramount. If you look at DevSecOps, absolutely: securing the actual data, the digital asset, is absolutely paramount. And if you look at how we do this, look at the investments we're making, and look at the collaborative platform development, which goes to our partnership with Scality. We're providing an integral aspect of everything we do, whether we're bringing in Ezmeral, which is our software we use for orchestration, look at the veneer of its control plane, controlling Kubernetes, being able to actually control the active clusters and the actual backing store for all the analytics that we just talked about. Whether it be a web-scale app that was traditionally using a POSIX namespace and has now been modernized to take advantage of newer technologies, running on NVMe burst buffers or hundred gig networks, with Slingshot networks of 200 and 400 gigabit, looking at how do we actually get the analytics, the workload, to the CPU and have it attached to the data at rest? Where's the data? How do we land the data? How do we actually align, essentially, locality of the actual asset to the compute? And this is where, you know, we can leverage, whether it be Azure or Google or name your favorite hyperscaler, leverage those technologies, leveraging the actual persistent store. And this is where Scality, with this object store capability, has been an industry trendsetter, setting the actual landscape of how to provide an object store on premise and hybrid cloud, running it in a public cloud, but being able to facilitate data mobility and tie it back to an application.
And this is where a lot of things have changed in the world of analytics, because the newer technologies that are coming on the market have taken advantage of this particular protocol, S3, so they can do web-scale, massively parallel, concurrent workloads. >> You know what, let's get into the announcement. I love cool and relevant products, and I think this hits the mark. Scality, you guys have Artesca, which was just announced, and obviously we reported on it: you guys have a lightweight, true enterprise grade object store software for Kubernetes. This is the announcement. Jerome, tell us about it. What's the big deal? Cool and relevant, come on, this is cool, right? Tell us. >> I'm super excited. I'm not sure if you can see it as well on the screen, but I'm super, super excited. You know, we introduced the RING 11 years ago, and this is our biggest announcement for the past 11 years, so yes, do pay attention. And, you know, after looking at all these trends and understanding where we see the future going, we decided that it was time to embark (indistinct). So there's not one line of code that's the same as our previous generation product. They will both exist; they both have a space in the market. And Artesca was specifically designed for this cloud native era. And what we see is that people want something that's lightweight, especially because it has to go to the edge. They still want the enterprise grade that Scality is known for. And it has to be modern. What we really mean by modern is, we see object storage now being the primary storage for more and more applications, and so we have to be able to deliver the performance that primary storage expects. This idea of Scality serving primary storage is actually not completely new. When we launched Scality 10 years ago, the first application that we were supporting was consumer email, for which we were, and we still are today, the primary storage. So we know what it is to be the primary store, we know what level of reliability you need to hit, we know what latency means, and latency is different from throughput; you really need to optimize both. And I think that still today, we're the only object storage company that protects data through both replication and erasure coding, because we understand that replication is faster, but erasure coding is more efficient for large files, where latency doesn't matter so much. So we bring all that experience, but really rethinking the product for that new generation that is really here now. And so we're truly excited, and I guess I can tell people a bit more about the product. It's software; Scality is a software company, and that's why we love to partner with HPE, who's producing amazing servers. You know, for the record and the history, the very first deployment of Scality in 2010 was on HP servers. So this is a long love story here. And so, to come back to Artesca, it's lightweight in the sense that it's easy to use. We can start small, we can start from just one server or one VM; I mean, you can start really small, but it can grow infinitely. The fact that we start small, we didn't, you know, limit the technology because of that. So you can start from one to many, and it's cloud native in the sense that it's completely Kubernetes compatible, it's Kubernetes orchestrated. It will deploy on many Kubernetes distributions.
We're talking obviously with Ezmeral, we're also talking with the other Kubernetes community distributions, and it will also be able to run in the cloud. Now, I'm not sure that there will be many true production deployments of Artesca in the cloud, because you already have really good object storage from the cloud providers, but when you are developing something and you want to test it, you know, just doing it in the cloud is very practical. So you'll be able to deploy it on the Kubernetes cloud distributions. And it's more than object storage, in the sense that it's application centric. A lot of our work is actually validating that our storage is fit for a single-purpose application, making sure that we understand the requirements of these applications, so that we can guide our customers on how to deploy. And it's really designed to be the primary storage for these new workloads. >> The big part of the news is your relationship with Hewlett Packard Enterprise: there's some exclusivity here as part of this, and as you mentioned, the relationship goes back many, many years. We've covered your relationship in the past. Chris, also, you know, we cover HP like a blanket. This is big news for HPE as well. >> This is very big news. >> What is the relationship? Talk about this exclusivity. Could you share about the partnership and the exclusivity piece? >> Well, the partnership expands into the pan-HPE portfolio. Look, we made a massive investment in edge IoT devices. So, how do we align the cost to the demand? Our customers come to us wanting to think about what we're doing with GreenLake, like in consumption-based modeling: they want to be able to consume the asset without having to do a capital outlay out of the gate. Number two, look at, you know, how do you deploy technology? It really depends on the scale, right? So in a lot of your web-scale, you know, scale-out technologies, putting them on a diet is challenging, meaning how skinny can you get it, getting it down into the 50 terabyte range; and then the complexities of those technologies, as you take a day one implementation and scale it out over, you know, multiple iterations over quarters, the growth becomes a challenge. So working with Scality, we believe we've actually cracked this nut. We figured out, number one, how to start small, but not limit a customer's ability to scale it out incrementally or in big steps: depending on the quarter, the month, whatever the workload is, how do you actually align and be able to consume it? So now, whether it be on our Edgeline products, our DL products right there, and, as Jerome was talking about earlier, you know, we ship a server every few seconds, that won't be a problem; and then, of course, into our density optimized compute with the Apollo products. And this is where our two companies have worked in an exclusivity, where the Scality software runs on the HPE ecosystem, and then we can, of course, provide you, our customers, the ability to consume that through our GreenLake financial models or through our CapEx partners. >> Awesome. So Jerome and Chris, who's the customer here? Obviously, there's an exclusive period. Talk about the target customer, and how the customers get the product, and how they get the software. And how does this exclusivity with HPE fit into it?
>> Yeah, so there are really three types of customers, and we've worked a lot with a company called UseDesign to optimize the user interface for each of the types of customers. So we really thought about each customer role and providing each of them with the best product. So the first type of customer are application owners who are deploying an application that requires an object storage in the backend. They typically want a simple object store for one application; they want it to be simple and to work. Honestly, they want no frills, just an object store that works, and they want to be able to start as small as they start with their application. Often, you know, the first deployment may be a small deployment: applications like backup, like Veeam or Rubrik, or analytics like (indistinct), file systems that are now available as software, you know, like CGI does a really great departmental NAS that works very well and that needs an object store in the backend, or, for high performance computing, the Weka file system is an amazing file system. We also have vertical applications, like Broadpeak, for example, who provides origin and video delivery software for broadcasters. So all of these are applications that require an object store in the backend; you just need a simple, high-performance object store that works well, and Artesca is perfect for that. Now, the second type of people that we think will be interested in Artesca are essentially developers who are currently developing some capabilities of cloud native applications, your next gen. And as part of their development stack, it's getting better and better, when you're developing a cloud native application, to really target an object storage rather than NFS as your persistence layer. Just, you know, think about generations of technologies: NFS and file systems were great 25 years ago, I mean, it's an amazing technology; now, when you want to develop a distributed, scalable application, object storage is a better fit, because it's the same generation. And so, same thing: I mean, you know, they're developing something, they need an object store that they can develop on, so they want it very lightweight, but they also want a product that their enterprise, or their customers, will be able to rely on for years and years. And Artesca is really a great fit to do that. The third type of customer are more architects, I would say; the architects that are designing a system where they are going to have 50 factories, a thousand planes, a million cars; they are going to have some local storage, which they will want to replicate to the core, and possibly also to the cloud. And as they design these really new generation workloads that are incredibly distributed, but with local storage, Artesca is really great for that. >> And tell us about the HPE exclusive, Chris. How does that fit in? Do they buy through Scality? Can they get it from HPE? Are you guys working together on how customers can procure it? >> Both ways, yeah. Both ways: they can procure it through Scality, they can procure it through HPE, and it's the software stack running on our density optimized compute platforms, which you would choose and align to provide an enterprise quality.
Because when it comes back to it, in all of these use cases, it's how do we align up into a true enterprise stack: bringing about multitenancy, bringing about the fact that, you know, if you look at local erasure coding, that's one of the things that they're bringing to it, so that we can get down into the DL325. So with the exclusivity, you actually get choice, and that choice comes into our entire portfolio, whether it be the Edgeline platform, the DL325 AMD processing stack or the Intel DL380, or whether it be the Apollos; like I said, there are so many ample choices there that facilitate this, and this allows us to align those two strategies. >> Awesome, and I think the Kubernetes piece is really relevant, because, you know, I've been interviewing folks, practitioners, and Kubernetes is very much maturing fast. It's definitely the centerpiece of the cloud native world, both below the line, if you will, under the hood for the infrastructure, and then for the apps they want to program on top of it. That's critical. I mean, Jerome, this is the future. >> Yeah, and if you don't mind, I'd like to come back for a minute on the exclusivity with HPE. So we did a six month exclusive, and the very reason we could do this is because HPE has such a breadth of server portfolio. And so we can go from, you know, a really simple, very cheap, you know, DL380 machine that retails for a few thousand dollars, I mean, it's really a simple system, 50 terabytes; we can have the DL325 that Chris mentioned, that is really a powerhouse, all NVMe, all the storage is NVMe, very fast processors, you know; dense, large systems like the Apollo 4500. So it's a very large breadth of portfolio. We support the whole portfolio, and we work together on this. So I want to say that, you know, one of the reasons, I want to send kudos to HPE for the breadth of their server line, really. As mentioned, Artesca can be ordered from either company, and hand-in-hand together, so anyway, you'll see both of us and our field teams working incredibly well together. >> Well, just on that point, I think just for clarification, was this co-designed by Scality and HPE? Because Chris, you mentioned, you know, the configuration of your systems. Can you guys quickly talk about the co-design? >> From the code base, the software is entirely designed and developed by Scality. From a testing and performance standpoint, this really was a joint work, with HPE providing both hardware and manpower so that we could accelerate the testing phase. >> You know, Chris, HPE has just been doing such a great job of really staying focused on this. I know I've been covering it for years, before it was fashionable: the idea of apps working no matter where they live, public cloud, data center, edge. And you mentioned Edgeline has been around for a while. You know, app centric, developer friendly, cloud first has been an HPE kind of guiding first principle for many, many years. >> Well, it has. And, you know, as our CEO Antonio Neri has said, by 2022 everything in our portfolio will be able to be consumed as a service. And this stack allows us the simplicity and the consumability of the technology, and the granularity of it allows us to simplify the installation, simplify the actual deployment, bringing it into a cloud ecosystem. But more importantly, for the end customer, they simply get an enterprise quality product running on an optimized stack that they can consume through an orchestrated, simple interface.
That's what customers are wanting today. They come to me and ask, hey, I've got this new app, new project. And, you know, it goes back to who's actually coming: it's no longer the IT people who are coming to us, it's the lines of business, it's that entire dimension of business owners coming to us, saying, this is my challenge, and how can you, HPE, help us? And we rely on our breadth of technology, but also our breadth of partners, to come together; and of course, Scality is hand in hand with us, and our collaborative business unit, our collaborative storage product engineering group, actually brought this to market. So we're very excited about this solution. >> Chris, thanks for that input and great insight. Jerome, congratulations on a great partnership with HPE, obviously a great joint customer base. Congratulations on the product release here, big move of the ball down the field, as they say: new functionality, cloud native object store. Phenomenal. So to wrap up the interview, tell us your vision for Scality and the future of storage. >> Yeah, I mean, Scality is going to be an amazing leader; it already is. But, you know, I have three things that I think will govern how storage is going. And obviously, Marc Andreessen said it: software is everywhere, and software is eating the world. So definitely, that's going to be true in the data center, and in storage in particular. But the three trends that are more specific: first of all, I think that security, performance and agility are now a basic expectation. It's not, you know, an additional feature; it's just table stakes: security, performance and agility. The second thing, and we've talked about it during this conversation, is edge to core to cloud: you need to think your platform with edge, core and cloud. You know, you don't want to have separate systems, separate designs, separate interface points for the edge, and then think about the core, and then think about the cloud; all this needs to be integrated in the design. And the third thing that I see as a major trend for the next 10 years is data sovereignty. More and more, you need to think about where is the data residing? What are the legal challenges? What is the level of protection, and against whom are you protected? What is your independence strategy: how do you keep, as a company, being independent from the people you need to be independent from? And I say companies, but this is also true for public services. So these, for me, are the three big trends. And I do believe that software-defined distributed architectures are necessary for these trends, but you also need to think about being truly enterprise grade, and that has been one of our focuses with the design of Artesca: how do we combine a lightweight product with all of the security requirements and data sovereignty requirements that we expect to have in the next 10 years? >> That's awesome. Congratulations on the news, Scality Artesca, the big release with HPE, exclusive for six months. Chris Tinker, Distinguished Engineer at HPE, great to see you; Jerome Lecat, CEO of Scality, great to see you as well. Congratulations on the big news. I'm John Furrier from theCube. Thanks for watching. (uplifting music)
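Since the conversation keeps returning to applications targeting the S3 protocol rather than NFS, here is a minimal sketch of what that looks like from the application side: writing and reading an object through any S3-compatible endpoint, such as an Artesca deployment. The endpoint URL, bucket name and credentials are placeholders, not details from the announcement.

```python
# Minimal sketch: using the S3 protocol against an S3-compatible object
# store. Endpoint, bucket and credentials below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="app-data")

# Write an object, then read it back.
s3.put_object(Bucket="app-data", Key="backups/sample.txt", Body=b"hello object storage")
obj = s3.get_object(Bucket="app-data", Key="backups/sample.txt")
print(obj["Body"].read())  # b'hello object storage'
```

Because the API is just S3, the same code runs unchanged against a public cloud bucket, which is the edge-to-core-to-cloud portability Jerome describes.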

Published Date : Apr 26 2021



Chetan Kapoor, AWS & Eitan Medina, Habana Labs | AWS re:Invent 2020


 

>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners. >> Welcome back to theCUBE's virtual coverage of AWS re:Invent 2020. It's virtual this year, we're not in person, so we're doing remote interviews. Part of the three weeks we'll be covering wall to wall, a lot of great conversations, news to cover, and joining me today, fresh off the news of Andy Jassy's keynote, we have two great guests here: Chetan Kapoor, Senior Product Manager for Accelerated Computing at AWS, and Eitan Medina, Chief Business Officer at Habana Labs, which was recently acquired by Intel. Folks, thanks for coming on, gentlemen. Thank you for spending the time for coming on theCUBE. Appreciate it. >> Thanks for having us. >> Chetan, so talk about the news. Compute is changing, it's being reinvented; that's the theme from Andy's keynote. What did Andy announce? Could you take a minute to explain the announcement? What services, what apps are going to be supported? What's this about? Take a minute to explain. >> Yeah, absolutely. So today we announced our plans to launch an EC2 instance based on hardware accelerators from Habana Labs. We expect these instances to be available in the first half of next year, and these are custom designed for accelerating training of deep learning models. As we all know, training of deep learning models is a really computationally intensive task; oftentimes it takes too long and costs too much. And we're really excited about getting these instances out to the market, as we expect them to provide up to 40% better price performance than the top of the line GPU instances. >> A lot of improvements. Why did you do this? What did the working backwards document tell you? What are customers looking for here? Is there a specific use case? >> Yeah, absolutely. So, you know, over the years, the use of machine learning and deep learning has really skyrocketed, right? So we're seeing companies from, like, the Fortune 500 to startups reinventing their business models and using deep learning more pervasively. So we have companies like Pinterest, you know, that use deep learning for content recommendations and object detection, to Toyota Research Institute, that are advancing the science behind autonomous vehicles. And there's a consistent theme from a lot of these customers that are, you know, innovating in the deep learning space: the cost it takes to experiment, train and optimize their deep learning models is too high. And, you know, they're looking at us as one of their partners to help them optimize their costs, bring them down as much as possible, while giving them really performant products and enabling them to actually bring their innovations to market as soon as possible. So to answer your question straight on, yes, the working backwards is feedback from customers: they want choice, and they want our help to lower the amount of compute resources and the cost it takes to train their deep learning models. >> Eitan, why don't you weigh in here on Habana, now part of Intel? What trends are driving this? What's the motivation? Where do you guys fit in? What's your view on this? >> Yeah, so Habana was founded in 2016 to deliver AI processors for the data center and cloud, for training and inference of deep learning models.
So while building chips is hard, building the software and ecosystem is even harder. So joining forces with Intel simply helps us connect the dots. Ever since the acquisition last year, we were able to significantly boost our R&D resources, and now we're leveraging Intel's scale in number of customers, ecosystem and partner support. >> So what's the name of the product? Is there a chip name? Gaudi, was it? Gaudi is the name? >> Yes, the product is named Gaudi. >> Okay. And so it's going to be hardware, so it's hardware and software. What's involved? Take us through the product. >> Yes. So Gaudi was designed from the ground up to do one task, which is training deep learning models. To do that well, we focused the architecture on two aspects: efficiency and scalability. The compute architecture is a combination of fully programmable TPC tensor processor cores and a central GEMM engine. These TPC cores are programmable VLIW SIMD machines that we designed with a custom instruction set architecture and special functions we developed specifically for AI. The Gaudi chip also integrates 32 gigabytes of HBM2 memory, which makes it easy to port to for GPU developers. Gaudi is unique in integrating 10 ports of 100 gigabit Ethernet with RoCE on chip, and this is as opposed to other architectures, which use proprietary interfaces. So overall, improving the cost performance is achieved through efficiency, namely higher utilization of the compute and memory resources on chip, and the native integration of the RoCE interfaces. >> Chetan, this is actually interesting, as this is the theme for re:Invent; we're seeing it right on stage today, another command performance by Andy Jassy, a slew of announcements. How does Gaudi fit into the AI portfolio or Amazon strategy? Because what Eitan is saying is, it sounds like he's doing the heavy lifting on all this training stuff, when people want to just get to the outcome. I mean, the theme has been, just let the products do what they do, kind of put stuff under the covers and just let it scale. Is that the theme here? What does this all fit in? Take us through how this fits into the AI strategy for Amazon, and also what do Habana and Intel bring to the table? >> Absolutely, yeah. So with respect to our overall strategy and portfolio, it's relatively straightforward, right? So we're laser focused on making sure we have the broadest and deepest portfolio of services for machine learning, right? So these range from infrastructure services, specifically compute, networking and storage, all the way up to, like, managed and AI services, which come with pre-trained models that customers can simply invoke using an API call. So from a strategy perspective, we want to make sure that we provide our customers choice, enable them to pick the right platform for the right use case, and help them get to the cost structure they actually want, right? So with Habana, and, you know, their acquisition by Intel, we finally have access to hardware, software and the ability to kind of build out an ecosystem beyond what traditionally has been used, which was GPUs, right? So the engagement with Habana, you know, allows us to take their products and capabilities and wrap them around an EC2 instance, which is what customers will be able to launch.
And in doing so, we're enabling them to tap into the innovation that Eitan and the rest of the Habana team are working on, while having a solution that is integrated with the full AWS stack. Right? So you don't have to rack and stack hardware or bring your own data center; these are going to be available as standard EC2 instances. You can just click and launch them, and get access to software that's already pre-integrated and ready to go. So it actually comes down to taking their innovations, coupling them with an AWS solution, and making it really easy for customers to get up and running with respect to training their deep learning models. >> Well, here is the question that I want to get to, I think it's on everyone's mind: how is Gaudi different from, or similar to, other GPUs? Specifically, you mentioned the software stack on AWS; you get the software stack inside the chip. How is this different from or similar to other GPUs? And what's the difference between the software stack versus traditional libraries? >> So from day one, we were focused on the software experience, and we were mindful of the need to make it easy for developers to use the innovations we have in the hardware. Most developers, if not all of them, are using deep learning frameworks such as TensorFlow and PyTorch for building their deep learning models. So Gaudi's SynapseAI software suite comes integrated and optimized for TensorFlow and PyTorch, so we expect most developers to be able to take their existing models and, with minor changes to the training scripts, be able to run them on Gaudi-based instances. In addition, expert developers that are familiar with writing their own kernels will be provided with a full tool suite for writing their own TPC kernels that can augment the Habana-provided library. >> So that's the user experience for the developers, right? That's what you're saying. >> Exactly, exactly, and we will provide detailed guides for developers. In doing that, Habana will provide open access to documentation, libraries, software models and more through Habana's GitHub, and bi-directional communication with the Habana developer community. All these resources will be available concurrently with the AWS instances launch. >> Okay, so I'm a developer, how do I get involved? The software is on GitHub, the hardware is on Amazon, obviously, in their instances; it's a new instance. Take me through the developer workflow. I'm into this, I want to get involved. What am I doing? Take me through it. >> Yes, so if the developer is accustomed to using GPUs for training their deep learning models, the experience is going to be practically the same, right? So they'll have multiple options to get started. One of them would be, for example, to take our Deep Learning AMIs, or Amazon Machine Images, that will come integrated with software from Habana Labs. So customers will take the Deep Learning AMI and launch it on an EC2 instance featuring the Gaudi accelerators. With that, they'll have, you know, the baseline construct of software and hardware available to get up and running with. We'll support, you know, all different types of workflows. So if customers want to use containerized solutions, these instances will be supported on our ECS and EKS services, so using containerized Kubernetes, you know, the solution will just work. Lastly, we also intend to support these instances through SageMaker. Just a quick recap on SageMaker.
It's a managed service that provides end-to-end capabilities for training, debugging, building and deploying machine learning applications. So these instances will also be supported in SageMaker. So if you're fiddling with SageMaker, you can get up and running with this fairly quickly. >> It sounds like it's going to enable a lot of action at the SageMaker level, and then that can layer on the use cases. I've got to ask you guys quickly: what's the low hanging fruit use case, the applications for this product, this partnership? Because, you know, that's going to be where the first traction is. What are some of these applications going to be used for? What can we expect to see? >> So typical applications would be image classification, object detection, natural language processing and recommendation systems. You'll find reference models in our GitHub for that, and the list will be growing, as you can imagine. >> Okay, where can people find more info? Give us the data. Take a minute to explain, put a plug in for it: what are all the coordinates? URLs, sites, support, how people create, how people get involved, the community. >> Yeah, so customers will be able to access information on AWS websites and also on the Habana Labs website. So we will be kicking off a preview early next year, so I would highly recommend for customers to find our product pages and sign up for early access and preview information. >> Yes, and you'll find more information on Habana, as well as Habana's GitHub, over time. >> Great announcement, congratulations. Thanks for sharing the news and some commentary on it. This is really the big theme: you know, what COVID-19 and this pandemic have shown is massive acceleration of digital transformation, and having the software and hardware out there that accelerates the heavy lifting and creates value around the data is super valuable. Thanks for doing that. Appreciate you taking the time. >> Thank you so much. >> Yeah, thanks for having us. >> Okay, this is theCUBE's coverage of AWS re:Invent, next three weeks. We're here on the ground, well, remote; we're live inside the studio. We wish we could be there in person, but it's remote this year. But stay tuned, check out siliconangle.com: exclusive interviews with Andy Jassy and Amazon executives and the big news coverage, they're all there in one spot. Check it out. We'll be back with more coverage after this break. Thanks for watching.
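Eitan's point that existing models should run on Gaudi "with minor changes to the training scripts" can be made concrete. Below is a minimal sketch of what such changes might look like in PyTorch, assuming the SynapseAI PyTorch bridge (the habana_frameworks package) that is expected to ship with the Habana-enabled Deep Learning AMI; the model and data are toy placeholders, not a Habana reference model.

```python
# Sketch of the "minor changes" to a PyTorch training loop for Gaudi,
# assuming the SynapseAI PyTorch bridge is installed. Toy model and data.
import torch
import habana_frameworks.torch.core as htcore  # Habana bridge (assumed available)

device = torch.device("hpu")  # target the Habana Processing Unit instead of "cuda"

model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(10):
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    htcore.mark_step()   # flush the lazy-mode graph after backward
    optimizer.step()
    htcore.mark_step()   # and again after the optimizer update
```

Everything else in the loop is standard PyTorch, which is the point: the framework integration, not the application code, absorbs the hardware difference.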

Published Date : Dec 8 2020



Maurizio Davini, University of Pisa and Thierry Pellegrino, Dell Technologies | VMworld 2020


 

>> From around the globe, it's theCUBE, with digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> I'm Stu Miniman, and welcome back to theCUBE's coverage of VMworld 2020, our 11th year doing this show, of course, the global virtual event. And what do we love talking about on theCUBE? We love talking to customers. It is a user conference, of course, so I'm really happy to welcome to the program, from the University of Pisa, the Chief Technology Officer, Maurizio Davini, and joining him is Thierry Pellegrino, one of our theCUBE alumni. He's the Vice President of Worldwide Workload Solutions and HPC with Dell Technologies. Thierry, thank you so much for joining us. >> Thanks too. >> Thanks to you. >> Alright, so let's start. The University of Pisa: obviously, you know, everyone knows Pisa, a famous, iconic city out there. And we all know timelines in Europe are a little bit longer than when you talk about, you know, some of the venerable institutions here in the United States, where it's a couple of hundred years, you know, how they're using technology and everything. I have to imagine the University of Pisa has a long, storied history. So just, if you could start, before we dig into all the tech, give us and our audience a little bit, you know, as if they were looking it up on Wikipedia: what's the history of the university? >> So the University of Pisa is one of the oldest in the world, because it was founded in 1343 by a pope; we were authorized to do university teaching by a pope during the late Middle Ages. So it's really, it's not the oldest of course, but one of the oldest in the world. It has a long history, but it has never stopped innovating. So everything in Pisa has always been good for innovating, either for the teaching, or now for the technology applied to remote teaching, or calculation, or scientific computing. So we never stop innovating, never stop trying to leverage new technologies and new kinds of approaches to science and teaching. >> You know, one of your historical teachers, Galileo, taught at the university. So, you know, phenomenal history. Help us understand, you know, you're the CTO there: what does that encompass? How many students? Are there certain areas of research that are done today, before we kind of get into the specific use case? >> So consider that the University of Pisa is a campus, in the sense that the university faculties are spread all over the town. A medieval town like Pisa poses a lot of problems from the infrastructural point of view, so we have worked a lot in the past to try to adapt the medieval town to the latest technology advancements. Now, we have 50,000 students, and consider that Pisa is a general-purpose university: we cover science, letters, engineering, medicine, and so on. So, over the last 20 years, the university has put a lot of effort into building an infrastructure that was able to develop and deploy the latest technologies for the students. So, for example, we have a private fiber network covering all the town: 65 kilometers of dark fiber that belongs to the university, four data centers, one big and three little centers, connected today at 200 gigabit Ethernet. We have a big data center, big for an Italian university, of course, not compared to Northern European or U.S. universities, which hosts the whole infrastructure for the enterprise services and the scientific computing.
>> Yep, Maurizio, it's great that you've had that technology foundation. I have to imagine the global pandemic, COVID-19, had an impact. What's it been? You know, how's the university dealing with things like work from home? And then, you know, Thierry, we'd love your commentary too. >> You know, of course, we were not ready. So we were hit by the pandemic, and we had to adapt our services and software to transform from in-person to remote services. So we did a lot of work, but we were able, thanks to the technology that we had chosen, to serve almost 100% of our curriculum and study programs. We did a lot of work in the past to move to virtualization, to enable our users to work remotely, whether from a workstation or VDI, or remote laboratories, or remote calculation. So virtualization had shaped our services in the past, and of course, when we were hit by the pandemic, we were almost ready to transform our services from in-person to remote. >> Yeah, I think it's true, like Maurizio said, nobody really was preparing for this pandemic. And even for Dell Technologies, it was an interesting transition. As you can probably realize, a lot of the way that we connect with customers is in person, and we've had to transition over to modes of digitally connecting with customers. We've also spent a lot of our energy trying to help the HPC and AI community fight the COVID pandemic. We've made some of our own clusters that we use in our HPC and AI Innovation Center here in Austin available to genomic research, or other companies that are fighting the virus. And it's been an interesting transition. I can't believe that it's already been over six months now, but we've found a new normal. >> Maurizio, let's get specifically into how you're partnering with Dell. You've got a strong background in the HPC space, working with supercomputers. What is it that you're turning to Dell and their ecosystem for, to help the university? >> So, we have a long history in HPC. Of course, as you can imagine, not the biggest HPC like what is done in the U.S. or in the biggest supercomputer centers in Europe. We have several systems for doing HPC, traditional HPC, that are based on the Dell Technologies offering. We typically host all kinds of the best technology that is now available, of course not at a big scale, but at a small or medium scale, and we offer it to our researchers and students. We have a strong relationship with Dell Technologies, developing together solutions to leverage the latest technologies for scientific computing, and this has helped a lot during the research that has been done during this pandemic. >> Yeah, and it's true. I mean, Maurizio is humble, but every time we have new technologies that are to be evaluated, of course we spend time evaluating them in our labs, but we make it a point to share that technology with Maurizio and the team at the University of Pisa. That's how we find some of the better usage models for customers, help tune some configurations, whether it's on the processor side, the GPU side, the storage or the interconnect. And then, the topic of today, of course: with our partners at VMware, we've had some really great advancements. Maurizio and the team are what we call a center of excellence. We have a few of them across the world, where we have a unique relationship, sharing technology and collaborating on advancements. And recently, Maurizio and the team have even become one of the VMware certified centers.
So it's a great marriage for this new world where virtual is becoming the norm. >> Well, Thierry, you and I had a conversation earlier in the year, when VMware was really gearing up their full GPU suite, and, you know, it was a big topic in the keynote: Jensen, the CEO of Nvidia, was up on stage, and VMware was talking a lot about AI solutions and how this is going to help. So help us, bring us in; you work with a lot of the customers, Thierry. What is it that this enables for them, and how do Dell and VMware bring those solutions to bear? >> Yes, absolutely. There's one statistic I'll start with: can you believe that, on average, only 15 to 20% of GPUs are fully utilized? So, when you think about the amount of technology that's at our fingertips, especially in a world today where we need that technology to advance research and scientific discoveries, wouldn't it be fantastic to utilize those GPUs to the best of our ability? And it's not just GPUs. I think the industry has, in the IT world, leveraged virtualization to get the maximum use out of CPUs, storage and networking. Now you're bringing the GPU into the fold, and you have better utilization and also flexibility across all those resources. So what we've seen is a convergence between the IT world, which was highly virtualized, and this highly optimized world of HPC and AI, because of the resources out there: researchers, but also data scientists and companies, want to be able to run their day-to-day activities on that infrastructure, but then, when they have a big surge need for research or data science, use that same environment and seamlessly move things around, workload wise. >> Yeah, okay, I do believe your stat. You know, the joke we always have is, for anybody from a networking background, there's no such thing as eliminating a bottleneck, you just move it. And if you talk about utilization, we've been playing the shell game for my entire career: let's try to optimize one thing, and then, oh, there's something else that we're not doing. So, you know, so important. Maurizio, I want to hear from your standpoint, you know, virtualization and HPC, AI types of uses there: what value does this bring to you, and what key learnings have you had in your organization? >> So, we as a university are big users of the VMware technologies, starting from the traditional enterprise workloads and VDI. We started from there, in the sense that we have a quite significant installation, and almost all the services that the university gives to our internal users, whether personnel, staff or students, run there. At a certain point, we decided to try to understand if VMware virtualization would be good also for scientific computing. Why? Because at the end of the day, the request that we have from our internal users is flexibility: flexibility in the sense of being fast in deploying, fast in reconfiguring, trying to have the latest bits on the software side, especially for the AI research. At the end of the day, we designed a VMware solution like, I can say, a whiteboard: we have a whiteboard, and we are able to design a new solution on this whiteboard and deploy it as fast as possible. Okay, what we face as IT is not a request for the maximum performance; our researchers ask us for flexibility, and they want to be able to have the maximum possible flexibility in configuring the systems.
How can I say, we can deploy a small test cluster on the virtual infrastructure in minutes, or we can use GPUs inside the infrastructure to test new algorithms for deep learning. And we can use faster storage inside the virtualization to see how a certain algorithm would behave, or our internal developers can leverage the latest bits in storage, like NVMe, NVMe-oF, and so on. And this is why, at a certain point, we decided to try virtualization as a base for HPC and scientific computing, and we are happy. >> Yeah, I think Maurizio described it: it's flexibility. And of course, if you think optimal performance, you're looking at bare metal, but in this day and age, as I stated at the beginning, there's so much technology, so much infrastructure available, that flexibility at times trumps the raw performance. So, when you have two different research departments, two different portions, two different parts of the company looking for an environment, no two environments are going to be exactly the same. So you have to be flexible in how you aggregate the different components of the infrastructure. And then think about today, it's actually fantastic. Maurizio was sharing with me earlier this year that at some point, as we all know, there was a lockdown. You couldn't really get into a data center and move different cables around, or reconfigure servers to have the right ratio of memory to CPU, to storage, to accelerators. And having been at the forefront of this enablement has really benefited the University of Pisa and given them that flexibility that they really need. >> Wonderful. Well, Maurizio, my understanding is, I believe you're giving a presentation as part of the activities this week. Give us a final glimpse into, you know, what you want your peers to be taking away from what you've done. >> What we have done is something that is very simple, in the sense that we adapted some open source software to our infrastructure in order to enable our system managers and users to deploy HPC and AI solutions quickly and in an easy way on our VMware infrastructure. We started by doing a sort of POC. We designed the test infrastructure early this year, and then we went quickly to production because we were happy with the results. And so this is what we present, in the sense that you can have a lot of ways to deploy virtual HPC, but we went for a simple and open source solution, also thanks to our friends at Dell Technologies, who in some parts enabled us to do the work and now to go into production. And as Thierry told you before, this helped a lot during the pandemic, due to the fact that we had to stay at home. >> Wonderful. Thierry, I'll let you have the final word. What things are drawing customers to really dig in? Obviously there's a cost savings, but are there any other things that this unlocks for them? >> Yeah, I mean, cost savings, we talked about flexibility, we talked about utilization. You don't want to have a lot of infrastructure sitting there just waiting for a job to come in once every two months. And then there's also the world we live in, and we all live our lives here through a video conference, or at times through the interface of our phone, and being able to have this web-based interaction with a lot of infrastructure, and at times the best infrastructure in the world, makes things simpler, easier, and hopefully brings science to the fingertips of data scientists, without them having to worry about knowing every single detail of how to build up that infrastructure.
And with the help of the University of Pisa, one of our centers of excellence in Europe, we've been innovating, and everything that's been accomplished, you know, at Pisa can be accomplished by our customers and our partners around the world. >> Thierry, Maurizio, thank you so much for sharing, and congratulations on all I know you've done building up that COE. >> Thanks to you. >> Thank you. >> Stay with us, lots more coverage from VMworld 2020. I'm Stu Miniman, as always. Thank you for watching theCUBE. (soft music)

Published Date : Sep 30 2020

Mario Baldi, Pensando | Future Proof Your Enterprise 2020


 

(bright music) >> Announcer: From the Cube studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a Cube conversation. >> Hi, I'm Stu Miniman, and welcome to a Cube conversation. I'm coming to you from our Boston area studio. And we're going to be digging into P4, which is the Programming Protocol-independent Packet Processors. And to help me with that, a first-time guest on the program, Mario Baldi, he is a distinguished technologist with Pensando. Mario, so nice to see you. Thanks for joining us. >> Thank you. Thank you for inviting me. >> Alright, so Mario, you have a very, you know, robust technical career, a lot of patents, you've worked on, you know, many technologies, you know, deep in the networking and developer world, but give our audience a little bit of your background and what brought you to Pensando. >> Yeah, yes, absolutely. So I started my professional life in academia, actually. I worked for many years in academia, about 15 years exclusively in academia, and I was focusing both my teaching and research on computer networking. And then I also worked in a number of startups and established companies, in the last eight years or so almost exclusively in the industry. And before joining Pensando, I worked for a couple of years at Cisco on a P4 programmable switch, and that's where I got in touch with P4, actually. For the occasion I wore a T-shirt from one of the P4 workshops. Which reminds me a bit of those people, when you ask them whether they do any sports, they tell you they have a membership at the gym. So I don't just have the membership, I didn't just show up at the workshop, I've really been involved in the community. And so when I learned what Pensando was doing, I immediately got very excited, because the ASIC that Pensando has developed is really extremely powerful and flexible, because it's fully programmable: partly programmable with P4, partly programmable differently. And Pensando is starting to deploy this ASIC at the edge, in hosts. And I think such a powerful and flexible device at the edge of the network really opens incredible opportunities to, on the one hand, implement what we have been doing in a different way, and on the other hand, implement completely different solutions. So, you know, I've been working most of my career in innovation, and when I saw this, I immediately got very excited, and I realized that Pensando was really the right place for me to be. >> Excellent. Yeah, interesting, you know, many people in the industry, they talk about innovation coming out of the universities; you know, Stanford often gets mentioned, but the university that you, you know, attended and also were an associate professor at in Italy, a lot of the networking team, your MPLS, you know, team at Pensando, many of them came from there. Silvano Gai, you know, has written many books; there are, you know, very storied careers in that environment. P4, maybe step back for a second, you know, you're deep in this group; help us understand what that is, how long it's been around, you know, and who participates in it with P4? >> Yeah, yeah. So, as you were saying before, you're one of the few people from whom I've heard the full name, because everyone calls it P4 and nobody says what it really means. So, Programming Protocol-independent Packet Processors. So it's a programming language for packet processors. And it's protocol independent. So it doesn't start from assuming that we want to use certain protocols.
So P4 first of all allows you to specify what packets look like. So, what the headers look like and how they can be parsed. And secondly, because P4 is specifically designed for packet processing, it's based on the idea that you want to look up values in tables. So it allows you to define tables, and keys that are being used to look up those tables and find an entry in the table. And when you find an entry, that entry contains an action, and parameters to be used for that action. So the idea is that the packet descriptions that you have in the program define how packets should be processed: header fields are parsed, values extracted from them, and those values are used as keys to look up tables. And when the appropriate entry in the table is found, an action is executed, and that action is going to modify those header fields. And this happens a number of times: the program specifies a sequence of tables that are being looked up, header fields being modified. In the end, those modified header fields are used to construct new packets that are being sent out of the device. So this is the basic idea of a P4 program: you specify a bunch of tables that are being looked up using values extracted from packets. So this is very powerful for a number of reasons. First of all, it's intuitive, which is always good, as we know, especially in networking. And then it maps very well onto what we need to do when we do packet processing. So writing a packet processing program is relatively easy and fast. It could be difficult to write a generic program in P4, you could not really, but a packet processing program is easy to write. And last but not least, P4 really maps well onto hardware that was designed specifically to process packets, what we call domain specific processors, right. And those processors are in fact designed to quickly look up tables, they might have TCAMs inside, they might have processors that are specialized in building keys and performing table lookups, and in modifying those header fields. So when you have those processors, which are usually organized in pipelines to achieve a good throughput, then you can very efficiently take a P4 program and compile it to execute at very high speed on those processors. And this way, you get the same performance as a fixed-function ASIC, but it's fully programmable, nothing is fixed. Which means that you can develop your features much faster, you can add features and fix bugs, you know, with a very short cycle, not with the four or five year cycle of baking a new ASIC. And this is extremely powerful. This is the strong value proposition of P4.
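To make that description concrete, here is a minimal P4-16 sketch of the pattern Mario describes: a header definition, a parser that extracts it, and a match-action table whose entries supply an action and its parameters. All names are hypothetical, and it targets the open-source v1model architecture rather than Pensando's hardware, so treat it as an illustration of the language, not of Pensando's code.

```p4
#include <core.p4>
#include <v1model.p4>

// Describe what packets look like: a single Ethernet header.
header ethernet_t {
    bit<48> dst_addr;
    bit<48> src_addr;
    bit<16> ether_type;
}

struct headers_t  { ethernet_t ethernet; }
struct metadata_t { }

// Parse the wire format declared above into header fields.
parser MyParser(packet_in pkt, out headers_t hdr,
                inout metadata_t meta, inout standard_metadata_t std) {
    state start {
        pkt.extract(hdr.ethernet);
        transition accept;
    }
}

control MyVerifyChecksum(inout headers_t hdr, inout metadata_t meta) { apply { } }

control MyIngress(inout headers_t hdr, inout metadata_t meta,
                  inout standard_metadata_t std) {
    // The entry found at lookup time carries the action and its parameter.
    action forward(bit<9> port) { std.egress_spec = port; }
    action drop_pkt() { mark_to_drop(std); }

    table l2_fwd {
        key = { hdr.ethernet.dst_addr : exact; } // header field used as lookup key
        actions = { forward; drop_pkt; }
        default_action = drop_pkt();
    }
    apply { l2_fwd.apply(); }
}

control MyEgress(inout headers_t hdr, inout metadata_t meta,
                 inout standard_metadata_t std) { apply { } }

control MyComputeChecksum(inout headers_t hdr, inout metadata_t meta) { apply { } }

// Reconstruct the outgoing packet from the (possibly modified) headers.
control MyDeparser(packet_out pkt, in headers_t hdr) {
    apply { pkt.emit(hdr.ethernet); }
}

V1Switch(MyParser(), MyVerifyChecksum(), MyIngress(), MyEgress(),
         MyComputeChecksum(), MyDeparser()) main;
```

Note that the control plane, not the program, populates l2_fwd at runtime, which is exactly the split Mario describes: the P4 program fixes the packet formats, tables, and actions, while the installed table entries decide what actually happens to each packet.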
>> Yeah, absolutely. I think that resonates, Mario. You know, I used to do presentations about the networking industry and you would draw timelines out there in decades, because for the standard to get deployed, for, you know, the hardware to get baked, the customers to do the adoption, things take a really long time. You brought up, you know, edge computing; obviously, you know, it is really exciting, but it is changing really fast, and there's a lot of different, you know, capabilities out there. So if you could help us, you know, connect the dots between what P4 does and what the customers need. You know, we talked about multi-cloud and edge. What is it that, you know, P4 in general, and what Pensando is doing with P4 specifically, enables in this next generation architecture? >> Yeah, sure. So, Pensando has developed this card, which we call the DSC, distributed services card, that is built around an ASIC that has a very, very versatile architecture. It's fully programmable, and it's programmable at various levels, and one of them is in fact P4. Now, this card has a PCIe interface, so it can be installed in hosts. And by the way, this is not the only way this powerful ASIC can be deployed; it's the first way Pensando has decided to use it. And so we have this card, it can be plugged into a host, it has two network interfaces, so it can be used as a network adapter. But in reality, because the card is fully programmable and it has several processors inside, it can be used to implement very sophisticated services. Things that you wouldn't even dream of doing with a typical network adapter, with a typical NIC. So in particular, this ASIC contains a sizable amount of memory. Right now we have two sizes, four and eight gig, but we are going to have versions of the card with even larger memory. Then it has some specialized hardware for specific functions, like cryptographic functions, compression, computation of CRCs, and a sophisticated queuing system with a packet buffer, to handle the packets that have to go out to the interfaces or are coming in from the interfaces. Then it has several types of processors. It has generic processors, specifically ARM processors, that can be programmed with general purpose languages. And then a set of processors that are specific for packet processing, that are organized in a pipeline. And those are designed to be programmed with P4. We can very easily map a P4 program onto that pipeline of processors. So that's where Pensando is leveraging P4: as the language for programming those processors, which allows us to process packets at the line rate of the 200 gigabit interfaces that we have on the card. >> Great. So Mario, what about from a customer viewpoint? Do they need to understand, you know, how to program in P4, or is this transparent to them? What's the customer interaction with it? >> Oh yeah, not at all. Pensando is offering a platform that is a completely turnkey solution. Basically, first of all, the platform has a controller with which the user interacts; the user can configure policies on this controller. So using an intent-based paradigm, the user defines policies, and the controller is going to push those policies to the cards. So in your data center, in your hosts, you can deploy thousands of those cards. Those cards implement distributed services. Let's say, just to give a very simple example, a distributed stateful firewall implemented on all of those cards. The user writes a security policy that says this particular application can talk to this other particular application, and it is then translated into configuration for those cards. It's transparently deployed on the cards, which start enforcing the policies. So the user can use the system at this very high level. However, if the user has more specific needs, the platform offers several interfaces and several APIs to program the platform through those interfaces. The one at the highest level is a REST API to the controller. So if the customer has an orchestrator, they can use that orchestrator to automatically send policies to the controller.
Or if customers already have their own controller, they can interact directly with the DSCs, the cards on the hosts, with another API that's fully open and based on gRPC. And in this way, they can control the cards directly. If they need something even more specific, if they need a functionality that Pensando doesn't offer on those cards, hasn't already written software for, then customers can program the card, and the first level at which they can program it is the ARM processors. We have ARM processors running a version of Linux, so customers can program them by writing C code or Python. And if they have very specific needs, when they write software for the ARM processors, they can leverage the P4 code that we have already written for the card, for those specialized packet processors. So they can leverage all of the protocols that our P4 program already supports. And by the way, because that's software, they can pick and choose from a large library of many different protocols and features we support, and decide to deploy them and then integrate them in their software running on the ARM processors. However, if they want to add their own proprietary protocols, if they need to execute some functionality at very high performance, that's when they can write P4 code. And even in that case, we are going to make it very simple for them, because they don't have to write everything from scratch. They don't have to worry about how to process IP packets or how to terminate TCP; we have already written the P4 code for them. They can focus just on their own feature. And we are going to give them a development environment that allows them to focus on their own little feature and integrate it with the rest of our P4 program. Which, by the way, is something that P4 is not designed for; P4 is not designed for having different programmers write different pieces of the program and put them together. But we have the means to enable this. >> Okay, interesting. So, you know, maybe bring us inside a little bit, you know, the P4 community; you're very active in it. When I look online, there's a large language consortium; many of, you know, all the hardware and software companies that I would expect in the networking space are on that list. So what's Pensando's participation in the community? And you were just teasing through, you know, what does P4 do, and then what does Pensando maybe enable, you know, above and beyond what, you know, P4 just does on its own? >> Yeah, so yes, Pensando is very much involved in the community. There has been recently an event, an online event, that substituted for the yearly P4 workshop. It was called the P4 Expert Roundtable Series, and Pensando had very strong participation. Our CTO, Vipin Jain, had the keynote speech, talking about how P4 can be extended beyond packet processing. P4, we said, has been designed for packet processing, but today there are many applications that require message processing, which is more sophisticated, and he gave a speech on how we can go in that direction. Then we had a talk, resulting from a submission that was reviewed and accepted, on, in fact, the architecture of our ASIC, and how it can be used to implement many interesting use cases. And finally, we participated in a panel in which we discussed how to use P4 in SmartNICs at the edge of the network.
And there we argued, with some use cases, examples, and code, how P4 needs to be extended a little bit, because NICs have different needs and open up different opportunities than switches. Now, P4 was never really meant only for switches, but if we look at what happened, the community has worked mostly on switches. For example, it has defined what is called the PSA, the Portable Switch Architecture. And we see that NICs and edge devices have a little bit different requirements. So, one of the things we are doing within the community is working within one of the working groups, called the architecture working group, to create the definition of a PNA, a Portable NIC Architecture. Now, we didn't start this activity; this activity had started already in 2018, but it slowed down significantly, mostly because there wasn't so much of a push. So now Pensando coming on the market with this new architecture really gave new life to this activity. And we are contributing actively: we have proposed a candidate for a new architecture, which has been discussed within the community. And, you know, just to give you an example, why do we need a new architecture? There are several reasons, but one is very intuitive. If you think of a switch, you have packets coming in, they're being processed, and packets go out. As we said before, the PSA architecture is meant for this kind of operation. If you think of a NIC, it's a little bit different, because yes, you have packets coming in, and yes, if you have multiple interfaces like our card, you might take those packets and send them out. But most likely what you want to do is process those packets and then not give the packets to the host, because otherwise the host CPU will have to process them again, to parse them again. You want to give some artifacts to the host, some pre-processed information. So you want to, I don't know, take those packets, for example, reassemble many TCP segments, and provide the stream of bytes coming out of that TCP connection. Now, this requires a completely different architecture: packets come in, something else goes out. And it goes out, for example, through a PCIe bus. So, you need a somewhat different architecture, and then you will need, in the P4 language, different constructs to deal with the fact that you are modifying memory, you are moving data from the card to the host and vice versa. So again, back to your question, how are we involved in the work groups? We are involved in the architecture working group right now to define the PNA, the Portable NIC Architecture. And also, I believe in the future we will be involved in the language group to propose some extensions to the language. >> Excellent. Well, Mario, thank you so much for giving us a deep dive into P4, where it is, and, you know, some of the potential futures for where it will go. Thanks so much for joining us. >> Thank you. >> Alright. I'm Stu Miniman, thank you so much for watching theCUBE. (gentle music)

Published Date : Jun 17 2020

Silvano Gai, Pensando | Future Proof Your Enterprise 2020


 

>> Narrator: From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, and welcome to this CUBE conversation. I'm Stu Miniman, and I'm coming to you from our Boston area studio. We've been digging in with the Pensando team, understanding how they're fitting into the cloud, multi-cloud, edge discussion, and really thrilled to welcome to the program a first-time guest, Silvano Gai, he's a fellow with Pensando. Silvano, really nice to see you again, thanks so much for joining us on theCUBE. >> Stuart, it's so nice to see you. We used to work together many years ago and that was really good, and it is really nice to come to you from Oregon, from Bend, Oregon, a beautiful town in the high desert of Oregon. >> I do love the Pacific Northwest. I miss the planes and the hotels, I should say, I don't miss the planes and the hotels, but going to see some of the beautiful places is something I do miss, and getting to see people in the industry I do like. As you mentioned, you and I crossed paths back through some of the spin-ins, back when I was working for a very large storage company and you were working for Cisco. You were known for writing the book, you were a professor in Italy; many of the people that worked on some of those technologies were your students. But Silvano, my understanding is you retired, so maybe share for our audience what brought you out of that retirement and into working once again with some of your former colleagues on the Pensando opportunity. >> I did retire for a while. I retired in 2011 from Cisco, if I remember correctly. But at the end of 2016, beginning of 2017, some old friends that you may remember and know called me to discuss some interesting ideas, which were basically the seed idea that is behind the Pensando product, and the ideas were interesting. What we built, of course, is not exactly the original idea, because, you know, products evolve over time, but I think we have something interesting that is adequate and probably superb for the new way to design the data center network, both for enterprise and cloud. >> All right, and Silvano, I mentioned that you've written a number of books, really the authoritative look when new products have been released before. So, you've got a new book, "Building a Future-Proof Cloud Infrastructure," and look at you, you've got the physical copy; I've only gotten the soft version. The title is really interesting. Help us understand how Pensando's platform is meeting that future-proof cloud infrastructure that you discuss. >> Well, networks have evolved dramatically in the data center and in the cloud. You know, now the speed of a classical server in enterprise is probably 25 gigabits; in the cloud we are talking of 100 gigabits of speed for a server, going to 200 gigabit. Now, the backbones are ridiculously fast. We no longer use Spanning Tree and all that stuff, we no longer use access-aggregation-core designs. We switched to Clos networks, and with Clos networks we have a huge, enormous amount of bandwidth, and that is good, but it also implies that it is not easy to do services in a centralized fashion. If you want to do a service in a centralized fashion, what you end up doing is creating a giant bottleneck. Basically, there is this word that is being used, that is trombone or tromboning: you try to funnel all this traffic through the bottleneck, and this is not really going to work.
The only place that you can really do services is at the edge, and this is not an invention; I mean, even the principle of cloud is to move everything to the edge and maintain the network as simple as possible. So we approach services with the same general philosophy. We try to move services to the edge, as close as possible to the server, basically at the border between the server and the network. And when I say services, I mean three main categories of services. The networking services, of course: there is the basic Layer 2, Layer 3 stuff, plus the bonding, you know, VLANs, and what is needed to connect a server to a network. But then there is the overlay, overlays like VXLAN or Geneve, very, very important, basically to build a cloud infrastructure, and those are basically the network services. We can have others; some people want to run BGP, some people don't want to run BGP, there may be VPNs or things like that, but that is the core of the network services. Then of course, and we go back to the time we worked together, there are storage services. At that time, we were discussing mostly Fibre Channel; now the bus world is clearly NVMe, but it's not just the bus, it's really a new way of doing storage, and it is very, very interesting. So, NVMe kinds of services are very important, and NVMe has a version that is called NVMe-oF, over fabrics, which is basically a sort of remote version of NVMe. And then the third, last but not least, and probably the most important category, is security. And when I say that security is very important, you know, the fact that security is very important is clear to everybody these days, and I think security has two main branches in terms of services. There is the classical firewall and micro-segmentation, in which you basically try to enforce the fact that only who is allowed to access something can access it, but you don't, at that point, care too much about the privacy of the data. Then there is the other branch, which is encryption, in which you are not trying to enforce or decide who can or cannot access the resource, but you are basically caring about the privacy of the data: encrypting the data so that if it is hijacked, snooped, or whatever, it cannot be decoded. >> Excellent. So Silvano, absolutely the edge is a huge opportunity. When someone looks at the overall solution and says you're putting something at the edge, you know, they could just say, "This really looks like a NIC." You talked about some of the previous engagements we'd worked on, host bus adapters, smart NICs, and the like. There were some things we could build in, but there were limits that we had. So, what differentiates the Pensando solution from what we would traditionally think of as an adapter card in the past? >> Well, the Pensando solution has multiple pieces, but in terms of hardware, it has two main pieces. There is an ASIC that we call Capri internally. That ASIC is not strictly limited to being used in an adapter form; you can deploy it also in other form factors, in other parts of the network, in other embodiments, et cetera. And then there is a card. The card has a PCIe interface and sits in a PCIe slot. So yes, in that sense, somebody can call it a NIC, and since it's a pretty good NIC, somebody can call it a smart NIC.
We don't really like those two terms; we prefer to call it a DSC, a distributed services card, but the real term that I like to use is domain specific hardware, and I like to use domain specific hardware because it's the same term that Hennessy and Patterson use in a beautiful piece of literature that is their Turing Award lecture. It's on the internet, it's public; I really ask everybody to go and try to find it and listen to that beautiful piece of literature, modern literature on computer architecture: the Turing Award lecture of Hennessy and Patterson. They have introduced the concept of domain specific hardware, and they explain also the justification for why it is now important to look at domain specific hardware. And the justification, basically in a nutshell, and we can go deeper if you're interested, is that SPECint, which is the single-thread performance measurement of a CPU, is not growing fast at all; it is only growing nowadays a few percent a year, maybe 4% per year. And with this slow growth of the single-thread performance of a core, you know, the cores need to be really used for user applications, for customer applications, and everything that is ancillary can be moved to some domain specific hardware that can do it in a much better fashion. And by no means do I imply that the DSC is the best example of domain specific hardware. The best example of domain specific hardware is in front of all of us, and that is GPUs. And not GPUs for graphics processing, which are also important, but GPUs used basically for artificial intelligence, machine learning inference. You know, that is a piece of hardware that has shown that something can be done with a performance that no general purpose processor can match. >> Yeah, it's interesting, right. If you turn back the clock 10 or 15 years ago, I used to be in arguments, and you'd say, "Do you build an offload, or do you let it happen in software?" And I was always like, "Oh, well, Moore's law will mean that, you know, the software solution will always win, because if you bake it in hardware, it's too slow." It's a very different world today; you talk about how fast things speed up. From your customer standpoint though, often some of those architectural things are something that I've looked for my suppliers to take care of. Speak to the use case: what does this all mean from a customer standpoint, what are some of those early use cases that you're looking at? >> Well, as always, you get a bit surprised by the use cases, in the sense that you start to design a product thinking that some of the coolest things will be the dominant use cases, and then you discover that something you had never really thought of has the most interesting use case. One that we have thought about since day one, but that is really becoming super interesting, is telemetry: basically, measuring everything in the network and understanding what is happening in the network. I was speaking with a friend the other day, and the friend was asking me, "Oh, but we have had SNMP for many, many years; what is the difference between SNMP and telemetry?" And the difference, to me, the real difference, is that in SNMP, or in many of these management protocols, you involve a management plane, you involve a control plane, and then you go to read something that is in the data plane. But the process is so inefficient that you cannot really get a huge volume of data, and you cannot get it frequently enough, with enough performance.
Doing telemetry means thinking about the data path, building a data path that is capable of not only measuring everything in realtime, but also sending out those measurements without involving anything else, without involving the control path and the management path, so that the measurement becomes really very efficient, and the data that you stream out becomes really usable data, actionable data, in realtime. So telemetry is clearly the first one, and it is important. One that, honestly, we had built but we weren't thinking was going to have so much success, is what we call bidirectional ERSPAN. It is basically just the capability of copying data and sending the data that the card sees to a station. And that is very, very useful for replacing what are called TAP networks, which are just networks that many customers put in parallel to the real network, just to observe the real network and to be able to troubleshoot and diagnose problems in the real network. So these two features, telemetry and ERSPAN, which are basically troubleshooting features, are the two features that at the beginning are getting the most traction.
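The distinction Silvano draws, counting in the data plane itself instead of polling through the management and control planes, is easy to see in P4. Below is a hedged, illustrative v1model fragment, not Pensando's code: a direct counter attached to a flow table, so packet and byte counts are updated at line rate as packets traverse the pipeline, and the management side only reads or streams the accumulated values. It assumes the parser/deparser boilerplate of the earlier sketch, extended with a hypothetical ipv4_t header parsed into hdr.ipv4.

```p4
// Ingress control only; surrounding v1model boilerplate as in the earlier
// sketch, extended to parse an IPv4 header into hdr.ipv4.
control TelemetryIngress(inout headers_t hdr, inout metadata_t meta,
                         inout standard_metadata_t std) {
    // Per-entry packet and byte counters, updated by the data plane itself.
    direct_counter(CounterType.packets_and_bytes) flow_stats;

    action count_flow() {
        flow_stats.count();   // no control-plane involvement per packet
    }

    table flow_telemetry {
        key = {
            hdr.ipv4.src_addr : exact;
            hdr.ipv4.dst_addr : exact;
        }
        actions = { count_flow; NoAction; }
        counters = flow_stats;        // attach the counters to this table
        default_action = NoAction();
    }

    apply {
        if (hdr.ipv4.isValid()) {
            flow_telemetry.apply();
        }
    }
}
```

Reading flow_stats periodically, or streaming it out, is then a control-plane read of already-aggregated state, rather than per-packet polling through SNMP-style indirection.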
In the cloud provider, basically this cloud provider have a huge management infrastructure that is already built and they want just the card to adapt to this, to be controllable by this huge management infrastructure. They already know which rule they want to send to the card, they already know which feature they want to enable on the card. They already have all that, they just want the card to provide the data plan performers for that particular feature. So they're going to build something particular that is specific for that particular cloud provider that adapt to that cloud provider architecture. We want the flexibility of having an API on the card that is like a rest API or a gRPC which they can easily program, monitor and control that card. When you look at the enterprise, the situation is different. Enterprise is looking to at two things. Two or three things. The first thing is a complete solution. They don't want to, they don't have the management infrastructure that they have built like a cloud provider. They want a complete solution that has the card and the management station and there's all what is required to make from day one, a working solution, which is absolutely correct in an enterprise environment. They also want integration, and integration is the tool that they already have. If you look at main enterprise, one of a dominant presence is clearly VMware virtualization in terms of ESX and vSphere and NSX. And so most of the customer are asking us to integrate with VMware, which is a very reasonable demand. And then of course, there are other player, not so much in the virtualization's space, but for example, in the data collections space, and the data analysis space, and for sure Pensando doesn't want to reinvent the wheel there, doesn't want to build a data collector or data analysis engine and whatever, there is a lot of work, and there are a lot out there, so integration with things like Splunk for example are kind of natural for Pensando. >> Eccellent, so wait, you talked about some of the places where Pensando doesn't need to reinvent the wheel, you talk through a lot of the different technology pieces. If I had to have you pull out one, what would you say is the biggest innovation that Pensando has built into the platform. >> Well, the biggest innovation is this P4 architecture. And the P4 architecture was a sort of gift that was given us in the sense that it was not invented for what we use it. P4 was basically invented to have programmable switches. The first big P4 company was clearly Barefoot that then was acquired by Intel and Barefoot built a programmable switch. But if you look at the reality of today, the network, most of the people want the network to be super easy. They don't want to program anything into the network. They want to program everything at the edge, they want to put all the intelligence and the programmability of the edge, so we borrowed the P4 architecture, which is fantastic programmable architecture and we implemented that yet. It's also easier because the bandwidth is clearly more limited at the edge compared to being in the core of a network. And that P4 architecture give us a huge advantage. If you, tomorrow come up with the Stuart Encapsulation Super Duper Technology, I can implement in the copper The Stuart, whatever it was called, Super Duper Encapsulation Technology, even when I design the ASIC I didn't know that encapsulation exists. 
Is the data plan programmability, is the capability to program the data plan and programming the data plan while maintaining wire-speed performance, which I think is the biggest benefit of Pensando. >> All right, well Silvano, thank you so much for sharing, your journey with Pensando so far, really interesting to dig into it and absolutely look forward to following progress as it goes. >> Stuart, it's been really a pleasure to talk with you, I hope to talk with you again in the near future. Thank you so much. >> All right, and thank you for watching theCUBE, I'm Stu Miniman, thanks for watching. (upbeat music)

Published Date : Jun 17 2020

Recep Ozdag, Keysight | CUBEConversation


 

>> Announcer: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a Cube conversation. >> Hey, welcome back, get ready, Jeff Frick here with the Cube. We're in the studios for a Cube conversation. It's the middle of the summer, the conference season has slowed down a little bit, so we get a chance to do more Cube conversations, which is always great. Excited to have our next guest: he's Recep Ozdag, a VP and GM from Keysight. Recep, great to see you. >> Thank you for hosting us. >> Yeah. So we've had Marie on a couple of times, we had Bethany on a long time ago, before the acquisition. But for people that aren't familiar with Keysight, give us kind of a quick overview. >> Sure, sure. So I'm within the Ixia Solutions Group. Ixia really started, was founded, back in '97. It IPO'd around 2000; it really started as a test and measurement company, and quickly after the IPO became the number one vendor in the space. It grew quickly, and around 2012 and 2013 acquired two companies, Net Optics and Anue. Net Optics and Anue were in the visibility, or monitoring, space, selling taps, bypass switches, and network packet brokers. So that formed the Visibility Group within Ixia. And then around 2017, Keysight acquired Ixia and we became ISG, or the Ixia Solutions Group. Now, Keysight is also a very large test and measurement company. It is the actual original HP startup that started in Palo Alto many years ago. And HP, of course, grew; it also started as a test and measurement company, then later on it moved into printers and servers. HP spun off Agilent; Agilent became the test and measurement company. And then around 2014, I would say, or '15, Agilent spun off the test and measurement portion, which became Keysight; Agilent continued as a life sciences organization. And so Keysight really got the name around 2014 after spinning off, and they acquired Ixia in 2017. So the majority of the business is test and measurement, but we do have that visibility and monitoring organization too. >> Okay, so you do the test and measurement really on devices, kind of pre-production, making sure these things are up to speed, and then you're actually doing the monitoring in live production systems? >> Mostly. The only thing that I would add is that now we are getting into live network testing too. We see that mostly in the service provider space: before you turn on the service, you need to make sure that all the devices and all the services come up correctly. But also we're seeing it in enterprises too, particularly with security assessments: breach assessments, attacks; is your IT organization really protecting the network? So we're seeing that become more and more important, and they're pulling in test, particularly for security, in that area too. So as you say, it's mostly device testing, but that's moving into network infrastructure and security networks. >> Right. So you've been in the industry for a while, and you've been through a couple of acquisitions; you've seen a lot of trends. So there's a lot of big macro things happening right now in the industry; it's exciting times, and one of the ones, actually, you just talked about it at Cisco Live a couple weeks ago, is edge computing. There's a lot of talk about edge: is edge the new cloud? You know, how much compute can move to the edge? What do you do in a crazy oilfield with hot temperatures and no power? I wonder if you can share some of your observations about edge.
Your kind of point of view as to where we're heading, and what should people be thinking about when they're considering, yeah, what does edge mean to my business? >> Absolutely, absolutely. So when I say edge computing, I typically include IoT and edge networks along with remote and branch offices. And obviously we can see the impact of IoT: security cameras, thermostats, smart homes, home automation, factory automation, hospital automation. Even planes have sensors on their engines right now, for monitoring purposes and diagnostics. So that's one group. But then, we know in our everyday lives, enterprises are growing very quickly, and they have remote and branch offices. More people are working remotely, more people are working from home, so that means that more data is being generated at the edge. Whether it's IoT sensors or the edge computing we see with oil and gas companies, it doesn't really make sense to generate all that data and then just send it to the cloud. You know, just imagine a self-driving car: you need to capture a lot of data and you need to process it. You can't really just send it to the cloud, expect a decision to be made, and then come back so that you turn left or right; you need to actually process all that data right at the edge, where the source of the data is. And that means pushing more of that compute infrastructure closer to the source. That also means running business-critical applications closer to the source. And that means, you know, um, it's more of a massively distributed compute architecture. Um, what happens is that you have to then reliably connect all these devices, so connectivity becomes important. But as you distribute compute as well as applications, your attack surface increases, right? Because all of these devices are very vulnerable. We're probably adding about 5,000,000 IoT devices every day to our networks, so that's a lot of IoT devices, or edge devices, that we connect, and many of these devices, you know, we don't really properly test. You probably know from your own home: you can just buy something and easily connect it to your WiFi. Similarly, people buy something, go to their work, and connect it to the WiFi. Now that device is connected to your entire network. So vulnerabilities in any of these devices expose the entire network to that same vulnerability. So our attack surface is increasing, so connection reliability as well as security for all these devices is a challenge. So we enjoy edge computing, IoT, branch and remote offices, but it does pose those challenges, and that's what we're here to do with our tech partners: to solve these issues. >> Right. It's just interesting to me on the edge, because you still have kind of the three big, you know, compute things: you've got the networking, right, which is just going to be addressed by 5G and a lot better bandwidth and connectivity, but you still have storage and you still have compute, and you've got to get power to those things. So as you're thinking about the distribution of that compute and storage at the edge versus in the cloud, you've got the latency issue. It seems like a pretty delicate balancing act, and people are going to have to tune these systems to figure out how much to allocate where, and you will have physical limitations at, you know, the power plant out in the middle of nowhere. >> It's a great point, and you typically get agility at the edge. Obviously, you don't have power, because these devices are small.
Even if you take a remote or branch office with 50 to 100 employees, there's only so much compute that you have, but you need to be able to make decisions quickly there. So agility is there. But obviously the vast amounts of compute and storage are more in your centralized data center, whether it's in your private cloud or your public cloud. So how do you make the compromise? When do you run applications at the edge, and when do you run applications in the cloud, private or public? It is, in fact, a compromise, and you might have to balance it, and it might change all the time. Just as, you know, if you look at our traditional history of compute: we had the mainframes, which were centralized, and then it became distributed, then centralized, then distributed. So this changes all the time, and you have to make decisions, which brings up the issue of, I would say, hybrid IT. You know, they have the same issue. A lot of enterprises have more of a, um, hybrid IT strategy, or multi-cloud. Where do you run the applications? Even if you forget about the edge: do you run it on-prem? Do you run it in the public cloud? Do you move it between cloud service providers? Even that is a small optimization problem. It's now made even bigger with edge computing. >> Right. So the other thing that we've seen time and time again, a huge trend, right, is software defined. Um, we've seen it in the networking space, the compute space. Software defined is such a big deal now, and you've seen that. So when you look at it from a test and measurement point of view, when people are building out these devices, you know, obviously a ton of great functional capability is suddenly available to people. But in terms of challenges, and in terms of what you're thinking about in software defined from your side, because you're testing and measuring all this stuff: what's the goodness, what's the badness, how should people really think about the challenges of software defined, to take advantage of the tremendous opportunity? >> That's a really good point. I would say that with software defined networking, what we're really seeing is disaggregation. We typically had these monolithic devices that you would purchase from one vendor, and that one vendor would guarantee that everything just works perfectly. What software defined networking allows, or has created, is this disaggregated model. Now you can take that monolithic application, and whether it's a server or a hardware infrastructure, maybe you have a hypervisor or software layers, hardware abstraction layers, and many, many layers. Well, if you're trying to get that to work reliably, this means that now, in a way, the responsibility is on you to test all of these, to make sure that everything just works together, because now we have choice. Which software package should I install from which vendor? There are always slight differences. Which NIC vendor should I use? FPGA, SmartNIC, regular NIC? You go up through the layers: what kind of acceleration should I use, DPDK? There are so many options; you are responsible. So with SDN, you do get the advantage and the opportunity of choice, just like on our servers and our PCs, but this means that you do have to test everything, make sure that everything works. So this means more testing at the device level, more testing as the service is being brought up. So that's the pre-deployment stage, and once you deploy the service,
now you have to continually monitor it to make sure that it's working as you expected. So you get more choice, more diversity, and, of course, with disaggregation, you can take advantage of improvements at the hardware layer or the software layer. So there's the disaggregation advantage, but it means more work on test as well as monitoring. So, you know, there's always a compromise. >> Trade-off. Yeah. So, a different topic is security. Um, we were at RSA this year, we were in the Forescout booth, and I had a great chat with Michael DeCesare there. And he talked about, you know, you talked a little bit about increasing surface area for attack, and then, you know, we all know the statistics of how long it takes people to know that they've been breached. But Mike is funny; you know, they have a very simple sales pitch: they basically put their sniffer on your network and tell you that you've got eight times more devices on the network than you thought, because people are connecting all types of things. So when you look at, you know, kind of monitoring and test, especially with this increased surface area of all these IoT devices, especially with bring-your-own devices. And it's funny, the HVAC systems seem to be a really great place for bad guys to get in; I heard the other day, at a casino, uh, a connected thermometer in a fish tank in the lobby was the access point. How is this kind of changing your world? You know, how do you think about security? Because it seems like, in the end, everyone seems to be getting breached at some point in time. So it's almost more: how fast can you catch it, how do you minimize the damage, how do you take care of it, versus this assumption that you can stop the breaches? >> You know, that was a really good point that you mentioned at the end, which is, it's just better to assume that you will be breached at some point. And how quickly can you detect that? Because, on average, I think, according to research, it takes an enterprise about six months. Of course, there are enterprises where it takes a couple of years before they realize. And, you know, we hear this on the news, about millions of records exposed, billions of dollars of market cap lost. Forescout is a very close tech partner, and we typically deploy solutions together with these technology partners, whether it's APM, NPM, but very importantly, security. And if you think about it, there are terabytes of data in the network. Typically, many of these tools look at the packet data, but you can't really just take those terabytes of data and just throw it at all the tools; it just becomes financially impossible to provide security and deploy such tools in a very large network. So this is where we come in: with the taps we access the data, and with the packet brokers we essentially groom it, filtering it down to maybe the tens or hundreds of gigs that are really, really important. And then we feed it to our tech partners, such as Forescout and many others. That way, they can focus on providing security by looking at the packets that really matter. For example, you know, some solutions only need to look at the packet header; you don't really need to send the payload. So if somebody is streaming Netflix or YouTube, maybe you just need to send the first megabyte of data, not the whole hundreds of gigs of that video. So that allows them to, it allows us, or helps us, to increase the efficiency of that tool.
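The grooming Recep describes, forwarding only the slice of each packet a tool actually needs, is easy to picture in P4, the packet-processing language covered in the interviews above. The sketch below is purely illustrative and hedged, not Keysight's implementation: a v1model ingress control that steers matching traffic to a hypothetical tool port and uses v1model's truncate() extern to keep only the first bytes of each packet.

```p4
// Ingress control only; assumes hdr.ipv4 was parsed as in the earlier sketches.
control GroomIngress(inout headers_t hdr, inout metadata_t meta,
                     inout standard_metadata_t std) {
    action to_tool(bit<9> tool_port) {
        std.egress_spec = tool_port;  // steer the packet toward the analysis tool
        truncate(128);                // keep only the first 128 bytes (the headers)
    }
    action drop_pkt() { mark_to_drop(std); }

    table groom {
        key = { hdr.ipv4.protocol : exact; }  // e.g., TCP = 6, UDP = 17
        actions = { to_tool; drop_pkt; }
        default_action = drop_pkt();          // unmatched traffic is not forwarded
    }

    apply {
        if (hdr.ipv4.isValid()) {
            groom.apply();
        }
    }
}
```

The 128-byte cut-off and the protocol-based match are arbitrary choices for the sketch; a real packet broker applies far richer filtering, but the principle, reduce the stream before the tool ever sees it, is the same.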
So the end customer can actually get a good ROI on that investment, and it allows Forescout, or any of the tech partners, to look at what's really important and do a better job of investigating: hey, have I been hacked? And of course, it has to be stateful, meaning that it's not just looking at one data flow on one side; it's looking at the whole communication, so you can understand: is this a malicious application that has now downloaded other malicious applications and is infiltrating my system? Is it a DDoS attack? Is it a hack? There's a whole ecosystem of attacks, and that's why we have so many companies in this space, many startups. >> It's interesting, we had Tom Siebel on a little while ago, actually at an AWS event, and his explanation of what big data means is that there's no sampling error. We often hear that, you know, prior to the big data days we would take a sample of data after the fact and then try to do some understanding, whereas now the more popular approach is real-time streaming engines, so we're getting all the data basically instantaneously and making decisions. But what you just brought out is that you don't necessarily want all the data all the time, because it can overwhelm the system; there needs to be a much better management approach to that. And as I look at some of the notes, you guys are now deploying 400 gigabit, which is bananas, because it seems like only yesterday that 100 gigabit Ethernet was a big deal. Talk a little bit about the hard-core technology changes that are impacting data centers and deployments, and, as this bandwidth goes through the ceiling, what people are physically having to do to handle it. >> Sure, sure. It's amazing: it took some time to go from 1 to 10 gig and then to 40 gig, but that time frame is getting shorter and shorter, from 40 to 100, and 100 to 400. I don't even know how we're going to get to the next phase, because the demand is there, and the demand is coming from a number of trends. One is really 5G, or the preparation for 5G. A lot of service providers have started to do trials, and they're upgrading their infrastructure, because 5G is going to make it easier to access vast amounts of data quickly, and whenever you make something easy for the consumer, they will consume more of it. So that's one aspect: the preparation for 5G is increasing the need for bandwidth and an infrastructure overhaul. The other piece is that with virtualization we're generating more east-west traffic, and because computing is distributed, that east-west traffic can traverse data centers and geographies. So it's not just contained within a server or within a rack; it actually goes to different locations. That also means your data center interconnect has to support 400 gig. So a lot of network equipment manufacturers, as we typically call them, are releasing or are about to release 400 gig devices, and on the test side they use our solutions to test these devices, obviously, because they want to release them based on the standards and make sure that they work. So that's the pre-deployment phase.
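A back-of-the-envelope calculation shows why each of those speed jumps stresses test gear: at line rate, every minimum-size Ethernet frame also occupies 8 bytes of preamble and a 12-byte inter-frame gap on the wire, so the packets-per-second a tester must generate and account for scales directly with link speed. A quick sketch:

```python
def line_rate_pps(gbps, frame_bytes=64):
    """Max packets/second for a given link speed and frame size."""
    wire_bits = (frame_bytes + 8 + 12) * 8  # frame + preamble + inter-frame gap
    return gbps * 1e9 / wire_bits

for rate in (10, 40, 100, 400):
    print(f"{rate:>3} GbE: {line_rate_pps(rate) / 1e6:8.1f} Mpps")
# 400 GbE works out to roughly 595 Mpps of minimum-size frames,
# i.e. a new packet to account for about every 1.7 nanoseconds.
```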
But once these 400 gig devices are deployed, typically by service providers, though as I mentioned we're slowly starting to see large enterprises deploy them too because of virtualization and distributed computing, then the question is: how do you make sure that your 400 gig infrastructure is operating at the capacity that you want, for NPM and APM, as well as providing security? So there's a pre-deployment phase, where we help on the test side, and then a post-deployment monitoring phase. But 5G is a big one; even though 5G services haven't actually been turned on yet, there's tremendous investment going on. In fact, Keysight, the larger organization, is helping with a lot of this device testing too. So it's not just Ixia but Keysight; it's consuming a lot of our time, just because we're having a lot of engagements on the cell phone side, on the device and endpoint side. It's a very interesting time that we're living in, because the changes are becoming more and more frequent, and it's very hard to adapt and make sure that you're riding that leading wave. >> In preparing for this, I saw you in another video, I can't remember which one it was, but your quote was, you know, they didn't create electricity by improving candles. I'm gonna steal that line, and I'll give you credit. But as you look back, I don't think most people have really grasped the step function here. 5G, you know, people talk about 5G and their phone, but it's not about your phone; this is the first kind of network built for machines. >> That's right. >> Machine data, the speed of machine data, and the quantity of machine data. As you sit back, and you've been in this business for a while, and you look at 5G, and you're sitting around talking to your friends at a party, maybe some family members who aren't in the business, how do you tell them what this means? What are people not really seeing when they're just thinking it's gonna be a handset upgrade, where they're completely missing the boat? >> Yeah, I think the regular consumer just thinks it's another handset: you know, I went from 3G to 4G, I saw a bump in speed, and some handset manufacturers are actually advertising 5G capable handsets, so I'm just going to go out and buy another cell phone. Behind the curtain, there's this massive infrastructure overhaul that a lot of service providers are going through, and it's scary, because I would say that a lot of them are not necessarily prepared. The investment that's pouring in is staggering, and the help that they need is one area that we're trying to accommodate, because the cell towers are being replaced, the end devices are being replaced, the data centers are being upgraded, small cell sites, you know, how do you provide coverage? What is the killer use case? Most likely it's probably going to be manufacturing, just because, as you said, it's machine to machine communication. That's where connected hospitals and connected manufacturing will come into play, and it's all this machine to machine communication generating vast amounts of data. And that ties back to edge computing, where the edge generates the data, but you then send some of that data, not all of it, to a centralized cloud, and you develop essentially machine learning algorithms, which you then push back to the edge.
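The loop just sketched — the edge keeps most telemetry local, forwards only the interesting slice for centralized training, and applies the model pushed back down — might look like the following. The threshold and the upload function are hypothetical stand-ins:

```python
import random

def upload_for_training(sample):
    """Hypothetical stand-in for sending one reading to the central cloud."""
    pass

model_mean = 50.0  # parameter pushed down from the cloud after each training cycle

readings = [random.gauss(50, 5) for _ in range(10_000)]  # simulated sensor data
outliers = [r for r in readings if abs(r - model_mean) > 15]  # roughly 3 sigma

for r in outliers:
    upload_for_training(r)  # only a small fraction of readings ever leave the edge

print(f"kept local: {len(readings) - len(outliers)}, uploaded: {len(outliers)}")
```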
The edge becomes more intelligent and we get better productivity, but it's all machine to machine communication. I would say that most of the 5G communication is going to be machine to machine communication; some small portion will be consumers FaceTiming or messaging and streaming, but that's going to be there. >> Exactly. >> Exactly. And that's going to change. Of course, we'll see other changes in our day to day lives. You know, a couple of companies attempted live gaming on the cloud in the past, and it didn't really work out, just because the network latency wasn't there. But we'll see that too, and we're seeing some of the products coming out from the likes of Google and other companies trying to push gaming into the cloud, something that wasn't really successful in the past. So those are things that I think consumers will see more of in their day to day lives, but the bigger impact is going to be for the enterprise. >> Recep, thanks for taking some time and sharing your insight. You know, you guys get to see a lot of stuff. You've been in the industry for a while, and you get to test all the new equipment as it's being built, so you have a really interesting vantage point to watch these developments from. Really exciting times. >> Thank you for inviting us. Great to be here. >> All right, he's Recep, I'm Jeff, you're watching theCUBE from theCUBE Studios in Palo Alto. Thanks for watching. We'll see you next time.

Published Date : Jun 20 2019

Keynote Analysis | Cisco Live US 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE covering Cisco Live, U.S. 2019. Brought to you by Cisco and its ecosystem partners. >> Welcome to sunny San Diego. Lisa Martin with theCUBE, live at Cisco Live in the U.S. here. I'm here the next three days with Stu Miniman and Dave Vellante. Gentlemen, great to see you. >> It is sunny. >> It is very sunny. >> Lisa, big 30th anniversary celebration here at Cisco Live. Where were you in 1989, you don't have to answer that. >> But I thought about that this morning, I know exactly where I was. So the 30th year of them doing a customer partner event. Other 30 year anniversary notables this year: Tetris is 30, Seinfeld premiered 30 years ago. That's kind of scary when you remember exactly where you were. So we came from the keynote just a minute ago, not a lot of news here, but Stu, let's start with you. In terms of where Cisco is, you guys were at Cisco Live Barcelona just a few months ago, John and I covered Cisco DevNet about six weeks ago, lots of excitement around these waves of 5G, Wi-Fi 6, compute architectures. Your thoughts on where Cisco is today, where they are in their transition to becoming more software and services? >> Yeah, so Lisa, that's a great place to start. We've been watching, the last two years that we've done theCUBE at their European and U.S. events, this transformation to become a software company. It's really interesting to see Chuck Robbins bring out this 30 year old box, and he's like, it's ribbon cables and multi-protocol routers and everything, and then most of the keynotes, most of the things that they're discussing, sure they had some boxes out there on display, I saw somebody on Twitter say they let all the cats out of the bag, 'cause they're all Cat 9000, Cat 6300, things like that, but it's software driven. The point they want to make is that cloud and software defined networking was going to destroy Cisco, well, here we are five or 10 years into some of these waves, and Cisco's still going strong. They have positioning in a lot of these environments. Cisco still does have a lot of hardware. When I look at how we track Cisco, it is more about the ports in the boxes than it is the software revenue, but they are climbing up the charts there, and they are becoming more software. They are showing up at all the cloud shows. When we were at Google Next, we talked to Cisco there. At AWS we talked to AppDynamics and many of the software pieces, and here in the DevNet zone, it's all about enabling developers, which is at the core of so much of what's happening for that software transformation. So Cisco, making good measurable progress. Still a nice robust mix of hardware and software. And personally, on the 30 years: I was actually at the 20 year reunion. I bumped into a friend of mine that we'd done a video with 10 years ago. We were comparing how we both have a little bit less hair than we did there, but it's amazing to think about the technologies we were looking at 10 years ago. Cloud was so early in some of these spaces, so a lot has changed in 10 years, and Cisco is continually matriculating the ball down the field, as they would say in the old analogy. >> And in terms of revenue, Dave, I was looking at their Q3 2019 report which was just a few weeks ago, sixth consecutive revenue growth quarter under Chuck Robbins. Your thoughts on where they are from a revenue perspective? >> Well, Cisco's been doing very well. The stock's been crushing it since 2011.
After the downturn, Cisco came out as a stronger company. They're at almost 50 billion dollars in annual revenue. They've got a 250 billion dollar market cap, which, as Stu and I were talking about, is almost a 5X revenue multiple, and that's a software-like revenue multiple. Hardware companies don't typically get that, I mean, unless you're like a Pure Storage and you're growing super fast. But so, this is a company with 60, almost 65% gross margins, and it's got 25% operating income. Again, that's like AWS; AWS is an incredibly profitable company. Just to put that into perspective, Oracle, which is predominantly a software company even though it has some hardware, has operating margins in the low to mid 30s, and that's an extremely profitable company. Cisco's got a net of 10 billion dollars in cash on the balance sheet, actually more, but it's got some debt if you're talking about the net debt, and it's growing at 5 or 6% a year. For a 50 billion dollar company, that's quite impressive. So I think to answer your question Lisa, they're doin' quite well from a revenue standpoint. Chuck has done a great job with Wall Street. They obviously trust him. The stock's up. It's on a, I wouldn't say a rocket ship, but Cisco is a cashflow machine. Now where do they allocate that capital? Obviously they spend some on R and D and operations. They spent seven and a half billion dollars last year on stock buybacks and dividends. So that's a big nut, and so Cisco's going to continue, in my opinion, to use its funds to obviously fund R and D, but also do stock buybacks, dividends, prop up the stock. >> Stu: And acquisitions. >> And acquisitions. Is that a good move? Well, so balancing organic R and D with acquisitions is good. We talked about the Meraki acquisition earlier. Obviously Cisco's done a lot of growth through its acquisitions, but I would say this: stock buybacks are a good idea when your stock is undervalued. Is Cisco undervalued? I don't know. Everything's up these days, hard to predict, but the concern that I have for companies like Cisco and Oracle, who do a lot of big buybacks, is that market sentiment flips; when sentiment shifts away from profit based companies like a Cisco or an Oracle, cashflow based companies, those stocks tend to depress, and then the market sentiment shifts again. So there might be some better buying opportunities ahead, but companies today who have a lot of cash, they have to do buybacks because they've got to keep Wall Street happy. >> So as we look at these big waves of the explosion of 5G, 400 gigabit ethernet, GPUs, AI everywhere, one of the things that Chuck Robbins said this morning made me think of the network as this common denominator in this changing architectural world we live in, hybrid multi-cloud. So going from their first show 30 years ago that was called Networker, what are your thoughts, Stu, we'll start with you, about where they're positioned with the network as really this common denominator in changing architectures, where the data that traverses it can be mined by organizations to extract insights, new value, new business models? Where does Cisco sit, in your opinion? >> Great question Lisa. So first of all we need to look at where does Cisco play, and where do they win? If you talk about the enterprise, switching and routing, they are dominant in that environment. We're going to be digging into some of the service providers.
With service providers, though, Cisco is not nearly as dominant as they are in the enterprise. Then if you talk about the hyperscale players, they don't do as much gear there, and that's where they're looking to have their software in there. Cisco wants to make sure that in this new hybrid multi-cloud world, wherever you live, there's going to be some piece of the stack that Cisco is part of. But there are opportunities for growth, and there are risks. Some of the traditional business: enterprises are not building as many data centers, and they're going to go to hosting providers, and therefore most of the network that companies manage isn't under their purview. They don't touch it, they don't cable it, they don't put any of that together, and so Cisco needs to be extending who they work with, and help with common interfaces across them. An area we spend a lot of time looking at is this multi-cloud management, where Cisco is going up against some of their traditional partners. People like VMware and Microsoft used to just be the software pieces that ran on top of Cisco; now they're going for some of that same piece of the market, because that is a control point, and Cisco needs to have leverage there, so can they be strong there? So it's interesting, some of these waves that we have, where Cisco plays and where they will have a lot of competition. >> So guys, I think as Cisco moves from just a purely data center player to all these other opportunities, and they talk about the bridge to possible, I see it as Cisco's in a position to connect all the world's data sources. When you talk about multi-cloud, Cisco's got an opportunity and a challenge to convince the world that its networks are higher performance, more cost effective, and more secure than everybody else's, and you saw David Goeckeler today put up a slide, and he talked about 1, 2, 3, 4, 5, 6 things. He said: automated, secure, agile, cheaper, easier to manage, drives business outcomes. Now, easier to manage, cheaper, automated, those are all cost efficiency sort of plays. So Cisco is in a good position because it's such a huge piece of the market, you know, two thirds of the market, and it's been able to maintain that. It doesn't quite have a monopoly, but it's been able to maintain that huge market share for a long time. >> And Dave, if I can, just a comment: number one is, Cisco has not been known to have the simplest networks out there, and in the past it was, if I wanted the best network I could get, I would buy only Cisco. Today, as you've said many times, Dave, today's multi-cloud is the old multi-vendor. Cisco, sure, they would do interops, and they would make sure to test it out, and they follow all the standards, and they drive a lot of the standards, but in today's world, if Cisco is not the dominant player in the market, will they win in those environments? And you look at something like 5G: Cisco's not the leader in 4G and LTE roll outs. They're working with the telecom providers, but they have a strong position with Meraki on the WiFi, so something like WiFi 6, and their strong connection between WiFi 6 and 5G, is there to make sure my indoor and outdoor can now work seamlessly. But there are areas Cisco's trying to go into that have not necessarily been their stronghold in the past, and at the end of the day, frictionlessness and simplicity are what's driving a lot of these cloud waves, and that's not Cisco traditionally.
>> Well, to that point, you know, complexity means cash historically in this business, and so 25% of Cisco's revenue comes from professional services, 60% from infrastructure, and then the balance is from other stuff. What's the point? The point is that Cisco is transitioning its business to more of a subscription model. Now, they talk about the huge growth they had in the subscriptions business, but they don't really tell you how much of their business is software. It's sort of opaque; you've got to kind of dig through that, but it's clearly on a big upswing. So Cisco's got to transition its business: you know, back in 1989 it was a lot of break/fix, right, then it became a lot more sort of consulting and other professional services. Now it's going more toward an as a service model, and maybe still some of the professional services too: how do I secure my network, how do I architect that, what about cloud, what about multi-cloud. A lot of opportunities there for services value add, but it has to transition. >> Speaking of security, wanted to kind of touch on that for a second, Dave. They just announced the intent to acquire Sentryo SAS, which is a cybersecurity company out of France for industrial control. Their cybersecurity is one of their fastest growing businesses. Is that an opportunity for Cisco to differentiate itself with respect to network security? >> Well, it's imperative. I mean, their security business grew 21% last quarter, which is what, triple, more than triple the overall company. What they said around that acquisition, and it made total sense to me, is that it used to be you would just invest in protecting the perimeter; that's where all the money went. Now, with things like the edge, and that's part of this acquisition, you've got to really secure the devices and the applications that are out there, but also, I think, increasingly the big opportunity is how do we respond? So things like Stealthwatch, and other machine intelligence and analytics, help organizations that, ultimately, we know are going to get breached, but the question is how do they respond? >> Yep, excellent. Well guys, I'm looking forward to three days of wall to wall coverage with you, talking with Cisco folks, DevNet folks, customers, partners. It's going to be bright. I think we can guarantee that, but it's going to be good. >> Yeah, we should say that we're here in the DevNet zone, right? So stop by and see us. A lot of action here. There'll be a lot of takeovers, and we'll be coverin' it. >> Yes, the Sails Pavilion, which feels just like that. All right guys, going to be a great week. I'm Lisa Martin for Stu Miniman and Dave Vellante, you're watching theCUBE, live from Cisco Live in sunny San Diego. Stick around, our guest lineup begins in just a minute. (upbeat music)

Published Date : Jun 10 2019

#scaletowin with Infinidat


 

(orchestral music) >> Hi everybody, my name is Dave Vellante, and welcome to this special CUBE community event. You know, customers are on a digital journey. They're trying to transform themselves into a digital business. What's the difference between a business and a digital business? Well, we think it's the way in which they use data. So we're here with a company, Infinidat, that's all about using data at multi-petabyte scale. We have news, we have announcements, we're gonna drill down with subject matter experts, and we're gonna start with Brian Carmody, who's the chief technology officer of Infinidat. Brian, it's good to see you again. >> Good to see you too, Dave. And I can't believe it's been a year. >> It has been a year since we last sat down. If you had to summarize, Brian, the last twelve months in one word, what would it be? >> How about two words, "insane growth". >> Insane growth, okay. >> Yes, yes. >> Talk about that. >> Yeah so, as of this morning at least, Infinidat has a hair over 4.6 exabytes of customer data under management, which is just insanely cool, and I'm not sure if I counted all of the zeroes properly, but it looks like it's around 180 trillion IOs served to happy customers so far as of this morning. >> Some mind boggling numbers. So let me ask you a question: is this growth coming from, sort of, traditional workloads? Is it new workloads, is it a mix? >> Oh, that's a great question. So you know, early in the Infinidat ramp, our early traction was with core banking, transaction processing applications. It was all about consolidation and replacing rows of VMAXes with a single floor tile, Infinibox. But in the past year, virtually all of our growth has been an expansion outside of that core, and it's a movement into greenfield applications. So basically, obviously, our customers are going into hardcore digital transformation, and this kind of changes the types of workloads that we're looking at, that we're supporting, but it also changes the value proposition. Consolidation and stuff like that is all about the bottom line, it's about making storage more efficient, but once we get into digital transformation, these greenfield applications, which is what most of our new growth is, it's actually all about using your digital infrastructure as a revenue generating machine for opening up new markets, new opportunities, new applications, et cetera. >> So when people talk about cloud native, that would be an example; using cloud native tool chains, that's what's happening on your systems. Is that correct? >> Yeah, absolutely. And I can give you some examples. So I recently spent a day with a group of engineers that are working with autonomous vehicle sensor data. So this is telemetry coming off of self driving cars. And they're working with these ridiculously large, like multi petabyte data sets, and the purpose of this system is to make the vehicles smarter, more resistant to collisions, and ultimately safer. A little bit before that, me and a bunch of other people from the team spent a day with another partner. They're also working with sensor data, but they're doing biometrics off of wearables. So they've perfected an algorithm that can, in real time, detect a heart attack from your pulse, and will immediately dispatch an ambulance to your geolocation, where hopefully your arm is still connected to your body.
And immediately send your electronic medical health records to that nearest hospital, and only then do you get a video call on your phone from a doctor who says, hey, are you sitting down? You're gonna be fine, you're having a heart attack, and an ambulance is gonna be there in two minutes. And the whole purpose of this is just to shave precious minutes off of that critical period of getting a person who's having a heart attack the medical care they need. >> Yeah, I'd say that's a non traditional workload. And the impact is saving lives, that's awesome. Now let's talk a little bit about your journey. You know, our friends at Gartner, they do these magic quadrants. A lot of people don't like 'em; I happen to think they're quite useful, as a guidepost. You guys have always been strong on the vision, and you've been executing. Where are you today in that quadrant? >> Yeah, it's an extreme honor. Gartner elevated us into the Leaders Quadrant last year, and customers take that very, very seriously. And the ability to execute axis is, what Gartner says is, are you influencing the market? Are you causing the incumbents to change their strategies? And with our disruptive pricing, with our reliability guarantees, our SLAs and stuff like that, Gartner felt like we met the criteria. And it's a huge honor, and we absolutely have our customers to thank for that, because the Magic Quadrant isn't about what you tell Gartner, it's about what your customers tell Gartner. >> Congratulations on that, and I know on Peer Insights you guys have done very well also. I want you to talk about the team; you're growing. To grow, you've gotta bring on good people. You've added some folks, talk about that a little bit. >> Yeah, yeah, well, speaking of Gartner, we got Stan Zaffos, who recently joined. He's gonna be running product marketing for us. We're working with Doc, so he's a legend in the industry, so we're delighted to have him on board. Also, Steiny came over from Pure to join us as our field CTO, another legend who needs no introduction. So really, really happy about that. But also, it's not just, those are guys that customers see; we're also experiencing this on the engineering side. So, for example, we recently were very amused to realize that there are now more EMC fellows working at Infinidat, if you count Moshe, than working at Dell EMC, which is just, you know, a humorous, kind of funny thing. So as the business has grown and gained momentum, you know, just like we're continuously amazed by the creativity and the things our customers are doing with data every day, I am continuously amazed and humbled by the caliber of people that I get to work with every day. >> That's awesome. >> We're really, really happy about that. >> All right, well, thank you for the recap of the past year. Let's get into some of the announcements today, and I wanna talk about the vision. So you have this Infinidat elastic data fabric; I'm interested in what that is, but I'm also, frankly, even more interested in why. What's the "why" behind that? >> Sure. So elastic data fabric is Infinidat's roadmap, and our shared vision with customers, for the future of enterprise storage. And the "why" is because customers demanded it. If we look at what's happening in the industry and the way that real customers are dealing with data right now, they have some of their data, and some of their workloads, running across public clouds.
Some of them are in managed service providers, some of them are SaaS, and then they have on premises storage arrays, and elastic data fabric is Infinidat's solution that glues all of that together. It turns it into a single platform that spans on premises, colo, Infinidat powered managed service providers, Google, Amazon and Azure, and it glues it into a single platform for running workloads. So over the course of these presentations, we're gonna drill down into some of the enabling technologies that make this possible, but the net net is that it is a brand new, next generation data plane. Within a customer data center, for example, it allows customers to cluster multiple Infiniboxes together into what we call availability zones, and then manage that as a single entity. And that scales from a petabyte up to an exabyte of capacity per data center; typically a customer would have one availability zone per data center, and then one availability zone that can span multiple clouds. So that's the data plane. The control plane is the ability to manage all of this, no matter where the data lives, no matter where the workload is or needs to be, and to manage it with a single pane of glass. And those are the kinds of pieces of enabling technology that we're gonna unpack in the technical sessions. >> Two questions on that, if I may. So you've got the data plane and the control plane; if I want to plug in to some other control plane, you know, a VMware control plane for instance, your API based architecture allows me to do that? Is that correct? >> Oh yeah, it's application aware, so for instance if you're running a VMware environment or a Kubernetes environment, it seamlessly integrates into that, and you manage it from a single API endpoint, and it's elastic, it scales up and down, and it's infinite and immortal. And probably the biggest problem that this solves for customers is it makes data migrations obsolete. It gives us the ability to decouple the data lifecycle from the hardware refresh lifecycle, which is a game changer for customers. >> I think you just answered my second question, which is what makes this unique? And that's at least one aspect, of course. >> Yeah, I mean, data migrations are the bane of customers' existence. And the larger the customer is, the more filer and array sprawl they have, the more of a data migration headache they have. So when we kicked this project off five years ago, our call to action, the kernel of an idea that became elastic data fabric, was: find a way to make it so that the next generation of infrastructure engineers graduating from college right now will never know what a data migration is, and make it a story that old men in our industry talk about. >> Well, that's huge, because it is the bane of customers' existence. Very expensive, minimum $50,000 per migration, and many, many months. Thanks Brian, for kicking this off. We've got a lot of ground to cover, and so we're gonna get into it now. We're gonna get into the news, we're gonna double click on some of the technologies and architectures, we're gonna hear from customers, and then it's your turn: we're gonna jump into the crowd chat and hear from you, so keep it right there. We'll be right back, right after this short break. (calming music) We're back with Doc D'Errico, the CMO of Infinidat. We're gonna talk about agility and manageability. Good to see you, Doc. >> Good to see you again, Dave.
>> All right, let's start in reverse order, let's start with manageability. What's your story there? >> Sure, happy to do that. You know Dave, we get great feedback from our customers on how simple and easy our systems are to manage. We have products like Infinimetrics, which give them a lot of insights into the system. We have APIs, very simple and easy to use. But our customers keep asking for more insights into their environment, leveraging the analytics that we already do. Now, you've also heard just now about our elastic data fabric, which is our vision, Infinidat's vision, for the data center, not just for today but into the future. And our first instantiation of that vision, answering those customer requests, is a new cloud based platform, initially to provide some better monitoring and analytics, but then it's going to go into data migrations, auto provisioning, storage availability zones, and really your whole customer experience with Infinidat. >> So to my understanding, this is a SaaS solution, is that correct? >> It is, it's a secure, multi site solution. In other words, all of your Infinidat systems, wherever they are around the world, are all visible through a single pane of glass. But the cloud based system gives us a lot of great power too. It gives us the agility to provide faster development and rapid enhancement based on feedback and feature requests. It also provides you customizable dashboards in your system, dashboards that we can create very rapidly, giving you advisors and insights into a variety of different things. And we have lots of customers who are already engaged in using this. >> So I'm interested in these advisors and insights; my understanding is you guys have got a data lake on the backend. You're mining that data, performing analytics on it. What kinds of benefits do customers get out of that? >> Well, they can search for things like abandoned volumes within their system, tracking the growth of their storage environment, configuration errors like asymmetric ports and paths, or even just performance behaviors, like abnormal latencies or bandwidth patterns. >> So when you're saying abandoned volumes, you're talking about, like, reclaiming wasted space? >> Absolutely. >> To be able to reuse it. I mean, people in the old days have done that because of a log structured file system, and they had to do it for performance, but you're doing it to give money back to the customers, is that right? >> That's exactly right. You know, customers very often get requests from business units to spin up additional volume sets, whether it be for a test environment or some specific application that they're running for some period of time. And then when they spin down the environment, they sometimes leave the data set there, thinking that they might need it again in the not so distant future, and then it sort of dies on the vine; it sits there taking up space and it's never used again. So we give them insights into when the last time things were accessed, how often they're accessed, what the IO patterns are, how many copies there might be, with snapshots and things like that. >> You mentioned strong customer feedback. Everybody says they get great customer feedback, but you've been with a lot of companies. How is this different, and what specifically is that feedback? >> Yeah, the analytics and insights are very unique; this is exactly what customers have been asking for from other vendors, and nobody does it. You know, we're hearing such great stories about the impact on their costs.
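A minimal sketch of the kind of abandoned-volume scan Doc describes. The volume records, field names, and 90-day threshold are hypothetical; a real implementation would pull this metadata from the array's analytics service rather than a local list:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # hypothetical reclamation policy

volumes = [  # stand-in records; really fetched from the monitoring platform
    {"name": "prod_db_01", "size_gb": 2048, "last_io": datetime(2019, 6, 1)},
    {"name": "test_clone_7", "size_gb": 512, "last_io": datetime(2018, 11, 3)},
]

now = datetime(2019, 6, 15)
abandoned = [v for v in volumes if now - v["last_io"] > STALE_AFTER]
for v in abandoned:
    print(f"candidate: {v['name']} ({v['size_gb']} GB, "
          f"idle since {v['last_io']:%Y-%m-%d})")
print(f"total reclaimable: {sum(v['size_gb'] for v in abandoned)} GB")
```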
Things like the capacity utilization: reclaiming all that abandoned capacity, and being able to put new workloads in and grow their environment without having to pay any additional costs, is exciting to them. Identifying and correcting configuration issues, getting ahead of performance problems before they occur. Our customers are already saving time and money by leveraging this in our environment. >> All right, let's pivot to agility. You've got Flex; what's your story there? What is Flex? >> Well Dave, imagine a world, if you will, where you didn't have to worry about hardware anymore. Right, it sounds like a science fiction story, but it's not. >> Sounds like cloud. >> It sounds like cloud, and people have been migrating to the cloud. In the public cloud environment, we have a solution that we talked about a year ago called Neutrix Cloud, providing a sovereign storage solution so that you can get the resilience and the performance of Infinibox or Infiniguard in your system today. But people want that experience on premises, so for the on premises experience we're announcing Infinibox Flex and Infiniguard Flex, an environment where you don't have to worry about the hardware. You manage your data; we'll manage the hardware. And you get to pay for what you use as you need it. You can scale up and down, and we'll guarantee the availability.
>> Okay, so that's a guarantee that you're making. >> That's a guarantee. >> Okay, read the fine print. But it sounds like the fine print is just what you said it is. >> It's pretty straight forward. >> Free hardware for life. Free, like a puppy? (laughs) >> No, free like in free, free meaning you're paying for the service, we're providing the capacity for you to put your data, and every three years, we will refresh that entire system with new hardware. And the minimum is three years, if you prefer because of your business practices to change that cycle, we'll work with you to find the time that makes the most sense. >> So I could do four years or five years if I wanted. >> You could do four years or five years. You could do three years and three months. And you'll get the latest and greatest hardware. We'll also, by the way provide the data migration services which is part of this cloud vision. So your not going to have to do any of the work. You're not going to have to pay for additional capital expense so that you have two sets of hardware on the floor for six months to a year while you do migration and work it into your schedules. We'll do that entire thing transparently for you in your environment, completely non disruptive to you. >> So you guys are all about petabyte scale. Hard enterprise problems, this isn't a mom and pop sort of small business solution, where do you see this play? Obviously service providers are gonna eat this stuff up. Give us some -- >> Yeah you know, service providers is a great opportunity for this. It's also a wonderful opportunity for Infiniverse. But any large scale environment this should be a shoo-in. And you know what, even if you're in a small scale environment that has a need that you wanna maintain that environment on premises, you're small scale, you wanna take advantage of your data more. You know you're going to grow your environment, but you're not quite sure how you're gonna do it. Or you have these sporadic workloads. Perhaps in the finance industry, you know we're in tax season right now, taxes just ended half a month ago right, there are plenty of businesses who need additional capacity for maybe four months of the year, so they can scale up for those four months and then scale back down. >> Okay, give us the bottom line on the customer impact. >> So the customer impact is really all about greater agility, the ability to provide that capacity and flexible model without big impact to their overall budget over the course of the year. >> All right Doc, thank you very much. Appreciate your time and the insight. >> It's my pleasure, Dave. >> All right, let's year from the customer, and we'll be right back. Right after this short break. >> Michael Gray is here, he's the chief technology officer of Boston based Thrive, Michael, good to see you. Thanks for coming on. >> Hey, glad to be here. >> So tell us about Thrive, what are you guys all about? >> You know, Thrive started almost 20 years ago as a traditional managed service provider. But really in the past four to five years transformed into a next generation managed service provider, primarily now, we're focusing on cyber security, cloud hosting and public cloud hosting, as well as disaster recovery. To me, and this is something that's primary to Thrive's focus, is application enablement. We're an application enablement company. 
>> Michael Gray is here, he's the chief technology officer of Boston based Thrive. Michael, good to see you, thanks for coming on. >> Hey, glad to be here. >> So tell us about Thrive, what are you guys all about? >> You know, Thrive started almost 20 years ago as a traditional managed service provider, but really in the past four to five years transformed into a next generation managed service provider. Primarily now we're focusing on cyber security, cloud hosting and public cloud hosting, as well as disaster recovery. To me, and this is something that's primary to Thrive's focus, it's application enablement; we're an application enablement company. So if your application is best run in Azure, then we wanna put it there. A lot of times we'll find that, just due to business problems or legacy technologies, we have to build private clouds, or even for security reasons we want to build private cloud, or purely just because we're running into a lot of public cloud refugees: you know, they didn't realize that a lot of the maybe incidental fees along the way actually climbed up to be a fairly big budget number. So we wanna really look at people's applications and enable them to be high performance but also highly secure. >> So I'm curious as to, when you brought in Infinidat, what the business impact was economically. There are all the sort of non TCO factors that I wanna explore, so was it the labor costs that got reduced, did you redeploy those resources? Was it actually the hardware, or? >> First and foremost, and you know this is going back many years, and I would say this is true for any data center cloud provider: the minute the phone rings and someone says my storage is slow, we're losing money. Okay, because we've had to pick up the phone and someone needs to address that. We have eliminated all storage performance help desk issues. It's now one thing I don't need to think about anymore. We know that we can rely on our performance, and we know we don't need to worry about that on a day to day basis; that is not in question. Now, the other thing is, really, as we started to expand our Infinidat footprint geographically, we suddenly started to realize, not only do we have this great foundation built, but we can leverage an investment we made to do things that we couldn't do before. Maybe we could do them, but they required another piece of technology; maybe we could do them, but they required some more licensing. Something like that. But really, when we started the standardization, we did it for operational efficiency reasons, and then suddenly realized that we had other opportunities here. And I have to hand it to Infinidat: they're actually the ones that helped us craft this story. Not only is this just a solid foundation, but it's something you can build on top of. >> Has that been your experience, that it's sort of reduced or eliminated traditional storage bottlenecks? >> Oh, absolutely, and you know, I mentioned before that storage performance has now become an afterthought to me. You know, a little bit of the way we look at our storage platform is from a performance standpoint, not a capacity standpoint. We can throw whatever we want at the Infinidat, and sort of the running joke internally is that we'll just smile and say, is that all you got? >> You mean, like, mixed workloads, so you don't have to tune each array for a particular workload? >> Yeah, and you know, I can imagine that someone who might be listening to what I'm saying would go, well hey, come on, it can't really be that good. And I'm telling you, from seeing it day to day: again, you can just throw the workloads at it, and it will do what it says it does. You don't see that every day. Now, as far as capacity goes, there's this capacity on demand model, which we're a huge fan of. They also have some other models, the Flex model, which is very useful for budgeting purposes. What I will tell you is you have to sacrifice at least one floor tile for Infinidat. It's very off-putting at first, on day one, and I remember my reaction.
But again, as I was saying earlier, when you start peeling back the pieces of the technology and why these things are the way they are, and the different flexibility on the financial side, you realize this actually isn't a downside, it's an upside. >> We're gonna talk performance with Craig Hebbert, who's vice president with Infinidat; he focuses on strategic accounts. Craig, thanks for coming on. >> Thanks for having me. >> All right, so let's talk performance. Everybody talks about performance, they have their benchmarketing, everybody's throwing Flash at the problem. You guys use Flash, but you didn't hop on that all Flash bandwagon. Why, and how are you different? >> Great question, we get it a lot with our customers. So we innovated: we spent over five years looking at the big picture, what the box would need today, what it would need in the future, and how we would arrive there economically. And so, as you said, we use a small amount of Flash, a small percentage, two, three, four percent of the total box, but we do it by having a foundation that nobody else has. Instead of throwing hardware at the solution, we have some specific mechanisms that nobody else has. We have a trie, which is a multi-value structure that allows us to dynamically trace and track all of the IOs that come into the box. We ship intelligence; everybody else ships dumb blocks of data, and so their only course of action to adopt new strategies is to bolt on the latest and greatest media. I've had a lot of experience at other companies where they've tried to shoehorn in new techniques, whether it be a NAS blade into an existing storage box, or thin provisioning after the fact, and things that are done after the design is complete never pan out very well. And the beauty with Infinibox is that all our protocols work the same way: iSCSI, NAS, block, it is all structured the same way, and that makes performance equal over all those protocols. It also makes it easy to manage via the same API structure. >> So you're claiming that you can give equivalent or better performance with a combination of Flash and spinning disk than your competitors who are all Flash. Can you kind of add some color to that? >> Absolutely. So we use DRAM; all of our writes are ingested into the box through DRAM. We have 130 microsecond latency, which is actually the lowest latency that fiber channel can attain, so we're able to do things very, very quickly. It's 800 times faster than NAND, which is what our competitors are using. We have no RAID structure on the SSD at all, so as things flow out of DRAM and go onto the SSD, our SSD is faster than everybody else's, even though we use the same media; there's a mechanism there that we optimize. We write in large sequential blocks to the SSD, so the wear rate isn't the same as what our competitors see. Everything we do is with an optimization, both for the present data and also the recall. And one of the things that culminates in a massive success for us, with those three tiers of data, and how we're able to outperform all Flash arrays, is that we hold data in cache for a massive amount of time. The average time a write is held in cache in something like a VMAX is around 13 seconds, the maximum is 28; we hold things for an astounding five minutes. And what that allows us to do is put profiles around things and remove randomness. Randomness is something that's plagued data storage vendors for years, whether it's random writes or random reads. If you can remove that randomness, then you can write out to what are the slowest spinning disks out there, the Nearline SAS drives, which are nonetheless the fastest disks for sequential read. So if everything you write out is sequential, you can use the lowest cost disk, the Nearline SAS disk, and maximize its performance. And it's that technology, those patterns, 138 patterns, that allow us to do all of the 38 steps in the process which augment our ability to serve customers' data at a vastly reduced price.
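A toy sketch of the cache-and-coalesce idea Craig describes: acknowledge writes from DRAM, hold them, then destage in address order so the Nearline drives see one sequential pass instead of thousands of seeks. The block size, addresses, and hold window are hypothetical, and the real system's pattern matching is far more sophisticated than a sort:

```python
import random

cache = {}  # LBA -> data, standing in for a DRAM write cache

def write(lba, data):
    cache[lba] = data  # acknowledged from DRAM in microseconds

def destage():
    """Flush cached blocks in ascending LBA order (sequentialized I/O)."""
    order = sorted(cache)
    for lba in order:
        pass  # a real system would write cache[lba] to disk here
    cache.clear()
    return order

for _ in range(1000):  # a burst of random-offset 4 KiB writes
    write(random.randrange(1_000_000), b"x" * 4096)

flushed = destage()
print(f"destaged {len(flushed)} blocks in one sequential pass")
```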
>> So your secret sauce is architectural intelligence, as you call it, and then you're able to provide lower cost media, and of course, if Flash were lower cost, you'd be able to use that. There's no reason that you couldn't. Is that correct? >> We could, but we wouldn't gain anything from it. A lot of customers say to us, why aren't you using more Flash, why don't you build an all Flash array, why don't you use NVMe? And actually, the next version of the software will ship NVMe capable, as well as ready for storage class memory. The reason we don't do it today is because we don't need it. Our customers have often said to us, why don't you use 16 gig fiber channel, or 32? And we haven't made that move because we don't move bottlenecks. We give customers a solution which is an end to end appliance, and so when we refresh the software stack and we change the config with it, we make sure that the fiber channel is upgraded, we make sure that the front end ports, the InfiniBand, everything comes with an uplift, so there's not just one single area of a bottleneck. We could use more SSD, but it would just be more money, and we wouldn't be able to give you any more performance than we are today. >> So you have some hard news today. Tell us about that. >> Yeah, I will. So we are a software company, and going back to gen one, I was here on day one when we started selling in the United States. When the first box was released it was 300,000 IOs; Moshe said he wanted a million IOs without changing the platform. We got up to about 900,000, a massive increase from just software tweaks. And so what we do is, once the product has gone through its second year, we go back and we optimize and we reevaluate, which is what we did in the fall of 2018, and we were able to give a 30% uplift to our existing customers just with software tweaks in that area. So now we move to another config, where we will introduce the 16 and 32 gig fiber channel cards, NVMe over Fabrics, storage class memory, and all those things that are up and coming. But we don't need to utilize those until the price point drops. Right now, if we did that, we'd just be like everybody else, and we would be driving up the price point. We're making the box ready to adopt those technologies when the price point becomes accessible to our customers. >> Okay, last question. You spend a lot of time with strategic accounts, financial services, healthcare, insurance. What are some of the most pressing problems that you're hearing from them, that you guys are helping them solve? >> It's a great question. So we see people with sprawl, managing many, many arrays. One of our competitors, for instance, for Splunk, they'll give you one array with one interface for the hot indexes, another mid tier array with another interface for the warm indexes. >> Brute force. >> Yeah, and then they'll give you a bunch of cold storage on the back end with another disparate interface. All three of them are managed separately, and you can't even control them from the same API.
So what customers like about us, and Splunk is just one example, is that we come in with one 19-inch rack: the hot indexes are handled by the DRAM, the warm indexes are handled by the SSD, and the cold data is right there on the Nearline SAS drives. They see from us a powerful, all-encompassing solution that's better, faster, and cheaper. We sell on real capacity, not effective capacity, so when encryption and things like that get turned on, the price point doesn't go up for Infinidat customers. They already know what they're buying; everything else is just cream. And that's massive for economic reasons as well as technological ones. >> Excellent, Craig, thank you. >> Thank you very much for having me. >> Okay, keep it right there everybody. We'll be right back after this short break. (calming music) We're back with Ken Steinhart, who's a field CTO with Infinidat. Ken, good to see you again. >> Great to see you, Dave. It's been a long while. >> It sure has. Thanks for coming back on theCUBE. So you have the customer perspective: you've worked with a lot of customers, and you've been a customer. Availability, high availability, is obviously important, especially in the context of storage. What's Infinidat's story there? >> Well, high availability has been a cornerstone for Infinidat from the beginning, and it's driven some pretty amazing things, not the least of which has been seven nines of availability, proven by the product. What's new and different now is that we're extending that with the ability to do active-active clustering, and it's the real deal. We're talking about the ability to have the exact same volume at synchronous distances, presenting itself to both sites as if it were a single volume. This is technology based upon the existing synchronous replication and InfiniSnap technology that Infinidat has already had, and it's gonna provide always-on, continuous operation, resilient even against site failures, component failures, storage failures, server failures, whatever. We will provide true zero RPO and true zero RTO at distance. And we're able to ensure consistency by using a very lightweight witness, which presents itself as a third, completely separate fault domain able to see both sites, ensuring the integrity of information while reading and writing simultaneously at two sites to what logically looks like one single volume. This is gonna be supported with all the major cluster software and server environments, and it's incredibly easy to deploy. So that's really the first point associated with this.
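As a rough illustration of the witness role Ken describes, a third fault domain that arbitrates which site keeps serving the volume when the sites lose sight of each other, here is a toy quorum check. The two-site model and the function names are assumptions made for illustration, not Infinidat's actual protocol.

    def may_serve(site, peer_reachable, witness_vote):
        """Toy arbitration for an active-active volume.

        A site keeps serving if it can still see its peer, or, during a
        partition, if the witness sides with it. Because the witness votes
        for at most one site, split-brain (both sites writing independently
        to the 'same' volume) cannot happen.
        """
        if peer_reachable:
            return True                   # normal case: both sites serve
        return witness_vote == site       # partition: the witness breaks the tie

    # Example: the inter-site link fails and the witness votes for site "A".
    assert may_serve("A", peer_reachable=False, witness_vote="A")
    assert not may_serve("B", peer_reachable=False, witness_vote="A")

The reason the witness can be lightweight is visible in the sketch: it stores no data and sits in the path of no I/O; it only has to answer one question when the sites disagree.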
>> So let me follow up on that. A lot of companies talk about active-active. How is this specifically different? >> It's different in that it's going to change the economics, first and foremost. Up until now, people have typically had to trade off between RPO, RTO, and cost, and usually you can get two of the three to be positive but not all three. It's sort of like buying a car: RPO equates to the quality of the solution, RTO equates to the speed or time, and cost is cost. If you buy a car, if it's good and it's fast it won't be cheap; if it's good and it's cheap it won't be fast; and if it's fast and it's cheap it won't be good. We're able to break that paradigm for the first time here: we can take the economics of multi-site, disaster-tolerant, cluster-type solutions and deliver them at costs comparable to what most people would pay for a single-site implementation. >> And your secret sauce there is the architecture, the software behind it. >> Well, that's actually a key point: the software is standard and included. And it is all about the software. This is an extension of the existing synchronous replication technology that Infinidat has had, standard and included, no additional cost, no separate quirky gateways or anything: one single volume logically presented to two different sites in real time, continuously, for high availability. >> So what's the customer impact? >> The customer impact is continuous operation at economics comparable to what single-site solutions have typically looked like. And that's gonna be huge. We see this as possibly bringing multi-site disaster tolerance and active-active clustering to people who have never been able to afford it, or didn't think they could afford it, previously. That brings us to the third part of this. The last piece is that when you take an architecture such as Infinidat's InfiniBox, which has demonstrated seven nines of availability, and you can now couple that across synchronous distances to two data centers or two completely different sites, we are able to offer a 100% uptime guarantee, something that statistically hasn't been practical for a vendor to talk about in the past. We're now able to do it because of what this architecture affords our customers. >> So "guarantee" as in, when I read the fine print, what does it say? >> Obviously we'll give our customers the opportunity to read the fine print, but basically it says we're going to stand behind this product's ability to deliver for them, and obviously we think this is something customers are going to be very, very excited about. >> Ken, thanks so much for coming on theCUBE, appreciate it. >> Pleasure's mine, Dave. As always. >> Great to see you. Okay, thank you for watching, keep it right there. We'll be right back, right after this short break. (calming music) Okay, we're back for the wrap-up with Brian Carmody. Brian, let's geek out a little bit. You're a technologist; let's start with the software tech we heard about today. What are the takeaways? >> Sure. There's a huge amount of content in here, and software is most of it. First is R5, the latest software release for InfiniBox. It improves performance, it improves availability with active-active, and it introduces non-disruptive data mobility, which is a game changer for customers for manageability and agility. Also as part of that, we have the availability of InfiniVerse, our cloud-based analytics and monitoring platform for Infinidat products. It's also the next-generation control plane that we're building, and when we talk about our roadmap it's going to grow into a lot more than it is today, so it's a very strategic product for us. But yeah, that's the net-net on software. >> Okay, but the software has to run on some underlying hardware. What are the innovations there?
>> Yeah, so I'm not sure I'd call them innovations. In our model, hardware is boring and commoditized, and all the important stuff happens in software. But we have listened, customers have asked, and we are delivering: 16-gigabit Fibre Channel is a standard option, and we're also giving an option for 32-gig Fibre Channel and for 25-gig Ethernet, again things customers have been asking for, and we've delivered. And while we're on the topic of protocols, we're also demonstrating our NVMe over fabrics implementation, which is deployed with select customers right now. It is the world's fastest NVMe over fabrics implementation, with a round-trip latency of 52 microseconds; that round trip, for us, is half the time it takes a NAND flash cell to recall its data, before you even count the software stack on the round trip. That's going to become available in the future for all of our customers, general availability via a software-only update. >> That's incredible. All right, so net that out: what does it mean for the roadmap? >> Oh sure. With our roadmap, we're laying out a very ambitious vision for the next 18 months of how to give customers what they are screaming for, which is: help us evolve our on-premises storage from old-school storage arrays into elastic, data-center-scale clouds in our own data centers, and then give us an easy, seamless way to integrate that with our public cloud and off-premises technologies. That's where we're going to be, starting today and taking us out the next 18 months. >> Well, we covered a lot of ground today. Pretty remarkable, congratulations on the announcements. We covered all the -abilities, even performance-ability; we'll throw that one in there. So thank you for that. Final word? >> The final word is probably just a message to our customers, to say thank you for trusting us with your data. We take that covenant very seriously, and we hope that with all of this work we've done, you feel we're delivering on our promise of value: to help you enable competitive advantage, and do it at multi-petabyte scale. >> Great. All right, thank you, Brian. And now it's your turn: hop into the crowd chat, where we've got some questions for you and you can ask questions of the experts on the call. Thanks everybody for watching. This is Dave Vellante signing out from theCUBE.

Published Date : May 8 2019

Craig Hibbert, Infinidat | CUBEConversation, April 2019


 

>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hi everybody, this is Dave Vellante, and this is theCUBE, the leader in live tech coverage. In this CUBE Conversation I'm really excited that Craig Hibbert is here. He's a vice president at Infinidat, he focuses on strategic accounts, he's been in the storage business for a long time, and he's got great perspectives. Craig, good to see you again, thanks for coming on. >> Good to be back. >> So there's a saying, don't fight fashion. Well, you guys fight fashion all the time. You've got these patents, you've got this thing called Neural Cache, and your founder and chairman Moshe has always been cutting against the grain and doing things his own way. I'd love for you to talk about some of those things: the patents you have, the architecture, the Neural Cache. Fill us in on all that. >> Sure. So when we go in and talk to customers and say we have a hundred and thirty-eight patents, a lot of them say, well, that's great, but how does that relate to me? A lot of patents are AND/OR gates and things they can't connect to their day-to-day life, so this is a good opportunity to talk about several that do. Obviously the Neural Cache is something that is dynamic. Instead of having a key and a hash, which all the other vendors have, just our position in the table allows us to determine all the values and things we need from it. And it also monitors, and this is an astounding statement, every I/O that flows through the array from the moment it is powered on; we track data for the life of the array, and for some of these customers that's five and six years. So those blocks of data: are they random, are they sequential, are they hot, are they cold, when was the last time they were accessed? This is key information, because we bring intelligence to the lower-level block layer, where everybody else has just dumb data; things come in and get moved along, and they have no idea what they are. We do. The value around that is we can predict when workloads are aging out. Today you have people manually writing policies in things like Easy Tier or FAST or competing tiering products, with all the human intervention that entails; we do it dynamically, and that feeds information back into the array and helps determine which virtual RAID group data should reside on, and where on the disk spindles, based upon the age of the application and how it's trending. These are very powerful things in a day when people expect information immediately; it's all dynamic processing. So that's one of the things we do. Another one is the catalyst for our fast rebuilds: we can rebuild two failed, full 12-terabyte drives in under 80 minutes, and if those drives are half full it's nine minutes, by understanding where all the data is and sharing the rebuild process across the drives. That's another one of our patents. Perhaps the most challenging one we have is this: storage vendors tend to do error correction at the Fibre Channel layer, but once data enters the storage array, there is no mechanism to check its integrity. A couple of vendors have an option to do it, but they can only do it for the first write, and they also recommend you turn the feature off because it slows down the box. So Infinidat is unique here, and I think this is for me one of the most important patents we have: every time we write a 64K slice into the system, we assign metadata to it. Obviously it has a CRC checksum, but more importantly it has the locality of reference. So if we subsequently come back and do a reread, and the CRC matches but the location has changed, we know corruption has happened; sometimes a bit flips on a write, and all of these things constitute silent data corruption. And that's not even the impressive part. What we do at that point is dynamically deduce that the data has been corrupted, and using the parity, because we run a RAID 6-like dual-parity configuration, we rebuild that data on the fly, without the application or the end user knowing there was a problem, and serve back the data that was actually written. We guarantee it, and we're the only array that does that today.
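Here is a minimal sketch of that integrity check: stamp each slice with a checksum plus its intended location, and treat a checksum that matches at the wrong location as a misdirected write to be repaired from parity. The CRC choice and the rebuild_from_parity hook are illustrative assumptions, not the actual on-disk format.

    import zlib

    def make_metadata(lba, payload):
        # Stamp a 64K slice with a checksum and its locality of reference.
        return {"crc": zlib.crc32(payload), "lba": lba}

    def verify_on_read(lba, payload, meta, rebuild_from_parity):
        """Return trustworthy data for lba, healing silent corruption.

        rebuild_from_parity(lba) stands in for the dual-parity reconstruction
        described in the interview; it is a placeholder, not a real API.
        """
        if zlib.crc32(payload) != meta["crc"]:
            return rebuild_from_parity(lba)   # bit rot: checksum mismatch
        if meta["lba"] != lba:
            # The checksum matches but the data belongs somewhere else:
            # a misdirected or lost write. Rebuild rather than serve it.
            return rebuild_from_parity(lba)
        return payload

The second branch is the interesting one: a plain checksum cannot catch a well-formed block that simply landed at, or was read from, the wrong address, which is why the location is part of the metadata.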
>> That's massive for your customers. I mean, the time to rebuild: you said a 12-terabyte drive... people always joke, how long do you think it takes to rebuild a 30-terabyte drive? Because eventually, you know, it's like a month. >> With us it's the same. If you look at our 3-terabyte drives it was 18 minutes; the 4-terabyte drives, 18 minutes; the 6, 18 minutes; the 8, the 12; it will be good all the way up to 20-terabyte drives with the configuration we have now. >> Let me come back to a conversation we've had many, many times. We were early on in the flash storage trend, we saw the prices coming down, and we figured high-speed spinning disks, their days were numbered, and sure enough we were correct in that prediction. But disk drives have kept their cost distance, and you guys have eschewed going all-flash because of the economics. Help us understand this, because you've got this mechanical device, and yet you're able to claim performance that's equal to, or oftentimes much, much better than, a lot of your all-flash competitors, and I want to understand that a little bit. It suggests to me that there's so much other overhead and so many other bottlenecks in the system that you're dealing with, both architecturally and through your intelligent software. Can you talk about that? >> Absolutely, absolutely. The software is the key; we are a software company, and we have some phenomenal guys doing the software piece. As far as the performance goes, the back-end spinning disks are really obfuscated by two layers of virtualization, and because we have massive amounts of DRAM, all of that data flows into DRAM. It will sit in DRAM for an astonishing five minutes, and I say astonishing because most other vendors try to evict cache straight away so they've got room for the next I/O. That does not give you a mechanism to inspect those dumb pieces of data, and if you get enough dumb data, you can start to make it intelligent. You can go get discarded data from cell phone towers and find out where people go to work, what time they work, and what demographic they're in, and now you're predicting the election based on discarded cell phone tower data. So if you can take dumb data, put patterns around it, and make it sequential, which we do, we write it out as a log-structured write. So we're really, really fast at the front end, and some customers say, well, how do you manage that on the back end? Here's something our designers and architects did very well: the bandwidth of DDR3, which is what's in the array right now, is about 15 gigabytes per second. We have 480 spindles on the back end. If you say each one of them can do 100 megabytes per second, and they can do more than that, 200, that gives us a 48-gigabyte-per-second backplane destage capability, which is three times faster than the DRAM.
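The arithmetic behind that destage claim, written out using the conservative per-spindle figure quoted in the interview:

    spindles = 480
    mb_per_spindle = 100                        # conservative MB/s per Nearline SAS drive
    destage_gb_s = spindles * mb_per_spindle / 1000
    ddr3_gb_s = 15                              # quoted DRAM bandwidth, GB/s

    print(destage_gb_s)                         # 48.0 GB/s aggregate destage
    print(destage_gb_s / ddr3_gb_s)             # ~3.2x the DRAM bandwidth

In other words, the back end is deliberately overprovisioned relative to the cache, so DRAM eviction, not spindle throughput, bounds the write path.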
So when you look at it, the box has been designed end to end so there is no bottleneck flowing through the DRAM. Anything still being accessed stays within that five-minute window; once it destages, it goes to all the spindles as a log-structured write, so writes go over all 480 spindles all the time, and the random reads land on the SSD, which helps keep the response time at about 2 milliseconds. And one last point on that: I have a customer with 1.2 petabytes written on a 1.3-petabyte box that is still achieving a 2-millisecond response time. That's unheard of, because with most block arrays, as you fill them up to 60, 70 percent, the performance starts going in the tank. >> So let me go down memory lane here. The most successful storage array in the history of the industry, my opinion, probably fact, was Symmetrix, and Moshe designed that. He eschewed RAID 5; everybody was crazy about RAID 5, and he said no, no, just mirror it, that's gonna give us the performance we need. They would write to DRAM, and then you'd think the destage bandwidth was the bottleneck, but because they had such a large number of back-end spindles, the bandwidth coming out of that DRAM was enormous. You've just described something actually quite similar. So what I was going to ask you: is the destage bandwidth the bottleneck? And you're saying no, because your destage bandwidth is actually three times higher than the DRAM. >> It is. With Symmetrix and some typical platforms, you would have a certain amount of disk in a disk group, you would assign front-end adapters and Fibre Channel ports to that, and there'd be certain segments of cache dedicated to those disks. We have done away with that. We have the two layers of virtualization at the front, as we talked about, and because nothing is a bottleneck, and because we've optimized each component: the DRAM, and I talked about the SSDs, which we don't write heavily to; we write to the SSD in a sequential pattern so the wear is elongated. And we have all the virtualized RAID groups configured in cache, so as we get to that five-minute window and we're about to destage, the AI is telling the cache how to lay out the virtual RAID structure based on how busy the RAID groups are at the time. If you were to pause it and ask us where data is going, we can tell you: it's the machine learning, it's the artificial intelligence saying this RAID group just took a destage, or there's a lot of data in the cache heading for these groups, based upon the hot and cold prediction I talked about a few minutes ago, and so it will make a determination to use a different virtual RAID group. That's all done in memory, as opposed to relying on the disk. We don't have the concept of spare disks; we have the concept of spare capacity. It's all shared, and because it's all shared it's this very powerful pool that doesn't get bogged down and continues to operate all the way up to full capacity. >> So I'm struggling with this "there is no bottleneck," because there's always a bottleneck somewhere.
So where is the bottleneck? >> The bottleneck for us is if you overrun the maximum write bandwidth. Historically, in 2016-2017, that was roughly 12 gigabytes per second. We got that in the fall of 2018 to round about 15, and we're about to announce that we've made tectonic increases: we'll now have write bandwidth approaching 16 gigabytes per second, and read bandwidth of about 25 gigabytes per second. That 16 is going to move up to 20; remember what I said, we release a number and we gradually grow into it as we maximize and tweak the software. When you consider that most all-flash arrays can do maybe one and a half gigabytes per second of sustained writes, that gives us a massive leg up over our competition. Instead of buying an all-flash array for this, another mid-tier array for that, and cold storage for something else, you can buy one platform that services it all, and all the protocols are accessed the same way. You write to the API one way. Moshe is a big fan of this, of writing code once, obviously with Spinnaker and some of the other things he's been involved in, and we do the same thing: our API is the same for block as it is for NAS as it is for iSCSI. It's very consistent; you write it once and you can adapt multiple products. >> I think that brings us to customers for a short bit. Everybody talks about digital transformation, and it's this big buzzword, but when you talk to customers, they're all going through some kind of digital transformation. They want to get digital right, let's put it that way; they don't want to get disrupted. They see Amazon buying grocers and getting into financial services and content, and it's all about the data. So there's a real disruption scenario going on for every business, and the innovation engine seems to be data. But data just sitting there in a data swamp is no good; you've got to apply machine intelligence to that data, and you've got to have scale. You guys make a big deal about petabyte scale. What are your customers telling you about the importance of that, and how does it fit into that innovation sandwich I just laid out? >> Sure, it's a great question. We have some customers with over 70 petabytes in production, a couple of those, both financial institutions, very, very good at what they do. We worked with them previously on another of Moshe's products, XIV, which introduced the concepts of self-healing and no tuning; there are no tuning knobs on the InfiniBox, I probably should mention that. And our customers have said to us: we couldn't scale before, we had a couple-hundred-terabyte boxes, and you've raised the game by bringing a much higher level of availability and much higher capacity. I'm in this process right now with a customer: we can take one of our boxes and collapse three VMAX 20s or VMAX 40s onto it. We have on numerous occasions gone into establishments that have eleven or twelve 23-inch cabinets, two and a half thousand spindles of the old EMC VMAX estate, and we've replaced it with one 19-inch rack of ours. That's a phenomenal statement when you think about it. Some of these VMAXes have 192 Fibre Channel ports on them; we have 24. So there's the Fibre Channel port reduction, and the power, heating, and cooling of an entire row
comes down to one eight-kilowatt footprint. By the way, our power draw is the same whether the drives are three, four, six, eight, or twelve terabytes; they all use the same power plan, so as we increase the capacity of the drives, we decrease the cost per usable terabyte. We're actually far more efficient than all-flash; it's the most environmentally friendly hybrid array on the planet. >> So let me ask about cloud. When cloud first came out, the financial services guys said no, cloud, that's a bad word. They're definitely leaning in and adopting it more now, but there are still a lot of workloads they're going to leave on-prem, and they want that cloud experience brought to the data. What are you hearing from the financial services customers in particular? And I single them out because they're very advanced, they're very demanding, and they spend a lot of dough. What do you see in terms of them building hybrid cloud, and what it means for them and specifically for the storage industry? >> I'm actually surprised they've adopted it as much as they have, to be honest with you, and I think the economics are driving that. Having said that, whenever they want to get the data back, to bring it back home on-prem for various reasons, that's when they run into problems. It's like, how do I get my own data back? Well, you've got to open up the checkbook and write big checks. I think Infinidat has a nice strategy there, where we have the same capabilities in the cloud that you have on-prem; nobody else has that. One of the encumbrances to people moving into the cloud has been that it lacks the enterprise functionality people are used to in the data center, but because our cost point is so affordable, we become very attractive not only on-prem but for cloud solutions as well. Of course we have our own Neutrix Cloud offering, which allows people to use it for DR or replication, however you want to do it, where you can use the same APIs and code that you run on-prem and extrapolate that out to the cloud, which is very helpful. And here's the thing: if you take a snapshot on Amazon, it may take four hours, because it's being copied over to an S3 device; that's the only way they can make it affordable. And if you need that data back, it's not imminent: you've got to rehydrate from S3 and then copy it back over your snapshot. With Infinidat it's instantaneous. We do not stop I/O when we take snapshots, and, in another one of the patents, we use a time-synchronous mechanism: every I/O in the array has a timestamp, and when we take a snapshot we just mark a point in time. Anything with a timestamp greater than that instantiation point belongs to the volume, and anything previous belongs to the snapshot. We can do that in the cloud too: we can instantly recover hundreds of terabytes worth of databases and make them instantly available.
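A minimal sketch of that timestamp-based snapshot idea: a snapshot is nothing but a remembered point in time, and reads resolve against per-write timestamps, so taking one copies no data and never pauses I/O. The class and method names are illustrative, not the actual implementation.

    import time

    class TimestampedVolume:
        """Every write is stamped; a snapshot is only a remembered timestamp."""

        def __init__(self):
            self.writes = {}             # lba -> list of (timestamp, data), ascending

        def write(self, lba, data):
            self.writes.setdefault(lba, []).append((time.monotonic(), data))

        def snapshot(self):
            return time.monotonic()      # instantaneous: no data is copied

        def read(self, lba, as_of=None):
            # The live volume sees the newest write; a snapshot view sees the
            # newest write stamped at or before its instantiation point.
            history = self.writes.get(lba, [])
            if as_of is None:
                return history[-1][1] if history else None
            older = [data for ts, data in history if ts <= as_of]
            return older[-1] if older else None

    vol = TimestampedVolume()
    vol.write(7, b"v1")
    snap = vol.snapshot()
    vol.write(7, b"v2")
    assert vol.read(7) == b"v2"               # live volume
    assert vol.read(7, as_of=snap) == b"v1"   # snapshot view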
So our story, again, is about the innovation: it wasn't just for on-prem, it was designed to work wherever you are, and that same price point carries forward from here into the cloud. When Amazon and Microsoft wake up and realize what we have here, I think they'll be buying from us in leaps and bounds; it's the only way to make cloud storage affordable. >> So these are the things you talk about: bringing data back and bringing workloads back. And there are toolchains that are now on-prem, Kubernetes is a great example, that are cloud-like, and when you bring data back you want that cloud experience, so automated operations plays into that. Automation used to be something people were afraid of; they wanted to do manual tiering, remember, they wanted their own knobs to turn. Those days are gone, because people want to drive digital transformations, and they don't want to spend time doing all this heavy lifting. Talk about that a little bit and where you guys fit. >> Yeah. I say to my customers, not to knock our competition, but you can't have a service processor as the intercommunication point between what the customer wants and the array it's going to configure; it has to be instantaneous. We don't have any Java, we don't have any Flash, we don't have massive servers around the data center collecting information; we just have an HTML5 interface, and so our time to deployment is very, very quick. When we land on the customer's dock, the box goes in, we hook up the power, we put the drives in, and we're off. We're very dynamic, both in how we face customers and on the back end for ourselves. We eat our own dog food, in the sense that we have an automation team and we've automated our migrations from non-Infinidat platforms onto ours, using some level of artificial intelligence. We've also built a lot of integrations around things like ServiceNow and customer sites, because you can do with our API in a few calls what takes other people pages and pages of code. I'll give you an example. One of our customers said, I need OCI, the NetApp management product. We called NetApp and they said, hey, listen, it usually takes six months to get an appointment, and at least six months to do the qualification. We said, no, no, we're not like any other storage vendor; we don't have all these silly RAID groups and spare disks, and with three commands we can show it in the API. And we showed them, and the light went on: wow, can you send us an array? We said no, we can do something better. We were designed SDS: when Infinidat was coded there was no hardware, and the reason we did that is because software developers will always code to the level of resilience of the hardware. If you take away the hardware, the developers have to code to withstand any type of hardware that comes in, and only at the end of the coding process did we start bringing in the hardware pieces. Because we were written SDS, we can send vendors and customers an OVA, a virtual appliance of our box. They qualified it in a week; they told the customer, we have to go through full QA, but there's no reason it won't work, and they did it for us, for what was a massive customer of theirs and ours. That's a powerful story: the time to deployment for your homegrown apps, as well as things like ServiceNow and OCI, is incredible. With Infinidat, three API calls and we were done. >> So you guys had a partnership with NetApp in the field. >> We did, yeah, and it was great. They had a massive license with this particular customer, who wanted our storage on the platform, and we worked very, very quickly with them; they were very accommodating. And we'd love to get our storage qualified behind their heads right now for another customer as well.
So yeah, the sooner people realize what we have, the better. Splunk is massive for us: what we're able to do with Splunk in one box, the competitors can't do in a row. It's very compelling what we bring and how we do it, and that API level is incredibly powerful; we're utilizing it ourselves. I'd like to see some integration with Canonical; Mark Shuttleworth and those guys have done a great job with SDS plays, and we'd like to bring that here, do Spinnaker, do some of those things as well. We're working on the automation; we just added another FTE to the automation team at Infinidat. So we engage with customers and we help you get out of that trench that is antiquity, and move forward into the vision of how you do one thing well and it permeates the cloud, on-prem, and hybrid. >> Well, that API philosophy and the infrastructure-as-code model you just described allow you to build out your ecosystem in a really fast way. Craig, thanks so much for coming on and doing that double-click with us; I'd love to have you back. >> Great, thanks a lot, Dave. >> All right, thank you. And thank you for watching. You're watching theCUBE, and this is Dave Vellante. We'll see you next time.

Published Date : Apr 19 2019

Brian Kumagai & Scott Beekman, Toshiba Memory America | CUBE Conversation, December 2018


 

>> Hi, I'm Peter Burris, and welcome to another CUBE Conversation from theCUBE Studios in Palo Alto, California. In this conversation we're going to build upon some other recent conversations we've had, which explore the increasingly important relationship between semiconductor memory, or flash, and new classes of applications that are really making life easier and changing the way human beings interact with each other, both in business and in consumer domains. And to explore these crucial issues, we've got two great guests. Brian Kumagai is the director of business development at Toshiba Memory America, and Scott Beekman is the director of managed flash at Toshiba Memory America. Gentlemen, welcome to theCUBE. So I'm going to give you my perspective. I think this is pretty broadly held: as a technology gets more broadly adopted, people get experience with it, and as designers, developers, and users gain that experience, they start to apply their own creativity, and it starts to morph and change and pull and stretch the technology in a lot of different directions. And that leads to increased specialization. That's happening in the flash world. Have I got that right, Scott? >> Yes, you know, the great thing about flash is just how ubiquitous it is and how widely it's used. If you think about it, any electronic device needs a brain, a processor, and it needs to remember what it's doing: memory. And memory is what we do. We see it used in so many applications, from smartphones, tablets, printers, laptops, streaming media devices. And so we see that technology used, for example, in eMMC memory; it's a low-power memory designed for devices like smartphones that aren't plugged in. When you see 1.5 billion smartphones, that volume drives the technology, and then it migrates into all kinds of other applications as well. And then we see new technologies come along to replace it, like UFS, Universal Flash Storage, which is intended to be the high-performance replacement for eMMC. And so now that's also migrating its way through smartphones and all these other applications. >> So there are a lot of new applications that are requiring new classes of flash, but there's still a fair amount of applications that require traditional flash technology. These new parts are not coming in and squashing old, traditional types of parts, but amplifying their use in specialized ways. Brian, is that about right? >> That's right. It's interesting that these days no one really talks about the original NAND flash that was developed back in 1987, which was based on single-level cell, or SLC, technology. It still offers the highest reliability and fastest-performing NAND device available in the market today, and because of that, designers have found this type of memory to work well for storing boot code and some levels of operating system code, in a wide variety of devices across both consumer and industrial segments: anything from set-top boxes and streaming video devices to printers and AI speakers, just a numerous breadth of products. >> I've also got to believe that a lot of IoT devices, a lot of industrial edge devices, are going to feature these kinds of parts: maybe disconnected, maybe connected, but needing low power, very high speed, low cost, and high reliability. >> That's correct.
And because these particular devices are still offered in lower densities, they provide a very cost-effective solution for designers today. >> Okay, well, let's start with one of the applications that is very, very popular in the press: automated driving. There are autonomous vehicles, but there are autonomous robots more broadly. Let's start with autonomous vehicles, Scott. What types of flash-based technologies are ending up in cars, and why? >> Okay, so we've seen a lot of changes within vehicles over the last few years: increasing storage requirements for infotainment systems, more sophisticated navigation, voice recognition, instrument clusters moving to full digital displays, and then ADAS features, you know, collision avoidance, things like that. All of that is driving more memory storage and faster-performing memory. And in particular, what we've seen for automotive is that it's basically adopting the type of memory you have in your smartphone. Smartphones have for a long time used eMMC memory, and that has made its way into automotive. And now, as smartphones have been transitioning to UFS (Toshiba first introduced samples of UFS in early 2013, and then you started to see it in smartphones in 2015), that's now migrating into automotive as well, to take advantage of the higher performance and the higher densities. And so Toshiba is supporting this growth within automotive as well. >> But automotive is a market, and again, I think it's a great distinction you made: it's not just autonomous. Even when the human being is still driving, it's the class of services provided to that driver, both from an entertainment and a safety and overall experience standpoint, that is driving that volume very aggressively forward. And the ability to demonstrate what you can do in a car is having significant implications on the other classes of applications we think about for some of these high-end parts. How is the experience being incorporated into automotive applications starting to impact how others envision their consumer products being made better, a better experience, safer, et cetera, in other domains? >> Well, yeah, we see all kinds of applications taking advantage of these technologies, even AR and VR, for example. Again, it's all taking advantage of this idea of needing larger densities of storage at a lower cost, with low power and good performance, and all these applications are taking advantage of that, including automotive. And if you look at automotive, it's not just within the vehicle. Actually, it's projected that autonomous vehicles will need one, two, three terabytes of storage within the vehicle, but then all the data collected from cameras and sensors needs to be uploaded to the cloud, and all of that needs to be stored, because you basically need to learn from it to improve the software. So that's driving storage to data centers too. >> For the time being. >> Yeah, exactly. So all these things are driving more and more storage, both within the devices themselves, since a car is like a device, but also in the data centers as well.
So if we can't Brian take us through some of the decisions that designer has to go through to start to marry some of these different memory technologies together to create, whether it's an autonomous car, perhaps something a little bit more mundane. This might be a computing device. What is the designer? How does is I think about how these fit together to serve the needs of the user in the application. >> Um, I think >> these days, you know a lot of new products. They require a lot of features and capabilities. So I think a lot of input or thought is going into the the memory size itself. You know, I think software guys are always wanting to have more storage, to write more code, that sort of thing. So I think that is one lt's step that they think about the size of the package and then cost is always a factor as well. So you know nothing about the Sheba's. We do offer a broad product breath that producing all types of I'm not about to memory that'll fit everyone's needs. >> So give us some examples of what that product looks like and how it maps to some of these animation needs. >> So we like unmentioned we offered the lower density SLC man that's thought that a one gigabit density and then it max about maximum thirty to get bit dying. And as you get into more multi level cell or triple level cell or cue Elsie type devices, you're been able to use memory that's up to a single diet could be upto one point three three terror bits. So there's such a huge range of memory devices available >> today. And so if we think about where the memories devices are today and we're applications or pulling us, what kind of stuff is on the horizon scarred? >> Well, one is just more and more storage for smartphones. We want more, you know, two fifty six gigabyte fight told Gigabyte, one terabyte and and in particular for a lot of these mobile devices. You know, like convention You f s is really where things were going and continuing to advance that technology continuing to increase their performance, continuing to increase the densities. And so, you know, and that enables a lot of applications that we actually a hardman vision at this point. And when we know autonomous vehicles are important, I'm really excited about that because I'm in need that when I'm ninety, you know can drive anywhere. I want everyone to go, but and then I I you know where I's going, so it's a lot of things. So you know, we have some idea now, but there's things that we can't envision, and this technology enables that and enables other people who can see how do I take advantage of that? The faster performance, the greater density is a lower cost forbid. >> So if we think about, uh, General Computer, especially some of these out cases were talking about where the customer experience is a function of how fast a device starts up or how fast the service starts up, or how rich the service could be in terms of different classes of input, voice or visual or whatever else might be. And we think about these data centers where the closed loop between the processing and the interesting of some of these models and how it affects what that transactions going to do. We're tournament lower late. 
>> Yeah, and we continue to figure out how to cram more bits within a given space, moving from SLC to MLC to TLC and on to QLC; that's all enabling greater storage at lower cost. And then, as we discussed from the beginning, there's all kinds of differentiation in terms of flash products that are really tailored for certain things. Some are focused on really high performance and give up some power; others need a certain balance, so a mobile, handheld device gives up some performance for less power. There's a whole spectrum, and for some applications endurance is incredibly important. So we have a full breadth of products that address all those particular needs. >> So for the designer, it's just: whatever I need, I can come to you. >> Yeah, that's right. Toshiba provides them the full breadth of products available. >> All right, gentlemen, thank you very much for being on theCUBE: Brian Kumagai, director of business development at Toshiba Memory America, and Scott Beekman, director of managed flash at Toshiba Memory America. Thanks very much for being on theCUBE. >> Thank you. >> Thank you. >> And this closes this CUBE Conversation. I'm Peter Burris; until next time, thank you very much for watching.

Published Date : Jan 30 2019

Roland Acra, Cisco | Cisco Live EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering Cisco Live Europe, brought to you by Cisco and its ecosystem partners. >> Welcome back to theCUBE's live coverage here in Barcelona, Spain, for Cisco Live Europe 2019. I'm John Furrier, your host of theCUBE, with Dave Vellante as well, and Stu Miniman, who's been doing interviews with us all week. Our next guest is Roland Acra, Senior Vice President and General Manager of the Data Center Group. He's in charge of that core business of data center, now at the center of cloud and the edge. Roland, great to see you, thanks for coming on. >> Thank you, thank you for having me. >> So a lot of announcements, a lot of the big guns are out there for Cisco: you've got the data center, you've got the networking group, you've got IoT, and then the CloudCenter Suite was part of the big announcement. Your team had a big piece of the keynote yesterday and continues to make waves. Give us a quick update on the news: the key points, what were the announcements? >> Yeah, the two big announcements for my group were ACI Anywhere and HyperFlex Anywhere, and we captured them under a common moniker of There's Nothing Centered About the Data Center Anymore, because both of these speak to things going outside the data center. ACI Anywhere is the integration of ACI, our software-defined networking solution, into two of the most prominent public cloud providers out there, Amazon and Azure. And for HyperFlex Anywhere, the exciting news is the expansion of HyperFlex, which is our hyperconverged solution, also outside the data center, to the edge of the enterprise, specifically branch offices and remote locations. >> And the other thing that came out of our conversations here on theCUBE, and also in the keynote, is that the center of the value is the data center, as you guys pointed out with the slides, big circle in the middle, ACI Anywhere, HyperFlex Anywhere, but the network, the data, and the security foundation have been a critical part of this new growth. Take a minute to explain the journey of ACI: how it started, where are we? It's been a progression for you guys, certainly inside the enterprise, but now it's extended. Take us through that. >> When ACI came into the market, five years ago now, we have a five-year anniversary, ACI brought a software-defined networking solution into the market. It brought an automated network fabric capability, which said you can no longer screw yourself up by having incoherence between one part of the network and another; it's all managed coherently as one thing. And it brought, to your point about security, what's called segmentation of applications. Today, applications have data, they have databases, they have different sensitive pieces, and it's important to be able to tell the network not only to get the traffic from one place to the other, but to carry only the traffic I tell you to carry, and to refuse the traffic that has no business getting there. That's known as segmentation, which is a security concern, particularly when you have sensitive data, like consumer data, or things that have regulatory requirements around them. ACI brought that to the market; that was the value proposition of ACI.
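To picture what that whitelist segmentation means in practice, here is a toy model: traffic is permitted only if an explicit contract exists between the source and destination groups, and everything else is dropped by default. The group names, the contract table, and the port strings are invented for illustration; this is not the ACI object model.

    # Whitelist segmentation in miniature: default deny, explicit allow.
    CONTRACTS = {
        ("web", "app"): {"tcp/8443"},   # web tier may reach the app tier
        ("app", "db"):  {"tcp/5432"},   # app tier may reach the database
    }

    def allowed(src_group, dst_group, port):
        # Anything without a matching contract has "no business getting there".
        return port in CONTRACTS.get((src_group, dst_group), set())

    assert allowed("web", "app", "tcp/8443")       # whitelisted path
    assert not allowed("web", "db", "tcp/5432")    # blocked: no contract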
What if somebody wanted to put some computing capability in a store, or in a logistics center? ACI was then expanded for that. Step N minus one was taking ACI to bare metal clouds. Customers now also want to deploy things in co-locations or bare metal clouds, so we decoupled the ACI software from the Cisco switches, which are the ACI hardware, and ACI became completely virtualized, still able to do everything it does in hardware on premise, but in software instead, in somebody else's facility. And yesterday we announced the full combination of this: what if you don't want ACI soft switching or hard switching at all? Can you use the native switching of a public cloud, like Azure or AWS, and tell it through the cloud's APIs, please let those packets go from A to B because they're part of the whitelisted paths, and don't let packets from C to D go because they're part of the blacklisted paths? That was the full integration with these clouds-- >> Can you abstract that complexity? >> Completely, completely. One orchestrator, the multi-site orchestrator, the same one people have used on premise and developed their policies around. They have invested a lot of sweat equity in that controller; it's also where they put their compliance, verification, audit, and assurance, and they use that same thing even when something goes to Azure or AWS.
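The whitelist model Acra describes is a default-deny posture: traffic is dropped unless a policy explicitly permits it. As a rough illustration of that semantics, here is a minimal sketch; the tier names and rule format are invented for the example, and Cisco's actual APIC contract objects are considerably richer.

```python
# Default-deny "whitelist" segmentation, the model ACI contracts express:
# traffic is dropped unless an explicit rule permits it.
# Tier names and services here are illustrative, not Cisco's object model.

ALLOWED = {
    ("web", "app", "tcp/8443"),   # web tier may call the app tier's API
    ("app", "db", "tcp/5432"),    # app tier may reach the database
}

def permitted(src_tier: str, dst_tier: str, service: str) -> bool:
    """Return True only if the (src, dst, service) path is whitelisted."""
    return (src_tier, dst_tier, service) in ALLOWED

# A whitelisted path is forwarded; everything else is dropped by default.
assert permitted("web", "app", "tcp/8443")
assert not permitted("web", "db", "tcp/5432")   # no direct web->db access
```

The same policy evaluates identically whether the forwarding happens on an on-premise switch or through a public cloud's native APIs, which is the point of the multi-site orchestrator keeping one policy source of truth.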
>> So you mentioned the progression. It's now a full progression, from core to the cloud, including edge-- >> Going through edge. >> What have been some of the results? You mentioned that segmentation's one of 'em, I get that. How has ACI been used? What are some highlights that show the value? Because people start looking at ACI, saying, hmm, I like this, I like scale, I have a scale challenge with the new cloud world and edge, and complexity's abstracted away with software, okay, check, so far, so good. Where has the success of ACI been, and how do you see that unfolding, specifically in the cloud? >> Yeah, the biggest value our customers have gotten, cloud or no cloud, has been that with ACI they've been able to shorten the time for change, and therefore increase the speed of change of their network, because now the network needs to operate at the speed of the applications. Applications reconfigure themselves sometimes on an hourly or daily basis, and it used to be that changing something in the network meant you sent a ticket to somebody who took weeks to reconfigure things. Now that software-defined capability means the network reconfigures, and people can change generations of compute on the fly with the network in lockstep. The agility and speed have been great. The other value has been automation, which means people can run a bigger and bigger network with a small number of people. You don't have to scale your people with the number of switches you have, again because programming and automation come to the rescue. >> Well, I'll tell you, people who are watching right now can look behind Roland and see that it's a packed house. We're in the DevNet zone, which has been the massively growing organization within Cisco. The community's been growing very fast, people are developing on top of the networks, and these are network folks, and there's new talent coming in as well. So the skills gap is shortening, and you're getting a different makeup for a Cisco user: your customers are changing and growing, the existing base plus new people. Talk about that dynamic, and how it impacts this intent-based networking, this notion of policy, of software-defined. >> Yes, it's what many people have been calling infrastructure as code, which is, you go from scripting to actually coding and composing very sophisticated automation and change management capabilities for an automatable system, which is what ACI is. It's made for people drawing on the strengths they built in the application domain or in the server domain, and bringing that into the network. And that's a new and exciting thing: it brought the network within the purview of coders, people who know how to do Python and Go and things which are modern and exciting for the younger generation. It also makes for bringing in analytical capabilities. A lot of what those young coders are used to is logs, visibility, analytics, because they've done that on web servers, on applications that run in the cloud. And we now offer the network, which is very rich in data. If you think about it, we see every packet, we see every flow, we see every pattern of how the traffic is changing, and that becomes a data set that is subject to programming, because from there you can extract anomaly detection, you can extract security signatures of malware, you can extract predictions of where the traffic is going to be in six months. There's a lot of exciting potential from the telemetry and the visibility that we bring into that framework.
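To make the telemetry point a bit more concrete: a toy sketch of the baseline-and-deviate idea Acra gestures at, learning a per-host traffic baseline and flagging sharp departures. The sample data, feature, and threshold are all invented for illustration; a production fabric-analytics pipeline would work over far richer signals than hourly byte counts.

```python
# A toy version of the telemetry idea: learn a per-host baseline from
# observed traffic volumes, then flag hosts that deviate sharply from it.
from statistics import mean, stdev

history = {  # bytes sent per hour, per host (made-up sample data)
    "10.0.0.5": [1200, 1100, 1300, 1250, 1150],
    "10.0.0.9": [400, 420, 390, 410, 405],
}

def is_anomalous(host: str, current: float, k: float = 3.0) -> bool:
    """Flag if the current volume sits more than k std deviations off baseline."""
    samples = history[host]
    mu, sigma = mean(samples), stdev(samples)
    return abs(current - mu) > k * max(sigma, 1e-9)

print(is_anomalous("10.0.0.9", 5000))  # True: exfiltration-like spike
print(is_anomalous("10.0.0.5", 1280))  # False: within normal variation
```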
>> And as you point out, devs love that. I mean Cisco, we've talked about this, is one of the few large established companies that has, in our view, figured out developers, right? There are a lot of examples of companies that haven't and continue to struggle; we've just witnessed the dev crowd here. I want to ask you about ACI and how it's different from, for example, VMware NSX. What's the differentiation there? >> The biggest differentiation is that ACI is one system through which you manage the entire network: the overlay, which is the virtual view of the network that the applications care about, as well as the underlay, which is the actual delivery system that makes the packets get from A to B with quality of service and so forth. So that's the first thing: it has much more scope than NSX does. The other thing that's very unique about ACI is that we have integrated it with every hypervisor on the planet, every container management framework on the planet, and every bare metal system on the planet, which means any workload, something sitting on a mainframe, something sitting on a Sun Oracle server, something on OpenStack, on OpenShift, on VMware or on Hyper-V, and now on the EC2 APIs of AWS or on Azure, all of those are integrated with ACI. We're not wedded to one hypervisor. And the cloud implementation we announced yesterday is a true integrated cloud capability; it's not a bring-your-own-license and put it on bare metal at AWS, which has been VMware's cloud strategy: team up with AWS and let customers bring their software licenses onto AWS bare metal. That's not EC2, and of course that's not Azure, and that's not the other clouds we're going to be doing. So the openness to being multi-cloud, on premise, which means every hypervisor and every container framework and bare metal, with one system. We're extending that into the cloud to give customers choice and openness; that's really a fundamental philosophy for us in networking. >> So, much wider scope. That's kind of always been Cisco's philosophy, and partnership. When you think about HyperFlex, going back ten years when you guys sort of created that with partners, and then multiple partners now, maybe talk about that journey a little bit. >> HyperFlex? >> Yes. >> Yeah, because hyperconvergence is another very exciting and fast-growing trend in our industry. HyperFlex started off with hyperconverged infrastructure, the notion of putting a mini-cloud in a box on-premise for application developers to rapidly deploy their applications as if they were in the cloud. So speed and simplicity were really at a premium, and that's really what defines hyperconvergence. We've done a tremendous amount of work at Cisco on speed and simplicity there, because we've integrated network, compute, storage, and a cloud management system called Intersight to give customers that whole capability. We then hardened it: we took it from being able to do VDI kinds of workloads, rather benign workloads, to mission-critical workloads. So databases are now running on HyperFlex, ERP systems are running on HyperFlex; the real crown jewels of the enterprise are now running on HyperFlex. Then we made it multi-cloud. We opened it to all hypervisors and to all container frameworks: we announced OpenShift yesterday, we have already done Hyper-V, we had done OpenStack and ESXi, so again, the same spirit of openness. And yesterday's announcement was: what if I want to take hyperconvergence outside of the data center, to hundreds or thousands of remote locations? Think of a retailer. In a retail environment, some of the most interesting data is born outside the data center; it's born in a store. The data follows the customer who's interested in a plasma TV, and that data has a perishable lifetime: you act on it on location and on time, or you lose the value. If you send it away and take two hours to run a machine learning job on it, by the time you come back the customer's already home watching a movie. The window of opportunity for the data is often right there and then, and that's why our customers are taking their computing environment out to where the data is, to act on it fast and on location. >> It sounds easy, but I want to get your thoughts on this, because this is a critical data challenge. If data's stored in classic old ways, data warehouses and fenced-off areas, you're not going to have the latency to get to that data in real time. We're talking about real-time data that's addressable as part of the application's value. So this is a new notion that's emerged with DevOps and infrastructure as code. >> That's right. >> And that's hard. How do you guys see that progressing? How should customers prepare to have their data centered properly for app addressability, discovery, whatever the contextual uses of the data are, time series data or whatever data it is? This is a critical thing. >> It's a critical thing, and there's no one answer, because depending on what the data is, sometimes you only see the value when you concentrate it and consolidate it, because the patterns emerge from rolling up a thousand stores' worth of data and seeing that people who buy this toothbrush tend to buy that toothpaste.
>> There may be that value where you want to concentrate the data, but there are also many things where acting on the data in the moment and on location, quickly, without referring to the other thousand stores, extracts 90% of the value of that data. And that's why you want to forward-deploy computing onto that data. >> So this highlights network programmability; this means the applications are driving the queries, or the network, for that data, if it's available. So there are two things: network programmability from the app, and availability of the data. >> Yeah, and the ability for the entire infrastructure, network, compute, and storage, and hyperconvergence is the automation of all three, to deliver its value equally in remote locations or in a cloud, as it would in a data center. Because the application is going to want to go where the value is, and if the infrastructure can't follow it there, then you get a degraded ability to take advantage of the opportunity. >> Right, real-time decisions happen at the edge, but then, as you describe, you've got to bring certain data back to the cloud, do the modeling there, and then push the models back down. So you're going to have-- >> And you're going to have decision making distributed. >> And you've got to have low latency to be able to enable that.
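The trade-off running through this exchange — act on data where it's born, or haul it to the core — can be put in rough numbers. A back-of-envelope sketch follows; every figure in it is an illustrative assumption, not a measurement.

```python
# Placement test: act on data at the edge when shipping it to a central
# cloud would blow the freshness window. All numbers are assumptions.

def transfer_seconds(data_mb: float, uplink_mbps: float, rtt_ms: float) -> float:
    """Time to move data_mb over the uplink, plus one round trip."""
    return (data_mb * 8) / uplink_mbps + rtt_ms / 1000

def place_compute(data_mb, uplink_mbps, rtt_ms, freshness_s):
    """Choose 'edge' if the data would be stale before it reaches the core."""
    if transfer_seconds(data_mb, uplink_mbps, rtt_ms) > freshness_s:
        return "edge"
    return "core"

# A store camera feed: 500 MB of video, a 50 Mbps uplink, 40 ms RTT,
# and the shopper insight is only useful for ~30 seconds.
print(place_compute(500, 50, 40, 30))   # edge: ~80 s to ship vs 30 s window
print(place_compute(5, 50, 40, 30))     # core: small summaries can travel
```

The design consequence is the split Acra describes: raw, perishable data is processed in place, while compact summaries roll up to the core where the thousand-store patterns emerge.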
>> Yeah, and the same goes for other considerations. For example, why is it important to allow people to put data both on their premises and in the cloud? For disaster recovery, for data replication, for resiliency, and sometimes for governance reasons. GDPR in Europe says that the personally identifying data of European citizens has to stay in Europe. Somebody may not have a data center in Europe; could they take advantage of a co-location facility, or somebody else's cloud? >> This is the theme we're seeing at this show this year, and certainly at the center of the news: complexity is increasing, because it's just evolution, more devices are connected, diverse environments, scale for cloud and connectivity, but software is driving it. So I've got to ask you the question. Go back to the old days, the 1990s, when multi-vendor was a big word. Now multi-cloud feels the same way. This is the openness thing. How would you describe Cisco's multi-cloud strategy in the context of this notion of being open? >> It is really the new dimension of openness, right? We've been open in the past to multiple forms of physical networks: customers want to use wireless or fiber or copper or what have you, and we need to give them an IP network that operates equally well over all media. That was one dimension of openness. Another dimension of openness was, does a product from vendor A work with a product from vendor B? My router, your router, my switch, your firewall; those are other dimensions. Hardware and software coupling: can I buy the hardware from Peter and the software from Mary, and will it work well? The new dimension of openness is, can a customer avail themselves of any form of cloud, either because they like the tooling and their developers are more efficient on a given cloud, or because of the pricing of the other guy, or because the third guy has a point of presence in Tokyo, which this one doesn't. All of those are business choices, and if our technology lets customers take advantage of them with no technical restriction, they will, because now they can shop on the merits of what they want to do, and not on, oh well, sorry, if you want to go to Azure I can't help you, but if you're willing to settle for your own premises or for Amazon, then I have a story for you. So that's-- >> Roland, you're leading the team on the core crown jewels for Cisco, and the rising tide's floating all boats here within the company. What's your plan for the year? What are your goals? You'll be out there pounding the pavement with customers; what do you hope to accomplish in 2019? >> Well, 2019 is the year of many things for us; it's a very exciting year. On the physical infrastructure side, we're taking our switches to 400 gigabits per second. We have our new silicon capability and our new optics, so we're going to be able to scale for the cloud providers who are headed to the next frontier of speed and density and scale. Performance will always be there, and when we're done with 400, we're already going to be asked about 800. So that's an exciting new generation of switches. ACI Anywhere getting deployed and adopted across multiple clouds is another exciting thing. With HyperFlex Anywhere, we're really looking forward to the potential in financial services, in logistics, in retail, where there's a lot of data deployed at the edge. And then security is a never-finished journey, right? We keep giving our customers everything we can in the way of security, because there, there's an active actor who's trying to make you fail. It's not like fighting physics, where you get to 400 gigabit and then you win; there's a guy trying to foil your schemes while you try to foil his. Security is a great-- >> Constant attacks on the network. You guys have seen this movie before, so you know how critical it is. Roland, thanks so much for spending the time, and congratulations on ACI Anywhere, HyperFlex Anywhere, and intent-based networking at the core. It's theCUBE bringing you all the data; we have an intent here to bring you the best content from Cisco Live in Barcelona. I'm John Furrier, with Dave Vellante; stay with us for live coverage, day two of three days of coverage here in the DevNet zone, packed with developers learning new skills. We'll be back with more after this short break.

Published Date : Jan 30 2019


Rajiv Mirani & Binny Gill | Nutanix .NEXT EU 2018


 

>> Live from London, England, it's theCUBE, covering .NEXT Conference Europe 2018, brought to you by Nutanix. >> Hi, I'm Stu Miniman, here with Joep Piscaer, and welcome to the CTO segment at Nutanix .NEXT 2018. Welcoming back to the program, to my right is Binny Gill, who's the CTO of cloud services, and to his right is Rajiv Mirani, the CTO of cloud platforms. Gentlemen, thanks so much for joining us again. >> Thanks, Stu, for having us back. >> All right, Rajiv and Binny, Nutanix has been kind of busy since the last time we chatted. AOS got really a file system rewrite, there's been some M&A integration going on as well as organic activity. I love talking to the CTOs, so if you can, bring us inside a little bit: what's been happening, what have your teams been working on, some of the hard challenges? I mean, things like a nested hypervisor on top of GCP, these are some hard challenges, and getting ready for NVMe over fabric. Some real massive things happening underneath the covers, as well as some new products. Binny, want to start with you? What's keeping you and your team busy? >> Oh, the teams have been quite busy, especially once you have more than 10,000 customers and a product that's earning a lot of revenue, and at the same time you have to change the architecture, preparing for the next generation. So it's a lot of work; if you're starting from scratch it's much easier. But we've had a lot of experience bringing in new capabilities and making them transparent to the customer. One-click upgrade is really important for us, so learning from the past, we have been able to rewrite the engine, the storage, in a way that customers wouldn't notice; it's just going to run faster. Kudos to the team that they've pulled it off. And it goes across the board: when we are acquiring new companies that come into the fold of the Nutanix family, the whole idea is to make it look seamless to the customer, because that's one thing customers know us for: hey, will it have Nutanix simplicity? So from a lot of learnings we have created some thumb rules to guide people coming in, and those are working fine for us. >> And there's a method to the madness over here. There is, in the end, one vision, which is that we want to provide a true hybrid cloud experience to our users. To do that, we feel you have to first start by building the best private cloud; you can't have hybrid without private. And to do that we need to have an infrastructure that actually works for private cloud. So we start with HCI as the initial platform, we build on top of that with private cloud features, and not just networking, compute, and storage like in the past, but more platform services like Era and Karbon and so on. Once we have that, we can then layer on the new hybrid cloud services. So even though it looks like we're doing a lot of things, it's all guided by that one vision. >> So tell me about that hybrid cloud vision. Where does it lead us? Does it lead us to the public cloud in the end? Does it lead us to a Nutanix cloud? Where does it help customers go? >> Well, the way I look at it, it doesn't lead to any one place; it leads to multiple clouds. There'll be private clouds, edge clouds, distributed clouds, big central public clouds. The important thing is, can you move applications and data between clouds?
>> An analogy I use is: 20 years ago, if you were writing applications to Solaris, you were pretty much locked into Sun. If you were writing applications for HP-UX, you were pretty much locked into HP. Once Linux came along and made it possible to write applications for any x86, everybody got independence from the underlying hardware, and the same thing will happen with cloud. Today you have to write applications for Amazon, for GCP, for Azure. Who can build an operating system that actually commoditizes all of that, that makes it possible for you to run on any cloud with the same set of applications? >> That kind of sounds to me like you're doing vMotion and HA and DRS, but for a new generation of technologies. >> Well, vMotion across clouds is of course the goal. It is the goal, but it's not enough to just move the applications and the data around; the management plane has to be the same, too. So there's a lot more to it than simply copying bits across. Maybe you want to add to that? >> Yeah, adding to what Rajiv said, if you ask where hybrid cloud will lead, I think it leads to a dispersed cloud. Some of this was also mentioned in the keynote: this big monolithic cloud concept has to atomize into much smaller pieces and get distributed, and that's what's going to happen. But you start with solving it at hybrid; at least solve it for two, and from two you go to many. That's what's really exciting. >> Yeah, it's a really good point. I want you to help expand on that a little. I think back to companies with broad portfolios, and you look at it and say, okay, I have product A, B, and C, and boy, I don't know how to use those together because they weren't on a common basis. How do I work them together? Today, I think of microservices architecture, I think about APIs pulling everything together. What are those guiding principles that you give internally to teams, to make sure that I can use the pieces that I want, that they work all together, and that they work with the really broad ecosystem you have in all these multi-cloud environments? >> So as much effort as we put into the architecture of the product design, we have to put the same amount into how it is going to be consumed by the customer. Just having a long portfolio is no longer what customers are looking for; they're looking for simplicity. So to your point, one of the things we are really careful about, especially when we are acquiring technology inorganically, is how do you make sure identity and billing are the same, right? That's the most important thing, so you don't have to log in once in this product and once in that one. Basic stuff, but if you get it right, it's just delightful. The other thing is about experience, developer experience and user experience; those are two other of the four factors. User experience is about, do I have to learn this again? If you look at companies like Apple, if I've used a Mac, they try to make everything very similar, such that even a two-year-old can figure out how to use it. We would like to say that if you have been in the IT industry for two years, you should be able to use any Nutanix product. And developer experience is around APIs: we have a standard there, our version three intentful APIs, and that is creating a standardization across the products.
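The "intentful" style Gill mentions means you describe the desired state of an entity and the platform converges on it, rather than issuing step-by-step commands. A sketch of what a call in that style might look like follows; the endpoint shape follows Nutanix's published v3 conventions, but the host, credentials, and field values are placeholders, and a real spec carries more required fields than shown here.

```python
# A sketch of an "intentful" v3-style call: POST the desired state of a VM
# and let the platform work to realize it. Host and credentials are
# placeholders; this is illustrative, not a complete production spec.
import requests

PRISM = "https://prism-central.example.com:9440"  # placeholder address

vm_intent = {
    "metadata": {"kind": "vm"},
    "spec": {
        "name": "demo-vm",
        "resources": {
            "num_sockets": 2,
            "memory_size_mib": 4096,
        },
    },
}

resp = requests.post(
    f"{PRISM}/api/nutanix/v3/vms",
    json=vm_intent,
    auth=("admin", "secret"),   # placeholder credentials
    verify=False,               # lab-only; keep TLS verification on for real
)
# The call returns immediately; the platform converges on the intent.
print(resp.status_code, resp.json().get("status", {}).get("state"))
```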
>> You saw a little bit of that in the opening demo today: I went through Calm and Epoch and Flow and Prism, all from one pane of glass. It didn't look like four different products; in fact, if we hadn't mentioned there were four different products, it probably wouldn't have been obvious that they were. And that's important to us. Keeping that experience seamless is very important, and it comes at a cost: we could have released these things as soon as we acquired them and punted it to the customer to figure out how the pieces come together, but we know our customers have a higher expectation of us, so we take the time. >> From that perspective, as a user, I'm used to working with different types of clouds, public, private, and anything in between, and the amount of interfaces I have to touch to get something working, to get a series of products to align to do what I want to do, is becoming such a difficult task that having a single interface, or at least a familiar interface, would really help. So maybe you can talk a little about how you use that UI to go into the public cloud, or into the hybrid cloud, to make that experience easier as well. >> Let me talk about a couple of things. One, whenever there's a proliferation of technologies and you're trying to glue them together, single pane of glass is one thing people talk about. I think that's not the most important thing; obviously it's a requirement, but it's a necessary condition, not a sufficient one. To make it sufficient, you also have to bring opinion into the design, and the opinion is where we are taking some decisions for the customer which the customer would otherwise have to care about learning. That's where Nutanix comes in: through our best practices we put our opinion into the design of the product, so that the number of decision points for the customer is minimized. That's how you start consuming this diversity out there. At the end of the day, for the business, only two things matter: the business logic and the business data. Infrastructure is sitting in the middle; it's like a necessary evil. So if we can hide it and make it seamless, customers are really happy about it. >> Can you talk about the feedback loop you have with customers? Things are changing very fast, and it's hard for anybody to keep up; even this week Nutanix has a lot of announcements that I'm sure will take people time to absorb. How do you run the feedback loop with customers, to make sure they're getting what they need to understand your products, and you're understanding where they are in their journey, to mature the product line? >> Yeah, we have a whole bunch of channels. We just had a customer advisory board yesterday, where you invite customers and have a really deep, intimate, frank conversation: what's working for you, what's not working? We have our engineering teams on Slack channels and WhatsApp channels with our customers, especially the customers who complain about a product and have opinions in abundance. We just try to short-circuit the whole thing, and then it's all about empathy. >> Rajiv, I definitely want your take on that feedback, too. Actually, I talked to a few customers and they said, I don't know how Nutanix does it, but for a company their size, I feel like I get personal attention and touch points. So congratulations. >> It's good. The stuff you saw today is a direct result of that feedback.
>> The grouping of products into Core, Essentials, and Enterprise kind of also reflects the customer journey. A lot of customers start with us with the Core; once they get used to that and have built out a true private cloud, only then do they start looking at multi-cloud. So right products for the right customer is something we are taking very, very seriously at this point. >> I want to dive into that, right product for the right customer. One of the announcements you made is Karbon, which has Kubernetes as a managed platform. What customers do you serve with that product? How do you go into customers like that, and how do you help them? >> Kubernetes is one of the fastest growing technologies we have seen in the IT space in recent years, and a lot of our customers, especially this year, have developers using containers, and they are at a point where they're trying to decide, how can I put this in production? And production has many requirements. Karbon is being used by customers who are trying to see how they'll put containers into production, and what we are doing with Karbon is providing the native Kubernetes APIs, as they exist in open source, while solving the hard problems of upgrades, scale-out, high availability, and troubleshooting, the mundane things that people usually don't want to do. That's where we come in and help. I've seen customers use our storage volumes for everything from containerized databases to stateless things; it's all across the board. Still early years for this kind of ecosystem, but it's headed to being the future. >> One of the things I've found really interesting to watch is, over the last two decades we've talked about intelligence and automation in infrastructure, but things are really happening fast now. When you talk about AI or ML, there are really things that are creating some intelligence; it's not like, oh, I created some script and it does something. I know there are a number of places that fits into your portfolio. Prism X-Play seemed to get some good resonance and cheers from the audience, maybe because they've all played with IFTTT. So start from there: how do you think about the AI and ML space? >> Yeah, so we look at computing evolving from mostly manual in the past, to more automated, to what you really want to get to, this autonomous computing we've talked about. Think of it this way: cars used to be really difficult to drive. It used to require knowing how the carburetor works and cleaning it out once in a while, to the point where, maybe 15 years ago, you pretty much didn't need to know anything about the internals of a car, but you could drive it; it was reliable, it would work. That's probably where we are today in IT. But the real goal is to get to autonomous computing, the self-driving cars of Tesla and Google, where you don't even have to be paying attention and the car will just drive itself. The IFTTT and X-Play capability that we have is a step in that direction. It's obviously very early, but it's the beginning of a journey where you can start taking feedback loops, learning what works, modeling that out, and extending capabilities on your own. That is something we'll be looking at over the next few years.
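The IFTTT pattern behind playbook automation like X-Play reduces to a trigger predicate paired with an action, evaluated over incoming events. A minimal sketch of that core idea follows; the event fields, thresholds, and actions are invented for illustration, and a real playbook engine adds approvals, audit trails, and many more action types.

```python
# The IFTTT pattern in miniature: (trigger, action) pairs evaluated over
# incoming events. Event fields and actions are invented for illustration.
from typing import Callable

Playbook = tuple[Callable[[dict], bool], Callable[[dict], None]]

playbooks: list[Playbook] = [
    (
        lambda e: e["metric"] == "cpu" and e["value"] > 0.9,          # IF...
        lambda e: print(f"scale out cluster {e['cluster']}"),          # THEN...
    ),
    (
        lambda e: e["metric"] == "disk_latency_ms" and e["value"] > 50,
        lambda e: print(f"open ticket for {e['cluster']}"),
    ),
]

def handle(event: dict) -> None:
    """Run every playbook whose trigger matches the event."""
    for trigger, action in playbooks:
        if trigger(event):
            action(event)

handle({"metric": "cpu", "value": 0.95, "cluster": "prod-1"})
```

The step toward the "autonomous" end of Gill's car analogy is closing the loop: letting the system learn which actions actually resolved which conditions, rather than only replaying rules a human wrote.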
>> And it's not just that it's cute; it needs to be done, it's actually required. If you look at Moore's law, it applies to machines: every year you will have double the number of cores, and the same dollar can buy more. If you look at humans, that's not true; if anything, we're only getting more expensive, and a lot of customers here say talent is scarce. So just by that definition, the machines are growing while the people who manage the machines are shrinking, or at best static. You have to put a layer of machine that is smart in between the humans and the large farm of machines; if you don't, there is no data center. So it's inevitable, and you'll see this happen more and more. >> So that kind of sounds like you're positioning your portfolio in a way that enables the IT people to not care about infrastructure as much anymore, and helps their employer, their customer, do other stuff. How does your portfolio relate to freeing up time for those employees? >> Some of it just goes back to the core design principle. I'll go to the basics of how we started as a company. We were looking at storage, and there were dual controllers, A and B. A dies, B is running, but guess what, I'm worried that B will also die; it's the same age. So I have to run to fix A, and running to fix A is my weekend and my night wasted. If instead I had N controllers and one dies, fine, it's just a capacity problem. That goes to the core of how we design things to be scale-out and web-scale, like we talked about, and it applies to everything we do, including now Prism Central scale-out. Otherwise you have to rush to go fix things, and hardware will always fail, right? That thinking permeates the entire organization in terms of how we design, and then on top of that you can add automation and machine intelligence and all the rest. But fundamentally it goes to engineering. >> We talked earlier in the discussion about the rewrite that went on for emerging applications and emerging technologies. What's exciting you these days in the industry as a whole? Containers, flash: I looked at Nutanix when it first came out as some of these waves coming together, hyperscale and software-defined and flash, all kind of a perfect storm for the original generation. What are the next waves coming together that you think will have a massive impact on the industry? >> There's a lot of innovation going on at every layer of the stack. If we start with the hardware: it's been coming for a while, but it's almost here now, the whole concept of persistent memory, essentially DIMMs, memory that can persist across reboots and is byte-addressable. This is a big difference for the storage market, right? We've always had block-addressable storage; when it becomes byte-addressable, paradigms of computing will change, and how we write programs will change. So there's a whole big wave coming, and getting prepared for that is very important for us.
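Byte-addressability is the change Mirani points to: with persistent memory you update individual bytes in place instead of rewriting whole blocks. A file-backed mmap approximates the programming model well enough to sketch it; real persistent memory would sit behind a DAX mount and use CPU flush instructions rather than an msync-style call, so treat this as an analogy, not pmem code.

```python
# Byte-addressable persistence, approximated with a file-backed mmap:
# update five bytes in place and persist them, no block rewrite involved.
import mmap
import os

path = "/tmp/pmem-demo.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)               # one page of "persistent" space

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[128:133] = b"hello"             # update five bytes in place...
        m.flush()                         # ...and make them durable

with open(path, "rb") as f:               # the bytes survive the reopen
    f.seek(128)
    print(f.read(5))                      # b'hello'
os.remove(path)
```

The block-storage equivalent of that five-byte update would be a read-modify-write of at least a 4 KB block, which is exactly the paradigm shift for storage software that the answer describes.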
>> Yeah, and let me drill into that a little bit, because before, it was, I had my pool of storage and my pool of compute and I had my networking, and your solution is, I just have a pool of infrastructure. But I need specific data in specific places, and latency is really important. Amazon just announced a new compute instance with hundred-gigabit networking for the same types of applications we're talking about, HANA and persistent memory and the like. So do we not think of it as a pool anymore? Metadata and data are going to get more localized, so how should we think of your infrastructure going forward? >> You should still think of it as a pool. We should worry about making it all work well; that is essentially our job. If we can succeed at that, then you never have to think about whether this particular storage is allocated to this particular application at this current time. It's up to us to make that happen as applications are running. >> And from your direction, you feel... >> Absolutely. Another thing that's happening in IT, in the space of compute, is that the upper limit of this pool is being hidden, right? For example, in the old days there were disks, then there was a virtual disk, but it still had a capacity and you would format it. When you look at S3, it doesn't have a capacity; you don't format it. And that speaks to application design: when you don't have to think about the capacity of the pool you're using, that's the direction we need to go, hiding all of this. I mean, just-in-time purchase of the next hardware that you need to get, but the developer never sees the upper limit. >> Well, Rajiv and Binny, thank you so much for sharing all of this. Congrats on all the progress, and we look forward to what you're going to bring down the road. >> Thank you. >> Thank you. >> For Joep Piscaer, I'm Stu Miniman. Lots more coverage here at Nutanix .NEXT London 2018. Thanks for watching.

Published Date : Dec 3 2018


Jesse Rothstein, ExtraHop | AWS re:Invent 2018


 

>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Hey, welcome back. We're live here at AWS re:Invent 2018 in Las Vegas with theCUBE's coverage. I'm John Furrier; Dave Vellante, my co-host, wall to wall coverage. Dave, six years covering Amazon, watching it grow, watching it become just an unstoppable force of new services: web services being realized from the original vision of many, many years ago, over a decade now. Jesse Rothstein, CTO and co-founder of ExtraHop, is our next guest. Welcome back to theCUBE, good to see you. >> Thanks for having me. >> So first of all, before we get into the conversation, what's your take on this madness here? It's pretty crazy. >> You know, this is I think my sixth year as well, and this show must double in size every year. It's enormous, spread across so many venues, so much going on; it's almost overwhelming. >> I remember six years ago, we used to be on theCUBE, and I think we just kept the stream open: "Hey, come on up! We have an opening!" Now it's two cubes, people trying to get on, no more room, we're dyin', we go as hard as we can, 16 interviews, hundreds of interviews, lots of change. So I've got to ask you, what is your view of the ecosystem? Because back then there were a handful of players in there, and you guys were one of 'em. Lots of opportunities around the rising tide here. What's your thought on the ecosystem's evolution? >> Well, of course the ecosystem has grown, and this show has really become recognized as the pre-eminent cloud show. But I see some themes that I think have certainly solidified. For example, I spent a bunch of time on the security track; that's the largest track by far, I'm told. They're actually breaking it out into a separate add-on conference coming up in the summer. So clearly there's a great deal of interest around cloud security as organizations follow their... >> Did they actually announce that security conference? >> They did, they did. >> Okay, so Boston in June, I think, right? >> June, that's correct. They announced, I think, I don't want to mess up the dates, June, late June. >> I think June 26. Breaking news here, that's new information. That's a really good signal for Amazon; they're taking security seriously. When I interviewed Andy Jassy last week, he said to me, "Security used to be a blocker. Oh, the cloud's not secure!" A couple of short years ago, now it's actually a competitive advantage, but there's still a lot more work to get done. Network layer all the way up: what's your take? Never done. >> Well, so that's what Andy says, and I would rephrase that slightly differently. Security used to be a blocker, and it used to be an area of anxiety, and organizations would have huge debates around whether the cloud is inherently less secure, or not. I think today there's a lot more acceptance that the cloud can be just as secure as on-prem, or just as insecure. In my view, it relies on the same people, processes, and technologies that we have on-prem, which are inherently insecure, and therefore it can be just as insecure. There are some advantages: the cloud has great API logging, building blocks like CloudTrail, new services like GuardDuty. But at the same time, it's hard to hire cloud security expertise, and there is an inherent opacity in public cloud that I think is a real challenge for security. >> Well, and bad human behavior always trumps good security. >> Well, of course.
>> Talk about ExtraHop and how you guys are navigating. You've been in the ecosystem for a while; always an opportunity to grow. I love this: the TAM's expanding, huge expansion in the addressable market, new use cases. What's up with you guys? Give us an update. Where's the value proposition resonating? What's the focus? >> Well, you can probably tell from my interests that we see a lot of market pull and opportunity around cloud security. ExtraHop is an analytics product for IT ops and security, so there's a certain segment of what we do for IT operations use cases, delivering essentially a better level of service; we attach to use cases like cloud migrations and new application roll-outs. But we also have a cyber security offering, a very advanced offering around network behavioral analytics, where we actually detect suspicious behaviors and potential threats and bring them to your attention. And since we leverage our broader analytics platform, you're a click away from being able to investigate or disposition these detections and see, hey, is this something I really need to be concerned about? >> Give an example of some of that network behavior, because I think this is a real critical one. With no perimeter you've got no surface area, you've got APIs; this is the preferred architecture, but you've got to watch the traffic. Can you be specific and give an example? >> So, some of my favorite examples have to do with detecting when you've already been breached. Organizations have been investing in defense in depth for decades: keep the attackers out at the perimeter, keep the attackers away from the endpoint. But how would you know if you've already been breached? It turns out Verizon does a great data breach investigations report annually, and they determined that there are only nine or so behaviors that account for 90% of what all breaches look like. So you look for stages of the attack chain: you look for reconnaissance, you look for lateral movement, you look for some form of exfiltration. Where ExtraHop is taking this further is that we've built sophisticated behavioral models. We're able to understand privilege; we're able to understand what the most important systems in your environment are, the most important instances, and who has administrative control over them. And when that changes, you want to know about it, because maybe this instance, which in an on-prem environment could be something like a contractor laptop or an HVAC system, now exercises some administrative control over a critical system, and it's never done that before. We bring that to your attention. Maybe you want to take some automated action and quarantine it right away; maybe you want to go through some sort of approval process and bring it to someone's attention. Either way, you want to know about it. >> I'm going to get your reaction to a comment I saw yesterday morning in a keynote at Teresa Carlson's public sector breakfast. Christine Halvorsen of the FBI said we're in a data crisis. She talked about how they can't react to some of these bad events, and a lot of it's post-event; that's the basic stuff they need now. And she said, I can't put the puzzle pieces together fast enough. You're actually taking that on from a network ops standpoint, an IT ops standpoint. How do you get the puzzle pieces together fast? What's the secret?
>> Well, the first secret is that we're very focused on real-time network data and network telemetry. I often describe ExtraHop as like Splunk for the network. It requires completely different technology, but the idea's the same: extract value and insight out of data you already have. And the advantage of the network for security, what I love about it, is that it's extremely real-time, it's as close to ground truth as you can get, it's very hard to hide from, and you can never turn it off. >> Yeah. >> So with all of those properties, network analytics has just tremendous implications for cyber security. >> I mean, honestly, you're visibly excited. I'm a data geek myself, but you made a good point I want to double down on: moving packets from A to B is movement, and movement is part of how you detect it, right? >> It is. Packets in themselves, that's data in motion, but if you're only looking at the packets you're barely scratching the surface. Companies have tried to build security analytics on flow data for a long time, and flow records are like a phone bill: they tell you who's talking to whom and how long they spoke, but there's no notion of what was said in the conversation. In order to do really high quality security analytics, you need to go much deeper. So we understand resources, we understand users, we understand what's normal, and we're not just using statistical baselines; we're actually building predictive models around how we expect endpoints and instances to behave. And then when they deviate from their model, that's when we say, "Hey, there's something strange going on." >> That's the key point for you guys. >> And that means you can help me prioritize... >> Absolutely. >> Because that's the biggest challenge these guys have. They oftentimes don't know where to go, they don't know how to weight the different... >> So that's one challenge, and I think another really big challenge, and we see this even with offerings that have been publicized recently, is that detection by itself isn't good enough; that's just an alert cannon. There was a session that actually talked about the alarm deafness that occurs, in hospitals and other environments, where all you get is these constant alarms, and people stop paying attention to them. So in addition to the ability to perform high quality detections, you need a very streamlined investigative workflow, you know, one click away, so you can say, "Okay, what's going on here? Is this something that requires additional investigation?"
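Rothstein's "new administrative control" example can be reduced to a small behavioral rule: remember which clients have ever exercised admin-level protocols against each critical system, and alert the first time a new one does. A minimal sketch follows, with invented protocol names and hosts; a real product models privilege, criticality, and peer groups far more richly than a set lookup.

```python
# First-time-admin detection in miniature: remember which clients have
# exercised admin protocols against each critical system, and alert when
# a never-before-seen client does. Hosts and protocols are illustrative.

seen_admins: dict[str, set[str]] = {}   # critical system -> known admin clients

ADMIN_PROTOCOLS = {"ssh", "rdp", "wmi"}

def observe(client: str, system: str, protocol: str) -> None:
    if protocol not in ADMIN_PROTOCOLS:
        return
    known = seen_admins.setdefault(system, set())
    if client not in known:
        print(f"ALERT: {client} exercised {protocol} control over {system} "
              f"for the first time")
        known.add(client)

observe("laptop-42", "db-prod", "ssh")   # alerts: never seen before
observe("laptop-42", "db-prod", "ssh")   # silent: now part of the baseline
```

Note how this directly addresses the alert-cannon problem he raises: the rule fires once per genuinely new relationship, not on every packet, and each alert carries the context (who, what, over which protocol) needed to start an investigation.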
>> Well, I think you guys are on the right track, and I think what's different about the cloud is that, you know, they call the show re:Invent, but rethinking existing stuff for cloud scale is a different mindset; it's holistic. You're taking more of a holistic view, saying, I'm not going to focus on a, quote, packet path, or a silo that I'm comfortable with. You've got to look at the bigger picture, and then have a data strategy, or some competitive, unique IP. >> I think that's an excellent summary. What I would add is that as organizations follow their cloud journey, we're seeing a lot of interest from security teams in particular that don't want to do swivel-chair integration, where I have something on-prem and I have something in the cloud. They want something much more holistic, much more unified. >> Seamless, automated. >> Much more seamless, much more automated. (laughing) You know, I sat in about five different security track sessions, and every single one of them kind of ended with, "So we automated it with a Lambda function." (laughing) Clearly a lot of capability for automation in public cloud. >> Jesse, great to have you on theCUBE, CTO and co-founder of ExtraHop. What's next for you? What's goin' on? >> Well, we continue to make really big investments in security. I wish I could say that cyber security would be done at some point, but it will never be done; it's an arms race. Right now I think we're seeing some really great advancements on the defense side that will translate into big success. And we're always focused on the data problem, as data goes from 10 gigabits to 100 gigabits. Amazon just announced their C5n instances with accelerated 100 gigabit networking. We're always looking at how we can extract more value from that data at scale. >> Leverage the power, leverage the power. Well, we've got to get you back on the program; we're going to increase our cyber security coverage, and we certainly will be at the security event. I didn't know it was announced publicly: June 26th and 27th, in Boston, give or take a day on either side. This is a big move for Amazon; we'll be there. >> I think it is. >> Great job. Live coverage here from the Expo floor at AWS re:Invent 2018; we'll be right back with more Cube coverage after this short break. Two sets. We'll be right back. (soft electronic music)

Published Date : Nov 29 2018


Paul Savill, CenturyLink | AWS re:Invent 2018


 

>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Hey everyone, I'm John Furrier, co-host of theCUBE, here live in Las Vegas for AWS re:Invent 2018, our sixth year covering it, presented by Intel and AWS. Our next guest is Paul Savill, Senior Vice President of core network and technology for CenturyLink. Welcome to theCUBE, good to see you, thanks for coming on. >> Thanks, really glad to be here. >> So one of the things we've been covering on SiliconANGLE and theCUBE is that the holy trinity of infrastructure is storage, network, and compute: never going away, but evolving as the market evolves. You guys have been providing connectivity and core network. >> Right. >> Really high availability bandwidth and connectivity, for many, many years. Now you guys are in the middle of a sea change. What's your story? What are you doing at re:Invent? You're partners, your logo is everywhere. Why are you here? What are you talking about at re:Invent this year? >> Sure, yeah. You know, I really do believe it is a sea change. We've actually been working with AWS for many years, and when AWS first started, we were one of the major internet service providers for AWS and for access into AWS cloud services. But a few years ago we really started seeing this sea change start to happen, because enterprise customers started asking for weird things from us. They actually wanted to order dedicated 10 gigabit optical waves from their enterprise location into the AWS platform, and we were thinking, everything should come through the public internet, why are people doing this? And really, what was driving it was issues around performance and concerns around security. So we're starting to see the network really start to play a major role in how cloud services are delivered and how cloud-based applications perform. >> We're here on day two of re:Invent, with two more days to go. Andy Jassy, the CEO, has got his big keynote tomorrow morning. We're expecting latency to be a big part of his keynote, specifically as Amazon evolves its strategy from being public cloud, where all the action is, to having a cloud version on premise. >> Right. >> Because of latency; and heritage or legacy workloads on premise certainly aren't going away. Maybe their footprint might be smaller, I'd buy that, but it's not going away. But connectivity and latency are now at the front of the conversation again, because data and compute have that relationship. I don't want to be moving data around, and if I do, it better be low latency; but I want to run compute over the network, I want to send some compute to the edge. So latency is important. Talk about this, because you gave a talk here around "milliseconds matter," I love that line, because they do matter now. >> They do, yeah. >> Talk about that concept. >> Sure, yeah. We're absolutely seeing it, and the reason we kind of came up with that tag line is because, more and more, as we've been working with enterprises on networking solutions, we've found that this is really true for how well their applications perform in the cloud. And I really do applaud Jassy and AWS for working on that solution to deliver some of the AWS capabilities to the prem.
But really we see the market evolving, where in the future it's a trade-off between latency and the amount of bandwidth and how the performance needs to be applied across the field. Because we believe that some things will make a lot of sense to be hosted out of the cloud core, where there's major iron, major storage and compute, some things can be distributed on the prem, but then other things make more sense to be hosted out of somewhere on the far edge where it can serve multiple locations. It may be more efficient that way, because maybe you don't want to haul all the bandwidth, or huge amounts of data, very long distances; that becomes expensive. >> Well bandwidth costs, it's a cost to you. >> It's still a cost, yeah. >> Latency, for one, is a performance overhead that could hurt the application, but there's also a cost, an actual financial cost. >> Yes, there is. >> Talk about this concept of latency in the context of the new kinds of applications, because what's going on is that as compute, and as you mentioned, storage, start to get more functionality, specifically compute, >> Yes >> Things happen differently. I've been studying AI, I've been a computer science major since the '80s, and AI's been around since the '80s and earlier, but all those concepts just didn't have the compute capability and now they do. Now machine learning is on fire, that's a renaissance. Compute can help connectivity, you just mentioned a huge case there, so this is powering new software applications that no one has ever seen before. >> That's right. >> How are these new network workloads and applications changing connectivity? Give some examples, what are some of the things you guys are seeing as use cases running over the connectivity? >> Sure. So we're seeing a lot of different use cases, and you're right, it really is transforming. An example of this is retail robotics, for instance. We're seeing very real applications where large retail customers want to drive robotics in their many retail store locations, but it's just not affordable to put that whole hardware and software stack in every single store to run those robotics. But then if you try to run those robotics from an application that's hosted in a cloud somewhere a thousand miles away, then it doesn't have the latency performance that it needs to accurately run those robotics in the store. So we believe that what we're starting to see is this transformation where applications are going to be broken up into these microservices, where parts of it are going to run in the cloud core, part of it's going to run on the prem, and part of it's going to run on the near edge where things are more efficient to run for certain types of applications. >> It's kind of like a human. You got your brains and you got your arms and legs to move around. So the brains can be in the cloud, and then whatever is going on at the edge can have more compute. Give some other examples. You and I were talking before we came on camera about video retail analytics. >> Right, uh huh. >> Pretty obvious when you think about it, but not obvious when you don't have cloud. So talk about video analytics. >> Yeah, that's another important driver. With all of the AI tools that are being developed, and as AI advances and as other technology, like machine learning, advances, then we want to apply AI to a whole new range of applications. So retail, like video analytics for instance, what we're starting to see is the art of the possible. 
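The "milliseconds matter" point above is easy to make concrete with a little physics. The back-of-envelope sketch below uses illustrative distances and an assumed 10 millisecond control-loop budget, not CenturyLink figures, to show why the in-store robotics example can't be driven from a cloud core a thousand miles away: propagation delay alone consumes the budget before any compute happens.

```python
# Back-of-envelope propagation delay, illustrating why a tight control
# loop can't run from a cloud a thousand miles away. All distances and
# the 10 ms budget are illustrative assumptions, not CenturyLink figures.

SPEED_IN_FIBER_KM_S = 200_000  # light in fiber travels at roughly 2/3 of c

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds, fiber only,
    ignoring queuing, serialization, and processing time."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

CONTROL_LOOP_BUDGET_MS = 10  # assumed latency budget for in-store robotics

for label, km in [("in-store / on-prem", 1),
                  ("metro near edge", 50),
                  ("regional data center", 500),
                  ("distant cloud core (~1000 miles)", 1609)]:
    rtt = round_trip_ms(km)
    verdict = "OK" if rtt < CONTROL_LOOP_BUDGET_MS else "budget blown"
    print(f"{label:34s} {rtt:6.2f} ms round trip  ({verdict})")
```

Even with infinitely fast compute on the far end, the roughly 16 ms round trip from a thousand miles out can never meet a 10 ms loop, which is exactly the argument for splitting the application across core, prem, and near edge.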
You may have a retail store that has 30 different video cameras spread up around its store, and it's constantly monitoring people's expressions, people's moods as they come in. There's an AI sitting somewhere that's analyzing how people feel when they walk in the store versus how they feel when they walk out. Are they happier when they walk out than when they walk in? Are they really mad when they're in the waiting line someplace? Or is there a corner of the store where, real time, there's an AI that's detecting that, hey, there's a problem in the corner of the store because people seem like they look upset. That type of analysis, you don't want to feed all of that video, all of those simultaneous video feeds, to some AI that's sitting a thousand miles away. That's just too much of a lift in terms of bandwidth and in terms of cost. So the answer is there's this distributed model where portions of the application and the AI are acting at different locations in the network, and the network is tying it all together. >> Microservices are going to create a whole new level of capabilities and change how they're implemented and deployed. >> Yes. >> And connectivity still feeds the beast called the application. Also, the other thing we're seeing, as we expect to hear Amazon announce, is new kinds of connectivity, whether it be satellite and/or bandwidth to edges. IoT, or the network edge as it's called, where the edge network kind of ends with power and connectivity. Because without power and connectivity it's not on the network, it's not an edge. >> That's right. >> There's a trend to push the boundary of edge. Battery power is lasting longer, so now you need connectivity. How do you guys at CenturyLink look at this? Do you want to push the boundaries, how are you pushing the boundaries? >> Sure. >> Yeah, IoT is another area that's really changing the business. It's opening up so many new opportunities. When you talk about the edge, it's really funny, because people define the edge in so many different ways, and the truth is the edge can vary depending on what the application is. In IoT, if you have a bunch of remote devices that are battery powered that are signaling back to some central application, well then those IoT physical devices are the new edge, and they could be very deep into some kind of a market. But there's a lot of different communications technology that can access those. There's 5G wireless that's emerging, or regular wireless. There are applications like LoRaWAN, which is a very low bandwidth but very cost effective way for small IoT devices to communicate small amounts of data back to a central application. And then there's actual fiber that can be used to serve locations where IoT devices can be feeding very heavy amounts of bandwidth back to applications. >> So it's good for your business? >> It's great for our business. We really see it opening up so many other new avenues for us to serve our customers. >> So I'm going to put you on the spot here. If I ask you a question, what has cloudification done for CenturyLink? How has it changed your business? How would you respond to that question? >> I think that it's made what we do even more critical to the future of how enterprises operate. The reason for that is just the point that you made when we started, which is that storage and compute and networking are all really coming back together in terms of how it boils down to those things. 
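The 30-camera store example above also puts numbers on the bandwidth side of the trade-off. In the rough sizing sketch below, the per-stream bitrate, store count, and event payload are invented for illustration; the point is the orders-of-magnitude gap between backhauling raw video and shipping only edge-inference results.

```python
# Rough sizing for the retail video analytics example above. The
# per-stream bitrate, store count, and event payload are assumptions
# for illustration, not figures from the interview.

cameras_per_store = 30
mbps_per_stream = 4        # assumed 1080p H.264 camera feed
stores = 1000              # assumed nationwide chain

raw_per_store_mbps = cameras_per_store * mbps_per_stream
raw_total_gbps = raw_per_store_mbps * stores / 1000

# Edge alternative: run the vision model in or near the store and ship
# only events and metadata, assumed at ~10 kB/s per store.
edge_total_mbps = stores * 10 * 8 / 1000

print(f"Backhauling raw video: {raw_per_store_mbps} Mbps per store, "
      f"{raw_total_gbps:.0f} Gbps aggregate")
print(f"Edge inference, events only: {edge_total_mbps:.0f} Mbps aggregate")
```

Under these assumptions the chain would need roughly 120 Gbps of sustained backhaul to centralize everything, versus well under 1 Gbps when the AI acts at the edge and the network carries only results.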
But networking is becoming a much more important factor in all of this, because of the latency issues that are there and the amount of bandwidth it's possible to generate. We believe that it's creating an opportunity for us to play a more pivotal role in the whole evolving cloud ecosystem. >> I still think this is such an awesome new area because, again, it's so early. And as storage, network, and compute continue to morph, all of us networking geeks and infrastructure geeks, software geeks, are going to actually have an opportunity to reimagine how to use those parts. >> It is, yeah. >> And with microservices and custom silicon, you see what Amazon's done with Annapurna Labs. You can have data processing units, connectivity processing units, you can have all kinds of new capabilities. It's a whole new world. >> It is, and, you know, interestingly enough, organizations are going to have to change. One of the things we see with enterprises is that many enterprises are organized so that those three areas are still completely managed in separate departments. But in this new world of how cloud is crushing all of those things together, those departments are going to have to start working much more closely aligned. I had a customer visit me after our session yesterday, and he was saying, I get the whole thing of how, now, when you deploy an application in the cloud, you can't just think about the application. You've got to think about the network that ties it all together. But he says, I don't know how to get my organization to do that. They're still so segmented and separated. It's a tough challenge. >> And silos are a critical problem. I just saw a presentation with the FBI deputy director of counterterrorism, and they can't put the puzzle pieces together fast enough to evaluate threats because of the databases. She gave an example around the Las Vegas shooting here. Just going through the videotape of the hotel took 12 people, 20 hours a day, for a week. They did it in twenty minutes with facial recognition. And they have all this data, so putting those puzzle pieces together is critical. I think connectivity truly is going to be a new kind of backbone. >> Yes, uh huh. >> You guys are doing some good work. Okay, let's get a plug in for you guys real quick. By the way, thanks for the insight. Great stuff here at re:Invent. One year anniversary of CenturyLink and Level 3 coming together. Synergies, what are you guys doing? Give the update on the coming one year anniversary of the synergies. >> Sure. Uh huh. >> What are the synergies? >> Yeah, we're getting tremendous synergies. In fact, I think if you listen to our analyst reports and our quarterly earnings calls, we're really ahead of plan in that area. We've actually raised our earnings guidance for the year beyond what was originally expected of us. We're doing really well on that front. I'll tell you, the thing that excites me more than synergies is the combined opportunity that we have because of these two companies coming together. Bringing the companies together surprised me with the new opportunities we found. For instance, when you take Level 3, which is a globally distributed network covering Europe, and Latin America, and North America, and parts of the Pacific Rim with fiber and subsea systems, and combine it with CenturyLink's dense coverage of fiber in North America, then it really creates a stronger ability for this company to reach enterprises with very high performing network solutions. 
One of the main things that surprised me actually relates to this conference, and that is that CenturyLink was really focused around building out cloud services, working closely with companies like AWS on creating managed services around cloud, building performance tools around managing cloud-based applications. Level 3 was really focused on building out network connectivity in a dynamic way, using the new software defined networking technologies to be the preferred provider of high performance networking to cloud service providers. >> The timing was pretty impeccable on the combination, because you were kind of cloudifying before cloud native was called cloud native. You were thinking about it in kind of a DevOps mindset, and they were kind of thinking of it from a software agility perspective out of infrastructure. Kind of bringing those together. Did I get that right? >> That's exactly right. Level 3 was thinking about how to make the network consumable on a dynamic basis and an on demand basis, the same way cloud is. When you combine CenturyLink's capabilities with that, then it's just opening up so many new things for us to do, so many new ways that we can deliver value to our enterprise customers. >> Well I'm always hungry for more bandwidth, so come on. You guys lighting up all that fiber? How's all the fiber? >> Yeah, we're expanding dramatically. We're investing heavily in that fiber network. We have around 160,000 enterprise buildings on our network today, and we're growing that just as fast as we can. >> So Paul Savill, you're the guy to call if I want to get some core network action, huh? >> That's right. >> Alright. Thanks for the insight, great to have you. Good luck at the show here at re:Invent. CenturyLink here inside theCUBE, powering connectivity. A big part of the theme here at re:Invent this year is powering the edge, getting connectivity to places that need low latency for those workloads. That's the key theme. You guys are right on the trend line here. CenturyLink on theCUBE, I'm John Furrier. Stay with us for more wall to wall coverage after this short break. (upbeat techno music)

Published Date: Nov 27 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity                        Category          Confidence
Amazon Web Services           ORGANIZATION      0.99+
CenturyLink                   ORGANIZATION      0.99+
Amazon                        ORGANIZATION      0.99+
AWS                           ORGANIZATION      0.99+
Paul Savill                   PERSON            0.99+
Andy Jassy                    PERSON            0.99+
FBI                           ORGANIZATION      0.99+
Europe                        LOCATION          0.99+
12 people                     QUANTITY          0.99+
twenty minutes                QUANTITY          0.99+
John Furrier                  PERSON            0.99+
North America                 LOCATION          0.99+
tomorrow morning              DATE              0.99+
Las Vegas                     LOCATION          0.99+
two companies                 QUANTITY          0.99+
yesterday                     DATE              0.99+
Intel                         ORGANIZATION      0.99+
One                           QUANTITY          0.99+
Latin America                 LOCATION          0.99+
30 different video cameras    QUANTITY          0.99+
one year                      QUANTITY          0.98+
20 hours a day                QUANTITY          0.98+
two more days                 QUANTITY          0.98+
one                           QUANTITY          0.98+
sixth year                    QUANTITY          0.97+
'80s                          DATE              0.97+
re:Invent                     EVENT             0.97+
first                         QUANTITY          0.97+
three areas                   QUANTITY          0.95+
today                         DATE              0.95+
around 160,000 enterprise     QUANTITY          0.95+
10 gigabit                    QUANTITY          0.93+
LoRaWAN                       TITLE             0.91+
a week                        QUANTITY          0.91+
Annapurna Labs                ORGANIZATION      0.91+
Level 3                       QUANTITY          0.9+
few years ago                 DATE              0.88+
Invent                        EVENT             0.87+
day two                       QUANTITY          0.85+
this year                     DATE              0.85+
One year anniversary          QUANTITY          0.85+
Level 3                       QUANTITY          0.85+
Invent 2018                   EVENT             0.84+
Jassy                         PERSON            0.83+
thousand miles                QUANTITY          0.8+
AWS                           EVENT             0.79+
re:Invent 2018                EVENT             0.78+
single store                  QUANTITY          0.77+
theCUBE                       ORGANIZATION      0.73+
SiliconANGLE                  ORGANIZATION      0.7+

DDN CrowdChat | October 11, 2018


 

(uptempo orchestral music) >> Hi, I'm Peter Burris, and welcome to another Wikibon theCUBE special feature, a special digital community event on the relationship between AI, infrastructure and business value. It's sponsored by DDN with participation from NVIDIA, and over the course of the next hour we're going to reveal something about this special and evolving relationship between sometimes tried and true storage technologies and the emerging potential of AI as we try to achieve these new business outcomes. So to do that we're going to start off with a series of conversations with some thought leaders from DDN and from NVIDIA, and at the end we're going to go into a crowd chat, and this is going to be your opportunity to engage these experts directly. Ask your questions, share your stories, find out what your peers are thinking and how they're achieving their AI objectives. That's at the very end, but to start, let's begin the conversation with Kurt Kuckein, who is a senior director of marketing at DDN. >> Thanks Peter, happy to be here. >> So tell us a little bit about DDN at the start. >> So DDN is a storage company that's been around for 20 years. We've got a legacy in high performance computing, and that's where we see a lot of similarities with this new AI workload. DDN is well known in that HPC community. If you look at the top 100 supercomputers in the world, we're attached to 75% of them. And so we have the fundamental understanding of that type of scalable need; that's where we're focused. We're focused on performance requirements. We're focused on scalability requirements, which can mean multiple things. It can mean the scaling of performance. It can mean the scaling of capacity, and we're very flexible. >> Well let me stop you and say, so you've got a lot of customers in the high performance world, and a lot of those customers are at the vanguard of moving to some of these new AI workloads. What are customers saying? With this significant engagement that you have with the best and the brightest out there, what are they saying about this transition to AI? >> Well I think it's fascinating that we have a bifurcated customer base here, where we have those traditionalists who probably have been looking at AI for over 40 years, and they've been exploring this idea and they've gone through the peaks and troughs in the promise of AI, and then contraction because CPUs weren't powerful enough. Now we've got this emergence of GPUs in the supercomputing world, and if you look at how the supercomputing world has expanded in the last few years, it is through investment in GPUs. And then we've got an entirely different segment, which is a much more commercial segment, and they may be newly invested in this AI arena. They don't have the legacy of 30, 40 years of research behind them, and they are trying to figure out exactly, what do I do here? A lot of companies are coming to us: hey, I have an AI initiative. Well, what's behind it? We don't know yet, but we've got to have something. And they're trying to understand where this infrastructure is going to come from. >> So there's general availability of AI technologies, and obviously flash has been a big part of that. Very high speed networks within data centers. Virtualization certainly helps as well. That opens up to the enterprise the possibility of using these algorithms, some of which have been around for a long time but required very specialized, bespoke hardware configurations. That still begs the question. 
There are some differences between high performance computing workloads and AI workloads. Let's start with the similarities, and then let's explore some of the differences. >> So the biggest similarity, I think, is that it's an intractably hard IO problem. At least from the storage perspective, it requires a lot of high throughput, depending on where those IO characteristics are coming from. It can be very small-file, IOPS-intensive type workflows, but it needs the ability of the entire infrastructure to deliver all of that seamlessly from end to end. >> So really high performance throughput, so that you can get to the data you need and keep the computing element saturated. >> Keeping the GPU saturated is really the key. That's where the huge investment is. >> So how do AI and HPC workloads differ? >> How they are fundamentally different is that AI workloads often operate on a smaller scale in terms of the amount of capacity, at least today's AI workloads, right? As soon as a project encounters success, our forecast is that those things will take off and you'll want to apply those algorithms against bigger and bigger data sets. But today we encounter things like 10 terabyte data sets, 50 terabyte data sets, and a lot of customers are focused only on that. But what happens when you're successful? How do you scale your current infrastructure to petabytes and multi petabytes when you'll need it in the future? >> So when I think of HPC, I think of often very, very big batch jobs, very, very large complex datasets. When I think about AI, like image processing or voice processing or whatever else it might be, it's lots of small files, randomly accessed, that nonetheless require some very complex processing that you don't want to have to restart all the time, and a degree of checkpointing to make sure you don't lose the work along the way. Have I got that right? >> You've got that right. Now one misconception, I think, is on the HPC side: that whole random small file thing has come in in the last five, 10 years, and it's something DDN has been working on quite a bit. Our legacy was in high performance throughput workloads, but the workloads have evolved so much on the HPC side as well, and as you posited at the beginning, so much of it has become AI and deep learning research. >> Right, so they look a lot more alike. >> They do look a lot more alike. >> So if we think about the evolving relationship now between some of these new data-first workloads, AI-oriented, change-the-way-the-business-operates type of stuff, what do you anticipate is going to be the future of the relationship between AI and storage? >> Well, what we foresee really is that the explosion in AI needs and AI capability is going to mimic what we already see, and really drive what we see on the storage side. We've been showing that graph for years and years of just everything going up and to the right, but as AI starts working on itself and improving itself, as the collection mechanisms keep getting better and more sophisticated and have increased resolutions, whether you're talking about cameras or acquisition in the life sciences, capabilities just keep getting better and the resolutions get better and better. It's more and more data, right, and you want to be able to expose a wide variety of data to these algorithms. That's how they're going to learn faster. And so what we see is that the data centric part of the infrastructure is going to need to scale, even if you're starting today with a small workload. 
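To make the pattern Kurt and Peter are describing concrete, here is a minimal sketch of the standard remedy: overlap small-file IO with compute through a bounded prefetch queue so the accelerator is never left waiting. The dataset path and the training step are hypothetical stand-ins, not DDN or NVIDIA APIs.

```python
# Minimal prefetch pipeline for a many-small-files, random-access
# workload. "dataset/" and train_step() are hypothetical stand-ins;
# a real deployment would use a framework's input pipeline on top of
# the parallel file system.

import queue
import random
import threading
from pathlib import Path

def reader(paths, q, batch=32):
    """IO thread: shuffle the small files and stage batches ahead of compute."""
    random.shuffle(paths)
    for i in range(0, len(paths), batch):
        blobs = [p.read_bytes() for p in paths[i:i + batch]]
        q.put(blobs)       # blocks once the pipeline is far enough ahead
    q.put(None)            # end-of-data sentinel

def train_step(blobs):
    """Placeholder for GPU work; in reality a framework training step."""
    return sum(len(b) for b in blobs)

paths = list(Path("dataset").glob("**/*.jpg"))  # assumed layout
q: queue.Queue = queue.Queue(maxsize=8)         # bounded prefetch depth
threading.Thread(target=reader, args=(paths, q), daemon=True).start()

while (blobs := q.get()) is not None:
    train_step(blobs)      # compute overlaps with the reader's IO
```

The bounded queue is the whole trick: the reader races ahead during compute, and the compute side never stalls as long as the storage can sustain the read rate, which is precisely where the parallel file system earns its keep.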
>> Kurt, thank you very much, great conversation. How does this turn into value for users? Well, let's take a look at some use cases that come out of these technologies. >> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that provides end-to-end acceleration for a wide variety of AI and DL use cases at any scale. The platform provides tremendous flexibility and supports a wide variety of workflows and data types. Already today, customers in industry, academia and government all around the globe are leveraging DDN A3I with NVIDIA DGX-1 for their AI and DL efforts. In this first example use case, DDN A3I enables a life sciences research laboratory to accelerate a microscopy capture and analysis pipeline. On the top half of the slide is the legacy pipeline, which displays low resolution results from a microscope with a three minute delay. On the bottom half of the slide is the accelerated pipeline, where DDN A3I with NVIDIA DGX-1 delivers results in real time, 200 times faster and with much higher resolution than the legacy pipeline. This use case demonstrates how a single unit deployment of the solution can enable researchers to achieve better science and the fastest time to results without the need to build out complex IT infrastructure. The white paper for this example use case is available on the DDN website. In the second example use case, DDN A3I with NVIDIA DGX-1 enables an autonomous vehicle development program. The process begins in the field, where an experimental vehicle generates a wide range of telemetry that's captured on a mobile deployment of the solution. The vehicle data is used to train capabilities locally in the field, which are transmitted to the experimental vehicle. Vehicle data from the fleet is captured to a central location, where a large DDN A3I with NVIDIA DGX-1 solution is used to train more advanced capabilities, which are transferred back to experimental vehicles in the field. The central facility also uses the large data sets in the repository to train experimental vehicles in simulated environments to further advance the AV program. This use case demonstrates the scalability, flexibility and edge-to-data-center capability of the solution. DDN A3I with NVIDIA DGX-1 brings together industry leading compute, storage and network technologies in a fully integrated and optimized package that makes it easy for customers in all industries around the world to pursue breakthrough business innovation using AI and DL. >> Ultimately, this industry is driven by what users must do, the outcomes they try to seek. But it's always made easier and faster when you've got great partnerships working on some of these hard technologies together. Let's hear how DDN and NVIDIA are working together to try to deliver new classes of technology capable of making these AI workloads scream. Specifically, we've got Kurt Kuckein coming back. He's a senior director of marketing for DDN. And Darrin Johnson, who is global director of technical marketing for enterprise at NVIDIA, covering deep learning. Today we're going to be talking about what infrastructure can do to accelerate AI, and specifically we're going to use a brand-new relationship between DDN and NVIDIA to describe what we can do to accelerate AI workloads by using higher performance, smarter and more focused infrastructure for computing. Now to have this conversation, we've got two great guests here. 
We've got Kurt Kuckein, who is the senior director of marketing at DDN. And also Darrin Johnson, who's the global director of technical marketing for enterprise at NVIDIA. Kurt, Darrin, welcome to theCUBE. >> Thank you very much. >> So let's get going on this, 'cause this is a very, very important topic, and I think it all starts with this notion that there is a relationship that you guys have put forward. Kurt, why don't you describe it. >> Sure, well, what we're announcing today is DDN's A3I architecture powered by NVIDIA. So it is a full rack-level solution, a reference architecture that's been fully integrated and fully tested to deliver an AI infrastructure very simply, very completely. >> So if we think about why this is important: AI workloads clearly put special stress on the underlying technology. Darrin, talk to us a little bit about the nature of these workloads and why in particular things like GPUs and other technologies are so important to make them go fast. >> Absolutely, and as you probably know, AI is all about the data. Whether you're doing medical imaging, whether you're doing natural language processing, whatever it is, it's all driven by the data. The more data that you have, the better results that you get, but to drive that data into the GPUs, you need greater IO, and that's why we're here today to talk about DDN and the partnership of how to bring that IO to the GPUs on our DGX platforms. >> So if we think about what you describe: a lot of small files, often randomly distributed, with nonetheless very high profile jobs that just can't stop midstream and start over. >> Absolutely, and if you think about the history of high performance computing, which is very similar to AI, really IO is just that. Lots of files. You have to get it there. Low latency, high throughput, and that's why DDN's nearly 20 years of experience working in that exact same domain is perfect, because you get the parallel file system, which gives you that throughput, gives you that low latency. It just helps drive the GPU. >> So you mentioned HPC and 20 years of experience. Now it used to be that in HPC you'd have a scientist with a bunch of graduate students setting up some of these big, honking machines, but now we're moving into the commercial domain. You don't have graduate students running around. You have lower cost, high quality people, a lot of administrators, quick people nonetheless, but with a lot to learn. So how does this relationship actually start making or bringing AI within reach of the commercial world? Kurt, why don't you-- >> Yeah, that's exactly where this reference architecture comes in. So a customer doesn't need to start from scratch. They have a design now that allows them to quickly implement AI. It's something that's really easily deployable. We fully integrated the solution. DDN has made changes to our parallel file system appliance to integrate directly with the DGX-1 environment. That makes it even easier to deploy from there, and to extract the maximum performance out of this without having to run around tuning a bunch of knobs and changing a bunch of settings. It's really going to work out of the box. >> And NVIDIA has done more than the DGX-1. It's more than hardware. You've done a lot of optimization of different AI toolkits, et cetera, so talk a little bit about that, Darrin. >> Talking about the example of researchers in the past with HPC: what we have today are data scientists. Data scientists understand PyTorch, they understand TensorFlow, they understand the frameworks. 
They don't want to understand the underlying file system, networking, RDMA, InfiniBand, any of that. They just want to be able to come in, run their TensorFlow, get the data, get the results, and just keep turning that crank, whether it's a single GPU or 90 DGXs or as many DGXs as you want. So this solution helps bring that to customers much more easily, so those data scientists don't have to be system administrators. >> So roughly it's the architecture that makes things easier, but it's more than just for some of these commercial things. It's also the overall ecosystem. New applications fire up, application developers come in. How is this going to impact the aggregate ecosystem that's growing up around the need to drive AI-related outcomes? >> Well, I think one point that Darrin was getting to there, and one of the big effects, is as these ecosystems reach a point where they're going to need to scale. That's somewhere DDN has tons of experience. So many customers are starting off with smaller datasets. They still need the performance; a parallel file system in that case is going to deliver that performance. But then also as they grow, going from one GPU to 90 DGXs is going to take an incredible amount of both performance scalability, which they're going to need from their IO, as well as capacity scalability. And that's another thing that we've made easy with A3I: being able to scale that environment seamlessly within a single namespace, so that people don't have to deal with a lot of tuning and turning of knobs again to make this stuff work really well and drive those outcomes that they need as they're successful. In the end, it is the application that's most important to both of us, right? It's not the infrastructure. It's making the discoveries faster. It's processing information out in the field faster. It's doing analysis of the MRI faster. Helping the doctors, helping anybody who is using this to really make faster decisions, better decisions. >> Exactly. >> And just to add to that, in the automotive industry you have datasets that are 50 to 500 petabytes, and you need access to all that data, all the time, because you're constantly training and retraining to create better models, to create better autonomous vehicles, and you need the performance to do that. DDN helps bring that to bear, and with this reference architecture it simplifies it, so you get the value add of NVIDIA GPUs plus its ecosystem software plus DDN. It's a match made in heaven. >> Kurt, Darrin, thank you very much. Great conversation. To learn more about what they're talking about, let's take a look at a video created by DDN to explain the product and the offering. >> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that enables and accelerates end to end data pipelines for AI and DL workloads of any scale. It is designed to provide extreme amounts of performance and capacity, backed by a jointly engineered and validated architecture. Compute is the first component of the solution. The DGX-1 delivers over one petaflop of DL training performance, leveraging eight NVIDIA Tesla V100 GPUs in a 3RU appliance. The GPUs are configured in a hybrid cube mesh topology using the NVIDIA NVLink interconnect. DGX-1 delivers linearly predictable application performance and is powered by the NVIDIA DGX software stack. DDN A3I solutions can scale from single to multiple DGX-1s. Storage is the second component of the solution. 
The DDN AI200 is an all-NVMe parallel file storage appliance that's optimized for performance. The AI200 is specifically engineered to keep GPU computing resources fully utilized. The AI200 ensures maximum application productivity while easily managing day-to-day data operations. It's offered in three capacity options in a compact 2U chassis. The AI200 appliance can deliver up to 20 gigabytes a second of throughput and 350,000 IOPS. The DDN A3I architecture can scale up and out seamlessly over multiple appliances. The third component of the solution is a high performance, low latency, RDMA-capable network. Both EDR InfiniBand and 100 Gigabit Ethernet options are available. This provides flexibility, ensuring seamless scaling and easy integration of the solution within any IT infrastructure. DDN A3I solutions with NVIDIA DGX-1 bring together industry leading compute, storage and network technologies in a fully integrated and optimized package that's easy to deploy and manage. It's backed by deep expertise and enables customers to focus on what really matters: extracting the most value from their data with unprecedented accuracy and velocity. >> Always great to hear about the product. Let's hear the analyst's perspective. Now I'm joined by Dave Vellante, colleague here at Wikibon and co-CEO of SiliconANGLE. Dave, welcome to theCUBE. Dave, a lot of conversations about AI. What is it about today that is making AI so important to so many businesses? >> Well I think it's three things, Peter. The first is the data. We've been on this decade-long Hadoop bandwagon, and what that did is really focus organizations on putting data at the center of their business, and now they're trying to figure out, okay, how do we get more value out of that? So the second piece of that is technology is now becoming available. AI of course has been around forever, but the infrastructure to support it, GPUs, the processing power, flash storage, deep learning frameworks like TensorFlow, have really started to come to the marketplace. So the technology is now available to act on that data. And I think the third is people are trying to get digital right. This is all about digital transformation. Digital meets data. We talk about that all the time, and every corner office is trying to figure out what their digital strategy should be. So they're trying to remain competitive, and they see automation, and artificial intelligence, machine intelligence, applied to that data as a linchpin of their competitiveness. >> So a lot of people talk about the notion of data as a source of value, and the presumption is that it's all going to the cloud. Is that accurate? >> Oh, it's funny that you say that, because as you know we've done a lot of work on this, and I think the thing that's important organizations have realized in the last 10 years is that the idea of bringing five megabytes of compute to a petabyte of data is far more valuable. And as a result the pendulum is really swinging in many different directions. One being the edge: data is going to stay there, and certainly the cloud is a major force. And most of the data still today lives on premises, and that's where most of the data is likely going to stay. And so no, all the data is not going to go into the cloud. >> It's not the central cloud? >> That's right, the central public cloud. You can redefine the boundaries of the cloud, and the key is you want to bring that cloud-like experience to the data. 
We've talked about that a lot in the Wikibon and Cube communities, and that's all about simplification and cloud business models. >> So that suggests pretty strongly that there is going to continue to be a relationship between choices about hardware infrastructure on premises and the success at making some of these advanced, complex workloads run and scream, and really drive some of those innovative business capabilities. As you think about that, what is it about AI technologies, or AI algorithms and applications, that has an impact on storage decisions? >> Well, the characteristics of the workloads are oftentimes going to be largely unstructured data that's going to be small files. There's going to be a lot of those small files, and they're going to be randomly distributed, and as a result that's going to change the way in which people design systems to accommodate those workloads. There's going to be a lot more bandwidth. There's going to be a lot more parallelism in those systems in order to accommodate and keep those processors busy. And, as we're going to talk about, the workload characteristics are changing, so the fundamental infrastructure has to change as well. >> And so our goal ultimately is to ensure that we keep these new high performing GPUs saturated by flowing data to them without a lot of spiky performance throughout the entire subsystem. Have we got that right? >> Yeah, I think that's right, and that's why I was talking about parallelism; that's what you want to do. You want to be able to load up that processor, especially these alternative processors like GPUs, and make sure that they stay busy. The other thing is, when there's a problem, you don't want to have to restart the job. So you want to have real time error recovery, if you will. And that's been crucial in the high performance world for a long, long time, because these jobs, as you know, take a long, long time. To the extent that you don't have to restart a job from ground zero, you can save a lot of money. >> Yeah, especially as you said, as we start to integrate some of these AI applications with some of the operational applications that are actually recording the results of the work that's being performed, or the prediction that's being made, or the recommendation that's been offered. So I think ultimately, if we start thinking about this crucial role that AI workloads are going to have in business, and that storage is going to have on AI, moving more processing closer to data, et cetera, that suggests that there's going to be some changes in the offerings from the storage industry. What is your thinking about how the storage industry is going to evolve over time? >> Well there's certainly a lot of hardware stuff that's going on. We always talk about software-defined, but hey, hardware stuff matters. Obviously flash storage changed the game from spinning mechanical disk, and that's part of this. Also, as I said before, we're seeing a lot more parallelism, and high bandwidth is critical. A lot of the discussion that we're having in our community is the affinity between HPC, high performance computing, and big data, and I think that was pretty clear, and now that's evolving to AI. So the internal network, things like InfiniBand, are pretty important. NVIDIA is coming onto the scene. So those are some of the things that we see. I think the other one is file systems. NFS tends to deal really well with unstructured data and data that is sequential. When you have all the-- >> Streaming.
>> Exactly, and you have all this, what we just described as this random nature, and you have the need for parallelism. You really need to rethink file systems. File systems are again a linchpin of getting the most out of these AI workloads. And the other is, if we talk about the cloud model, you've got to make this stuff simple. If we're going to bring AI and machine intelligence workloads to the enterprise, it's got to be manageable by enterprise admins. You're not going to be able to have a scientist deploy this stuff, so it's got to be simple, or cloud-like. >> Fantastic. Dave Vellante, Wikibon. Thanks so much for being on theCUBE. >> My pleasure. >> We've had the analyst's perspective. Now let's take a look at some real numbers. Not a lot of companies have delivered a rich set of benchmarks relating AI, storage and business outcomes. DDN has, so let's look at a video they prepared describing the benchmarks associated with these new products. >> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that provides massive acceleration for AI and DL applications. DDN has engaged extensive performance and interoperability testing programs in close collaboration with expert technology partners and customers. Performance testing has been conducted with synthetic throughput and IOPS workloads. The results demonstrate that the DDN A3I parallel architecture delivers over 100,000 IOPS and over 10 gigabytes per second of throughput to a single DGX-1 application container. Testing with multiple containers demonstrates linear scaling up to full saturation of the DGX-1's IO capabilities. These results show concurrent IO activity from four containers with an aggregate delivered performance of 40 gigabytes per second. The DDN A3I parallel architecture delivers true application acceleration. Extensive interoperability and performance testing has been completed with a dozen popular DL frameworks on DGX-1. The results show that with the DDN A3I parallel architecture, DL applications consistently achieve higher training throughput and faster completion times. In this example, Caffe achieves almost eight times higher training throughput on DDN A3I, and it completes over five times faster than when using a legacy file sharing architecture and protocol. Comprehensive tests and results are fully documented in the DDN A3I solutions guide, available from the DDN website. This test illustrates the DGX-1 GPU utilization and read activity from the AI200 parallel storage appliance during a TensorFlow training iteration. The green line shows that the DGX-1 GPUs achieve maximum utilization throughout the test. The red line shows the AI200 delivers a steady stream of data to the application during the training process. In the graph below, we show the same test using a legacy file sharing architecture and protocol. The green line shows that the DGX-1 never achieves full GPU utilization, and that the legacy file sharing architecture and protocol fails to sustain consistent IO performance. These results show that with DDN A3I, this DL application on the DGX-1 achieves maximum GPU productivity and completes twice as fast. This test and its results are also documented in the DDN A3I solutions guide, available from the DDN website. DDN A3I solutions with NVIDIA DGX-1 bring together industry leading compute, storage and network technologies in a fully integrated and optimized package that enables widely used DL frameworks to run faster, better and more reliably. 
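For readers who want to reproduce the "green line" view of GPU utilization from that benchmark on their own systems, here is a small monitoring sketch. It assumes an NVIDIA driver with the standard nvidia-smi utility on the PATH; the one-second sampling interval and the interpretation are illustrative choices, not part of DDN's published methodology.

```python
# Sample GPU utilization during a training run, the same signal as the
# "green line" in the benchmark described above. Assumes nvidia-smi is
# installed and on the PATH; adjust duration and interval as needed.

import subprocess
import time

def gpu_utilization() -> list[int]:
    """Return the current utilization (%) of each visible GPU."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True)
    return [int(line) for line in out.strip().splitlines()]

samples = []
for _ in range(60):            # one minute of one-second samples
    samples.append(gpu_utilization())
    time.sleep(1)

# Sustained values near 100% mean storage is keeping the GPUs fed; a
# sawtooth that keeps dropping toward 0% is the classic IO bottleneck.
print("mean utilization per GPU:",
      [round(sum(s) / len(s)) for s in zip(*samples)])
```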
>> You know, it's great to see real benchmarking data, because this is a very important domain, and there is not a lot of benchmarking information out there around some of these other products that are available. But let's try to turn that benchmarking information into business outcomes, and to do that we've got Kurt Kuckein back from DDN. Kurt, welcome back. Let's talk a bit about how these high value outcomes that we seek with AI are going to be achieved as a consequence of this new performance, faster capabilities, et cetera. >> So there are a couple of considerations. The first consideration, I think, is just the selection of AI infrastructure itself. Right, we have customers telling us constantly that they don't know where to start. Now they have readily available reference architectures that tell them, hey, here's something you can implement, get installed quickly, and you're up and running your AI from day one. >> So the decision process for what to get is reduced. >> Exactly. >> Okay. >> Number two is, you're unlocking all ends of the investment with something like this, right? You're maximizing the performance on the GPU side, you're maximizing the performance on the ingest side for the storage. You're maximizing the throughput of the entire system. So you're really gaining the most out of your investment there. And not just gaining the most out of your investment, but truly accelerating the application, and that's the end goal, right, that we're looking for with customers. Plenty of people can deliver fast storage, but if it doesn't impact the application and deliver faster results, cut run times down, then what are you really gaining from having fast storage? And so that's where we're focused. We're focused on application acceleration. >> So simpler architecture, faster implementation based on that, integrated capabilities, ultimately all resulting in better application performance. >> Better application performance, and in the end something that's more reliable as well. >> Kurt Kuckein, thanks so much for being on theCUBE again. So that ends our prepared remarks. We've heard a lot of great stuff about the relationship between AI, infrastructure, especially storage, and business outcomes, but here's your opportunity to go into the crowd chat and ask your questions, get your answers, share your stories, engage your peers and some of the experts that we've been talking with about this evolving relationship between these key technologies and what it's going to mean for business. So I'm Peter Burris. Thank you very much for listening. Let's step into the crowd chat and really engage and get those key issues addressed.

Published Date: Oct 10 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity                 Category           Confidence
Dave Vellante          PERSON             0.99+
NVIDIA                 ORGANIZATION       0.99+
Kurt Kuckein           PERSON             0.99+
Peter                  PERSON             0.99+
Dave                   PERSON             0.99+
Peter Burris           PERSON             0.99+
Kurt                   PERSON             0.99+
50                     QUANTITY           0.99+
200 times              QUANTITY           0.99+
Darrin                 PERSON             0.99+
October 11, 2018       DATE               0.99+
DDN                    ORGANIZATION       0.99+
Darrin Johnson         PERSON             0.99+
50 terabyte            QUANTITY           0.99+
20 years               QUANTITY           0.99+
10 terabyte            QUANTITY           0.99+
Wikibon                ORGANIZATION       0.99+
75%                    QUANTITY           0.99+
two                    QUANTITY           0.99+
five megabytes         QUANTITY           0.99+
Today                  DATE               0.99+
second piece           QUANTITY           0.99+
third component        QUANTITY           0.99+
both                   QUANTITY           0.99+
first                  QUANTITY           0.99+
third                  QUANTITY           0.99+
second component       QUANTITY           0.99+
90 DGXs                QUANTITY           0.99+
first component        QUANTITY           0.99+
today                  DATE               0.99+
three minute           QUANTITY           0.99+
AI200                  COMMERCIAL_ITEM    0.98+
over 40 years          QUANTITY           0.98+
first example          QUANTITY           0.98+
DGX-1                  COMMERCIAL_ITEM    0.98+
100 gigabit            QUANTITY           0.98+
500 petabytes          QUANTITY           0.98+
V100                   COMMERCIAL_ITEM    0.98+
30, 40 years           QUANTITY           0.98+
second example         QUANTITY           0.97+
over 100,000 IOPS      QUANTITY           0.97+
SiliconANGLE           ORGANIZATION       0.97+
AI 200                 COMMERCIAL_ITEM    0.97+
first consideration    QUANTITY           0.97+
three things           QUANTITY           0.96+

Michael Smith, HKS | Microsoft Ignite 2018


 

>> Live from Orlando, Florida, it's theCUBE. Covering Microsoft Ignite. Brought to you by Cohesity and theCUBE's ecosystem partners. >> Welcome back everyone to theCUBE's live coverage of Microsoft Ignite, I'm your host Rebecca Knight along with my cohost Stu Miniman. We're joined by Michael Smith, he is the director of infrastructure at HKS. Thanks so much for coming on theCUBE. >> Hey thanks for having me, excited to be here. >> So Mike, HKS, tell us: you're based in Dallas, you're an architecture firm, tell us about some of the big projects that you have worked on. >> Sure, yeah, so I've been with the firm since April. Really excited to get on board and really kind of understand the rich history; we actually turn 80 this year, so we'll have a really big celebration of the company. So yeah, HKS, we do a lot of sports entertainment, so the Dallas Cowboys' AT&T Stadium, the Vikings' home, the L.A. Rams; about 30% of our mix is sports entertainment, so you may not know the company, but you certainly know the buildings we design. >> Some well known buildings, exactly. >> Actually, when you talk about an 80 year old firm, and I think of those two buildings, well, I'm a techie, I'm a geek; there's a lot of technology that goes into that. I'd love just a viewpoint as to how the company looks, because 80 years ago I'm sure they didn't have the tech people in there, design is very much there. How does that shape, you know, the culture inside the company a little bit? >> Sure, yeah, so that's really the neat thing, right? Everyone thinks that it's a company full of architects, right, and for the most part it is, but we have nurses on staff, right. Why? Because we build hospitals. We have people that understand how buildings work. So part of our five stakeholders, the community is actually one of those stakeholders. So we're not just listening to the client who's asking us how to build it; we're seeing how that building is going to fit into the community, into its surroundings, and how it's really going to interoperate, right, 'cause these buildings are going to be around for what, you know, 10, 15, 20 years until the next one gets built. >> So what are you doing here at this conference? What are the kinds of people you want to meet, the kind of connections you want to make? >> Sure, yeah, so first off I've made some great connections, and that's one of the things I love about coming to things like Ignite. This is my first time here, but I've loved it. I tell ya, I really enjoy hearing people and hearing about the same challenges that I'm facing, and then understanding how they're using the various pieces of technology to kind of piece that together. >> Alright, so Mike, you're director of infrastructure, so we know infrastructure well; it is our first time at this show, but we have been doing infrastructure shows for many years. Maybe give us a little bit about your background and what's under your domain at HKS? >> Sure, yeah, so I've worked for the last 20 years mainly for architectural engineering firms, right, and there's a lot to be said for understanding the specific industry that you're working in, right. So obviously it's not just about Word documents and Excel files; you're talking about very large CAD files and having to traverse from office to office, right, and so you have to have a very robust infrastructure. So I've got basically the entire networking, servers, WAN, LAN, Internet, VoIP, oh yeah, and I've got cybersecurity under my profile as well. 
We run a small shop at HKS, but yeah, the company's doing really, really well, and we've got 24 offices globally, 19 here in the US, and like I said, we really run a 24/7 shop. >> Alright, so you've got a number of locations. When we talk to infrastructure people, the role of data and how you manage it, how you do things like disaster recovery, are usually pretty important. How is it in your world? >> Yeah, so obviously disaster recovery, to me that's the backbone of IT, right, specifically of my group, and if we can't do that right, if we can't do data protection correctly, then to me we really shouldn't be working on any other project. And that's really where Cohesity comes into the equation, right. So when I came on board we had a legacy solution; it was working, right, but talking with the business, really partnering and understanding what their expectations were, we realized that there were some gaps. We ended up talking to Cohesity through a vendor, did an amazing whiteboarding session with some folks that I really felt cared about and understood our business, and then, yeah, since about mid-July we've been implemented on our Cohesity solution for data protection globally. We're about 75% of the way there in, what, just a month and a half? So from a speed of implementation standpoint, right, we've really made some leaps and bounds gains in kind of those requirements that our customers are asking of us, and kind of, you know, basically returning them back to work. >> Yeah, can you paint a little picture of kind of the before and after for us? >> Sure, yeah, so we've always had a cloud strategy. We've been partnered with Microsoft for several years, great Office 365, and we've used Azure for backup, but I wouldn't say that it was really an optimized solution. And so if we had an actual outage, what we were talking about is, you know, a fairly long time to pull those resources back down to on-prem, and so what we've implemented with our Cohesity solution is basically a system now where, when our customers come in, 95% of the time they can get their files back on the phone with the first level technician. So before, I was going to a third level sysadmin, basically requiring them to stop what they're doing and work on that restore, right, and in some instances it may have been a day before we returned that customer back to work. So if you can think about the ability to really just return them back into their normal work process almost instantaneously, I mean, the RTO is really incalculable when you start talking about soft dollars like that. >> Talk about, you mentioned how coming here you talked with a lot of people in your industry, or people maybe not even in your industry, but you realize you all share similar challenges, and you just talked about the disaster recovery and how that can really keep you up at night. Can you talk about a few of the other problems and challenges that you encounter and how Cohesity has helped you? 
You know I think the overall management of the infrastructure really having that single pane of glass that Cohesity can offer, that was huge challenge when I came onboard because the solution that we were using was really meant for file replication and so in order to find out if something worked we had to go to 81 disparate sources to see if that worked right. And so today I can come in in the morning, I got a guy that starts at 6 a.m. God bless him, and by the time I get in anything that happened overnight is completely remediated, I can look at one single pane of glass, I can see a bunch of green and honestly if there's red I can see it and I know that something failed and I can pinpoint exactly what we need to do to fix it. >> Mike you said you were about 75% of the way deployed. Walk us through where you're going with it, what you've been learning along the way, and any lessons learned along the way that you could share with your peers, as to how the experience has been, what they might want to do to optimize things. >> Sure, yeah, so I think we're about 75% of the way, we've got a lot of our international sites that are coming onboard now, we're learning a lot about our network. We're learning a lot about different things and so I would say before you do an implementation of this size, really make sure that you have a good handle on patching. Making sure that all of your resources are patched. The last thing you want to do is find out you have a resource problem with slow latency and it's due to a patch not being applied right. And then just understanding you know the time frames involved right? So we've targeted about 75 days to get fully onboard but we're talking almost a petabyte of data across one gigabit connectivity right, and so when you start talking about that there's lot of, we're doing a lot of mix and mashing, bandwidth throttling and all that kind of fun stuff in order to get up and running. >> Yeah so I'm kind of laughing a little bit over here because it's been a punchline in the Microsoft community, it's like oh well you know is it patch Tuesday yet or things like that. We've come so far yet there's still some things that hold us back, that leads me to my next question is you know what's exciting you in the industry in tech and your job, what's working great and what on the other hand are you asking your vendors, what would make your job and your group's job even better whether that be Cohesity, Microsoft, or others? >> Yeah so I think as a company that, we have a lot of data right, and at first as the role of the person responsible for that data, you know it was oh my gosh we have a lot of data. And it was actually a couple of months ago, something clicked in my head and I said, we have a lot of data. (hosts laughing) And guess what? We can do analytics on that data. And so you know I think machine learning is going to be huge right. I think being able to do a lot of those tasks that we count on, you know I have people that are doing things two to three times a week, maybe between eight and five. Well those are things that with machine learning we can have those algorithms basically running 24/7 and so we can start making leaps and bounds progress over what we're doing today. HKS is really big into understanding what the value add is in building a building right? It's not just about the architecture. There's value to that, and so what other value items can we provide to our customers that because you know to be honest technology is becoming a commodity right? 
>> Yeah, so I'm kind of laughing a little bit over here, because it's been a punchline in the Microsoft community, it's like, oh well, you know, is it Patch Tuesday yet, or things like that. We've come so far, yet there's still some things that hold us back. That leads me to my next question, which is, you know, what's exciting you in the industry, in tech, and in your job? What's working great, and what, on the other hand, are you asking your vendors, what would make your job and your group's job even better, whether that be Cohesity, Microsoft, or others? >> Yeah, so I think as a company we have a lot of data, right, and at first, in the role of the person responsible for that data, you know, it was, oh my gosh, we have a lot of data. And it was actually a couple of months ago, something clicked in my head and I said, we have a lot of data. (hosts laughing) And guess what? We can do analytics on that data. And so you know I think machine learning is going to be huge, right. I think being able to do a lot of those tasks that we count on, you know, I have people that are doing things two to three times a week, maybe between eight and five. Well, those are things that with machine learning we can have those algorithms basically running 24/7, and so we can start making leaps-and-bounds progress over what we're doing today. HKS is really big into understanding what the value add is in building a building, right? It's not just about the architecture. There's value to that, and so what other value items can we provide to our customers? Because, you know, to be honest, technology is becoming a commodity, right? How much longer before core services like your architecture and your engineering start to become commodities? And so that's really where I think analyzing that data comes in. And so I was at VMworld a few weeks ago and I was talking to a Cohesity engineer, and when I asked him, what's next on the roadmap for data analytics, I expected to hear x, y, and z. And he looked at me and he goes, what do you want to see next? What do you want to do with your data? Let's partner with you and make that happen, right. Now, I'm smart enough to go, I don't know what that next thing is, but we have really smart PhD-type people that do, so we're really looking forward to that next phase. >> I'm interested in teams, because you talked about the very diverse employee base at HKS. You said you've got nurses on the team, I'm imagining you have hospitality experts, you've got the PhD types, you've got the science people, and the architects. So how do you get all these people with very different functional expertise to work together and pull together and all be on the same page? >> That's actually a great question. So interestingly enough, I sit right next to a librarian, and she's in IT, right, and they work in our Global Knowledge Management group, which does SharePoint, so who better to understand how to start to classify and organize information than someone who's a trained librarian, right? So I think what we're really excited about is that our IT team has really been rebuilt, say, over the last two years, and it's been rebuilt with people who have a real passion for their industry but also kind of a broad understanding of how everything interconnects. And so we're really kind of building a culture that says, if there's information there, it's shareable. We're not holding anything close to the vest. If you want to understand, if I use too many acronyms when I talk, then ask me what they are, right. And so I think that right there fosters a lot more involvement, and people give more of themselves incrementally when they understand that, hey, there's skin in the game, and, yes, I'm a librarian and I may not know the technological things that you do, but if I say, well hey, what if we do it this way, we're not just going to blow that idea off, we're going to actually incorporate that into the greater solution. >> Great. Mike, we talk a lot about AI at the show, and IoT, and you're doing buildings; I'm curious how things like all the sensors and everything impact what you're doing, how you partner with your clients on that. >> Sure, yeah, so we've got a great team that really focuses on that entire extended set of technologies, so obviously drone technologies, sensor technologies, and I think a lot of those, I won't even say that they are even forward-looking anymore. Those are, especially sensor technology, so I mean I've worked in environments where we had 24-by-seven cameras on a job site, so the general contractor probably hates it, but a PM from anywhere in the world can look at his or her project, and they can see their progress, right? Well, you know, then at what point does that extend to, well, I'm going to launch a drone here and I'm going to go look at a very specific piece, a very specific part of that technology. And so yeah, I think it's one of those things: if you ever start sitting on your laurels in IT, if you're ever not on your toes moving forward, you're already behind.
So you know I think things like AI, machine learning, you know, I've talked to some people that'll go, well, we're two to three years away from that. And I said, in two to three years those will be things of the past, right? You don't have to be bleeding edge, but you have to understand where you can leverage those technologies for your business. >> Give us a little candy here. Paint a picture of what the building of the future is, whether it's the stadium of the future, the hotel of the future, just get us excited here. What are some of the things that you're looking at? >> Sure, yeah. So I actually talked to a gentleman a couple weeks back, and they're building a hotel, and this hotel has Bluetooth sensors in the room, right; you can't do any kind of cameras or anything like that, but basically what it can do, based upon the signal saturation of the Bluetooth, is tell you how many people are in that room, because it understands the dissipation of the signal through the normal human body, right. So compare that to your typical occupancy sensor: you leave the room, maybe you're sleeping late, well, the room doesn't think anybody's in there, so it turns the temperature up, turns the lights off, does whatever it does, right. Well, with this new technology it can't make that mistake. So fast forward, and maybe it's a little bit more scary. So now you go from your room and you walk down to the lobby bar, you walk past the lobby bar. Well, the wireless devices know the MAC address of your phone, because you used that number when you checked in, so as you get close it pops you a, hey, do you want 15% off, or a free drink at the bar, if you come in here? So I think it's about understanding the connectedness of everything, and then really not being afraid of it. There is a Big Brother aspect to all of this, but just kind of understanding, you know, kind of in the Elon Musk vein, that we have to understand and we have to control where that technology is going. But I think if you're afraid of it like that, and you know, I'm not going to, I'm never going to stay at that hotel because of the things that they do, then I think you're missing out. >> Right, exactly. Well thank you so much Mike, it's been a pleasure having you on the show. >> Thank you so much. >> A lot of fun talking to you. I appreciate the opportunity. >> I'm Rebecca Knight for Stu Miniman, we will have more from Microsoft Ignite here in Orlando, Florida coming up just after this. (light techno music)
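The occupancy idea described above, counting people by how much they attenuate a Bluetooth beacon, can be sketched as a toy model. The calibration constants below are invented for illustration; a real deployment would fit them per room:

# Toy occupancy estimate from Bluetooth signal attenuation.
EMPTY_ROOM_RSSI = -60.0   # assumed signal strength with nobody present, dBm
DB_PER_PERSON = -2.0      # assumed extra attenuation per human body, dB

def estimate_occupancy(measured_rssi_dbm):
    """Estimate head count from how far the signal dropped below the empty baseline."""
    drop = measured_rssi_dbm - EMPTY_ROOM_RSSI   # negative when bodies absorb signal
    return max(0, round(drop / DB_PER_PERSON))

print(estimate_occupancy(-60.0))   # 0: reads like the empty baseline
print(estimate_occupancy(-66.5))   # 3: ~6.5 dB of extra dissipation

Unlike a motion-based occupancy sensor, this keeps reporting a sleeping guest, which is exactly the failure mode the hotel is trying to avoid.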

Published Date : Sep 26 2018


Andy Bechtolsheim, Arista Networks | VMworld 2018


 

>> Live from Las Vegas, it's theCUBE. Covering VMworld 2018. Brought to you by VMware and its eco-system partners. >> Hello, everyone. We are here live in Las Vegas for theCUBE's exclusive coverage for three days, VMworld 2018. I'm John Furrier with my co-host Stu Miniman. Our next guest is Andy Bechtolsheim, who's the founder and chief development officer and chairman of Arista Networks. More importantly, he's also the co-founder of Sun Microsystems. Invested in Larry and Sergey when they were in their PhD programs. Legend in the industry. Great to have you on. Super excited to have you join this conversation. >> A pleasure to be here today. >> So, first question is, besides all the luminary things you've done in your career, what's it like working with Jayshree at Arista? >> Well, I actually met Jayshree 30 years ago when she was at AMD selling us SDDR chips at Sun Microsystems, so I guess this dates both of us, but I worked with her all the years when I was at Cisco, obviously, and then we both started at Arista in 2008. So we have both been there now for 10 years together. In fact, our 10-year anniversary's coming up next month. >> Jayshree's a great CUBE alumni. She's an amazing person. Great technologist, we miss her. Wish she was here, having more conversations with us on theCUBE. But stepping back, over your career you've seen many waves of innovation. You were involved in all of them, big ones happening: semiconductors, computers, and now with Arista going forward, and now cloud. Did you know the rocket ship of Arista was going to be this big? I mean, when you designed it at the beginning, what was the itch you were scratching, and did you know it was going to be a rocket ship? >> Well, what led to the founding of Arista was, we had lunch with our best friends at Google, and Larry himself told me that the biggest problem they had was not servers, but actually the networking, and scaling that to the future size of their data centers, and they were going to go off and build their own network products because there was no commercial product on the market that would meet that need. So we thought, with the emergence of merchant silicon, we could make a contribution there, and the focus of the company was actually on cloud networking from the very beginning, even though that wasn't even seen in this industry as being a major opportunity. So when we shipped our first products in 2009, 2010, while we had some business on Wall Street around latency, the majority of the opportunity was in the cloud. >> It's interesting you mention Google and Larry and Sergey, Larry in particular, about that time in history. You go back and look at what Google was doing at that particular time, and now what they talk about at Google Cloud. They were building their own large-scale system, and there was massive scale involved. >> Yeah, they had about a hundred thousand servers in early 2004 before they went public; now they have, who knows, how many millions, right? And all, of course, the latest technology now. So the sheer size of the cloud, the momentum the cloud has, I think was hard to forecast. We did think there was going to be a shift, but the shift was in fact more rapid than we expected. >> Andy, you talked about cloud networking, but today we still see there's such a huge discrepancy between the networking that's happening in the data center and the networking that's happening in the hyperscalers.
At this show, we're starting to hear about some of the multi-cloud; you had some integrations between Arista and VMware that are starting to pull some of those together. Maybe you could give us a little bit about what you're seeing between, you know, the data center and the enterprise versus the hyperscalers, when it comes to networking. >> So the enterprise data center still largely has what we would call a legacy approach to networking, which dates back, you know, 10, 20, 30 years, and many of those networks are still in place and progressing very slowly. But there also are enterprise customers who want to take advantage of what the cloud has done in terms of cloud networking, including the much further scalability, the much further resiliency, the much greater automation, so all of these benefits do apply equally well to the enterprise. But it is a transition for customers, you know, to fully embrace that. So the work we are doing together with VMware on integrating our CloudVision and our physical switches with their micro-segmentation is one element of that. But the bigger topic is simply that an enterprise that wants to move into the future really should look at how the cloud people build their networks, how they can run a very large data center with, you know, 10 network admins instead of, you know, hundreds of people. And especially the automation that we've been able to provide to our customers: automating updates of software, being able to bring out new releases into a running network without bringing the network down. You know, nobody could even think about doing that 10 years ago. >> Yeah, you bring up a great point about automation. In the keynote this morning, Pat Gelsinger talked about, what was it, 39 years ago he did something at Intel, said we're going to do AI. Didn't quite call it AI back then, but he said it, and now we're starting to see the fruits of that come out. In the networking world, we've been talking for decades about automating the network more. You've lived through the one gig, 10 gig, 40 gig, 400 gig you're talking about. Are we ready for automation now? Is now that moment in networking? >> I think that we were ready for 30 years, but the weird thing is, there always was a control plane in the network, you know, the routing protocols, but for management there was never really a true management plane, meaning the legacy way is you dial in with SNMP into each switch and configure your access lists manually, more or less, and that's really a bad way of doing it, because humans do make mistakes; you end up with inconsistencies, and a lot of network outages have been traced literally to human mistakes. So our approach with what we call CloudVision, which is a central point that can manage the entire base of Arista switches in a data center, is all automated. You want to update a thing, you push a button and it happens, and there's no more dialing in with SNMP into individual switches.
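A minimal sketch of that management-plane shift: instead of an SNMP or CLI session per box, one validated change is pushed to the whole fleet from a central point. The controller URL, endpoint, and payload here are hypothetical stand-ins, not the actual CloudVision API:

# Push one configuration change to every switch through a central controller.
import requests

CONTROLLER = "https://controller.example.com"           # hypothetical
FLEET = ["leaf-1", "leaf-2", "leaf-3", "spine-1", "spine-2"]

def push_config(change):
    """Apply the same validated change to each switch in the fleet."""
    for switch in FLEET:
        resp = requests.post(f"{CONTROLLER}/api/switches/{switch}/config",
                             json=change, timeout=10)
        resp.raise_for_status()   # fail loudly rather than drift out of sync

push_config({"ntp_server": "10.0.0.1", "syslog_host": "10.0.0.2"})

One call updates the fleet consistently; the hand-typed, per-switch sessions that cause the human mistakes mentioned above never happen.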
>> How would you advise people who are looking at the architecture of the cloud, who are re-platforming? Large enterprises have been legacy all day long; you mentioned earlier just now on theCUBE that how the cloud guys were laying out the network was fundamental to how they grew. How should, and how do, people lay out their networks for cloud today? How do you see that? >> So the three big things that happened: first, merchant silicon has taken over, because it's, quite frankly, much more scalable than traditional chips. And that's just the hardware, right? Then the leaf-spine architecture, that really our customers pioneered but is the standard in the cloud. It uses ECMP for load balancing, and it works. It's the most resilient; maybe the one thing, the single most important thing of the cloud is: no outages, no downtime, the network works. No excuses, right? [Laughter] And our customers tell us that with our products and the leaf-spine approach, they have a better experience in terms of resiliency than with any other vendor. So that's a very strong endorsement, and that's as relevant to an enterprise customer as to a cloud customer. And then the automation benefit. Now, to get the automation benefit, you have to standardize on the new way of doing it, that's true, but it's just such a reduction in complexity, a simplification. You can actually look at this as an Opex-saving opportunity, quite frankly, and in the cloud they wouldn't have it any other way; they couldn't afford it. They're very large data centers, and they could only operate these things in a fully automated fashion. >> Andy, I want to get your reaction to what Pat Gelsinger said on stage this morning. He said, in the old days, I'm paraphrasing, the network would dictate what the applications could do, it would enable that, and we saw an enabling capability. Now with cloud, the apps can program the network, I'm paraphrasing that. As networks become more programmable, and no outages, he made a quote, he said, the old adage was the network is the computer, the new adage is, the application is a network. >> Okay, so let me sort of translate this, so. >> What's your reaction to those things? >> Sounds like an old Sun slogan, doesn't it? >> Translate that for us. >> So, the virtual networking, the NSX environment, which provides security at the application level, right, it's the natural way to do network security. Because you really want to be as close to the application as you can physically be, or virtually be, which is right in the VM environment. So VMware clearly has the best position in the industry to provide that level of security, which is all software, software-level networking; you do your, you know, security policies at that level. Where we come in is, with CloudVision now, we have announced a way to integrate with NSX micro-segmentation, such that we can learn the policies and map them back down to the access lists of the physical network, to further enhance that security. So we don't actually create a separate silo for yet another policy management; we truly offer it within their policy framework, which means you have the natural segmentation between the security engineers who manage the security policies and the networking engineers who manage the physical network. >> Highly optimized for the environment. >> Which actually works. >> Is that what you call macro-segmentation, then, on the Arista side? >> Well, we used to call it macro, but it's part of their micro thing, because we truly learn their policies. So if you update a policy, it gets reflected back down to CloudVision and your physical networks, and it applies to physical switches, physical assets, physical servers, mainstream storage, whatnot, right? So it's a very smooth integration, and, we think, it's a demo at this point but it will work, and it's an open framework that allows us to work with VMware.
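The ECMP load balancing Andy credits for leaf-spine resiliency comes down to a per-flow hash, sketched here in Python. Hashing the 5-tuple (rather than balancing per packet) keeps each flow on one path and preserves packet ordering:

# Pick one of several equal-cost spine uplinks by hashing the flow's 5-tuple.
import hashlib

UPLINKS = ["spine-1", "spine-2", "spine-3", "spine-4"]   # equal-cost paths

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
    """Deterministically map a flow onto one uplink."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return UPLINKS[int.from_bytes(digest[:4], "big") % len(UPLINKS)]

print(ecmp_next_hop("10.0.1.5", "10.0.9.9", "tcp", 33412, 443))

Different flows land on different spines, spreading load, and if a spine fails the hash simply redistributes over the survivors, which is a large part of the no-outages resiliency story.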
>> Let me ask you a personal question. Looking at the industry, even looking back in history as an illustration: TCP/IP, remember the old OSI stack that everyone tried to do? TCP/IP opened up so much in networking, internetworking. Is there a technology enabler in cloud that you see that's going to have that kind of impact? Is it an NSX? How are customers going to deal with multiple clouds? I mean, is there an interoperability framework coming, do you see a real disruptive technology enabler that'll have that kind of impact, the way TCP spawned massive opportunity and wealth creation in start-ups and functionality? Is there a moment coming? >> So TCP of course was the proper layering of a network between the physical layer, layer one and layer two, and the routing or the internet layer, which is layer three. And without that, this is back to the old internet argument, we wouldn't have what we have today. That was the only rational way to build an architecture that could actually scale, and I'm not sure people had a notion in 1979, when TCP was submitted, that it would become that big; they probably would have picked a bigger address space. But it was not just the longevity, the impact it had was just phenomenal, right? Now, that applied in terms of connectivity and how many things you have to solve to talk from point A to point B. The NSX level of network management is a little different, because it's much higher level. It's really a management plane, back to the point I made earlier about management planes, that allows you to integrate a cloud on your premises with an Amazon or an IBM or, in the future, a Google and so on, in a way that you can have full visibility, and you see, you know, exactly what's going on, all the security policies. Like, this has been a dream for people to deliver, but it requires you to actually have a reasonable amount of code in each of these places, both on your servers; it's not just a protocol, it's an implementation of a capability, right? And as far as we are aware, NSX is the best solution that's available today that I could see for that use case, which is going to be very important to a large number of enterprises, many of which want to have a smooth connection between on-premise and off-premise, and in the future to add telco and other things to the broader VM environment today. But that will allow them to be fully, securely linked into such a network. >> So you see that as the leading product for that connection. >> It's definitely a leading product. They have the most customers, the most momentum, the most market share; there isn't anything even close in terms of, call it the software-defined networking layer, which is what NSX implements. And we are very proud to partner with them at the physical layer to interact with their policies. >> You think that's going to have an impact in accelerating the multi-cloud world? >> Yes, because the whole point about multi-cloud is it has to be sort of vendor-independent or, I don't know, vendor-neutral. You are going to see solutions from Amazon and Azure to bring their own sort of public cloud onto the premises. But that only works with their package, right? >> Yeah. >> So there will be other offerings there, but in terms of true multi-cloud, I don't see any competition. >> Andy, we'd love to get your viewpoint on the future of Ethernet. I hear from so many people the last few years that it's like, well, on the processor side Moore's Law has played out. We can't get smaller. On the Ethernet side, there's not going to be the investment to be able to help get us to the next generation, there's limits in the technology. You've lived through so many of these architectural changes.
Are we at the end of innovation for Ethernet? >> Not at all. So, my history with Ethernet dates back 40 years. I worked on the first three-megabit Ethernet at Xerox PARC. Then it was 10 megabit, hundred megabit, gigabit, then 10, 40 and 100 gig, and now 400 coming out. So, Ethernet speed transitions are really just substitutions of the previous generation of technology, meaning, assuming they're more cost-effective, they do get adopted very quickly. Of course, you need the right optics, you need the right equipment, but it's a very predictable road map. I mean, I guess, it's not like adopting a new protocol, right? It's just faster. And more cost-efficient. So, we are on the verge of 400 gigabit becoming available in the market. It will really roll out in any kind of volume next calendar year, and then it will pick up volume the year after, in 2020. But in the meanwhile, 100 meg Ethernet-- excuse me, 100 gigabit Ethernet is still the fastest growing thing the industry's ever seen. It went from a million ports back in 2016, to, call it, five million ports last calendar year, expected to be, what, 10 million ports this year, and expected 20 million ports next year. This is a speed of adoption that's unheard of. And we at Arista are fortunate enough to be actually the market leader on hundred-gig adoption. We have shipped more hundred-gig ports than any vendor, including Cisco, for the last three years. So our ability to embrace new speeds and bring new technologies to market is, I would say, unparalleled. We have a very good track record there, and we are working really hard, sort of burning the midnight oil, to extend this to the 400-gig era, which is going to be another important upgrade, especially in the cloud. I should mention that the cloud is the early adopter of all the higher speeds. Those that are at hundred gig now will be the first to move to 400 gig. I'm not sure too many enterprises need 400 gig, but the cloud is ready to get going as soon as it's cost-effective.
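Andy's round numbers, one million 100-gig ports in 2016, five million in 2017, ten million in 2018, twenty million expected in 2019, imply a growth rate worth spelling out:

# Compound annual growth rate implied by the quoted port counts.
ports = {2016: 1e6, 2017: 5e6, 2018: 10e6, 2019: 20e6}
years = sorted(ports)
span = years[-1] - years[0]
cagr = (ports[years[-1]] / ports[years[0]]) ** (1 / span) - 1
print(f"CAGR {years[0]}-{years[-1]}: {cagr:.0%}")   # ~171% per year

A twenty-fold increase over three years works out to roughly 171% compound annual growth, and even the most recent step, ten million to twenty million ports, is still a 100% year-over-year doubling.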
>> Andy, for the folks that are looking at this 20-year wave coming, that we're seeing, kind of, cloud has been talked about on stage and here on theCUBE: oh, it's going to be a 20-year run, transforming the infrastructure. What, in your mind's eye, do you see as the most disruptive thing that people aren't talking about in networking? What are some things that might happen in the next 10 years, in your mind, that people aren't really aware of, that they might not see coming, any innovations on the horizon that you're excited about, or that people might not expect? >> Yeah, well, the cloud trend is fairly predictable. I would say all the IDC numbers, all the analysts that have predicted, like, the big numbers on adoption, have been pretty spot on. And if you look at the annual growth rate for cloud adoption, it's 40, 45, 50 and more percent. Now, there's a good question, of course, of how the big cloud winners in the end will compete against each other. You've got Amazon, that's the biggest; Microsoft is actually currently growing faster than Amazon right now, but they have some catching up to do. And Google working overtime to get bigger. They may differentiate in terms of their specific focus; for example, Google has a lot of AI technology, internally, that they have used for their own business, and with this experience they're arguably ahead of others, and they may just bet the farm on AI and big data analytics and things like that, which are very compelling business opportunities for any enterprise customer. So the potential value that can be created deploying AI correctly is perhaps in the trillions of dollars over the next 10 years, but it probably doesn't make sense for most companies to build their own AI data center: you need a huge capital expense, and the hardware to use is going to evolve very quickly. So that may be one of the classical cases where you actually want to start in the cloud, and the only reason for ever moving it on-site is a well-defined environment, right. So I would actually say it's the new applications that may start in the cloud, that haven't even rolled out in volume, like AI, that may be the biggest change that people didn't expect. >> Final question, what's the future of Arista? >> We're just working really hard to, you know, be the best provider of products, making the best products for our customers, both for the cloud and for the enterprise. One thing I was going to mention about Arista is that people think we're selling network boxes, which is what we do. But the vast majority of our investment is actually software and not hardware. So over 90% of our R&D headcount is in software, and so the right way to think about it is actually that we are a software company, not really a hardware company, and the saying we have internally is that hardware is easy, software is hard, because it's actually true. Software is much, much harder than building hardware these days, and the EOS software is well over 10 million lines of code, written with thousands of man-years of engineering. So it has been a tremendous journey we've been on, but we're still scratching the surface of what we can do. >> And the focus on software obviously makes sense. Software-defined is driving everything. What are the key focus areas on the software that you guys are looking at? What are the key priorities for Arista? >> We have talked about extending our business beyond the data center into the campus. We announced our very first acquisition recently, which is actually a wifi company, but I can guarantee you it's going to be a very software-defined wifi network, not a legacy controller-based approach, right, for enterprise, right? We're not that interested in the hardware; we're interested in providing managed solutions to our customers. >> A lot of IoT action there. Andy, thanks for taking the time to come on theCUBE. Really appreciate it. Great to meet you and have you on theCUBE. Great conversation here, it's theCUBE. I'm John Furrier. Stu Miniman, breaking down all the top coverage of VMworld 2018, getting the input and the commentary from industry legends and also key leaders in innovation and cloud networking. This is theCUBE. Stay with us for more after this short break. [Techno Music]

Published Date : Aug 27 2018


Jeremy Werner, Toshiba | CUBEConversation, July 2018


 

(upbeat orchestral music) >> Hi, I'm Peter Burris, and welcome to another CUBE Conversation from our wonderful Palo Alto Studios. Great conversation today with Jeremy Werner, who is the vice president of SSD Marketing at Toshiba Memory. Jeremy, welcome to theCUBE. >> Thank you Peter, great to be here. >> You know Jeremy, one of the reasons why I find you being here so interesting is that there's a lot going on in the industry. We talk about new types of workloads: AI, cloud, deep learning, all these other things; all these applications and workloads are absolutely dependent on the idea that the infrastructure has to start focusing less on just persisting data and focusing more on delivering data to these very advanced applications. That's where flash comes in. Tell us a little bit about the role that flash has had in the industry. >> It's amazing, thank you for recognizing that. So, flash has a long history. 30 years ago, actually, Toshiba invented flash memory, and it's had a transformational effect on people's lives everywhere, on all kinds of products, starting with the very first application for NAND flash being kind of removable memory cards. You had the digital camera revolution, then it found its way into cell phones, and that enabled smart phones and people carrying around all their media, etc. And now we're in kind of this large third-phase adoption, which is, like you mentioned, the transition from persistent storage on a hard drive, where your data was available but not really available to do a lot with, to now storage on an SSD, which allows artificial intelligence, business analytics, and all the new workloads that are changing business paradigms. >> So clearly flash adoption is increasing in the data center. Wikibon has been talking about this for quite some time. My colleague David Floyer was one of the first people out there to project the role that flash was going to play within the data center. What are you seeing as you talk to customers, as you talk to some of the big systems manufacturers and some of the hyperscalers? What are you hearing, or what are they saying, about how they are applying and intend to apply flash in the market today? >> It's amazing, when we talk to customers they really can't get enough flash. As an industry we just came out of a major shortage of flash memory, and now a lot of new technologies are coming online. So we at Toshiba just announced our 96-layer 3D flash, our QLC flash. This is all in an attempt to get more flash storage into the hands of these customers so that they can bring these new applications to market. And this transformation, it's happening quickly, although maybe not as quickly as people think, because there's a very long road ahead of us. Still, you look out 10 years into the future, you're talking about 40 or 50% growth per year, at least for the next decade. >> So I want to get to that in a second, but I want to touch upon something that you said. Many of the naysayers about flash predicted that there would be shortfalls, and they were very Chicken Little like: oh my gosh, the sky is going to fall, the prices are going to go out of control. We did have a shortage, and it was a pretty significant one, but we were able to moderate some of the price increases, so it didn't lead to a whole bunch of design losses or a disruption in how we thought about new workloads, did it? >> True, no it didn't, and I think that's the value of flash memory.
Basically what we saw was the traditional significant decline in pricing took a pause, and if you look back 20 years, I mean, flash was 1000 times more expensive. And as we move down that cost curve, it enables more and more applications to adopt it. Even at today's pricing, flash is an amazingly valuable tool for data centers and enterprises as they roll out new workloads, particularly around analytics, and artificial intelligence, machine learning, kind of all the interesting new technologies that you hear about. >> Yeah, and I think that's probably going to be the way that these kinds of blips in supply are going to be-- it'll perhaps lead to a temporary moderation in how fast the prices drop. >> That's right. >> It's not going to lead to massive disruption and craziness. And I will also say this: you mentioned 20 years ago stuff was really expensive, and I cut my teeth on mainframe stuff. And I remember when disk drives on the mainframe were $3500 a megabyte, so it could be a lot worse. So, let's now-- flash is a great technology, SSD is a great technology, but it's made valuable by an overall ecosystem. >> That's right. >> There's a lot of other supporting technologies that are really crucial here. Disk has been dominated by interfaces like SATA for a long time. Done very well by us. Allowed for a fair amount of parallelism, a lot of pathing to many disks, but that's starting to change as we start thinking about flash coming on and being able to provide much, much faster access times. What's going on with SATA, and what's on the horizon? >> Yeah, so great question. Really, what we saw with SATA in about 2010 was the introduction of a six-gigabit SATA interface, and that was a doubling of the prior speed that was available, and then zero progress since then; actually, the SATA roadmap has nothing going forward. So people have been stuck, effectively, with that SATA interface for the last eight years. Now, they've had some choices. You look at the existing ecosystem, the existing infrastructure: SATA and SAS drives were both choices, and SAS is a faster interface today, up to 12 gigabit. It's full duplex, where SATA is half duplex, so you can read and write in parallel, so actually you can get four times the speed on a SAS drive that you would get on a SATA drive today. The challenge with SAS, why everyone went to SATA-- I won't say everyone went to SATA, but maybe three or four times the adoption rate of SATA versus SAS-- was that the SAS products that were available on the market really didn't deliver the most economical deployment of-- >> They were more expensive. >> They were more expensive. >> Alright, but that's changing. >> That is changing. So what we've been trying to do is prepare and work with our customers for a life after SATA. And it's been a long time coming, like I said, eight years on this current interface. Recently we introduced what we call a value SAS product line. The value SAS product line brings a lot of the benefits of SAS, so the faster performance, the better reliability, and the better manageability, into the existing infrastructure, but at SATA-like economics. And that, I think, is going to be critical as customers look at the long-term life after SATA, which is the transition to NVMe and a flash-only world, without having to be fully dependent on changing everything that they've ever done to move from SATA to NVMe. So the life-after-SATA preparation for customers is: how do I make the most of my existing knowledge and my existing infrastructure capabilities, and what's readily available from a support perspective, as I prepare for that eventual transition to NVMe.
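The arithmetic behind that "four times the speed" claim is simple: SAS doubles the line rate, and full duplex doubles it again by letting reads and writes move concurrently instead of taking turns:

# Aggregate bandwidth: 6Gb half-duplex SATA versus 12Gb full-duplex SAS.
sata_gbps, sas_gbps = 6, 12

sata_aggregate = sata_gbps        # half duplex: one direction at a time
sas_aggregate = sas_gbps * 2      # full duplex: both directions at once

print(f"SATA: {sata_aggregate} Gb/s, SAS: {sas_aggregate} Gb/s")
print(f"Ratio: {sas_aggregate // sata_aggregate}x")   # 4x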
>> Yeah, I want to pick up on that notion of higher performance at improved cost with SAS, and just make sure that we're clear here that SATA is an electrical interface. It has certain performance characteristics, but these new systems are putting an enormous amount of stress on that interface. And that means you can't put more work on top of that, not only from an application standpoint, but, as you said, crucially also from a management standpoint. When you put more reporting, or you put more automation, or you put more AI on some of these devices, that creates new load on those drives. Going to SAS releases that headroom, so now we can bring on more management workloads. That's important, and this is what I want to test: that's important because as we do these more complex applications, we're pushing more work down closer to the data, and we're using a lot more data, and it's going to require more automation. Is SAS going to provide the headroom that we need to actually bring new levels of reliability to more complex work? >> I believe it will, absolutely. SAS is the world's most trusted interface. So, when it comes to reliability, our SAS drives in the field are the most reliable products that our customers purchase today. And we take that same core technology and package it in a way to make it truly an economical replacement for SATA. >> So we at Wikibon have observed NVMe, so I want to turn a little bit of attention to that. We have observed that NVMe is in fact going to have a significant impact. But when Toshiba Memory is looking at what kinds of things customers are looking for, you're saying not so much SATA, let's focus on SAS, and let's bring NVMe online as the system designs are there. Is that kind of what it's about? >> You know, I think it's a complicated situation. Not everyone is ready for everything at the same time. Even today, there are some major cloud providers that have just about fully transitioned to NVMe SSDs. And that transition has been challenging. So what we see is customers, over the course of the next four or five years, their readiness for that transition, from today to five years from now, that's happening based on the complexity of what they need to manage from a physical infrastructure and a software ecosystem perspective. So some customers have already migrated, and other customers are years away. And that is really what we're trying to help customers with. We have a very broad NVMe offering. Actually, we have more NVMe SSDs than any other product line, but for a lot of those customers who want to continue with the digital transformation into data analytics, into realizing the value of all the data that they have available and transforming that into improved business processes, improved business results, those customers don't want to have to wait for their infrastructure to catch up to NVMe. Value SAS gives them a means to make that transition while continuing to take advantage of all the capabilities of flash. One of the things that we always talk about, one of my responsibilities is product planning and product definition, and one of the things that we always talk about is that in our ideal SSD, the bottleneck is the flash. In other words, if you look at a drive, there are so many things that could bottleneck performance.
It could be the interface, it could be the power that you can consume and dissipate, it could be the megahertz in your controller-- >> You sound like an electrical engineer. >> I am an electrical engineer, but I'm a marketing guy, right? So, there are all kinds of bottlenecks, and when we design an SSD, we want the flash to be the bottleneck, because at the end of the day, that's fundamentally what people need and want. And so, you look at SATA, and it's like, not only is it a bottleneck, but it's clamping the performance at 50%, or less than 50%, of what's achievable in the same power footprint, in the same cost footprint, so it's just not practical. I mean, the thing's eight years old, so-- >> Yeah. Yeah. >> In technology, eight years is a lot of time. >> Especially these days. And so to simplify that, perhaps, or say that a little bit differently: bottom line is, SAS is a smaller step for existing customers who don't have the expertise necessary to re-engineer an entire system and infrastructure. >> That's right, it gives them that stepping stone. >> So you also mentioned that there's a difference between the flash and the SSD, and that difference is an enormous amount of value-add engineering that leads to automation, reliability, the types of things you can do down at the drive. Talk to us a little bit about Toshiba, Toshiba Memory, as a supplier of that differentiating engineering that's going to lead to even superior performance at better cost, and greater manageability, and time to value on some of these new flash-based workloads. >> So I'm amazed at the quality of our engineering team and the challenges that they face to constantly be bringing out new technologies that keep up with the flash memory curve. And I actually joke sometimes, I say it's like being on a hamster wheel. It never stops: the second that you release a product, you're developing the next product. I mean, it's one of the fastest product life cycles in the entire industry, and you're talking about extremely complicated, complex systems with tight firmware development. So what we do at Toshiba Memory: we actually engineer our own SoCs and controllers, develop the RTL, and manage that from basically architecture to production. We write all our own firmware, we assemble our own drives, we put it all together. The process from actually defining a product to when we release it is about five years. So we have meetings now where we're talking about, what are we going to release in 2023? And that is one of the big challenges, because these design cycles are very long, so you have to anticipate where innovation is going, and today's innovation is at the speed of software, right? Not the speed of hardware. So how do you build that kind of flexibility and capability into your product so that you can keep up with new innovations no one might have seen five years ago? That's where Toshiba Memory's engineering team really shows its mettle. >> So let's get you back in theCUBE in the not-too-distant future to talk about what 2023 is going to look like, but for right now, Jeremy Werner, Vice President of SSD Marketing at Toshiba Memory, thank you very much for being on theCUBE. >> Thank you, Peter. >> And once again, thanks for watching this CUBE Conversation. (upbeat orchestral music)
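The design rule Jeremy states, that in the ideal SSD the flash itself is the bottleneck, can be written as a one-line model: a drive moves data no faster than its slowest stage. The stage bandwidths below are illustrative assumptions, not Toshiba specifications:

# Throughput is gated by the slowest of interface, controller, and flash.
def drive_throughput(interface_mbps, controller_mbps, flash_mbps):
    return min(interface_mbps, controller_mbps, flash_mbps)

flash = 1200   # assumed aggregate NAND bandwidth, MB/s

# 6Gb SATA nets roughly 550 MB/s after encoding overhead, clamping this drive
# to under half of what its flash could deliver, echoing the point above.
print(drive_throughput(550, 2000, flash))    # 550

# A faster interface steps out of the way and the flash becomes the limit.
print(drive_throughput(3900, 2000, flash))   # 1200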

Published Date : Jul 27 2018


Charles Giancarlo, Pure Storage | Pure Storage Accelerate 2018


 

>> Narrator: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE! Covering Pure Storage Accelerate 2018. Brought to you by Pure Storage. (upbeat electronic music) >> Welcome back to theCUBE, we are live at Pure Storage Accelerate 2018. I am Lisa Martin, sporting the Prince look today. We're at the Bill Graham Civic Auditorium; this is a super cool building, it was built in 1915, and it's been home to so many cool artists, so I've got to represent today. Dave Vellante's my co-host for the day. >> Well, I got to tell you, Charlie, thank you for wearing a tie. >> Yeah, well-- >> My tie's coming off. >> Okay, well, hey, look, you and me both. >> You have to wear yours-- >> Well, I do, I still have investors later. >> I'm not the only one who's representing musicians today. >> I got my tee shirt underneath here, all right. >> Oh, oh oh! >> Ladies and gentlemen, you will not want to miss this. >> Bill Graham, right, I've got a Who on, Lisa. >> "I've got a Who on", oh, he said The Who! >> The Who! >> We got Roger Daltrey-- >> Charlie: Oh, that's fantastic. >> (laughing) >> Pete Townshend-- >> The Who! >> That's my deal. >> He's being so careful not to ruin his shirt with the buttons. >> The Who. >> I got to say-- >> Well done. >> Tower of Power was really my band. >> Oh, wow. >> They didn't play here, but Bill Graham was the first to sign them. >> Wow, representing. >> Well, I was an East Coast boy, so it was all the New York concerts and venues for me, but it was fantastic. I used to watch, you remember, Bill Graham Presents? That was-- >> Yes! >> Yeah! >> I always thought if I found myself on stage, there'd be a couple of security guys dragging me off. >> Love that line! >> Nobody today, and you got a lot of applause, a lot of confetti. So Charlie, you kicked things off this morning at the Third Annual Accelerate: packed house, orange as far as the eye can see, but just a couple days ago-- >> Sea of orange. >> Exactly, sea of orange, a proud sea of orange. >> Right. >> Just two days ago, on the 21st of May, you guys announced your fiscal '19 first quarter results: revenue up 40% year over year, you added 300 new customers, including the U.S. Department of Energy, Paige.ai, and the really amazing transformational things they're doing for cancer research. You also shared today your NPS score: over 83! >> Correct. >> Big numbers shared today. >> These are big numbers. >> You've been the CEO for about nine months or so now; tell us what's going on, how are you sustaining this? Stock's going up? >> Right, right, stock's up about 80% year over year right now, so that's very good, but really I think it's a recognition that Pure is playing a very important role in data processing, in the high-tech landscape, right? I think, you know, storage was really, I think up until now, really viewed as maybe an aging technology, something that was becoming commoditized, something where innovation wasn't really important, and Pure was the one company that actually thought that storage was important. As I mentioned in my keynote talk, you know, I really view technology as being a three-legged stool. That is, it's comprised of three elements: compute, networking, and storage. If any one of them falls behind, you know, it becomes unbalanced, and frankly, you know, compute has advanced 10X over the last 10 years, networking has advanced more than 10X over the last 10 years, and storage didn't keep up at the same time that data was exploding, right?
Pure is the one company that actually believes that there's real innovation to be had in storage. Paige.ai is a great example of that, I know it tugs on all of our heartstrings, but Paige.ai took lots of analog data, what was it, we're talking about cancer samples that were on slides, okay, they took literally millions of samples, digitized it, and fed it into an AI machine learning engine. Now, if you understand the way machine learning operates, it has to practice on thousands, or actually tens of thousands, millions, of samples. It could take all year, or it can take hours. What you want it to do is take minutes or hours, and if the data can't be fed fast enough into that engine, you know, it's going to take all year. You want your cancer pathology to be analyzed, you know, really quickly. >> Immediately. >> Immediately, right? That's what this engine can do, and it can do it because we can feed the data at it fast, at the rate it needs to be able to analyze that cancer. Data is just becoming the core of every company's business, it's becoming, if you will, the currency, it's becoming the gold mine, where companies now want to analyze their data. Right now, only about a half of 1% of the data that companies have can even be analyzed, because it's being kept in cold storage, and at Pure, we believe in no cold storage, you know, it's all got to be hot, it's all got to be available, able to be analyzed, able to be mined. >> Do you think, I got to ask you this, do you think that percentage will rise faster than the amount of data that's going to be created? Especially when you're thinking things at the edge. >> It's a great question, and I think absolutely! The reason is because it's not only the data that's being generated, or saved now, that's important. If you really want to analyze trends and get to know your customers, you know, the last five years, the last 10 years of data, is just as important. Increasingly, I think you may know this just from online banking, right, it used to be that maybe you'd have last month's checks available to you, but now you want to go back a year, you want to go back five years, and see, you know, you get audited by the IRS, they say: "Well, prove to us you did this," you need to find those checks and banks are being expected to have that information available to you. >> I got to ask you, you're what we call a tech-athlete, you were showing your tech-chops on stage, former CTO, but you've been a CEO, a board member of many prominent companies, why, Charlie, did you choose to come back in an operating role? You know, why at Pure, and why in an operating role? >> You know, I love being part of a team, it's really that. You know, I've had great fun throughout my career, but being part of a team that is focused on innovation, and is enabling, you know, not just our industry but frankly, allowing the world's business to do a better job. I mean, that's what gets me thrilled. I like working with customers every day, with our sales people, with our engineers. It's just a thrilling life! >> You did say in your keynote this morning that you leave the office, at the end of the day, with a smile, and you get to the office in the morning with a smile, that's pretty cool. >> I do, and if you asked my wife she'd tell you the same thing right, so I really enjoy being part of the team. >> Dave: So, oh, go ahead, please >> Oh, thank you sir. One of the things that Pure has done well is: partners, partnerships. 
We're going to be talking with NVIDIA later today, so this is going to be on, you guys just announced the new AIRI mini, and I was just telling Dave: I need to see that box, cause it looks pretty blinged out on the website. Talk to us about, though, what you guys are doing with your partnerships and how you've seen that really be represented in the successes of your customers. >> Right, well there are several different types of partnerships that we could talk about. First of all, we're 100% channel lead in our organization. We believe in the channel. You know, this is ancient history now, but when I arrived at Cisco, they were 100% direct at that time, no partners whatsoever. >> Belly to belly. >> Belly to belly, and I was very much apart of driving Cisco to be 100% partner over that period of time. So, you know, my history and belief in utilizing a channel to go to market is very well known, and my view is: the more we make our partners successful, the more we make our customers successful, the more successful we will be. But then, there are other types of partnerships as well. There are technology partnerships, like what we have with Cisco and NVIDIA, and again, we need to do more with other companies to make the solutions that we jointly provide, easier for our customers to be able to use. Then, there are system integration partners, because, let's face it, with as much technology as we build, customers often need help from experts of system integrators, to be able to pull that all together, to solve their business problems. Again, the more we can work with these system integrators, have them understand our products, train them to use them better, the better off our customers will be. >> Charlie, Pure has redefined, in my opinion, escape velocity in the storage business, it used to be getting to public, you saw that with 3PAR, Compel, Isilon, Data Domain, you guys are the first storage to hit one billion dollars since NetApp-- >> Right, 20 years ago. >> Awesome milestone, I didn't think it was possible eight years ago, to be honest, so now, okay, what's next? Can you remain an independent company? In order to remain independent, you got to grow, NetApp got to five billion in a faster growing market, you guys got to gain-share, how do you continue to do that? >> Well, you're right, each and every day we have to compete. We have to, you know, kill for what we eat. Our European sales lead calls it, our competition, on an account basis, a: knife fight in a phone booth. So the competition is tough out there, but we are bringing innovations to market, and more importantly, we're investing in the technology at a rate that I think our competitors are not going to be able to keep up with. We invest close to 20% of our revenue every year in R&D. Our competitors are in single-digits, okay, and this is a technology business, you know, eventually, if you don't keep up with the technology, you're going to lose, and so, that I think is going to allow us to continue growing and scaling. You're right, growth is important for us to be able to stay independent, but I looked very deeply at the entire industry before joining, and you know, I was in private equity for awhile, so we know how to analyze an industry, right? 
My view was that all of the other competitors are either no longer investing, whether internally or in terms of large acquisitions, or they've already made their beds, and so I didn't really see a likely acquirer for Pure, and that was going to give us, if you will, the breathing room to grow to a scale where we can continue to be independent. >> Almost by necessity! >> Almost by necessity, yeah. >> It's good to put the pressure on yourselves. >> So, in terms of where you are now, how is Pure positioned to lead storage growth in infrastructure for AI-based apps? There's this explosion of AI, right, fueled by deep learning, and GPUs, and big data. How are you positioned to lead this charge in storage growth there? >> That's such a great question. You know, I started hearing about AI when I graduated college, which is a really long time ago now, and yet why is it exploding now? Well, computing has done its job: we're here today with NVIDIA, with GPUs that are, we're talking gigaflops, just incredible speeds of compute. Networking has done its job: we're now at 100 gigabits, and we're starting to talk about 400 gigabit per second networks. And storage hadn't kept up, even though data is exploding. So we announced today, as you know, our data-centric architecture, and we believe this is an architecture that really sets our customers' data free. It sets it free in many ways. One of which: it allows data to always be hot, at a price that customers can afford; not only can afford, it's cheaper than what they're doing today, because we're collapsing tiers. No longer a hot tier, warm tier, cold tier; it's all one tier that can serve many, many needs at the same time, so all of your applications can get access to real-time data, and access it simultaneously with the other applications. We make sure that they get the quality of service they need, and we protect the data from being either corrupted or changed when other applications need it to stay the same. So we do what is necessary now to allow the data to be analyzed, whether it's for analytics, or AI, or machine learning, or simply to allow DevOps to operate on real-time data, on live data, without upsetting the operations environment. >> I want to make sure I understand this, so you're democratizing tiering, essentially... >> Charlie: Democratizing tiering. >> So how do you deal with, you know, different densities, QLC, et cetera? Is that through software? >> Well, we hide that from the customer, right? We're able to take advantage of the latest storage because we speak directly to the storage chips themselves. All of our competitors use what are called SSDs, solid state drives. Now, think about that for a moment. There's no drive in a solid state drive; these things are designed to allow flash to mimic hard disk, but hard disk has all these disadvantages. Why do you want flash to mimic hard disk? We set flash free. We're able to use flash in parallel; we're able to take low-quality flash and make it look like high-quality flash, because our software adapts to whatever the specific characteristics of the flash are. So we have this whole layer of software that does nothing other than allow flash to provide the best possible performance characteristics that flash can provide. It allows us to mix and match, and completely hide that from the customer.
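Charlie's description of bypassing SSDs and speaking to the flash chips directly is, in essence, a flash translation layer lifted out of drive firmware into the array software. The sketch below is a minimal illustration of that idea; the class names, fields, and placement policy are assumptions for clarity, not Pure's actual implementation, and a real system adds garbage collection, per-die ECC tuning, and parallel striping on top.

# Illustrative sketch only: array-level flash management instead of SSDs.
# Names and the placement policy are assumed, not Pure's real design.

class RawFlashDie:
    def __init__(self, die_id, num_pages, endurance):
        self.die_id = die_id
        self.free_pages = list(range(num_pages))
        self.wear = 0                # program/erase cycles consumed so far
        self.endurance = endurance   # rated cycles; lower on cheap flash

class FlashTranslationLayer:
    """Maps logical blocks to physical pages across raw flash dies."""
    def __init__(self, dies):
        self.dies = dies
        self.l2p = {}                # logical block id -> (die_id, page)

    def write(self, block_id, data):
        # Wear-aware placement: prefer the die with the most endurance
        # headroom, so low-grade flash wears evenly and looks "high
        # quality" to every layer above this one.
        candidates = [d for d in self.dies if d.free_pages]
        die = max(candidates, key=lambda d: d.endurance - d.wear)
        page = die.free_pages.pop()
        die.wear += 1
        self.l2p[block_id] = (die.die_id, page)
        return (die.die_id, page)

# Mixing dies of different grades behind one mapping:
dies = [RawFlashDie(i, num_pages=1024, endurance=3_000 + 1_000 * i)
        for i in range(4)]
ftl = FlashTranslationLayer(dies)
ftl.write(block_id=42, data=b"...")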
>> With NVMe, you're taking steps to eliminate what I call the horrible storage stack. >> Charlie: That's exactly right. >> So, you talked earlier about the disparity between storage and the other two legs of the stool. As you attack that bottleneck, what's the new bottleneck? Is it networking, and how do you see that shaking out? >> It's a great question. The new bottleneck, I would actually put it at a higher layer: it's the orchestration layer that allows all this stuff to work together in a way that requires less human interaction. There are great new technologies on the horizon, you know, Kubernetes, and Spark, and Kafka, and a variety of others, that will allow us to create a cloud environment, if you will, both for the applications and for the data, within private enterprises, similar to what they can get in the cloud, in many cases. >> You also talked about innovation, and I want to ask you about the innovation equation, as both a technologist and a CEO who talks to a lot of other CEOs. We see innovation as coming from data, and the application of machine intelligence on that data, and cloud economics at scale. Do you buy that? And where do you guys fit in that? >> We do buy that, although cloud economics, we believe, is something we can create in customers' private data centers as well. In fact, if you look at cloud economics, it's very good for some workloads, not necessarily good for other workloads; good at low scale, but not necessarily good at high scale. So, how do we allow customers to easily move workloads between these different environments, depending on what their specific needs are? That's what we view as our job. But let me point something else out as well. About 30% of our sales are in the cloud providers themselves: software as a service, infrastructure as a service, platform as a service. These vendors are using our systems, so as you can see, we are already designed for cloud economics. We also get to see how these leading-edge, very high scale customers construct their environments, and then we're able to bring that into the enterprise environment as well. >> I mean, I think we buy that. You're an arms dealer to the cloud, you know, maybe not tier zero, to use that term, but also, you're helping your on-prem customers bring the cloud operating model to their data, 'cause they can't just stuff it into the cloud. >> It won't always be the right solution for everyone, but it'll be the right solution for many, and we're doing more and more to allow the customers to bridge that. We think that it's a multi-cloud environment, including private data centers, and we want to create as much flexibility as we can. >> Would you say Pure is going to be an enabler of companies being able to analyze way more than a half a percent of their data? >> If we don't do that, then there's no good reason for us to be in business. That is exactly what we're focused on. >> Last question for you, Charlie. You've been the CEO about nine months now; cultural observations of Pure Storage? >> Oh, you know, you've seen the sea of orange that's here, and by the way, the orange is being sported not just by Puritans, not just by our employees, but by our partners and our customers as well.
It's a bit infectious, I have to be honest. I had one piece of orange clothing when I started this job, and you know, my mother's into it, she's sending me orange, all sorts of orange clothing, some of which I'll wear, some of which I won't. My wife, everyone, there's a lot of enthusiasm about this business. It has a bit of a cult-like following, and Puritans are really very, very dedicated, not just to the customer. I mean, people become dedicated, you know, not to an entity; they become dedicated to a cause, and the cause for Pure is really to make our customers successful. Our employees feel that it's what drives them every day, it's what brings them to work, and hopefully it's what puts a smile on their face when they go home at night. >> Charlie Giancarlo, CEO of Pure Storage, thanks so much for joining us on theCUBE today! >> Thank you, thank you. >> For Dave Vellante, I'm Lisa Martin, and we are live at Pure Accelerate 2018 in San Francisco. Stick around, Dave and I will be right back. (upbeat electronic music)

Published Date : May 23 2018



theCUBE Coverage of Autotech Council | Autonomous Vehicles April 2018


 

Jeff Frick here with theCUBE at Western Digital's offices in Milpitas, California, for the Autotech Council autonomous vehicle meetup, about 300 people. We're looking at all these cool applications and a lot of cutting-edge technologies. At the end of the day, it's data dependent; the data's got to sit somewhere. But really what's interesting here is that more and more of the data is moving out to the edge, and edge computing, and nowhere is that more apparent than in autonomous vehicles. [Preet SIA] [Music] [Applause] The technologies that Silicon Valley is famous for inventing, cloud-based technology, network technology, artificial intelligence, machine learning, historically those may not have been important to a carmaker in Detroit. They'd say, well, that's great; we have to worry about our transmission and make these gear ratios better. That era is still with us, but they've layered on this extremely important software-based and technology-based innovation. Really, autonomous vehicles are made possible by just the immense amount of sensors being put into the car, not much different than how our smartphones evolved: sensing your face, gyroscopes, GPS, all these things. So there's the raw data itself that's coming off the sensors, but the metadata is a whole other level, a big level, and even more important is the context. My sensors are seeing something, and then of course you use multiple sensors; that's the sensor fusion between them: hey, that's a person, that's a deer, oh, don't worry, that's a car moving alongside of us and he's staying in his lane. Those are the types of decisions we're making with this data plus the context. The last session was just about mapping for autonomous vehicles, which is an amazing little subset; there's been a tremendous amount of change in one year. You know, one thing I can say, at the top, that's critically important: we've had fatalities, and that really shifts the conversation and refocuses everybody on the issue of safety. We're dealing with human life, so obviously it needs to be right 99.999-plus percent of the time. It's all about intelligent decisions, and being able to do that robustly across all types of operating conditions is paramount; that's mission critical. Slow motion, high precision, one to two centimeter accuracies, to be able to maneuver in parking lots, to be able to back up in driveways: those are very, very complex situations. Essentially, these learning moments have to happen without the human fatality, the human cost. They have to happen in software, in simulations, in a variety of ways that don't put the public at risk; people outside the vehicle haven't even chosen to adopt those risks. And part of getting to safety is being much more efficient on the vehicle, because you have to do a lot more in software in order to be safe across many different kinds of streets and locations. Because of this, with these new kinds of cars, a new range of suppliers is coming into play. We don't want piston rods anymore, we want electric motors; we need rare earth magnets to put in our electric motors, and that's a whole new range of suppliers. Even before autonomous, there are so many new systems in the car now that generate or consume data. If you think about a full autonomous vehicle out there driving, not two hours a day like we are driving today, but 20 hours a day, suddenly the storage requirements are very, very different. The statistics out there: one gigabit per second, two gigabits per second.
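Taking the quoted sensor rates at face value, here is a quick sketch of what they imply per vehicle. The rates and duty cycle are the speakers' figures; the rest is simple arithmetic.

# Rough storage math for one autonomous vehicle at the rates quoted above:
# 1-2 gigabits per second of sensor data, ~20 hours of driving a day.
HOURS_PER_DAY = 20

for gbit_per_sec in (1, 2):
    gb_per_sec = gbit_per_sec / 8                      # gigabits -> gigabytes
    tb_per_day = gb_per_sec * HOURS_PER_DAY * 3600 / 1000
    print(f"{gbit_per_sec} Gb/s x {HOURS_PER_DAY} h/day "
          f"-> ~{tb_per_day:.0f} TB per vehicle per day")

# ~9-18 TB per vehicle per day, before any retention policy is applied.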
Everyone's so scared of getting rid of any data, right, yet there's just tremendous data growth. If we don't design the future storage solutions today, what's going to happen is that people are going to pay much more for storage just to make the system work. The reality is, are we taking care of the gridlock that is affecting our cities? Are we moving around enough people? Are we solving the problems of congestion? I'll say no. We took a bus and we divided the bus into sections, so you have a longer vehicle at peak time, when there is high demand, and a shorter vehicle when there is very low demand, when you have just a few passengers. And the magic is that when those parts are connected one to another, they share internal space. By the way, all of that can be done autonomously, and we can start tomorrow, because we can have a driver when we begin using the system, and when the technology allows it, it can be autonomous. We're going to run the same operating system, and the cost is even lower than a bus. In the human world we're used to, when somebody crashes the car, they learn a valuable lesson, and maybe the people around them learn a valuable lesson: I'm going to be more careful, I'm not going to have that drink. When an autonomous car gets involved in any kind of an accident, a tremendous number of cars learn the lesson. It's fleet learning, and that lesson is not just shared among one car; it might be all Teslas, or all Ubers. That's a super good point. The AV revolution will also require a revolution in the maintenance and sustenance of our road network, not just in the United States but everywhere in the world; the quality of the roads makes all the difference in the world for these vehicles to move around. There are so many difficult problems to solve along this path that no company can really do it themselves, and of course you're seeing big companies investing billions of dollars, but it's great, because everybody's saying: let's find people that specialize, whether it's for sensors, or compute, or all the rest of those things; get them in, partner with them, and have everybody solve the right problem that they're specialized in and focused on. The technology is coming along so fast, it's mind-boggling how quickly we are starting to attack these more difficult challenges, and we'll get there, but it's going to take time, like anything, right? We're kind of hoping nobody goes out there and trips up, to mess it up for the whole industry, because we believe, as a whole, this will actually bring safety to the market, but a few missteps can create a backlash. As Elon Musk puts it, success is one of the possible outcomes, right, but not guaranteed. We're doing that right now: startups and large companies trying to solve not thousands of problems but the millions and billions of problems that are going to have to be solved to really get autonomous vehicles to their ultimate destination, which is what we're all hoping for. It's going to save a lot of lives. We're at the Autotech Council autonomous vehicle event in Milpitas, California. Thanks for watching. [Music]

Published Date : Apr 28 2018

